SAP S/4HANA Architecture (SAP PRESS) [New ed.] 1493220233, 9781493220236

Pop the hood on SAP S/4HANA with this guide to its technical and application architecture! Understand the new data and p


English Pages 495 [813] Year 2020


Table of contents :
Cover
Dear Reader
Notes on Usage
Table of Contents
Foreword
SAP S/4HANA: A New ERP Platform
About This Book
Acknowledgments
Part I   Foundation
1   Architecture Challenges of a Modern ERP Solution
1.1   Characteristics of a Modern ERP System
1.1.1   Even Higher Performance and Scalability
1.1.2   Consumer-Grade User Experience
1.1.3   Extensible Architecture
1.1.4   Intelligent ERP Processes
1.1.5   Simplified and Standardized Implementation
1.1.6   Cloud and On-Premise Deployment Models
1.1.7   Security, Privacy, Compliance, and Data Isolation
1.2   SAP S/4HANA Architecture Principles
1.2.1   Stable but Flexible Digital Core
1.2.2   Simplification with the Principle of One
1.2.3   Open for Innovations through Service Orientation
1.2.4   Modularization into (Hybrid) Integration Scenarios
1.2.5   Cloud First, but Not Cloud Only
1.2.6   Semantic Compatibility to Support Evolution with the Least Possible Disruption
1.3   Evolving a Cloud ERP System from the Best Possible Origins
1.4   Summary
2   Technical Architecture Foundation
2.1   The Virtual Data Model
2.1.1   Core Data Services
2.1.2   Naming Conventions
2.1.3   Structure of the Virtual Data Model
2.1.4   Consumption Scenarios
2.2   ABAP RESTful Application Programming Model
2.2.1   Defining and Developing Business Objects
2.2.2   Defining Business Services
2.2.3   Runtime Architecture
2.3   Summary
3   Simplified Experience
3.1   User Experience
3.1.1   SAP Fiori
3.1.2   User Experience Adoption Strategy
3.1.3   SAP Fiori Launchpad
3.1.4   SAP Fiori Apps
3.1.5   SAP Fiori Elements Apps
3.2   Search
3.2.1   Search Architecture
3.2.2   Enterprise Search Extensibility
3.3   Summary
4   Intelligence and Analytics
4.1   Analytics
4.1.1   SAP S/4HANA Embedded Analytics Architecture Overview
4.1.2   Embedded Analytical Applications
4.1.3   Modeling Analytical Artifacts
4.1.4   Analytics Extensibility
4.1.5   Enterprise Analytics Applications
4.2   Machine Learning
4.2.1   Machine Learning Architecture
4.2.2   Embedded Machine Learning
4.2.3   Side-By-Side Machine Learning Architecture
4.2.4   Machine Learning in SAP S/4HANA Applications
4.3   Intelligent Situation Handling
4.3.1   Example: Contract is Ready as Source of Supply
4.3.2   Technological Background
4.3.3   Intelligent Situation Handling Concept
4.3.4   User Experience
4.3.5   Use Case Examples
4.3.6   Intelligent Situation Automation
4.3.7   Message-Based Situation Handling
4.4   Summary
5   Extensibility
5.1   Key User Extensibility
5.1.1   Stability Criteria for Extensibility
5.1.2   Principles of In-App Key User Extensibility
5.1.3   Field Extensibility
5.1.4   Integration Extensibility
5.1.5   Custom Business Logic
5.1.6   Custom Business Objects
5.1.7   Custom CDS Views
5.1.8   Lifecycle Management
5.2   Side-by-Side Extensions
5.2.1   Introduction to Cloud-Native Applications
5.2.2   SAP Cloud Platform and Programming Models
5.2.3   Cloud Foundry Environment
5.2.4   Kyma Runtime
5.2.5   Integrating with SAP S/4HANA Using the SAP Cloud SDK
5.3   Summary
6   Integration
6.1   SAP S/4HANA Integration Interface Technologies
6.1.1   OData Services
6.1.2   SOAP Services
6.1.3   Remote Function Call
6.1.4   BAPIs
6.1.5   IDoc
6.1.6   SAP S/4HANA API Strategy
6.2   SAP API Business Hub
6.3   Interface Monitoring and Error Handling with SAP Application Interface Framework
6.4   Communication Management in SAP S/4HANA Cloud
6.4.1   Communication Scenario
6.4.2   Communication User
6.4.3   Communication System
6.4.4   Communication Arrangement
6.4.5   Calling Inbound Services with User Propagation
6.5   Cloud Connector
6.5.1   Cloud Connector Principles
6.5.2   RFC Communication with SAP S/4HANA Cloud
6.6   Integration Middleware
6.7   Event-Based Integration
6.7.1   SAP Cloud Platform Enterprise Messaging
6.7.2   Business Events Architecture in SAP S/4HANA
6.7.3   Business Events in SAP S/4HANA
6.7.4   Event Channels and Topic Filters
6.8   Data Integration
6.8.1   CDS-Based Data Extraction
6.8.2   Data Replication Framework
6.8.3   Master Data Integration Services
6.9   Summary
7   Data Protection and Privacy
7.1   Compliance Base Line
7.2   Definitions and Principles
7.2.1   Basics in SAP S/4HANA
7.2.2   Data Subject Rights
7.2.3   Technical and Organizational Measures
7.3   Summary
Part II   Application Architecture
8   Master Data
8.1   Product Master
8.1.1   Product Master Data Model
8.1.2   Product Hierarchy
8.1.3   Data Migration
8.1.4   Product SOAP Service API
8.1.5   Product Master Extensibility
8.1.6   Self-Service Configuration
8.2   Bill of Materials, Characteristics, and Configurations
8.2.1   Bill of Materials (BOM)
8.2.2   Classification System
8.2.3   Variant Configuration
8.2.4   Variant Classes
8.2.5   Super BOM
8.2.6   BOM with Class Items
8.2.7   Variant Configuration Profiles
8.2.8   Object Dependencies in Variant Configuration
8.2.9   User Interface and Grouping
8.2.10   Extensibility
8.2.11   High-Level and Low-Level Configuration
8.2.12   Embedded Analytics for Classification and Configuration Data
8.3   Business Partner
8.3.1   Architecture of Business Partner Master Data
8.3.2   SAP S/4HANA System Conversion Scenarios
8.3.3   Data Protection and Policy
8.3.4   Extensibility
8.3.5   Business Partner APIs
8.4   Summary
9   Sales
9.1   Architecture Overview
9.2   Sales Documents Structure
9.3   Authorizations
9.4   Sales Inquiries and Sales Quotations
9.5   Sales Order Processing
9.6   Sales Contracts
9.7   Sales Scheduling Agreements
9.8   Claims, Returns, and Refund Management
9.9   Billing
9.10   Sales Monitoring and Analytics
9.11   Pricing
9.12   Integration
9.13   Summary
10   Service Operations
10.1   Architecture Overview
10.2   Business Objects and Processes in Service Operations
10.2.1   Field Service
10.2.2   In-House Repair
10.2.3   Service Contracts
10.2.4   Solution Business
10.2.5   Interaction Center
10.3   Master Data and Organizational Model
10.3.1   Business Partner
10.3.2   Service Products
10.3.3   Organizational Units
10.3.4   Service Teams
10.3.5   Technical Objects
10.4   Data Model and Business Transactions Framework
10.4.1   Business Transactions Framework
10.4.2   Data Model
10.4.3   Transaction Type and Item Category
10.4.4   Common Functions for Service Transactions
10.4.5   Virtual Data Model
10.4.6   Public APIs
10.5   Integration
10.5.1   Data Exchange Manager
10.5.2   Backward Integration
10.5.3   Integration with SAP Field Service Management
10.5.4   User Interface Technology
10.6   Summary
11   Sourcing and Procurement
11.1   Architecture Overview
11.2   Procurement Processes
11.2.1   Direct Procurement
11.2.2   Indirect Procurement
11.3   Architecture of a Business Object in Procurement
11.4   Central Procurement
11.4.1   Backend Integration
11.5   APIs and Integration
11.5.1   SAP S/4HANA Procurement Integration with SAP Ariba and SAP Fieldglass
11.6   Analytics
11.7   Innovation and Intelligent Procurement
11.8   Summary
12   Logistics and Manufacturing
12.1   Architecture Overview
12.2   Organizational Units
12.3   Master Data Objects
12.4   Transactional Business Objects
12.5   Calculated Business Objects, Engines, and Process Control
12.5.1   Inventory
12.5.2   Available-to-Promise
12.5.3   Material Requirements Planning
12.5.4   Demand-Driven Material Requirements Planning
12.5.5   Kanban
12.5.6   Just-In-Time Processing
12.5.7   Predictive Material and Resource Planning
12.5.8   Capacity Planning
12.5.9   Production Planning and Detailed Scheduling
12.6   Cross Functions in Logistics and Manufacturing
12.6.1   Batch Management
12.6.2   Quality Management
12.6.3   Handling Unit Management
12.6.4   Serial Number Management
12.6.5   Inter-/Intracompany Stock Transport
12.7   Logistics Integration Scenarios
12.7.1   Warehouse Management
12.7.2   Manufacturing Execution Systems
12.8   Summary
13   Extended Warehouse Management
13.1   Architecture Overview
13.2   Organizational Structure
13.3   Master Data
13.4   Stock Management
13.5   Application Components
13.6   Monitoring and Reporting
13.7   Process Automation
13.8   User Interface
13.9   Technical Frameworks
13.10   Warehouse Automation
13.11   Summary
14   Finance, Governance, Risk, and Compliance
14.1   Finance Architecture Overview
14.2   Accounting
14.2.1   General Ledger
14.2.2   Fixed Asset Accounting
14.2.3   Inventory Accounting
14.2.4   Lease Accounting
14.2.5   Service and Sales Accounting
14.2.6   Group Reporting
14.2.7   Financial Closing
14.3   Tax and Legal Management
14.4   Enterprise Contract Management and Assembly
14.5   Financial Planning and Analysis
14.5.1   Budgetary Accounting
14.5.2   Predictive Accounting
14.5.3   Financial Planning
14.5.4   Margin Analysis
14.5.5   Overhead Cost
14.5.6   Production Cost
14.6   Payables Management
14.6.1   Supplier Invoicing
14.6.2   Open Payables Management
14.6.3   Automatic Payment Processing
14.7   Receivables Management
14.7.1   Open Receivables Management
14.7.2   Credit Evaluation and Management
14.7.3   Customer Invoicing
14.7.4   Dispute Resolution
14.7.5   Collection Management
14.7.6   Convergent Invoicing
14.7.7   Contract Accounting
14.8   Treasury Management
14.8.1   Advanced Payment Management
14.8.2   Bank Integration Using SAP Multi-Bank Connectivity
14.8.3   Connectivity to Payment Service Providers and Payment Gateways
14.8.4   Cash Management
14.8.5   Treasury and Risk Management
14.9   Central Finance
14.9.1   Replication
14.9.2   Mapping
14.9.3   Accounting Views of Logistics Information
14.9.4   Central Payment
14.9.5   Cross-System Process Control
14.10   Finance Extensibility
14.11   SAP Governance, Risk, and Compliance
14.11.1   Overview of SAP GRC Solutions
14.11.2   SAP GRC Solutions and SAP S/4HANA Integration
14.11.3   SAP S/4HANA Integration with Enterprise Risk and Compliance
14.11.4   SAP S/4HANA Integration with International Trade Management
14.11.5   SAP S/4HANA Integration with Access Governance
14.11.6   SAP S/4HANA Integration with SAP Privacy Governance
14.12   Summary
15   Localization in SAP S/4HANA
15.1   Advanced Compliance Reporting
15.2   Document Compliance
15.2.1   Motivation
15.2.2   Architecture Overview
15.2.3   Recent Developments and Future Outlook
15.3   Localization Toolkit for SAP S/4HANA Cloud
15.3.1   Components of the Toolkit
15.3.2   Extensibility Scenario Guides and the Community
15.4   Summary
Part III   SAP S/4HANA Cloud-Specific Architecture and Operations
16   Scoping and Configuration
16.1   Configure Your Solution: Scoping and Configuration Today
16.1.1   Content Model of SAP Solution Builder Tool
16.1.2   Scoping and Deployment
16.2   Outlook: SAP Central Business Configuration
16.2.1   The Business Adaptation Catalog
16.2.2   The Ten Business Adaptation Catalog Commandments
16.2.3   Business Processes
16.2.4   Constraints
16.2.5   From Scoping to Deployment
16.3   Summary
17   Identity and Access Management
17.1   Architecture Concepts of Identity and Access Management
17.1.1   ABAP Authorization Concept
17.1.2   Authentication
17.1.3   Identity and Access Entities and Their Relationships
17.1.4   Identity and Access Management Tools
17.1.5   SAP Fiori Pages and Spaces
17.2   Managing Users, Roles, and Catalogs
17.2.1   Communication Arrangements
17.2.2   User Types
17.2.3   SAP PFCG Roles and Business Catalogs
17.2.4   Management of Users, Roles, and Catalogs by Customers
17.2.5   Auditors
17.3   Summary
18   Output Management
18.1   Architecture Overview
18.2   Printing
18.3   Email
18.4   Electronic Data Interchange
18.5   Form Templates
18.6   Output Control
18.7   Summary
19   Cloud Operations
19.1   SAP S/4HANA Cloud Landscape
19.2   Data Centers
19.3   Multitenancy
19.3.1   The System Architecture of SAP S/4HANA
19.3.2   Sharing the SAP HANA Database System
19.3.3   Sharing of ABAP System Resources
19.3.4   The Table Sharing Architecture in Detail
19.4   Software Maintenance
19.4.1   Maintenance Events
19.4.2   Blue-Green Deployment
19.5   Built-in Support
19.5.1   Support Journey without Built-in Support
19.5.2   Built-in Support Architecture
19.6   Summary
20   Sizing and Performance in the Cloud
20.1   Performance-Optimized Programming
20.1.1   Minimal Number of Network Round Trips and Transferred Data Volume
20.1.2   Content Delivery Networks
20.1.3   Buffers and Caches
20.1.4   Nonerratic Performance
20.2   Sizing
20.2.1   Example for Greenfield Sizing
20.2.2   Brownfield Sizing
20.3   Elasticity and Fair Resource Sharing
20.3.1   Dynamic Capacity Management
20.4   Sustainability
20.5   Summary
21   Cloud Security and Compliance
21.1   Network and Data Security Architecture
21.1.1   Access Levels
21.1.2   Resource and Data Separation
21.1.3   Resource Sharing
21.1.4   Data Security and Data Residency
21.1.5   Business Continuity and Disaster Recovery
21.2   Security Processes
21.2.1   Security Configuration Compliance Monitoring
21.2.2   Security Event Management and Incident Response
21.2.3   Infrastructure Vulnerability Scanning
21.2.4   Malware Management
21.2.5   Security Patch Management
21.2.6   User Management
21.2.7   Hacking Simulation
21.3   Certification and Compliance
21.3.1   SAP Operations
21.3.2   SAP Software Development
21.4   Summary
22   Outlook
The Authors
Editors
Authors
Index
Service Pages
Legal Notes

Thomas Saueressig, Tobias Stein, Jochen Boeder, Wolfram Kleis

SAP S/4HANA® Architecture

Imprint

This e-book is a publication many contributed to, specifically:

Editor   Will Jobst
Copyeditor   Melinda Rankin
Cover Design   Graham Geary
iStockphoto.com: 97970762/© Martin Barraud
Production E-Book   Hannah Lane
Typesetting E-Book   Satz-Pro (Germany)

We hope that you liked this e-book. Please share your feedback with us and read the Service Pages to find out how to contact us.

The Library of Congress has cataloged the printed edition as follows: 2020946677

ISBN 978-1-4932-2023-6 (print) ISBN 978-1-4932-2024-3 (e-book) ISBN 978-1-4932-2025-0 (print and e-book) © 2021 by Rheinwerk Publishing Inc., Boston (MA) 1st edition 2021

Dear Reader,

It's wise to start with the foundation. For a consultant curious about SAP S/4HANA, it's the architectural underpinnings of SAP S/4HANA. For us, the publisher, it's a table of contents.

It all begins with a book concept: what is this book about, and who's the audience? For SAP S/4HANA Architecture, the topic is self-explanatory: the go-to guide for those interested in understanding the architecture of SAP S/4HANA. For audience, we're aiming for technical and foundational consultants, architects, administrators, programmers, IT managers, and more. Then, the acquisitions editor develops a table of contents—a structure that defines the book—with the authors and takes it to the SAP PRESS Editorial Board for approval.

Once we've laid our foundation, it's time to build up. For you, that's working through each piece of core application architecture, developing a holistic understanding of SAP S/4HANA architecture. In other words, getting to know the whole by analyzing its parts. For SAP PRESS, that's the authors and editors working together to create a great manuscript, based on the building blocks defined in the table of contents.

What did you think about SAP S/4HANA Architecture? Your comments and suggestions are the most useful tools to help us make our books the best they can be. Please feel free to contact me and share any praise or criticism you may have.

Thank you for purchasing a book from SAP PRESS!

Will Jobst
Editor, SAP PRESS

[email protected]
www.sap-press.com

Rheinwerk Publishing • Boston, MA

Notes on Usage

This e-book is protected by copyright. By purchasing this e-book, you have agreed to accept and adhere to the copyrights. You are entitled to use this e-book for personal purposes. You may print and copy it, too, but also only for personal use. Sharing an electronic or printed copy with others, however, is not permitted, neither as a whole nor in parts. Of course, making them available on the Internet or in a company network is illegal as well.

For detailed and legally binding usage conditions, please refer to the section Legal Notes.

This e-book copy contains a digital watermark, a signature that indicates which person may use this copy:

Notes on the Screen Presentation

You are reading this e-book in a file format (EPUB or Mobi) that makes the book content adaptable to the display options of your reading device and to your personal needs. That's a great thing; but unfortunately not every device displays the content in the same way and the rendering of features such as pictures and tables or hyphenation can lead to difficulties. This e-book was optimized for the presentation on as many common reading devices as possible.

If you want to zoom in on a figure (especially in iBooks on the iPad), tap the respective figure once. By tapping once again, you return to the previous screen. You can find more recommendations on the customization of the screen layout on the Service Pages.


Foreword

Dear Reader,

By using business software applications, organizations around the globe can plan, control, and steer their cash resources, production capacities, and material supply, as well as track their sales and purchase processes. The enterprise resource planning (ERP) story is a success story for businesses. SAP is both a pioneer and a leader in providing enterprises with real-time transparency into their business processes.

Since the birth of business software applications, we have moved from the mainframe to client-server, mobile, and in-memory computing. The technology leap of the past few years has been leading businesses into the cloud, and SAP has been a trusted partner to its customers in delivering software that has always supported all the technology shifts of the past. With nearly 50 years of experience, we have gained deep insights into how businesses run. Together with our customers, we have defined best practices that have become standard procedures designed to help our customers succeed in the future. To date, SAP has more than 440,000 customers and 21,000 partners worldwide, and, according to a 2018 Oxford Economics SAP analysis, 77% of the world's transaction revenue touches an SAP software system. This means that the global and hyperconnected economy relies on ERP applications from SAP and their foundation: a stable technological architecture that still allows for business flexibility.

Technological evolution goes hand in hand with business transformation. With new technology, business models are changing at an ever-increasing pace, and business processes must be designed accordingly, which makes agility key for companies. At the same time, customer expectations are changing too, and we see an increasing demand for hyperpersonalized, intelligent products manufactured in a fair and sustainable way. This requires an ERP architecture to enable built-in agility, allowing companies to adapt to rapidly changing technological and market opportunities.

However, reality shows us that many companies have large and highly customized ERP systems today – covering very complex business processes that have evolved over the past few decades. That is why we often see scattered IT landscapes that cannot be moved easily to the cloud. Adding to the complexity is the challenge of delivering an integrated experience across ERP systems and line-of-business solutions through consistent data models and a seamless user experience. This reality demands a new architecture for modern ERP systems, and it requires companies to take a big step toward modernizing both their IT landscapes and the way they run their businesses to keep pace with the changing customer requirements and market conditions around them. It also requires a modern ERP system to be fully integrated.

Today, we have taken the best practices from the existing ERP architecture to provide companies with future-proof solutions. SAP S/4HANA is our next-generation, intelligent ERP system, using real-time data with SAP HANA, embedded analytics, and machine learning scenarios. It embraces artificial intelligence (AI) since the future of ERP is AI driven. It is leaner, faster, and more agile, helping enterprises predict and adapt to business disruptions and market changes. As a modern ERP system, SAP S/4HANA supports companies in scaling their businesses with the agility needed to remain resilient and competitive and to deliver the ultimate customer experience.

From a business process perspective, customers in today's hyperconnected business world are looking for solutions that cover end-to-end business processes, rather than specific capabilities within a solution. SAP solutions bring finance, procurement, engineering, manufacturing, logistics, sales, and services together in an orchestrated way – from the edge to the core, and from the shop floor to the top floor. Our vision for SAP S/4HANA is to provide a business-process-as-a-service ERP system with out-of-the-box integration to other cloud-native SAP solutions and open interfaces to third-party products.

From the perspective of society, SAP believes that tomorrow's ERP systems will enable companies to drive their businesses with efficiency and use resources wisely. Over the past few years, the public discourse about enterprise software has focused on optimization, simplification, and efficiency. In the future, ERP should help organizations focus even more on the responsible use of any resource, which is an integral part of its name and purpose. And today, this is more relevant than ever. SAP has introduced a solution providing transparency on greenhouse gases, and as we move forward, our goal is to anticipate the environmental footprint of any business decision and its impact on society. While one company alone cannot solve the world's environmental and societal problems, SAP can enable enterprises to drive end-to-end production and logistics processes, from sourcing materials to delivering goods in a more sustainable and fair manner—across organizations, industries, and the entire value chain.

Today, end-to-end business processes span entire value chains and will extend beyond company borders. In the future, enterprises will collaborate in business networks, enabling organizations to gain more transparency, increase agility, and react to changing environments faster. We all face huge challenges posed by climate change, and we at SAP are fully committed to making our contribution to a better future and a more sustainable world. I believe that our generation has a duty to find answers and solutions to climate change and to make those solutions work. And ERP plays a key role in scaling the impact of those solutions. We owe it to the generations who come after us.

To make all these aspects a reality, modern ERP systems must be fully integrated, agile, AI driven, and, above all, sustainable.

Thomas Saueressig
Member of the Executive Board of SAP SE
SAP Product Engineering

SAP S/4HANA: A New ERP Platform

Written natively for the SAP HANA platform, SAP S/4HANA is an entirely new generation of SAP Business Suite that is characterized by simplifications, massively increased efficiency, and compelling features such as planning and simulation options in many conventional transactions.
(SAP press release from March 2015)

What makes a software application successful? After its start as a five-person company in 1972, SAP became an international producer of enterprise resource planning (ERP) software with SAP R/2. With its successor, SAP R/3 software, later evolving into the SAP ERP application, SAP became one of the largest vendors for ERP and enterprise software worldwide. In 2015, SAP announced the new SAP S/4HANA suite.

Each of these ERP software solutions – SAP R/2, SAP R/3 and SAP ERP, and SAP S/4HANA – is built with a different software architecture. Maybe this is no surprise considering that about 20 years lie between their respective starts of development. Computer science and software development practices changed significantly during that time. The interesting fact is that the software architecture of each of the solutions has been one of its important success factors. You may think that ERP software is all about features and functions, but this is only half the truth. Here are just two examples:

SAP R/2 was built for mainframe computers. In the early 1970s, this hardware was so expensive that the newly founded SAP company could not afford to buy a mainframe. Instead, the young software engineers developed their ERP software directly on the hardware of their customers. Being on-site helped them understand the needs and business requirements of the dedicated customers. Thus, the mainframe architecture led to the software getting the right features.

The architecture of SAP R/3 is different: it leverages the client-server pattern with relational databases and implements a uniform graphical interface. Its scalable architecture made SAP R/3 attractive for companies of all sizes. SAP R/3 included a development environment for the ABAP programming language. In addition, all business application program code was delivered together with the system. Thus, companies could extend the included functionality and add special feature variants.

SAP S/4HANA, in turn, has yet another software architecture. While reading this book about the architecture of SAP S/4HANA, you will learn about many new concepts that are revolutionary compared to the former architecture of SAP ERP, such as the new data models or the combination of transactional and analytical processing. These innovations and concepts may be technically interesting and exciting, but as the examples of SAP R/2 and SAP R/3 have shown, they are only relevant if they really add business value. We have all seen revolutionary concepts and ideas that caught our attention and interest but never became enterprise-grade technologies.

For the concepts and technologies introduced in this book, things are different. By serving the largest companies across the globe, SAP has collected significant experience in building enterprise business software and knows how to turn innovative ideas into business value. And most important, SAP developed these innovations with a clear purpose: SAP S/4HANA should become the next-generation ERP platform for enterprises to run their companies for the next decade or two.

About This Book

In the famous book Software Architecture in Practice, Len Bass, Paul Clements, and Rick Kazman define software architecture as "the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them" (Bass et al., Software Architecture in Practice, 2nd Edition, Pearson Education, 2006). Within this book, we describe the software architecture of SAP S/4HANA and focus on the structure, software elements, and relationships that make its business functionality work. We will explain, for example, the interaction of different software parts to enable financial planning or the way different software components are integrated when processing a quotation, which becomes a sales order, which then leads to production and delivery.

The book is split into three parts:

Part I introduces the architecture challenges of modern ERP software and describes the technical foundation of SAP S/4HANA. You will get to know the virtual data model and how it is leveraged by analytics, search, user interface, and extensibility architecture.

Part II explains the business architecture of the core applications of SAP S/4HANA, most prominently, sales, service, procurement, production, warehouse management, and financial accounting.

Part III focuses on SAP S/4HANA Cloud. It describes the specific architecture concepts designed to achieve the expected cloud qualities and outlines several aspects of operating SAP S/4HANA Cloud.

We certainly cannot cover each functionality in depth, nor all the details of these applications that embrace 237 million lines of code in the latest release of the on-premise version of SAP S/4HANA. Nevertheless, you will learn a lot about the core business processes of today's companies and how the software architecture of SAP S/4HANA addresses them.

This book is written for everyone interested in the conceptual architecture of SAP S/4HANA:

Chief information officers (CIOs) and IT managers who already run SAP S/4HANA and have always wanted to know what it does and how it works. This book is not a user manual explaining how to work with a specific screen, but it introduces the basic functions of SAP S/4HANA and how they are executed in the system.

Consultants who are working on SAP software implementation projects and need to get insight into how SAP S/4HANA works. This book does not tell you how to set relevant customizing settings to adapt business processes, but it does explain the structure of SAP S/4HANA in principle and how the different components work together to execute business processes.

Students and scientists who want to know how the complex world of current business processes is mapped to and executed by software systems. This may be interesting for students of computer science and for students of economics.

This book includes a large number of architecture diagrams visualizing structures, components, and interfaces. All architecture diagrams show models following the Technical Architecture Modeling notation, which is SAP’s Unified Modeling Language (UML)-compliant standard for visualizing software architecture. For more information on the Technical Architecture Modeling notation, visit www.fmc-modeling.org/fmc-and-tam.

Acknowledgments

The documentation of software architecture has a long tradition at SAP. An impressive number of internal-architecture concept documents and technical reports about products and technology from SAP were written by skilled writers with deep technical and architectural backgrounds. The numerous authors of this book follow their tradition.

We thank all authors for their contributions: Dr. Erich Ackermann, Akhil Agarwal, Martin Alt, Tom Anderson, Mathias Beck, Markus Berg, Christoph Birkenhauer, Renzo Colle, Dr. Andreas Daum, Bastian Distler, Dr. Georg Dopf, Dr. Andrey Engelko, Kolja Evering, Dr. Harald Evers, Holger Faber, Chakram Govindarajan, Dr. Benjamin Heilbrunn, Torsten Heise, Sandra Herden, Michael Herrmann, Gabi Hoffmann, Thomas Hoffmann, Rudolf Hois, Christian Hoppe, Jan Hrastnik, Dr. Dietmar Kaiser, Marlene Katzschner, Andreas Kemmler, Dr. Joachim Kenntner, Dr. Thomas Kern, Markus Klein, Ralf Kuehner, Pradeep Kumar, Volker Lehnert, Dr. Roland Lucius, Dr. Knut Manske, Hristo Matev, Dr. Petra Meyer, Divya Chandrika Mohan, Dr. Klaus G. Mueller, Aalbert de Niet, Ingrid Nikkels, Birgit Oettinger, Till Oppert, Dr. Bernd Roedel, Dr. Siar Sarferaz, Dr. Carsten Scheuch, Wolfram Schick, Josef Schmidt, Dr. Erich Schulzke, Vitor Eduardo Seifert Bazzo, Akshay Sinha, Dr. Uwe Sodan, Weijia Sun, Philipp Stotz, Jochen Thierer, Detlef Thoms, Kumar Vikas, Dr. Martin von der Emde, Helena Vossen, Qiang Wang, Klaus Weiss, and Felix Wente.

We owe a debt of gratitude to the many experts and reviewers who helped shape this book: Sarma Adithe, Tobias Adler, Nicole Baranov, Robin Bau, Michael Conrad, Ralf Dinkel, Marei Dornick, Bernhard Drittler, Stefan Elfner, Bernd Ernesti, Holger Faber, Thomas Fleckenstein, Ludek Frey, Heike Funk, Matthias Gruenewald, Andreas Gruenhagen, Gerhard Hafner, Jan-Klaas Heinsohn, Sandra Herden, Gabriele Hoffmann, Marc Hoffmann, Michael Hohendorf, Helmut Holthoff, Xiang Hu, Amit Hundekari, Dirk Joachim, Thomas Juergensen, Torsten Kamenz, Markus Klein, Christian Klensch, Arndt Koester, Andreas Krause, Carsten Kreuels, Christiane Kubach, Sushil Kumar, Hongliang Li, Lawrence Liang, Eric Liu, Hendrik Lock, Michel Loehden, Sandro Lovisa, Victor Ma, Erich Maier, Henrik Malecha, Dr. Martin Mayer, Roman Mayer, Arno Meyer, Helge Meyer, Klaus Meyer, Joerg Michaelis, Klaus Mueller, Ralf Mueller, Khaled Musilhy, Stephane Neufcourt, Jens-Christoph Nolte, Praveen Kumar P, Peter Paschert, Praharshana Perera, Carsten Pluder, Aruna A R, Christof Rausse, Michael Redmann, Stephan Rehmann, Thorsten Refior, Steffen Riemann, Alexander Roebel, Jan Roenner, Florian Roll, Janet Dorothy Salmon, Frank Samuel, Ulrich Schlueter, Michael Schmidt, Jannis Schoenwald, Heidi Schultz, Swetta Singh, Christian Stadler, Renee Steinbrecher, Pierre-Francois Tchakedjian, Ralf Vogt, Boris Wagner, Eva Wang, Shiguo Wang, Joachim Welte, Daniel Welzbacher, Thomas Werth, Volker Wiechers, Markus Wolf, and Lily Xiao.

Without the support of others as well, this book would not have been possible. These include our project steering team—Rudi Hois, Ralf Kau, Stefan Batzdorf, Jörn Keller, and Mira Wohlhueter; our in-house editors—Amy Funderburk, Christopher Hellmann, Roseann McGrath Brooks, and Sina Moser; and our friendly counterparts from Rheinwerk Publishing—Hareem Shafi and Will Jobst.

Thomas Saueressig, Tobias Stein, Jochen Boeder, and Wolfram Kleis
Walldorf, October 3, 2020

Part I Foundation

1 Architecture Challenges of a Modern ERP Solution

The software architecture of a modern ERP solution must ensure a dedicated set of qualities that enable enterprises to run their business processes in an efficient and flexible way. This chapter introduces the challenges and principles that shape the architecture of SAP S/4HANA.

In general, software architecture and technology innovations serve to provide business value and to best fulfill customer requirements. In the digital era, such requirements have changed dramatically. However, to gain an understanding of the architecture challenges of modern enterprise resource planning (ERP) solutions, it is helpful to start by looking back at what made SAP's ERP software so successful. While addressing the needs of a constantly changing market, these qualities of the past are ones that an ERP solution needs to retain. And certainly, they must be advanced to cope with emerging challenges and enable the business processes of the future.

Clearly, globalized and localized business process coverage and industry variants are a main driver behind choosing ERP software from SAP. But there are architecture and technology drivers that are also at the foundation of SAP's success and that led to the implementation of its systems by thousands of companies. The following three are the most striking:

Performance, robustness, and scalability
There is no doubt that SAP's ERP software supports mission-critical core business processes with the highest availability and robustness. When SAP R/3 was introduced in 1992, its three-tier architecture (frontend, ABAP application server, and database) was revolutionary for business applications at that time. It allowed businesses to implement each tier on dedicated hardware that could be scaled independently to cope with the highest load and volume requirements.

Adaptability to custom business process needs
Although SAP's implemented best practice business processes served as templates for business process reengineering, almost all customers adapted their ERP implementations to their specific needs. The extensibility and flexibility mechanisms of SAP's ABAP-based system, allowing enterprises to adapt standard ERP processes, were unprecedented when they were introduced and have been the centerpiece of this endeavor. Many companies thus have used SAP's ERP software as a platform to develop their own solutions instead of consuming a software package out of the box.

A nondisruptive journey
Upgrade efforts are the major cost driver in IT projects. SAP customers rely on the system for their most business-critical processes. Therefore, stability during operation and update cycles is key. However, markets and businesses demand adaptations and innovations without disruption. SAP's ERP software introduced mechanisms to keep business disruption as low as possible very early on.

Now, the architecture of SAP S/4HANA has to carry these success criteria into the present. It must meet scalability expectations while controlling total cost of operations in the cloud. It has to balance adaptability with business process standardization, which is the prerequisite for short implementation projects. Finally, it must deliver innovation without business disruption.

1.1 Characteristics of a Modern ERP System

Current ERP implementations are often failing to deliver the speed, flexibility, and intelligence needed in the digital era. Companies need innovative digital approaches to achieve scalability and real organizational success. An integrated user experience (UX) with modern usability on any device plus extensibility and flexibility are key capabilities for digital-era ERP solutions. Experience-driven processes based on artificial intelligence (AI) are becoming more and more foundational. Plus, cloud-deployment options require standardization to provide the desired cloud qualities and to support a successful business model.

1.1.1 Even Higher Performance and Scalability

The foundation of a modern ERP system is future-proof architecture for constantly growing system loads as enterprises collect sentiment data, sensor data, spatial data, market feeds, and experience data in an unprecedented way. The digital era demands even more speed and adaptability. In ERP implementations of the past few decades, companies often ran into issues with multiple-batch run dependencies limiting scalability and overall system performance. By 2009, SAP’s cofounder, Dr. Hasso Plattner, had already started to rethink how business applications are built and what would be required from an underlying data platform to support them. This led him, together with his research team, to design the foundations of SAP’s breakthrough in-memory platform, SAP HANA. SAP HANA was designed to enable a new generation of business applications that combine transactions, analytics, machine learning, and more, without the historic complications of persisted aggregates, redundancy, inconsistency, or latency imposed by standard database technology. The elimination of aggregate tables and indexes and the implementation of insert only models are the foundation for significantly reduced processing time and highly increased parallel throughput and scalability in modern ERP applications. With the launch of SAP Business Suite powered by SAP HANA in 2014, SAP enabled customers of SAP Business Suite software to bring together transactional and analytical processing in a single, in-memory platform. This innovation was extremely successful in the market: several thousand enterprises implemented SAP Business Suite powered by SAP HANA in only the first two years of its existence to run their businesses in real time, making it one of the fastest-growing products in SAP’s history. SAP HANA represented a new database alternative for existing customers, with a database migration to get there. The product approach was primarily focused on porting the applications and optimizing the code to allow customers to gain significant performance in their missioncritical business processes and reporting activities. Larger parts of the business processes remained unchanged, and the applications were primarily optimized to run on SAP HANA in a nondisruptive way for customers.

With the introduction of SAP S/4HANA as a new, next-generation ERP offering, the success of SAP Business Suite powered by SAP HANA has been vastly extended with a completely new and redefined suite. SAP S/4HANA is being adopted by the majority of the Fortune 500 customers across more than 25 industries. The huge benefits of SAP HANA for modern enterprise applications have been laid out in detail in a book: The In-Memory Revolution: How SAP HANA Enables Business of the Future, by Dr. Hasso Plattner and Bernd Leukert.

Huge cost savings and productivity gains arise from the ability to run online transaction processing (OLTP) and online analytical processing (OLAP) applications on the same SAP HANA system without replication. Real-time business processes, planning, and simulation also increase accuracy because reporting and analytics based on replicated, potentially old data can result in poor decisions. In-memory computing allows us to avoid traditional batch programs due to the elimination of application-controlled materialized aggregates, which makes many innovative new business processes possible. The insert-only paradigm of SAP HANA allows for lock-free and scalable software with significantly increased throughput. The in-memory capabilities of SAP HANA enabled SAP to finally reimplement and simplify large parts of the application logic and meet scalability and functional requirements expected from a modern ERP.

Inventory management is an example of how the aforementioned issues with batch-process scalability were combated with SAP HANA and its capabilities. In SAP ERP, inventory management requires database updates on the aggregate table for quantity-in-stock information (table MARD). In the simplified SAP S/4HANA data model, these database updates can be removed due to SAP HANA's architecture. Intensive scalability tests have been conducted with inventory management to compare the performance and throughput of SAP Business Suite powered by SAP HANA and SAP S/4HANA. The results showed a significant improvement in throughput, which was raised up to a factor of 25 when a larger number of material movements were posted in parallel. This is a revolutionary innovation—especially for companies in the automotive industry, in which backflush processes have a high number of common parts in the material documents.

The column store in SAP HANA allows for data to be searched and analyzed without additional indexes. This has a major impact on the application architecture because it provides full-text searching for end users without replication of search-relevant data into an external instance, reducing the data volume in an SAP S/4HANA system significantly. SAP S/4HANA contains more than 200 enterprise search models, and customers can create their own search models for specific needs (see Chapter 3, Section 3.2).

SAP S/4HANA can scale up and scale out to support the largest ERP systems in the world. Many global enterprises are live with huge systems running on SAP HANA instances, with 20 to more than 90 TB of physical memory.
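To make the aggregate-free data model more tangible, the following ABAP CDS sketch shows how a quantity-in-stock figure can be exposed as a view that aggregates material document line items at read time instead of updating a persisted total such as table MARD on every posting. It is a minimal illustration only: the data source zmatdoc_item and its field names are hypothetical stand-ins, not the actual SAP S/4HANA objects (the real simplified model is built on table MATDOC and the released virtual data model views).

-- Illustrative CDS view: the stock total is computed by the SAP HANA column store
-- at query time, so no application-controlled aggregate table has to be locked
-- and updated when a material document is posted.
@AbapCatalog.sqlViewName: 'ZISTOCKQTY'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Stock quantity aggregated on the fly (illustrative)'
define view Z_I_StockQuantity
  as select from zmatdoc_item  -- hypothetical table of material document line items
{
  key material,
  key plant,
  key storagelocation,
      -- Goods receipts carry positive, goods issues negative quantities in this sketch.
      sum(quantityinbaseunit) as QuantityInStock,
      unitofmeasure
}
group by
  material,
  plant,
  storagelocation,
  unitofmeasure

The sketch only illustrates the principle described above: totals are derived from line items at query time rather than maintained redundantly, which is what allows the quantity-in-stock updates to be dropped from the posting transaction.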

1.1.2 Consumer-Grade User Experience

The digital era has changed how users approach enterprise applications. Members of generation Y and Z especially expect enterprise applications to be visually attractive, accessible from any device at any time, simple to use, responsive, and able to be

personalized easily. Applications that do not provide the same level of comfort these users know from consumer-grade applications or websites are often not accepted and used less. Thus, UX is about meeting a user’s needs in the most effective and enjoyable way. Ideally, ERP systems should be intuitive and easy to understand without extensive training. This requires redesigning toward an intuitive UX, understandable terminology, and context-sensitive user assistance. The SAP Fiori user experience (UX) that is applied in SAP S/4HANA (see Chapter 3, Section 3.1), uses the SAP Fiori launchpad as an entry point to overview pages, work lists, and list reports. SAP Fiori UX provides a modern, web-based, role-based design on multiple devices, with a consistent experience across lines of business. Moreover, an additional driver to using SAP Fiori is improved productivity through simplification. Visual attractiveness is the one goal, but well-designed user interface (UI) applications have many more benefits: they require less time to learn; they reduce errors made by users; they prevent users from looking for workarounds outside of the application, which can lead to replicated, inconsistent data; and through all those benefits, they can significantly improve overall user productivity. In combination with the aforementioned features of SAP HANA, SAP Fiori brings much more responsiveness and flexibility to business users with real-time and contextual information that can be accessed on multiple devices. Moreover, SAP Fiori incorporates UI extensibility options for key users that can be handled without involving the IT department (see Chapter 5, Section 5.1). The SAP Fiori architecture fosters a stateless application paradigm with reduced lock intervals and resilience to failure. This positively impacts efficiency and scalability in IT operations, too. 1.1.3

1.1.3   Extensible Architecture

It is crucial that a modern ERP solution offers the flexibility to integrate with and configure the capabilities that companies need today and in the future. This requires that the ERP system is built on an extensible architecture, which is achieved with flexible integration capabilities based on open standards on the one hand and with powerful in-app extensibility mechanisms on the other. The system architecture needs to separate standard coding, custom extensions, and content to ensure that upgrades can happen without breaking extensions and configuration.

Of course, ERP products have to offer standardized implementation packages. But at the same time, they have to offer options to tailor the product to the specific needs of an organization without breaking upgradability and long-term maintainability. Especially in the cloud, software architectures are distributed and service-oriented. Although extensions traditionally were often meant to modify the standard system in the context of the ERP application, a modern ERP system embraces more and more extension capabilities through the integration of new cloud services. This strategy is called side-by-side extensibility (see Chapter 5, Section 5.2). With this strategy, the system architecture transitions from less flexible, traditional, monolithic concepts that are developed on a single stack to federated services that allow for better service-level agreements (SLAs) regarding scalability, availability, and resilience, and that guarantee a maximum of adaptability while ensuring update stability through decoupling via stable APIs.

Customization of the UI and business scenarios with field and process extensions, adaptation or extension of business workflows, and open, flexible integration with third-party products through standardized APIs are key requirements for both in-app extensibility and side-by-side extensibility and must be reflected in the software architecture.

1.1.4   Intelligent ERP Processes

The success of SAP ERP is based on standardized business processes that are performed routinely by business users—for example, invoice matching. The corresponding data is later analyzed to support strategic decisions. These processes involve a large number of routine steps for SAP ERP users, often leading to lower efficiency and satisfaction. In the digital era, there is an ongoing need to reduce operational cost through increased organizational effectiveness. Embedding intelligent algorithms into SAP ERP can help free users from repetitive tasks and empower them to perform nonroutine, analytical, and creative work focused on customer service and satisfaction.

Organizations encounter constantly growing amounts of unstructured data that could be an asset; on the other hand, many companies lack a clear approach for transforming this data into meaningful information, decisions, and actions. Technically, vastly improved processing power, better algorithms, and the availability of big data facilitate the use of machine learning to infuse intelligence into back-office processes and provide what SAP calls the intelligent ERP (see Chapter 4). An additional opportunity is to incorporate more and more experience management data, such as user feedback and sentiment data, into intelligent processes.

To address this, SAP S/4HANA as today's intelligent ERP system embraces technology that allows companies to find patterns among the different sources of data and make the right connections between them. SAP S/4HANA has implemented various use cases based on SAP's machine learning and experience management capabilities—for example, tax compliance or quantity contract consumption.

1.1.5   Simplified and Standardized Implementation

We have to acknowledge that companies have typically created a large amount of custom code, enhancing and partly replacing or even modifying the SAP software implementation. In addition, they have customized their systems intensively. Together this leads to a scattered IT landscape with an SAP ERP system that is difficult to upgrade to the next release. Thus, huge adaptations to the standard software have locked in enterprises and hinder them from taking the next step toward digital transformation.

Now, in the digital era, we see another round of standardization, and though enterprises try to avoid individualization wherever possible, they still demand a high level of flexibility. This includes the reduction of complexity through standardization and simplification of business processes. ERP applications of the digital era require the delivery of preconfiguration packages that enable rapid implementations. For SAP, this requires a deep knowledge of the industries for which the business configuration content packages are built. Especially in the public cloud, the ERP configuration must largely fit the needs of an organization out of the box. This leads to the need for a modern ERP to provide self-service implementation tools that help to significantly lower the costs of the implementation project. Especially necessary are tools that guide users step by step through the implementation, as well as standardized data migration tools. In cases in which an organization's own practices are more beneficial, the aforementioned extensibility options can be used.

SAP S/4HANA provides best practices business configuration content for both on-premise and cloud offerings; this content is tailored specifically to simplify adoption and to accelerate the implementation with faster time to value (see Chapter 16). With this, the solution is architected for easy adoption, guided configuration, and easy onboarding, from the discovery of the solution through cloud trials to the deployment with preconfigured best practice content.

1.1.6   Cloud and On-Premise Deployment Models

SAP S/4HANA is available in an on-premise version and as a software-as-a-service offering named SAP S/4HANA Cloud. In this section we discuss the specific requirements of a cloud deployment model and why it is important to offer both options. The ambition to standardize business processes fits perfectly with public cloud deployment models, which bring a huge variety of benefits for ERP usage, including the following aspects:

- Upgradability with minimal disruption to adopt innovations frequently: Standardization in the public cloud and reduced complexity make it much easier to apply software updates frequently in an efficient way without impacting consumers. In the digital era, ERP applications need to keep up with changed technology and market requirements. It is crucial that modern ERP applications are designed to support these frequent nondisruptive update processes efficiently to shorten innovation cycles. This is also important for software providers because customer feedback cycles have significantly shortened, allowing for more agile development processes and more market relevance.
- Fast return on investment and low total cost of ownership: Moving to SAP S/4HANA is beneficial in general from a cost-efficiency perspective because its simplified in-memory architecture brings together analytical and transactional capabilities in one instance. But cloud deployments allow for even further cost reductions and improved return on investment (ROI)—for example, by faster implementation cycles for simplified services.
- Improved operational efficiency: Software providers also benefit from software as a service (SaaS) deployment options, mainly through improved operational efficiency that is achieved through software lifecycle automation, elastic scalability, more resilient system behavior, and resource sharing options. All this leads to lower costs, resulting in more attractive subscription-based pricing models with better SLAs for consumers.
- Choice of deployment models: On-premise deployments will of course remain, and modern ERP solutions in the digital era have to address the fact that an organization's solution landscape will in most cases be hybrid, comprising public cloud SaaS, private cloud, and on-premise deployments. Modern ERP solutions must be engineered to provide a choice of deployment (on-premise, on a hyperscale provider, in the public or private cloud, or hybrid) out of the box. The choice of deployment model is even necessary to help ensure that today's SAP ERP customers can move to SAP S/4HANA Cloud. Multinational companies in particular are adopting cloud services in several subsequent waves, which automatically leads to hybrid deployment landscapes. Adopting selected innovative SaaS ERP services in a hybrid deployment together with an on-premise ERP system can also help to significantly speed up the time to value.

Of course, these benefits are seen only when integration scenarios are seamless and transition paths to cloud services are easily adoptable. In the following chapters, unless a distinction is made, we use "SAP S/4HANA" to refer to both the on-premise version of SAP S/4HANA and SAP S/4HANA Cloud.

1.1.7   Security, Privacy, Compliance, and Data Isolation

In the past 20 years, the internet has reinvented how organizations conduct business. In times past, an ERP system was typically not exposed to the internet—but such exposure is a prerequisite for today's modern business models. SAP ERP is a secure system, but it is obvious that new security and privacy challenges have arisen in this digital era, in which applications support open integration standards and build on web technologies. These challenges also demand enhanced or new technical approaches to secure an ERP solution. SAP S/4HANA reflects those demands—for example, with a secure-by-default approach for the system, helping ensure a system setup with appropriate system security settings when installing, copying, or converting to SAP S/4HANA.

In addition, privacy and data protection have become more and more important and are associated with numerous legal requirements and privacy concerns. SAP S/4HANA explicitly provides features to support compliance with the relevant legal requirements and data protection—for example, consent management, read access logging, information retrieval, deletion of personal data, and change data logging of personal data (see Chapter 7). Additional details can be found in the official Security Guide for SAP S/4HANA on help.sap.com.

Those security and privacy aspects are relevant for ERP software independent of its deployment (on-premise or in the cloud). Because administrative access is an on-premise concept and is no longer possible for cloud ERP consumers, SAP S/4HANA Cloud offers additional dedicated services to provide, for example, tenant decommissioning while ensuring the highest level of customer data isolation and segregation of duties.

1.2   SAP S/4HANA Architecture Principles

SAP S/4HANA is more than an ERP solution. It is the evolution of SAP ERP into a simplified system of engagement. It leverages the in-memory capabilities of SAP HANA, such as embedded analytics, simulation, prediction, decision support, artificial intelligence, and machine learning, in conjunction with mobile, social, big data, and choice of deployment, to deliver applications with a new user experience for people directly in the context of their daily work.

In the previous section, we described the architecture challenges of a modern ERP system in the digital era. Next, we lay out how SAP S/4HANA has transformed its technology and application architecture to cope with these new requirements while retaining the capabilities that made SAP ERP successful.

1.2.1   Stable but Flexible Digital Core

Much has been written about global business in the digital era and the increased pace involved with it. To cope with this speed, it is crucial to provide enterprises with a digital core that is not only stable during software updates, providing a solid foundation, but also flexible enough to allow for adaptations to frequently changing requirements. SAP S/4HANA follows the ambition to make software updates nonevents. A clean and stable core is established that allows faster software deployment and easier adoption of both SAP software innovations and regulatory changes to software (see Chapter 19, Section 19.4). This reduces upgrade project costs, which also means a reduction in total cost of ownership (TCO).

Yet this does not mean jeopardizing flexibility. SAP S/4HANA aims at allowing a level of extensibility and flexibility similar to SAP ERP, but with largely increased stability contracts to best support upgradeability. The new concepts of in-app extensibility and side-by-side extensibility (see Chapter 5) address this important need with the separation-of-concerns paradigm. A large set of publicly released APIs and objects with guaranteed stability contracts is the basis for these flexibility mechanisms. Building on top of these APIs and objects gives enterprises the guarantee that their digital core, the backbone of their business, will remain stable and clean.

1.2.2   Simplification with the Principle of One

With the move of SAP ERP and SAP Business Suite to private and public cloud deployment models, it has been of high importance to standardize the solution to ease implementation and to further reduce both application management and infrastructure costs. One of the core architecture principles of SAP S/4HANA is the principle of one, which describes the deprecation of redundant frameworks, data models, UIs, and other elements from SAP ERP. In this way, the principle of one achieves simplification and reduced complexity.

In SAP ERP, overlapping functionalities still exist for similar processes or business results, which makes implementation projects more difficult and creates confusion for users. An important step to address this is a significant simplification of the solution offerings through elimination of nonrequired components in the simplified deployment. This is achieved by means of quarantine and deprecation. To this end, the solution was separated into the following elements:

- Mandatory parts that are always available in both on-premise and cloud deployments. These parts are developed in a common code line with a common data model to guarantee sustained compatibility across all deployment options.
- Quarantined parts that are not needed or wanted in the cloud deployment. These parts can be removed from the cloud version.
- Deprecated parts to be removed in all deployment options.

With SAP S/4HANA Cloud, the number of tables, data elements, and transactions has been drastically reduced by eliminating redundant functionalities.

SAP HANA is the perfect data management system for business applications. In particular, ERP applications that require both transactional and analytical processing in the same system benefit tremendously from SAP HANA as a unified data management system because it removes complexity in data processing and system landscapes. The in-memory capabilities of SAP HANA enabled SAP to simplify and optimize major parts of the application logic and are thus fundamental to achieving the performance, scalability, and functionality that companies require from an ERP in the digital era. Persisted aggregates in particular are no longer required due to the in-memory layout and can be removed, leading to a simplified application data model with a significantly reduced number of tables and indexes. SAP HANA-based system landscape simplification in SAP S/4HANA reduces IT costs by centralizing hardware and network resources.

In the SAP S/4HANA Finance solution, SAP HANA made it possible to render all totals record tables obsolete, reducing the application footprint significantly. To reduce redundancy, many line item tables in core finance have been consolidated into the Universal Journal as the single source of truth for financial accounting and management accounting (see Chapter 14). There are no longer any database locks used when data is posted. This allows for scaling the throughput efficiently with the number of application servers used. Analytics and transactional processing run in parallel in the same system on the Universal Journal, and every column works as an index. With this, complete components with their respective tables and code became obsolete, such as the special purpose ledger (FI-SL) and profit center accounting (EC-PCA).

1.2.3   Open for Innovations through Service Orientation

In the digital era, technology is evolving rapidly; enterprises must stay ahead of the curve and look for innovations that are ready for the future. Thus, the openness of a solution and open standards are key. This applies to several aspects of the solution architecture: integration and APIs, extensibility, and analytics. SAP customers, SAP partners, and SAP itself as a software provider benefit from all of this.

The foundation of an open architecture and service orientation is a common data layer that makes the application architecture model-driven and enables the capabilities mentioned above, especially extensibility. With the introduction of SAP HANA, a pervasive Virtual Data Model (VDM) became a central building block in SAP S/4HANA to serve transactional, analytical, and other purposes at the same time (see Chapter 2, Section 2.1). With such a central concept, redundancy is avoided, and efficiency and consistency are ensured across the layers building on top of the VDM: analytical queries, UI services, integration services and APIs, search models, and data extraction. The VDM is SQL-based and builds on top of SAP HANA's in-memory concepts, ensuring the optimal usage of SAP HANA's capabilities throughout applications. It exposes all business-relevant data of a modern ERP system in a semantically rich and harmonized way, including appropriate authorizations and extensibility options. It ensures semantically and structurally correct cross-references between business processes and analytical scenarios. The VDM in SAP S/4HANA today offers more than 30,000 entities. It comes with a set of specialized tools that enables SAP customers and partners to extend or create their own models.

In a modern ERP solution, the user interface and application integration are based on a service-oriented architecture, leveraging web services built on open standards. The stateless nature of RESTful web services allows for cloud-like scalability and ensures flexibility, but it also fosters important cloud qualities, like the availability and resilience of a solution. It guarantees decoupling and separation of concerns, allowing for delivering new features without interruption of business services and at different speeds of innovation for UX, analytics, integration scenarios, and transactional logic. The VDM in SAP S/4HANA is the stable foundation both for RESTful UI services for SAP Fiori apps and for integration APIs leveraging the Open Data Protocol (OData; see odata.org), an open web services standard that was originally initiated by Microsoft. All new scenarios take advantage of the API-first principle, ensuring a high degree of reusability and API quality. API first guarantees that APIs are protected by authentication and authorization and are designed to minimize bandwidth consumption and thus TCO. SAP API Business Hub (api.sap.com) on SAP Cloud Platform is the central repository for all SAP S/4HANA APIs for side-by-side extensibility, as well as for integration of additional digital processes that are built outside of SAP S/4HANA as extensions.

1.2.4   Modularization into (Hybrid) Integration Scenarios

Enterprises expect a modern cloud-based ERP system to cover end-to-end processes like source-to-pay or recruit-to-retire rather than individual capabilities. Seamless integration of modular services becomes the primary driver, ideally leading to service and product boundaries that are invisible to the user of the solution. Consequently, solutions become distributed and event-driven. From a technology perspective, a highly integrated but modular approach to providing an intelligent suite based on cloud-native technologies is required, allowing for delivering continuous innovation and services. To achieve this, a standards-based cloud platform is fundamental, one capable of flexibly integrating diverse micro and macro cloud services from different business applications. Container technology becomes instrumental to flexibly scale and orchestrate services across end-to-end processes. One logical data model across all services is the basis for smooth integration, implying a semantically tight coupling between those application areas. From a business process perspective, ERP in the cloud is becoming a process as a service and offers the end-to-end business processes that help companies run at their best.

This modularization of SAP S/4HANA is an important precondition to support hybrid scenarios, in which enterprises run parts of their SAP S/4HANA functionality in the cloud and other parts on premise. As mentioned earlier, hybrid deployments will be the integration reality for many ERP implementations for years to come, giving enterprises the flexibility to approach both digitalization and migration to the cloud with public cloud, private cloud, and hybrid approaches. The traditional restrictions in document flows and workflows are resolved across those application areas to make the individual transaction atomic within an application area. Workflows in the digital era must be built around the user's role and need to span end-to-end processes across solution parts. This allows for the rapid process changes that the digital era requires.

1.2.5   Cloud First, but Not Cloud Only

There are many differences between a SaaS solution designed for cloud operations and an on-premise application. Cloud architecture needs to be designed to benefit from shared resources, allowing for high availability and elastic scalability. Applications need to respond to demand levels, growing and shrinking as required, typically with scale-out models instead of the scale-up models mainly used on premise (see Chapter 20). Several on-premise concepts cannot be applied to cloud architectures, including virtual private network (VPN) access to backend systems, business downtimes with manual upgrade steps, and IT support desks. Operating without these concepts is only possible with a high degree of automation of cloud operations and web-based UIs, especially self-services for support and administrative tasks.

SAP S/4HANA Cloud is a pure SaaS solution, implementing all aspects of a modern distributed and service-oriented cloud architecture. All customer tenants are based on an identical software stack. With multitenancy clusters, large parts of the hardware resources, system data, and coding are shared across multiple tenants for scaling effects. The solution has fully dynamic tenant density, zero downtime for maintenance events, and full automation of lifecycle management events like provisioning, updates, and decommissioning (see Chapter 19).

To achieve the highest cloud qualities together with the aforementioned goals of a stable core, simplification, semantic compatibility, and hybrid integration scenarios, SAP has chosen to share application parts between the on-premise and cloud solutions. The VDM serves as a common integrator and ensures compatibility. New, innovative services are built for the cloud first but are made available in the on-premise product subsequently. First and foremost, new services are designed to minimize costs due to bandwidth, CPU, storage consumption, and I/O requests; to behave resiliently in terms of failure and latency; and to incorporate built-in security and encryption for data at rest and in transit.

1.2.6   Semantic Compatibility to Support Evolution with the Least Possible Disruption

As noted before, the VDM spans SAP S/4HANA Cloud and SAP S/4HANA on-premise. It exposes the SAP S/4HANA business semantics to external usage, analytics, search, and key user extensibility. The term virtual indicates that the VDM abstracts from the physical database tables, allowing for another important goal: smooth interoperability and evolution for existing SAP customers with the least possible disruption. The cloud services are composed of components fulfilling a tight compatibility contract—especially with the former data model of SAP Business Suite. With its homogeneous data models and integration points across on-premise and cloud versions, SAP S/4HANA supports the market needs of hybrid deployment models.

When an organization plans to move to a modern ERP solution like SAP S/4HANA, it will always consider new digital capabilities. The semantic compatibility between SAP ERP and SAP S/4HANA data models and between SAP S/4HANA on-premise and cloud deployments allows for transition paths that are feasible for enterprises in a reasonable timeframe. With this compatibility paradigm, many of SAP ERP's extension points remain available, which reduces the adoption effort for custom coding when moving to SAP S/4HANA.

1.3   Evolving a Cloud ERP System from the Best Possible Origins

We are not aware of a solution out there with broader proven business process coverage and industry depth than SAP ERP. SAP made the decision to evolve these business capabilities and make them available in a simplified and optimized way in SAP S/4HANA, helping ensure the goals of semantic compatibility, hybrid integration, and a stable core. In addition, many SAP customers aim at system landscape simplification by using deployment options that ideally unify a plethora of instances in a single deployment. Following the principle of one, earlier SAP Business Suite solutions like SAP ERP, SAP Customer Relationship Management, SAP Supplier Relationship Management, SAP Supply Chain Management, SAP Extended Warehouse Management, and SAP Transportation Management have been brought into a single solution instead of enforcing separate components with the need for different middleware-based integrations. This is made possible through the data model consistency that is fostered by the VDM in SAP S/4HANA on top of SAP HANA. Parts that are taken from SAP Business Suite into SAP S/4HANA implement the aforementioned cloud qualities, leaving only a very small portion of the cloud solution that has not been rewritten or adapted.

Simplifying a solution means abandoning redundancy and replacing larger parts with rebuilt parts that achieve the desired characteristics. This means that some functionality in SAP ERP has no direct replacement in SAP S/4HANA, causing disruption for customers using this legacy functionality when moving to SAP S/4HANA. Because SAP S/4HANA Cloud is a distributed, service-oriented SaaS solution, SAP has replaced larger parts of SAP ERP with dedicated SAP cloud solutions. For example, parts of SAP ERP Human Capital Management are replaced by SAP SuccessFactors solutions, and parts of SAP Supplier Relationship Management (SAP SRM) are replaced by SAP Ariba solutions. In SAP S/4HANA on-premise, this means that parts of the solution will become obsolete but are still available for compatibility within the so-called compatibility scope.

1.4   Summary

In this chapter we have discussed the software architecture challenges of a modern ERP solution. We described a dedicated set of qualities that enable enterprises to run their business processes in an efficient and flexible way. Based on these challenges, we introduced the principles that shape the architecture of SAP S/4HANA. In the next chapter, we turn to the technical architecture foundation on which SAP S/4HANA is built, starting with the virtual data model, which we have already mentioned in this chapter.

2   Technical Architecture Foundation

Source code and data type definitions contain no information about development or architecture decisions. The virtual data model and the ABAP RESTful application programming model break with this paradigm: they focus on the essential parts of a business application and make their semantics transparent.

Now it's time to get technical. We have mentioned the virtual data model (VDM) already a few times. In this chapter, we explain in detail what it is and how it is built. In particular, we explain how the core entity of transactional ERP applications—the business object—is modeled according to the VDM using CDS views. Then we show how the ABAP RESTful application programming model supports developers in implementing ERP applications using these business objects.

2.1   The Virtual Data Model

The virtual data model represents the semantic data model of the business applications in SAP S/4HANA. It aims at exposing all the business data in a way that eases its understanding and consumption. It is called a virtual model because it abstracts data from the database tables. This way, existing tables can be transformed into an aligned, uniform data model where necessary. Besides describing the business semantics of the applications, the VDM is also an executable model that provides access to the corresponding data at runtime.

The VDM is implemented using specifically classified Core Data Services (CDS) entities, which comply with the VDM rules (Section 2.1.1). Among other things, these VDM rules include fundamental naming rules (Section 2.1.2) and rules for structuring the VDM entities (Section 2.1.3). On the one hand, the rules foster the consistency of the models; on the other hand, they allow efficient development of applications and APIs that cover analytical consumption (see Chapter 4, Section 4.1), transactional processing (Section 2.2), and search-related consumption scenarios (see Chapter 3, Section 3.2). In addition to providing the data model for applications, VDM views are also used for other tasks, such as CDS-based extraction (see Chapter 6, Section 6.8.1).

The VDM is not intended to be used only for developing applications at SAP. Instead, released SAP VDM models and released SAP services built on them offer a stable interface with a well-defined lifecycle. As an SAP customer or partner, you can use them to build your own applications and to enhance SAP applications.

2.1.1   Core Data Services

Core Data Services (CDS) support the definition of semantically rich data models. These models are managed by the Data Dictionary of the ABAP platform and can be executed in the database system. CDS views represent the most important CDS entity type. They capture select statements in a syntax that is closely related to that of structured query language (SQL).

CDS views can be used, for example, as the data source in ABAP select statements. The results of querying a CDS view can be restricted by attaching access control models to the CDS view. This means that the query only returns the data that the current user is authorized to read. These access control models are defined using the CDS data control language (DCL).

CDS views support the definition of supplementary metadata by means of annotations. Furthermore, CDS views allow modeling associations, which represent directed relations to other CDS entities. Both the annotations and the associations are interpreted by the various consumers of the CDS models. In particular, the ABAP infrastructure uses the corresponding information for deriving additional functionality and services from the CDS models. For example, a CDS view annotated accordingly can be executed by the analytic engine (Section 2.1.4). The analytic engine provides advanced features such as exception aggregations and hierarchy handling, which are not modeled by the plain select statement of the CDS view itself.
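To make these building blocks concrete, the following sketch shows what such a CDS view could look like. The view name, label, and chosen elements are illustrative examples rather than SAP-delivered objects; only the underlying table VBAK and its fields come from the standard SAP ERP data model.

@AbapCatalog.sqlViewName: 'ZIEXSALESDOC'
@AccessControl.authorizationCheck: #CHECK
@EndUserText.label: 'Example: Sales Document'
define view Z_I_ExampleSalesDocument
  as select from vbak
  association [0..1] to I_Customer as _SoldToParty
    on $projection.SoldToParty = _SoldToParty.Customer
{
  key vbak.vbeln as SalesDocument,
      vbak.kunnr as SoldToParty,
      @Semantics.amount.currencyCode: 'TransactionCurrency'
      vbak.netwr as NetAmount,
      vbak.waerk as TransactionCurrency,

      // Association exposed so that consuming views can follow it
      _SoldToParty
}

An access control defined in DCL could then restrict the result, for example to the sales organizations the user is authorized for, without any change to the view itself.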

2.1.2   Naming Conventions

The VDM is based on a set of common naming rules. Applying the following rules ensures consistent and self-explanatory naming:

- A name is precise and uniquely identifies a subject. Generic names are avoided. For example, the identifier of the sales order document is named SalesOrder, not just ID or Order.
- A name captures the business semantics of a subject.
- Names being unique implies that different subjects must have different names. This is especially important for identifiers and codes. Using the same name for two fields implies that their underlying value lists match.
- Names are composed of English terms in camel case notation with an uppercase first letter. Underscores are used in predefined cases only. Abbreviations are avoided.

The naming rules are applied to all CDS entities and their parts—for example, names of fields, associations, parameters, or CDS views. Figure 2.1 illustrates an example of an SAP Fiori app that uses an OData service, which in turn is based on a VDM view stack. As you can see, the first VDM view (at the bottom) maps the technical field name MATNR from the database table onto the semantic name Product, which is then used on several layers up to the user-facing SAP Fiori UI.
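In CDS, this mapping is simply an alias in the select list of the lowest view. The following fragment is an illustrative sketch of such a basic view on the material master table MARA; the view name is hypothetical, while MATNR, MTART, and MEINS are the actual technical field names.

@AbapCatalog.sqlViewName: 'ZIEXPRODUCT'
@AccessControl.authorizationCheck: #CHECK
@EndUserText.label: 'Example: Product'
define view Z_I_ExampleProduct
  as select from mara
{
      // Technical names are mapped onto semantic VDM names
  key mara.matnr as Product,
      mara.mtart as ProductType,
      mara.meins as BaseUnit
}

All higher layers (composite views, consumption views, OData services, and finally the SAP Fiori UI) work only with the semantic name Product.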

Figure 2.1   Field Names in VDM

2.1.3   Structure of the Virtual Data Model

Within the VDM, CDS entities serve distinct purposes. They are classified accordingly by means of VDM annotations. Note that a CDS entity becomes a VDM entity if it uses VDM annotations and if it adheres to the VDM guidelines. VDM views are organized in a hierarchical structure following a layered approach. The upper layers select from and define associations to the same or lower layers, but not vice versa. The VDM views are assigned to layers with a special CDS annotation (@VDM.viewType). Figure 2.2 depicts the admissible dependencies between the views as data sources and the different types of views. Along with the dependencies, the typical prefixes of the names of the CDS views are visible here too. Consumption views have the name prefix C_, and remote API views have the prefix A_. Basic views and composite views form an interface layer that can be used for building applications. The views of this interface layer have the name prefix I_. As we will explain in the following sections, basic views and composite views can also be restricted reuse views, indicated by the name prefix R_.
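As a minimal illustration of this layering, a consumption view could be defined on top of the hypothetical interface view from the previous example; it selects only from the interface layer and never from a database table directly. All names are again illustrative.

@VDM.viewType: #CONSUMPTION
@AbapCatalog.sqlViewName: 'ZCEXPRODUCT'
@AccessControl.authorizationCheck: #CHECK
@EndUserText.label: 'Example: Product Overview'
define view Z_C_ExampleProductOverview
  as select from Z_I_ExampleProduct    // selects from the interface layer only
{
  key Product,
      ProductType
}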

Figure 2.2   View Layering: Select-from Relationships

Basic Views

The lowest level of the VDM view stack is defined by basic views. Basic views are the most important constituents of the VDM. They establish what you may call the entity relationship model of the applications, from which they get information about data structures, dependencies, and metadata. The main purpose of the basic views is to serve as reusable building blocks for any other nonbasic VDM views. Because the basic views expose all data, there is no need for any other nonbasic view to directly select from the database tables. In fact, accessing database tables from nonbasic VDM views is forbidden in the VDM. This way, basic views provide a complete abstraction from the database tables to higher layers.

Composite Views

Composite views comprise additional functionality on top of the basic views. Like the basic views, they are primarily intended to serve as reusable building blocks for other views. However, they can already be defined in such a way that they support a specific consumption domain. For example, they can define analytical cube views, which consolidate data sources for usage in multiple analytical queries.

Transactional Views

Transactional views are a special flavor of composite views. Transactional views define the data model of a business object and act as an anchor for defining its transactional processing-related aspects (Section 2.2.1). Transactional views may contain elements that support the transactional processing logic, such as additional fields that help preserve user input temporarily. Therefore, transactional views are only used in the context of transactional processing—for example, when consumed by other transactional views or by consumption views, which delegate their transactional processing logic to the transactional views.

Consumption Views

Regular basic views support any use case. This means they are defined independently of a specific use case. In contrast, consumption views are deliberately tailored for a given purpose. Consumption views are expected to be directly used in a specific consumption scenario. For example, a consumption view can provide exactly the data and metadata (through annotations) that is needed for a specific UI element.

Restricted Reuse Views

By default, basic and composite views define an interface layer (named with the I_ prefix), which any SAP application may use. Once released, they can also be used by SAP customers and partners. However, development teams sometimes define basic and composite views that are only for local use in their own applications. Such views are not meant to be reused by developers from other application areas. In such a case, a restricted reuse view (named with the R_ prefix) is defined. Examples of restricted reuse views are transactional processing-enabled views, which expose all functions and operations of a business object, including the internal ones.

Enterprise Search Views

Enterprise search uses special CDS views as search models. Enterprise search is described in Chapter 3, Section 3.2.

Remote API Views

Remote API views project the functionality of a single business object for external consumption. They decouple the regular, system-internal VDM model, which can evolve over time, from its external consumers, establishing a stable interface. Based on the remote API views, OData services are defined, which can be consumed by remote applications. The corresponding OData services are published on the SAP API Business Hub (https://api.sap.com).
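A remote API view is typically a thin projection of an interface view. Continuing the hypothetical product example, such a projection might look as follows; the stability contract then applies to the exposed element names rather than to the internal model.

@AbapCatalog.sqlViewName: 'ZAEXPRODUCT'
@AccessControl.authorizationCheck: #CHECK
@EndUserText.label: 'Example: Product API'
define view Z_A_ExampleProduct
  as select from Z_I_ExampleProduct    // decouples external consumers from the internal model
{
  key Product,
      ProductType,
      BaseUnit
}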

2.1.4   Consumption Scenarios

The most prominent consumption scenarios supported by CDS entities are as follows:

- Analytics
- Enterprise search
- SAP Fiori UI applications
- Remote APIs

Figure 2.3 shows a typical VDM view stack for analytical and transactional processing-enabled applications. Use case-specific views are defined on top of basic and composite reuse views to adapt the data model and functionality to the individual application needs.

Figure 2.3   View Hierarchies

Analytical Applications

Analytical applications are based on cube views and a network of associated dimension views, which themselves can associate text and hierarchy views. The actual analytical application is defined by an analytical query view, which projects the envisioned functionality from its underlying cube view. Note that unlike other CDS views, the analytical query views themselves are not executed on the database. Instead, the analytic engine interprets their logic and directly executes selections from the cube and dimension views. Analytical applications are covered in detail in Chapter 4, Section 4.1.

Transactional Processing-Enabled Applications

The data model is captured by transactional views, which are related through compositional associations to form an entire business object. The actual data selection logic is defined by the CDS view models. The behavior model specifying the supported create, read, update, delete (CRUD) operations is defined in an attached behavior definition model and implemented in ABAP classes. The consumption view projects the relevant functionality and augments the data model with annotations that provide the metadata for rendering the UI application built on the view. Associated interface views can be used to enrich the data model where appropriate. The recommended way to develop transactional applications is using the ABAP RESTful application programming model, which we discuss next in Section 2.2.
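Before we turn to the transactional case in Section 2.2, the following sketch illustrates the analytical scenario described above with a simple analytical query view. All names are hypothetical; the underlying cube view Z_I_ExampleSalesCube is assumed to exist and to be annotated with @Analytics.dataCategory: #CUBE.

@Analytics.query: true
@AccessControl.authorizationCheck: #CHECK
@EndUserText.label: 'Example: Sales Volume Query'
define view Z_C_ExampleSalesVolumeQuery
  as select from Z_I_ExampleSalesCube   // assumed cube view
{
  @AnalyticsDetails.query.axis: #ROWS
  SalesOrganization,
  @AnalyticsDetails.query.axis: #ROWS
  Product,

  // Measure; its aggregation behavior is defined in the cube view
  NetAmount
}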

2.2   ABAP RESTful Application Programming Model

The ABAP RESTful application programming model is the common foundation on which the programming model of SAP S/4HANA is based. The strategy of SAP S/4HANA is to apply this target architecture across all applications. The model fosters SAP's general architecture goal to clearly separate the database model and database layer, the application logic (business logic), and the user interface. The SAP Fiori technology approach technically ensures the separation of the user interface (and related UX) from the business logic, leveraging OData as the protocol of choice. The ABAP RESTful application programming model addresses the modeling and implementation of application logic (often called business logic), including aspects that are common for all applications as well as the service-specific application logic. In addition, the ABAP RESTful application programming model ensures that the complete application implementation is protocol-agnostic and can also be reused when switching to a different protocol or protocol version.

In Section 2.2.1, we describe the common application model and logic, which we also refer to as a business object and the business object implementation. The service-specific model and logic we often call a business service and the business service implementation. We explain this concept for OData, the main protocol used, in Section 2.2.2. In Section 2.2.3, we describe the overall runtime architecture of the programming model.

2.2.1   Defining and Developing Business Objects

A business object is defined by a data model and its behavior. Figure 2.4 shows the design-time artifacts for defining and implementing business objects with the ABAP RESTful application programming model. The individual concepts and artifacts are explained ahead.

Figure 2.4   Business Object Definition and Implementation

Data Model

The data model of a business object consists of entities, also known as business object nodes, which are arranged in a hierarchical compositional tree structure. The data model is defined using CDS entities that are based on basic views of the virtual data model, which we introduced in Section 2.1. In Figure 2.4, the CDS entities for defining business object nodes are called transactional views.

In CDS models, compositions are a special kind of association that represent a parent-child hierarchical relationship, in which the child is a part of the parent and cannot exist without it. When defining business objects, compositions are especially important because they are needed to construct the business object model from nodes. An example of a compositional structure is the sales order as a business object that is composed of the sales order header root entity, its sales order item children, and its sales order schedule line grandchildren. Usually we do not distinguish between the business object and its root entity when referring to them, but to avoid misunderstandings, sales order refers to all entities within the composition structure, whereas sales order header refers to the root entity only.

Other associations to related entities are also relevant and belong to the business object definition but are not part of the business object itself. Such associations can point to other business objects or business object entities—be it for read access, for transactional access, or for the purpose of value helps. In the example of the sales order, a transactional association within the sales order might be a specialization of the composition, for example to FreeGoodsSalesOrderItems. Examples of associations to other business objects include associations to the business partner on the sales order header level or to the product on the sales order item level. Usually, the association to the business partner is also a transactional association; during the sales order entry, a new business partner instance may be created as a customer, or the customer data can be updated—for example, with a new address. The association to the product usually is not transactional because during the sales order entry no products are created or changed.

CDS entities can be accessed at runtime with SQL queries. The read authorization for such a query access is defined and modeled in a declarative way with a related access control model defined in CDS data control language (DCL) for each CDS entity.

The data model also defines the semantics of the fields based on the data types used: Because the ABAP type system doesn't support certain types, like Boolean, DateTime, or the universally unique identifier (UUID), we use dedicated annotations on the field level to indicate that, for example, a character field with length 1 is a Boolean. When exposing these entities via OData, this annotation is evaluated because the OData type system is different and supports such corresponding types (i.e., Edm.Boolean, Edm.DateTimeOffset, and Edm.Guid). CDS annotations can add additional semantics on top of the pure data type definition. Such annotations can, for example, define that a field with type DateTime contains the time when the data was created or last changed, or that another field of type Date contains the "from" date on which some business data becomes valid or the "to" date until which business data is valid.
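A minimal sketch of such a compositional data model is shown below. It assumes two custom database tables (zsalesorder and zsalesorderitem) and uses hypothetical entity and field names; it is meant only to illustrate the root entity, the composition, the association to the parent, and semantic annotations.

define root view entity Z_R_ExampleSalesOrderTP
  as select from zsalesorder                        // assumed custom header table
  composition [0..*] of Z_R_ExampleSalesOrderItemTP as _Item
{
  key sales_order   as SalesOrder,
      sold_to_party as SoldToParty,
      @Semantics.amount.currencyCode: 'TransactionCurrency'
      total_amount  as TotalAmount,
      currency      as TransactionCurrency,

      _Item   // composition: items cannot exist without this header
}

define view entity Z_R_ExampleSalesOrderItemTP
  as select from zsalesorderitem                    // assumed custom item table
  association to parent Z_R_ExampleSalesOrderTP as _SalesOrder
    on $projection.SalesOrder = _SalesOrder.SalesOrder
{
  key sales_order      as SalesOrder,
  key sales_order_item as SalesOrderItem,
      product          as Product,
      @Semantics.quantity.unitOfMeasure: 'OrderQuantityUnit'
      order_quantity   as OrderQuantity,
      quantity_unit    as OrderQuantityUnit,

      _SalesOrder   // association back to the parent entity
}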

Developers define value helps or references to texts for IDs or codes as part of the data model using CDS annotations too.

Behavior Model and Implementation

The behavior of a business object is modeled with an artifact called a behavior definition. The behavior definition is based on and complements the underlying data model. Technically, it's assigned to the root view entity that represents the entire business object, and it inherits its name from the root view entity. The behavior definition defines the transactional behavior of the business object by declaring the available operations for each entity of the business object. This could include common operations such as create, update, and delete, or application-specific actions and functions. In addition, the behavior definition specifies runtime behavior with regard to locking, entity tag (ETag) handling in the case of HTTP-based consumption (typically through OData), authorization, feature control for the provided operations, support of draft handling, and much more.

Locking, ETag definition, and authorization within a business object usually are not specific to every entity but apply to the complete business object or a compositional subtree. To support this modeling for locking, ETags, and authorization, entities can be defined as MASTER or DEPENDENT. Thus locking, ETags, and authorization are invoked on the MASTER entity even if a DEPENDENT entity is accessed. For example, the sales order header is the LOCK MASTER, and all other entities are LOCK DEPENDENT. If a sales order item in a sales order is changed, the lock is set on the sales order header.

Feature control refers to the availability of operations (create, update, delete, actions, functions) or the changeability of fields. We distinguish between static and dynamic feature control, and the dynamic feature control is again split into global and instance feature control:

- For operations, static means that the operation is or is not available for consumers. For example, if creation of an instance is never allowed directly but only through a factory action, the create operation might not be defined at all or might be marked as internal.
- For fields, static means, for example, that a field is read-only and thus cannot be changed by a consumer, but only within the business object implementation.
- Global means that the operation is controlled independently of the instance. This is the only option for noninstance operations like create on the root entity level. Global can be applied to instance operations or fields, too—for example, when an action is deactivated based on certain configuration settings.
- Instance means that the operation or the field is controlled based on the concrete instance. For example, if a sales order is already released, the release action for that sales order is no longer enabled. At the same time, the delivery address cannot be changed further and is set to read-only dynamically for this sales order.

Some of the information specified in the behavior definition is relevant for the consumer of the business object. It defines, for example, the interface of the business object that can be used from ABAP code with the entity manipulation language (EML). EML is integrated into the ABAP language and allows reading and modifying business objects based on the operations defined in the behavior definition.

Other aspects of the behavior definition are just implementation details. The implementation type in particular is an implementation detail that defines how the business object provider is implemented. To ensure that the ABAP RESTful application programming model can be used across all applications in SAP S/4HANA, it supports three different implementation types:

- Managed implementation
- BOPF-managed implementation
- Unmanaged implementation

Now we'll explain each type.

Managed Implementation Type

The managed implementation type is chosen for greenfield development when new applications are implemented or existing applications are refactored. Here, the infrastructure of the ABAP RESTful application programming model provides the main part of the provider implementation. This includes a full CRUD implementation with a transactional buffer, as well as read and write access to the application database tables. The transactional buffer holds the transactional changes that are processed during the transaction. When the transaction is finalized and the data is saved, these changes are taken and written to the database. In a managed implementation, the application only deals with its own application logic. The ABAP RESTful application programming model runtime takes care of buffer handling, locking, and so on.

The application logic is defined using determinations and validations that are declared in the behavior definition and can be implemented with application-specific ABAP code:

- A determination is application logic that changes data, triggered by other changes. For example, the sales order item amount is calculated based on changes to the order quantity or the unit price. A determination might run immediately on modification or only later on saving.
- A validation is application logic that checks the consistency of the data in the transactional buffer to ensure that only proper data can be finally stored in the database table. A validation runs only on saving.

The infrastructure ensures that only relevant determinations and validations run, based on so-called trigger definitions. Available triggers are create, update, delete, and field changes. So, the determination to calculate the sales order item amount only runs if the order quantity or the unit price changes, and only for the sales order item that was changed. For a good user interaction, it is sometimes necessary to invoke certain determinations or validations during the interaction phase. To do so, developers define determine actions in the behavior definition and assign the determinations and validations to these actions. The consumer can then call the determine action to invoke the determinations and validations.

Developers also use the managed implementation type when they refactor existing code and, for example, existing database tables need to be reused or need to be kept stable for interoperability. In this case, additional exits can be implemented to affect how changes are saved, to add additional save operations—for example, to write change documents—or to implement the lock method to reuse an existing enqueue object.
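The following behavior definition sketch brings these concepts together for the hypothetical sales order model from the previous example. Class, table, and element names are illustrative and assume a managed, greenfield implementation; it is not a complete, SAP-delivered definition.

managed implementation in class zbp_r_examplesalesorder unique;

define behavior for Z_R_ExampleSalesOrderTP alias SalesOrder
persistent table zsalesorder
lock master
{
  create;
  update;
  delete;

  field ( readonly ) SalesOrder;

  // Instance feature control decides whether the action is enabled
  action ( features : instance ) release result [1] $self;

  validation checkSoldToParty on save { create; field SoldToParty; }

  association _Item { create; }

  // In a complete definition, a mapping between CDS element names
  // and the table fields would also be declared here
}

define behavior for Z_R_ExampleSalesOrderItemTP alias SalesOrderItem
persistent table zsalesorderitem
lock dependent by _SalesOrder
{
  update;
  delete;

  field ( readonly ) SalesOrder, SalesOrderItem;

  // Recalculates the item amount whenever the order quantity changes
  determination calculateItemAmount on modify { field OrderQuantity; }

  association _SalesOrder;
}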

BOPF-Managed Implementation Type

The BOPF-managed implementation type is quite similar to the managed implementation type but provides the option to reuse and integrate an existing CDS-based BOPF implementation. The Business Object Processing Framework (BOPF) is a predecessor of the business object framework of the ABAP RESTful application programming model. Before the ABAP RESTful application programming model was introduced, SAP recommended the CDS-based BOPF programming model. With the BOPF-managed option, the corresponding investments are protected and can continue to be used. The functionality of the BOPF-managed implementation type is comparable to the managed implementation type. The transactional buffer handling and further features are provided by BOPF, while applications focus on application logic in determinations and validations. Note that for the BOPF-managed implementation type, the determinations and validations are declared not in the behavior definition, but in the BOPF model as separate design-time artifacts. The link between the behavior definition and the BOPF model is established by naming, as both artifacts have the name of the business object as their name. In Figure 2.4, the artifacts of the BOPF model and the BOPF implementation are shown as dotted boxes because they are specific to the BOPF-managed implementation type. Note that the BOPF implementation replaces the behavior implementation in this case.

Unmanaged Implementation Type

The unmanaged implementation type is chosen for brownfield development when an existing application implementation should be reused and a refactoring or reimplementation to a managed implementation is not an option. In this case, the application is responsible for the complete provider implementation. This means that the application developers must implement all supported operations, such as create, delete, update, and save, on their own. This allows developers to reuse their existing application and delegate the work to the existing application APIs. The unmanaged implementation type is also the current option for traditional BOPF implementations. When developing a new application from scratch, the recommendation is to use the managed implementation type instead.

Comparison of ABAP RESTful Application Programming Model Implementation Types

Developers implement the operations and features of a behavior definition with dedicated ABAP classes. These classes refer to the behavior definition and thus have their method signatures inferred. This applies to both the managed and the unmanaged implementation type, but developers must implement different parts in each case. In both cases, developers must implement actions, functions, authority checks, and feature controls for their business object. In the unmanaged case, the application developer also must program the create, update, and delete operations, the transactional buffer handling, the locking, and the save sequence.

For the managed implementation type, the create, update, and delete operations with buffer handling come for free and cannot be implemented at all by application developers. For locking, the managed scenario offers a generic implementation that can be replaced if needed (with the UNMANAGED LOCK option). The ABAP RESTful application programming model framework takes care of saving the data if the database tables correspond to the entity definitions or a simple mapping is defined in the behavior definition. If this isn't sufficient, developers can replace the generic implementation (with the UNMANAGED SAVE option). A big benefit is that these ABAP RESTful application programming model features are integrated into the ABAP syntax and thus the signatures are fully typed. With this setup, design-time syntax checks and code completion are supported out of the box.

In the BOPF-managed case, developers implement the operations and features of a behavior definition in the BOPF implementation classes defined in the BOPF model. Because BOPF is a generic framework, the signatures here are only partially typed. The signatures have generic data types only (TYPE DATA) because BOPF does not create specific interfaces for each business object.

The Draft Feature

The draft feature enables the application to store the user's editing state at any point in time. This feature is also supported end to end for SAP Fiori UIs implemented with SAP Fiori elements. Thus, for UI-based functionality and services, the draft feature is a core element of the architecture. The draft concept provides the following features:

- Users can interrupt the editing or creation of a business object without losing data.
- Support is provided for switching between devices and browsers during editing.
- In the future, support will probably be available for collaboration of multiple users working on the same business object.

Furthermore, with the draft feature, SAP S/4HANA can follow a stateless paradigm (REST using OData), ensuring cloud qualities such as robustness and scalability, while using an exclusive lock and the invocation of the necessary backend application logic during editing.

The draft implementation usually follows the implementation type of the application. For the managed implementation type, the with draft addition enables the draft feature. Then the ABAP RESTful application programming model infrastructure takes care of the draft handling. The BOPF-managed implementation type is similar, with the difference that the BOPF infrastructure is responsible for draft handling. For the unmanaged implementation type, draft enablement can currently only be defined as managed by the ABAP RESTful application programming model infrastructure (DRAFT MANAGED) or by the BOPF infrastructure (DRAFT BOPF MANAGED). Thus, the draft handling is always part of the infrastructure. Draft enablement for the unmanaged implementation type requires that the relevant parts of the application logic are reimplemented so that they can be called by the ABAP RESTful application programming model or BOPF infrastructure for managed draft handling.

With the ABAP RESTful application programming model, transactions are split into two phases: the interaction phase and the save phase (for details, see Section 2.2.3).

and draft instances. The draft reflects a persistent state of the interaction and the transactional buffer. The application can have additional application logic that is invoked during the save phase and is relevant for nondraft instances only. The draft instances are saved as well, but they do not run through the related application logic of the save phase. This means that draft instances can be saved with missing or inconsistent data, which would be prevented by validations for nondraft instances. 2.2.2
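To make these concepts more tangible, the following listing sketches a behavior definition for a managed, draft-enabled business object. This is a hedged sketch only: the names (ZI_Travel, ztravel, ztravel_d, the action, the validation, and the determination) are illustrative assumptions, not artifacts shipped with SAP S/4HANA, and the exact set of supported clauses depends on the release.

managed implementation in class zbp_i_travel unique;
with draft;

define behavior for ZI_Travel alias Travel
persistent table ztravel
draft table ztravel_d
lock master
total etag LastChangedAt
authorization master ( instance )
{
  create;
  update;
  delete;

  field ( readonly ) TravelID;

  action ( features : instance ) acceptTravel result [1] $self;

  validation validateDates on save { create; update; }
  determination setInitialStatus on modify { create; }

  draft action Edit;
  draft action Activate;
  draft action Discard;
  draft action Resume;
  draft determine action Prepare;
}

Changing the first statement to an unmanaged implementation would shift the responsibility for the transactional buffer, locking, and saving to the application code, as described above, while the declared operations and features would stay the same.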

2.2.2 Defining Business Services

Business services provide external access to certain aspects of one or more business objects, either for consumption by the UI layer or as a remote API to be called by other systems. However, you do not define these services directly on top of the transactional business object models. Instead, application developers create service-specific projections of the underlying business objects with the aspects they want to expose through the business service. Developers can choose entities from the business object’s consumer model and project it for a specific service exposure. This includes entities, their fields and associations, operations, actions, functions, ETags, drafts, or restriction with static feature control. Provider-side features like authorization checks, locking, or dynamic feature control, as well as the application logic, cannot be projected, deactivated, or overwritten because the provider is in charge of its own consistency. Typically, you define different services for different purposes, with different service-specific projections. OData services for the UI typically include information such as value helps, texts, and UI annotations that can be evaluated by metadata-driven SAPUI5 controls. In addition, such services support the draft functionality. OData services for remote APIs usually expose only the entities of one business object —for example, a transactional business document. Related objects, such as codes and their texts or master data objects and their texts, are not included but are exposed by their own services. Figure 2.5 extends the diagram from Figure 2.4 with the design-time artifacts for defining and implementing business services (included in the grey box). This includes the service-specific data model and implementation and the service definition and binding. These concepts are explained in the following sections.

Figure 2.5

Service Definition and Implementation

Service-Specific Data Model

Developers define the business service-specific data model in two steps:

1. Select the entities to be included in the projection.
2. For each entity, decide which fields and associations are included.

In the ABAP RESTful application programming model, the projection is defined with special CDS entity types, called projection view entities. This special CDS entity type has been introduced because not all CDS features are allowed in this kind of view. For example, aggregations, unions, and joins are not allowed. With a special CDS entity type, these restrictions can be enforced with syntax checks.

With projection views, you can not only restrict which entities and entity elements (columns and associations) are exposed. You can also restrict the results that are returned by the view by adding a WHERE condition.

Sometimes the service layer is not a pure projection, but also needs to add certain additional service-specific elements to the data model. It is possible to add additional fields (currently fields that are calculated in ABAP only, also known as virtual elements) or to define new associations to service-specific entities on this service layer.

At runtime, CDS projection view entities can be accessed with SQL. Therefore, developers need to define the read authorization for such query access and model it for each CDS entity with a related CDS access control definition (specified in data control language). In this case of transactional projection, it’s only possible to “inherit” the access control definition of the underlying CDS entity. Otherwise, the authorization of the query access could be different than the authorization checked during the transactional access that is delegated to the underlying layer by definition (see the next section).

Behavior Model

In addition to the behavior definition of the business object, developers can define a projection behavior definition related to the projection root view entity of the projection. The main purpose of the projection behavior definition is to choose which features of the underlying business object behavior definition are exposed as the service-specific behavior. In the projection, only a part of the create/update/delete operations and the functions can be exposed, while others are omitted. In the same way, other features, such as support for draft or ETags, can be included or not. In addition, certain behavior can be restricted, mainly for static feature control. For example, a field that is editable in the business object might be set to read-only in the projection.

Developers can add certain service-specific implementations in the projection layer too. These could include additional checks to prevent certain data from being updated through this projection or additional transformations and augmentations that are not derived automatically by the infrastructure.

All other aspects of the behavior cannot be influenced with the projection behavior definition. It is, for example, not possible to change the implementation type, to include or omit aspects like locking or authorization checks, or to change or replace the application logic of the underlying business object.

Service Definition and Service Binding

To define an OData service on top of a projection view entity, developers have to add two additional artifacts, the service definition and the service binding:

The service definition defines the scope of the service by listing the entities that are exposed by the service.

The service binding references a service definition and specifies the protocol and protocol version with which the service will be created. Currently the ABAP platform supports OData V2 and OData V4. The service binding also supports versioning of services. With this, published services can be kept stable by providing a new service version if incompatible changes to the interface are required.

For OData services, the entities of the CDS model are mapped to OData entity sets and entity types. The fields of the CDS entities are mapped to OData properties, and the CDS associations are mapped to OData navigation properties. Whether an OData entity supports POST, PUT, PATCH, or DELETE operations, in addition to GET, is also derived from the declared operations in the projected behavior definition. Furthermore, all exposed actions and functions of the behavior definition lead to OData actions and OData functions (or OData function imports in OData V2).
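The following fragments sketch how such a projection stack could look for the hypothetical travel business object used earlier: a projection view entity, a projection behavior definition, and a service definition. All names and the selected elements are assumptions for illustration; the service binding itself is maintained as a separate repository object in the development tools, where the OData protocol version is selected.

// Projection view entity (service-specific data model)
@AccessControl.authorizationCheck: #NOT_REQUIRED  // kept open for brevity in this sketch
define root view entity ZC_Travel
  provider contract transactional_query
  as projection on ZI_Travel
{
  key TravelID,
      AgencyID,
      CustomerID,
      BeginDate,
      EndDate,
      OverallStatus
}

// Projection behavior definition (separate development object)
projection;
define behavior for ZC_Travel alias Travel
{
  use create;
  use update;
  use delete;
  use action acceptTravel;
}

// Service definition listing the exposed entities
define service ZUI_TRAVEL {
  expose ZC_Travel as Travel;
}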

2.2.3 Runtime Architecture

Now, what happens at runtime? To illustrate the runtime architecture, we take an application with OData services that is built with the ABAP RESTful application programming model. The architecture is shown in Figure 2.6, with ABAP RESTful application programming model abbreviated as RAP for better readability. The frontend sends OData requests, which are received by the OData service runtime and forwarded to the ABAP RESTful application programming model runtime through a protocol adapter. The runtime calls application-specific handlers, for example, for global authorization, global feature control, and locking of the lock master. The ABAP RESTful application programming model runtime executes read requests directly using the corresponding views of the data models. For write operations, the runtime invokes the ABAP RESTful application programming model provider. As explained in Section 2.2.1, there are several implementation types for providers: managed, BOPF managed, and unmanaged.

The ABAP RESTful application programming model runtime accepts requests in entity manipulation language (EML). In the ABAP RESTful application programming model, EML provides the runtime access to the modeled entities and business objects. Because EML is part of the standard ABAP language and syntax, it allows a fully typed consumption of any ABAP RESTful application programming model-based implementation. EML allows access to all external operations, such as create, update, delete, any action, any function, the invocation of a lock, or the retrieval of data, feature control, and authorization information. With the MODIFY ENTITIES EML statement, several modifying operations of different types (such as create, update, delete, and action) can be bundled within one call, for example, to invoke multiple actions or to create a complete business object in one call and roundtrip.
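The following ABAP snippet, as it could appear inside a method of a consuming class, sketches such a bundled EML call against the hypothetical ZI_Travel business object introduced earlier. The entity alias, field names, and the acceptTravel action are assumptions for illustration.

" Create a travel instance and execute an action on it within one request
MODIFY ENTITIES OF zi_travel
  ENTITY Travel
    CREATE FIELDS ( AgencyID CustomerID BeginDate EndDate )
      WITH VALUE #( ( %cid       = 'CREATE_1'
                      AgencyID   = '070001'
                      CustomerID = '000123'
                      BeginDate  = '20240801'
                      EndDate    = '20240805' ) )
    EXECUTE acceptTravel
      FROM VALUE #( ( %cid_ref = 'CREATE_1' ) )
  MAPPED   DATA(mapped)
  FAILED   DATA(failed)
  REPORTED DATA(reported).

" Trigger the save phase (see Section 2.2.3)
COMMIT ENTITIES
  RESPONSE OF zi_travel
  FAILED   DATA(failed_commit)
  REPORTED DATA(reported_commit).

The MAPPED, FAILED, and REPORTED response parameters let the consumer evaluate key mappings, rejected instances, and messages returned by the business object.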

Figure 2.6

ABAP RESTful Application Programming Model Runtime Architecture Overview

Usually other applications or business objects, integration tests, generic consumption tools like data migration, or specific consumption tools like a SOAP API access the business object through EML. For OData, the ABAP platform provides end-to-end support, as mentioned in Section 2.2.2.

Phases of a Transaction in ABAP RESTful Application Programming Model

At runtime, a transaction defines the changes between two consistent saved states. This is also known as a logical unit of work (LUW) in the ABAP runtime. The ABAP RESTful application programming model splits the transaction into the following two phases:

1. Interaction, in which one or multiple entities of one or multiple business objects are changed together
2. Save, in which further application logic runs, the data consistency is checked, and the changes are finally committed to the database

If the save phase is rejected due to an inconsistent final state, the transaction returns to the interaction phase to allow adjusting the data to reach a consistent state. Obviously, such an orchestration applies in an end user scenario only; in machine-to-machine communication, the inconsistent data is not saved and the request is rejected.

Interaction Phase Operation Flow

If an operation is invoked (for example, through EML or OData), the internal processing follows a defined sequence that is partially enforced and provided by the ABAP RESTful application programming model runtime and partially to be implemented by the provider implementation. Looking at the example of an action call for an instance-bound action, the following steps are performed to invoke this action:

1. Check global authorization. Is the user allowed to invoke this action at all?
2. Check global feature control. Is the action enabled at all?
3. Lock instance. Invoke the enqueue lock of the lock master.
4. Check ETags (only for HTTP requests). Does the provided ETag still match the current ETag?
5. Application-specific (pre)checks.
6. Application-specific transformations and augmentations (only for projections).
7. Invoke the action on the underlying provider:
   - Check instance authorization. Is the user allowed to invoke this action for this instance?
   - Check instance feature control. Is the action enabled for this instance?
   - Invoke the action implementation.
   - Invoke validations and determinations.

In case of a managed provider, the infrastructure executes the provider steps as listed here. Only the implementations of actions, validations, and determinations are application-specific. In case of an unmanaged provider, the application implementation must perform all the steps.

For other operations, the sequence is the same, but not all steps are always relevant, and different application handlers are invoked based on the operation. Static actions, for example, are not related to a specific instance. Consequently, the instance-related steps are not relevant for a static action. Another example is locking, which is not relevant for read access or for functions.

Within the interaction phase, multiple operations can be called that result in changes to the transactional buffer until at some point in time the editing is completed and the data is finally checked and saved to the database.

Save Phase Operation Flow

The COMMIT ENTITIES EML statement finalizes the transaction, and the changed data is persisted to the database. For this, the ABAP RESTful application programming model runtime invokes all involved business objects within one transaction in a process called the save sequence. This sequence consists of the following steps:

Finalize (finalize)
Check before save (check_before_save)
Draw numbers (adjust_numbers)
Save (save)
Cleanup (cleanup)

Each step is implemented by a corresponding method. The runtime of the ABAP RESTful application programming model calls these methods in sequence for each involved business object. If one method or business object fails, for example, because it isn’t in a consistent state to be saved from an application perspective, the runtime rejects the save operation.

During the finalize step, the runtime invokes validations, determinations, and application logic. Then during the check before save step, everything is checked for consistency to determine whether it can be saved at all. If an error occurs in either of these two steps, the transaction moves back to the interaction phase, allowing the consumer to correct the issues and run the save again. If everything is consistent, then numbers are drawn from number ranges in the draw numbers step. This is required for applications that use late numbering, for example, because there are legal requirements for gapless numbering. Finally, in the save step the data is saved to the database, usually by registering an update task function module. After the save is complete, the cleanup step ensures that transactional buffers are cleaned up and reset to the proper state. If an issue occurs in these late steps, the transaction is rolled back. An explicit rollback can be achieved with the ROLLBACK ENTITIES EML statement.
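For business objects that provide their own save logic (for example, an unmanaged provider or a managed provider with an unmanaged save), each of these steps corresponds to a redefinable method of the saver class in the behavior pool. The following skeleton is a hedged sketch for the hypothetical travel object; the class name is an assumption, and the method bodies are reduced to comments.

CLASS lsc_zi_travel DEFINITION INHERITING FROM cl_abap_behavior_saver.
  PROTECTED SECTION.
    METHODS finalize          REDEFINITION.
    METHODS check_before_save REDEFINITION.
    METHODS adjust_numbers    REDEFINITION.
    METHODS save              REDEFINITION.
    METHODS cleanup           REDEFINITION.
ENDCLASS.

CLASS lsc_zi_travel IMPLEMENTATION.
  METHOD finalize.
    " Derive remaining data before the consistency checks run
  ENDMETHOD.
  METHOD check_before_save.
    " Reject the save by filling the FAILED parameter if the buffer is inconsistent
  ENDMETHOD.
  METHOD adjust_numbers.
    " Draw final numbers from a number range for late numbering
  ENDMETHOD.
  METHOD save.
    " Persist the transactional buffer, typically by registering an update task function module
  ENDMETHOD.
  METHOD cleanup.
    " Reset the transactional buffer after the commit
  ENDMETHOD.
ENDCLASS.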

Provider Implementation

The overall architecture and the process for the interaction and save phases are independent of the concrete provider implementation. Every provider needs to fulfill the contract defined by the ABAP RESTful application programming model. For managed providers, the infrastructure already fulfills most of the contract out of the box and the application developer can focus on the pure application logic. The infrastructure takes care of internal buffer handling of transactional changes, the proper program flow, and even the access to the database (see Figure 2.7, showing the runtime architecture with the managed provider implementation).

Figure 2.7

Runtime Architecture with Managed Provider

Application developers implement application handlers for different aspects only. Depending on the entity, these can be validations, determinations, entity-specific authorizations, feature controls, actions, and functions. As mentioned, the standard save functionality provided by the infrastructure can be replaced with custom save logic (UNMANAGED SAVE ; Section 2.2.1). The custom save handler shown in Figure 2.7 represents such a custom save logic. In most cases, the standard logic provided by the infrastructure is sufficient, and a custom save handler is not needed. For BOPF-managed providers, the situation is similar (see Figure 2.8). The ABAP RESTful application programming model runtime invokes the BOPF runtime through an adapter. The BOPF runtime takes care of the transactional buffer and the overall flow and calls the application-specific business object implementation handlers for determinations, validations, authorization checks, and so on.
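As an illustration of such application handlers, the following is a hedged sketch of a handler class as it could appear in the behavior pool of the hypothetical travel business object; the entity alias, method names, field names, and message text are assumptions, and only the validation is implemented here.

CLASS lhc_travel DEFINITION INHERITING FROM cl_abap_behavior_handler.
  PRIVATE SECTION.
    METHODS validateDates FOR VALIDATE ON SAVE
      IMPORTING keys FOR Travel~validateDates.
    METHODS setInitialStatus FOR DETERMINE ON MODIFY
      IMPORTING keys FOR Travel~setInitialStatus.
    METHODS acceptTravel FOR MODIFY
      IMPORTING keys FOR ACTION Travel~acceptTravel RESULT result.
ENDCLASS.

CLASS lhc_travel IMPLEMENTATION.
  METHOD validateDates.
    " Read the relevant fields from the transactional buffer
    READ ENTITIES OF zi_travel IN LOCAL MODE
      ENTITY Travel
        FIELDS ( BeginDate EndDate ) WITH CORRESPONDING #( keys )
      RESULT DATA(travels).

    LOOP AT travels INTO DATA(travel) WHERE EndDate < BeginDate.
      " Reject the instance and report an error message
      APPEND VALUE #( %tky = travel-%tky ) TO failed-travel.
      APPEND VALUE #( %tky = travel-%tky
                      %msg = new_message_with_text(
                               severity = if_abap_behv_message=>severity-error
                               text     = 'End date lies before begin date' ) )
             TO reported-travel.
    ENDLOOP.
  ENDMETHOD.

  METHOD setInitialStatus.
    " Set a default overall status for newly created instances (omitted in this sketch)
  ENDMETHOD.

  METHOD acceptTravel.
    " Update the status in the buffer and return the changed instances (omitted in this sketch)
  ENDMETHOD.
ENDCLASS.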

Figure 2.8

BOPF-Managed Provider

In case of an unmanaged provider, developers can reuse the existing application implementation. However, the implementation must fulfill the ABAP RESTful application programming model contract, and the unmanaged provider must implement several aspects that are otherwise provided by the infrastructure. This includes, for example, the create, update, and delete operations, the different transaction phases, the save sequence, and the transactional buffer. Often the existing application is not structured in a way that fits, for example, with the transaction phases and the save sequence of the ABAP RESTful application programming model, and thus needs to be refactored. As explained earlier (Section 2.2.1), the draft functionality must be handled either by the ABAP RESTful application programming model infrastructure or by BOPF, even for unmanaged providers. Figure 2.9 illustrates this option for an unmanaged provider with drafts managed by the infrastructure of the ABAP RESTful application programming model.

Figure 2.9

Unmanaged Provider with Managed Draft

Consumption through OData

As stated before, the ABAP RESTful application programming model provides full end-to-end support for consuming business objects through an OData service (OData V2 and OData V4). Thus, if an OData service is defined using a service binding (Section 2.2.2), the ABAP platform provides the complete protocol layer and translates any kind of request to either SQL (for query read access) or EML (for transactional access). With this architecture approach, the application implementation does not have to take the protocol and protocol version used into consideration. Thus, supporting additional channels or switching the protocol or protocol version can be achieved with very little effort. In addition, the application can focus on real application logic instead of dealing with technical protocol topics.

2.3

Summary

To stay compatible with SAP ERP, the data model of SAP S/4HANA uses the same database tables for primary data, while derived data such as aggregates is computed on the fly. However, it shields these database tables using an understandable, comprehensive VDM that uses well-known business terminology, documents the relationships between entities, and enriches the entities with additional business semantics. The VDM is composed of a hierarchy of CDS views. The data model of a business object consists of CDS views too. Based on this business object definition, application developers can use the ABAP RESTful application programming model to implement the transactional behavior, especially the CRUD operations. With the managed, BOPF-managed, and unmanaged implementation types, the programming model supports three flavors. Each has a different level of framework support and solves a different use case, from developing a new application to refactoring an existing application. By using the ABAP RESTful application programming model, application developers can easily allow business users to store a draft version of their business object.

Now you’ve seen that the transactional programming model relies on the VDM. In the next chapter, you’ll learn how SAP Fiori apps consume the data exposed by the VDM.

3

Simplified Experience In this chapter, we explain the UX concept of SAP S/4HANA, which is SAP Fiori. We introduce the SAP Fiori launchpad as the central starting point when exploring SAP S/4HANA, and we explain how the search architecture helps users find what they are looking for.

With SAP Fiori, SAP S/4HANA has a highly usable, consistent and unified user experience across all applications. Enterprise search and in-application search further simplify the user experience by helping you to easily find the application or information you need for the task at hand. A variety of personas experience SAP S/4HANA from different perspectives. Business users interact with it to accomplish their daily tasks in the enterprise; administrators take care of installation, configuration, and monitoring; and key users and developers deal with adapting or extending its core functionality. For any of these activities, it’s worth striving for a simplified experience to foster efficient work.

3.1

User Experience

User experience is about meeting the user’s needs in the most effective and enjoyable way. SAP’s understanding of how to create true innovation manifests in the award-winning SAP Fiori UX. In fact, SAP Fiori defines the UX of the intelligent enterprise. It lays the foundation for consistently designing and developing applications in SAP S/4HANA and integrated solutions. 3.1.1

SAP Fiori

SAP Fiori has evolved since it put down its roots in 2013. What started as a collection of apps with a new design has grown to the present day into an entire design system. The original focus on casual user self-services has been extended to activities for power user roles, along with notifications in SAP Fiori 2.0. While from the beginning its adaptiveness fostered instant use on multiple desktop and mobile devices, SAP introduced specific flavors of the design system for the most common mobile operating systems: iOS and Android. These combine the strengths of the SAP Fiori UI and each operating system to deliver enterprise applications for high-impact scenarios via a native mobile experience. At about the same time, the conversational UX provided a design language for enterprise conversational products, which was integrated into SAP S/4HANA as the SAP CoPilot digital assistant. Recently, adoption of the latest step of this design evolution, SAP Fiori 3, has started in SAP S/4HANA to help ensure a seamless and consistent UX that incorporates means of machine intelligence.

SAP Fiori sets the standard for enterprise UX by removing unnecessary complexity. This core goal is reflected in five design principles:

Role-based, to design applications for the users’ needs and the way they work
Delightful, for making an emotional connection
Coherent, to provide one fluid, intuitive experience
Simple, including only what is necessary
Adaptive, with respect to multiple use cases and scenarios

Moreover, SAP’s UX strategy focuses on empowering SAP customers and partners to design their UX journey and execute on it. You can find more details about the SAP UX and SAP Fiori at https://experience.sap.com. 3.1.2

User Experience Adoption Strategy

The SAP Fiori UX concepts and design principles are key components in SAP’s design-led development process. The SAP S/4HANA architecture embraces them by being a stakeholder for the enabling technology and promoting an SAP Fiori UX-based adoption of its application portfolio. For accessing the portfolio, the SAP Fiori launchpad offers users a role-based entry point to services and applications they are authorized to launch via tiles. Preferably, these are native SAP Fiori apps built with SAPUI5 as the underlying UI technology. Overview pages as a pattern-based variant of such apps provide quick access to vital business information at a glance. SAP Fiori apps for analytical information and transactional processing allow users to get more detailed insight or take specific actions. Where appropriate, these apps follow a model-driven development paradigm by leveraging the SAP Fiori programming model with SAP Fiori elements floorplans or by applying the drilldown functionality of the SAP Smart Business framework. However, more individual application design and behavior is possible with freestyle SAPUI5 development. Finally, the SAP S/4HANA application portfolio and its architecture help to preserve major functionality not yet transitioned to native SAP Fiori apps. Therefore, the portfolio incorporates applications built in traditional UI technologies, namely Web Dynpro for ABAP, SAP GUI (for HTML), and Web Client UI. Based on SAP Fiori’s design concepts and particularly its visual themes, all traditional UIs are harmonized to bring the UX and integration close to that of native SAP Fiori apps. Let’s take a closer look at the architecture and elements of these adoption levels. 3.1.3

SAP Fiori Launchpad

The SAP Fiori launchpad is the central entry point for all users to applications in SAP S/4HANA. It establishes role-based access to the SAP Fiori launchpad content, with multiple application types (SAP Fiori and traditional UIs), as well as services and plug-ins integrated in the shell. Today, the single-page content is structured by catalogs, groups, and tiles. With SAP Fiori 3.0, additional options are introduced for structuring content (multiple spaces that correspond to functional areas along roles, and pages within them containing sections with tiles) and presenting this content in an enriched and more flexible way (cards). Already, selected tiles display dynamic content, like key performance indicators (KPIs), for instant business information in the overview page. Because the computation of such KPIs can be costly, the KPI values are cached. These cached KPIs are updated periodically or upon user request.

The SAP Fiori launchpad content artifacts serve two purposes: layout aspects (groups, spaces, pages) and the assignment of applications with associated authorizations to users. For the latter, tiles and target mappings are first defined in technical catalogs. Both SAP Fiori apps and applications with traditional UI technologies are managed in technical catalogs. The content of technical catalogs can be reused and referenced in business catalogs to form a subset relevant for a business role. This subset reflects the authorization requirements of a certain business user. Finally, a business PFCG role contains references to business catalogs and business catalog groups in the menu. Once an administrator assigns the business PFCG role to a user, the apps and groups become available in the SAP Fiori launchpad. Note that SAP S/4HANA 2020 evolves with the introduction of individual business apps as well as transaction codes for SAP Fiori apps (the SAP Fiori ID equals the transaction code), a unified persistency (UIAD) with a migration of technical catalogs to it, and the SAP Fiori launchpad app manager as the successor of the tool formerly known as the mass maintenance tool. In SAP S/4HANA on-premise installations, rapid content activation eases the administration of SAP Fiori launchpad content and business roles. For SAP S/4HANA Cloud, the concepts for managing roles and catalogs and assigning them to users are described in Chapter 17.

Figure 3.1 shows the architecture of the SAP Fiori launchpad in SAP S/4HANA. The business server pages (BSP) infrastructure is used as a UI repository. It serves static UI artifacts using a BSP resource handler and assures the availability of corresponding technology and SAP Fiori app resources. For example, application resources comprise an app descriptor, models, views, controllers, components, or annotations as artifacts of the SAPUI5 programming model for SAP Fiori elements.

Figure 3.1

SAP Fiori Launchpad in SAP S/4HANA

For UI technology, this repository comprises the SAPUI5 libraries, SAP Fiori launchpad, generic application frameworks like SAP Fiori elements, and standard SAP Fiori UX

themes. In addition, an icon font is loaded once by the SAP Fiori launchpad for the use of icons in SAP Fiori apps. Today, for an optimized loading behavior, the delivery default of static SAPUI5 resources is via a content delivery network (CDN; see also Chapter 20, Section 20.1.2). In contrast, static resources of SAP Fiori apps and libraries or SAP Fiori launchpad plug-ins of SAP S/4HANA are delivered via the BSP handler of the ABAP platform, including UI texts translated into the supported languages. In the repositories, a cache buster approach is crucial to properly handle the lifecycle of updated static resources without the need for user interactions like clearing the browser’s cache on the client. Instead, the cache buster introduces a mechanism that enforces reloading of updated static artifacts and is controlled on the server side.

Traditional applications that are not based on SAP Fiori UI technologies use server-side rendering. Their development artifacts and resources reside in the SAP S/4HANA backend.

Both SAP development and SAP customers can apply methods of UI flexibility to adjust standard apps to industry, organization, or user needs. For SAP Fiori apps, app variants and filter variants are among these measures. Customers’ extension applications built on SAP Cloud Platform may also be uploaded to SAP S/4HANA and integrated into the SAP Fiori launchpad. Customers can further introduce their corporate branding by means of theming with the UI theme designer.

The SAP Fiori launchpad home page (and in the future, subordinate pages) offers users content in a role-based manner as defined and configured in the backend. Roles refer to business catalogs with tiles and target mappings to apps (and single business apps in future releases). These roles further need to comprise the authorizations required to call the OData services used by the SAP Fiori apps. Furthermore, the SAP Fiori launchpad renders content in a device-dependent manner. On all devices, it can be accessed via supported web browsers, and on a desktop system the SAP Business Client with embedded Chromium is an additional option.

Users can personalize pages to their needs and use the app finder for search-based access to apps. Another option to personalize the user’s home page experience is saving tiles from the application context for later quick access. Users can further set preferences for certain settings, like language or formatting, theme selection, or defaults for product and application parameters. The notification panel delivers information of interest asynchronously to users and lets them instantly take action. The free-text search for content is enabled in the home page with a results page that allows users to navigate to apps where appropriate.

Intent-based navigation for launching apps from the home page or navigating from one app to another is one of the key paradigms of SAP Fiori. SAP Fiori launchpad content spans all UI technologies and forms a navigation network of interconnected apps that is ideally implemented in a role-based manner to ensure proper navigation. It is defined by application navigation targets using abstract intents, which at runtime are resolved into actual URLs by the SAP Fiori launchpad target-resolution service. These intents are composed of a semantic object and an action, optionally followed by parameters, and are appended as a fragment (prefaced by #) to the SAP Fiori launchpad base URL as follows: #<SemanticObject>-<Action>?<parameter>=<value>
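For illustration only (the semantic object, action, and parameter names are hypothetical), a complete intent fragment for displaying one sales order instance could look like this:

#SalesOrder-display?SalesOrder=10000123

The SAP Fiori launchpad target-resolution service maps this fragment at runtime to the concrete URL of the app assigned to the semantic object and action.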

The semantic object represents a business object, such as a customer, a sales order, or a product. It enables referring to such entities in an abstract, implementation-independent way. The action defines an operation, such as displaying or approving a purchase order, which is intended to be performed on a semantic object. In the URL notation, parameters are optional, but they are often required for navigation actions, such as the concrete ID needed to open a specific business object instance without additional selection steps by the user. If multiple navigation targets are assigned to an intent, intent-based navigation enables users to select from them at runtime. If the number and/or size of intent parameters exceeds the URL length limits, automatic URL shortening is applied, which transfers parameters and values to temporary storage and makes them accessible by a single ID parameter. In addition to intent parameters, a cross-application state can be transferred during such navigation in cases in which complex parameter structures are sent or parameters are not to be exposed in the visible URL. In contrast, the inner application state captures the UI state of an application and allows for bookmarking it or restoring it upon a sequence of forward and backward navigations.

Several capabilities within the environment of the SAP Fiori launchpad require further provisioning and integration of services. Notifications are based on a persistence that temporarily stores messages for a user in an asynchronous way, especially notifications produced in the SAP S/4HANA backend for later consumption by the user. If the user is online, notifications can also be delivered instantly by pushing them to the user interface in the browser.

As mentioned earlier, enterprises typically apply visual aspects of their corporate branding via custom themes, which adjust specific styles of SAP’s standard themes. The theming architecture requires that such themes are generated and must correspond to the version and patch level of SAPUI5 used and the underlying SAP theme. Here, an integration with SAP Cloud Platform helps to manage the dependencies and lifecycle of themes. If the SAPUI5 version or patch level used changes, the themes are generated again. A read-through cache approach in SAP S/4HANA triggers the regeneration when the theme for the requested SAPUI5 version is not found in the theme cache. This regeneration is done on SAP Cloud Platform. The updated theme is then transferred to SAP S/4HANA to be cached there for subsequent requests of the same version.

Beyond its own capabilities and services, the SAP Fiori launchpad plug-in interface allows for introducing specific behavior on the shell level. Via configuration, plug-ins are automatically loaded for any user once activated in the system. Several accompanying services have implemented this interface. Typically, the plug-ins can be accessed instantly via an icon in the shell bar or via a side panel. Among them are features to enable a conversational user experience, as well as human-to-human chat capabilities enriched by business object-related and screen capture features. The web assistant service provides tailored help to the user, as it is aware of the used product and application context. Mobile cards is one offering of the mobile services portfolio. It allows users to push SAP Fiori app content in the form of so-called cards to their mobile devices. There, card content is displayed in a native mobile app that retrieves the business data via the corresponding service in SAP Cloud Platform and further OData service communication with the SAP S/4HANA system.

3.1.4

SAP Fiori Apps

Beyond question, SAP Fiori apps are first-class citizens of the SAP Fiori launchpad. Let’s take a closer look at the flavors of SAP Fiori apps and aspects of their architecture. One dimension for distinguishing SAP Fiori apps is the way they are built:

SAP Fiori elements apps
Apps built with SAP Fiori elements benefit from a framework providing the most common application patterns, using annotation-based SAPUI5 controls and enabling a feature set with continuous design consistency and compliance with the latest design guidelines.

SAPUI5 freestyle apps
Individual requirements for visual design or interaction behavior can exceed the generic capabilities of SAP Fiori elements patterns. This case is addressed by implementing SAP Fiori apps in a freestyle way. For freestyle apps, the full range of the SAPUI5 programming model can be exploited with its models, views, controls, controllers, components, and app descriptors. Some of the controls even allow annotation-driven behavior like in SAP Fiori elements. However, you must be aware that when you decide to build your application or parts of it in a freestyle way, it is your task to keep it consistent and to keep up with future design evolution.

SAP Smart Business apps
SAP Smart Business is a generic, analytical application framework. An application built with this framework enables users to view and analyze the data of one KPI. The corresponding artifacts, like KPIs, reports, groups, and apps, are defined with the associated SAP Smart Business modeler applications.

SAP Fiori library
SAP Fiori libraries provide reusable artifacts to be consumed by SAP Fiori apps, regardless of whether they use SAP Fiori elements or are freestyle. Some libraries contain a collection of lower-level artifacts (views, fragments, controllers); others provide more sophisticated functionality encapsulated in a component. The model of such a component is either shared from the consuming app or relies on an OData service at the expense of additional requests.

Figure 3.2 illustrates how all these SAP Fiori apps access data in the backend via the same client-to-server communication based on OData. For instant use in an app, SAPUI5 supports an OData model for both V2 and V4. In views, controls are bound to the OData model and its entities, properties, and metadata. SAP Fiori elements and freestyle apps can be built using development templates in the cloud- and browser-based IDE recently renamed to SAP Business Application Studio. For SAP S/4HANA, the UI resources of the apps are stored as a single-versioned snapshot in the BSP repository of the ABAP platform and loaded by the browser client using an associated BSP handler.

A second dimension of classifying SAP Fiori apps is the purpose they are used for: transactional, analytical, or a combination of both. Users retrieve aggregated views on business data with the analytical apps. Any of the previously mentioned approaches to building apps is present in the set of analytical apps. For more details about the embedded analytical apps, including further SAP Fiori apps provided by SAP Analytics Cloud dashboards, see Chapter 4, Section 4.1.

Figure 3.2

SAP Fiori Apps and Libraries

In contrast, transactional SAP Fiori apps typically allow managing data on the business object level—that is, for querying, reading, creating, changing, taking specific actions, or deleting. Further insights into the domain of SAP Fiori elements apps are given in Section 3.1.5. Information about all apps of SAP S/4HANA is available in the SAP Fiori apps reference library (https://fioriappslibrary.hana.ondemand.com). Many filters may be applied for narrowing the result set of matching apps. There is one for UI technologies that covers the different flavors of SAP Fiori apps as well as traditional UI applications with their long-established architecture and programming models, which are not further described in this book. 3.1.5

SAP Fiori Elements Apps

SAP Fiori elements provide designs for UI patterns and predefined templates for common application use cases. Predefined floorplans, views, and controllers ensure UI consistency within the application and across similar applications. SAP Fiori elements are continuously adapted to the most recent design guidelines, thus bringing immediate benefit to the implementing applications. Several floorplans are available as SAP Fiori elements: Overview page An overview page is a data-driven SAP Fiori app for organizing large amounts of information. Information is visualized in a card format, with different cards for different types of content, presented in an attractive and efficient way. The user-friendly experience makes viewing, filtering, and acting upon data quick and simple. While simultaneously seeing the big picture at a glance, business users can focus on the most important tasks, enabling faster decision-making, immediate action, and navigation to associated SAP Fiori apps for further drill-down of data and specific actions.

List report and object page SAP Fiori elements contain predefined templates for list reports and object pages. A list report allows users to view and work with entities organized in lists—that is, in a table format. The list report is typically used in conjunction with an object page. This object page allows users to work with an entity, the object, providing functionality to view, edit, and create entities. Analytical list page The analytical list page lets users analyze data step by step from different perspectives, investigate a root cause through drilldown, and act on transactional content. The purpose of the analytical list page is to identify interesting areas within datasets or significant single instances using data visualization and business intelligence. Moreover, a user can navigate to the object page of the instance or to a related app to perform actions there. Worklist A worklist displays a collection of items that are to be processed by the user. Working through the item list usually involves reviewing details of the list items and taking action. In most cases, the user must either complete a work item or delegate it. The SAP Fiori elements framework supports the dynamic page layout for all available floorplans. In addition, the flexible column layout is supported for all available floorplans except the overview page. Among the features of SAP Fiori elements are an edit mode control for switching between display and edit, message handling, status colors and icons to indicate criticality, header facets to define which information is displayed in the header, and value help. It also supports handling of draft documents. Compared to regular optimistic locking in OData services by ETags, draft documents establish a pessimistic lock bound to the user (not the session). Moreover, these draft document locks function in the same way as session-based enqueue locks used in stateful applications. Given that the same lock arguments are used, draft-based locks protect against unintended changes to an application using the session-based lock and vice versa. The draft is autosaved even if it’s incomplete or contains errors. A user may resume working on a draft document later by selecting it in a list report. (For the draft concept, see also Chapter 2, Section 2.2.1). Developers can use SAP Fiori elements to create SAP Fiori apps with the patterns listed in Figure 3.3.

Figure 3.3

SAP Fiori Elements Application Patterns and Their Artifacts

As depicted there, they are based on OData services and CDS annotations. The app developer needn’t write any JavaScript UI coding. As one example of such a CDS annotation, a page header that contains information about the object you are editing in the object page floorplan is annotated with @UI.headerInfo (see Listing 3.1).

…
@UI.headerInfo: {
  typeName: 'Sales Order',
  title: {
    label: 'Sales Order',
    value: 'SalesOrder'      -- refers to element in list
  },
  description: {
    label: 'Customer',
    value: 'CompanyName'     -- refers to element in list
  }
}
define view ZExample_SalesOrder as select from sepm_cds_sales_order as so
{
  key so.sales_order_id        as SalesOrder,
      so.customer.company_name as CompanyName,
      …
}

Listing 3.1

View Definition with UI.headerInfo Annotation
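In the same spirit, element-level annotations can declare which fields appear as columns and filters of a list report floorplan. The following fragment is a hedged sketch that reuses the hypothetical example from Listing 3.1; @UI.lineItem and @UI.selectionField are standard UI annotations, while the view name and the element selection are assumptions.

define view ZExample_SalesOrderList as select from sepm_cds_sales_order as so
{
      @UI.lineItem: [{ position: 10 }]
      @UI.selectionField: [{ position: 10 }]
  key so.sales_order_id          as SalesOrder,

      @UI.lineItem: [{ position: 20 }]
      so.customer.company_name   as CompanyName
}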

The OData service model (entity sets and entity types) is derived from CDS views of the VDM. Requests from the client are processed by the underlying framework of the SAP Fiori programming model or application-specific logic. The resulting app uses predefined UI views and controllers that are provided by SAP Fiori elements in combination with application-specific UI resources as served by the BSP handler (see the left-hand side of Figure 3.3). This means that no applicationspecific view instances are required unless the app is extended with individual aspects.

The SAPUI5 runtime interprets the definitions in the app descriptor, metadata, and annotations of the underlying OData service and uses the corresponding views for the SAP Fiori app at startup.

3.2

Search

The search functionality is an integral part of SAP S/4HANA. It comes in two flavors, targeting different application requirements.

Enterprise search offers free-text search for instances of practically all business object types and for SAP Fiori apps available in SAP S/4HANA. The search scope for each user is naturally restricted to only those objects and applications assigned to the user’s roles. Enterprise search is accessible through the search field at the top of the page in the SAP Fiori launchpad.

In-application search is integrated into individual SAP Fiori apps and offers context-specific fuzzy search capabilities. It is accessible through the dedicated search fields in the SAP Fiori list apps, value help dialogs, and type-ahead input controls. 3.2.1

Search Architecture

The search architecture of SAP S/4HANA is shown in Figure 3.4. It is based on the VDM, which we introduced in Chapter 2, Section 2.1. The service adaptation definition language (SADL) framework is the infrastructure for model-based processing of OData services in the ABAP platform. SADL also implements the in-application search. It is controlled by special annotations of the @Search domain, directly in the VDM consumption views.
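For illustration, the following fragment shows how such @Search annotations could look in a consumption view. The view and element names are hypothetical; the annotations themselves (@Search.searchable, @Search.defaultSearchElement, @Search.fuzzinessThreshold) are standard CDS search annotations evaluated for in-application search.

@Search.searchable: true
define view entity ZC_ProductSearch
  as select from ZI_Product
{
  key ProductID,

      @Search.defaultSearchElement: true
      @Search.fuzzinessThreshold: 0.8
      ProductName,

      @Search.defaultSearchElement: true
      ProductCategory
}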

Figure 3.4

Search Architecture in SAP S/4HANA

The enterprise search framework consists of two parts, one implemented in SAP HANA and one implemented in the ABAP platform. The SAP HANA enterprise search framework offers the core enterprise search capabilities through a set of database procedures (for details, see the SAP HANA Search Developer Guide at help.sap.com). The ABAP enterprise search framework is responsible for the integration with the Search SAP Fiori app on the one hand and for processing enterprise search CDS models on the other.

Enterprise search CDS models are special CDS views annotated with @EnterpriseSearch.enabled: true. Technically, the enterprise search models are not VDM views and can be created without following VDM rules. However, in SAP S/4HANA they are normally built on top of the VDM basic layer and thus share the same business models with other VDM entities. During the activation of an enterprise search CDS model, the framework generates corresponding sets of runtime metadata called enterprise search connectors. The connectors are the main artifacts controlling the execution of the enterprise search queries during runtime. Because the connector concept is the same as the one used for older types of enterprise search models, the modeling technology can change. Thus, SAP S/4HANA on-premise still supports another type of search model, called traditional search models. These models are built directly on the SAP S/4HANA tables without using CDS or VDM. These models are deprecated and are intended to be incrementally replaced with CDS models. SAP S/4HANA Cloud supports only CDS search models.

Use of SAP HANA Search Functionality

Both in-application and enterprise search use the text search capabilities of the SAP HANA database. For that, SAP HANA supports the contains predicate in SQL statements, which can be used in WHERE conditions to get all records that match given search terms. This predicate supports a variety of advanced search features, the most widely used of which are fuzzy and tokenized search. The implementation of contains in SAP HANA is motivated primarily by performance considerations and thus introduces a number of important limitations on the design of SQL and, therefore, CDS views suitable for search queries. In particular, contains cannot be applied to calculated or aggregated fields or used with specific unions and join conditions. This means that if a basic VDM view makes use of these features, it often cannot be used for the development of the search model. The solution to this issue is provided by a special type of VDM view, enterprise search auxiliary views. An auxiliary view is explicitly designed to fulfill the boundary conditions for search views and is linked to the basic view it should replace in the search models. If a basic view is equipped with an enterprise search auxiliary view, an enterprise search model developer must always use the latter in the search models. SAP S/4HANA offers special tool support that performs such substitutions automatically.

Another important SAP HANA functionality used to support search in SAP S/4HANA is full text indexes. Full text indexes are required for tokenization of strings and allow searching for single words instead of complete strings. However, full text indexes

consume both memory and CPU resources in SAP HANA. For this reason, in SAP S/4HANA on-premise systems, full text indexes on big transactional tables are shipped with corresponding business switches that must be activated only if the corresponding search models are to be used. 3.2.2

Enterprise Search Extensibility

In-application search is a built-in functionality and doesn’t need any explicit configuration or extensibility. In contrast, enterprise search offers a number of possibilities to adjust its behavior to particular demands.

Configuration of Default Search Scope

The enterprise search queries of each user are executed on the set of enterprise search models that the user is authorized for. This set of models is often called the search scope. The search scope can be further restricted to a specific search model using the dropdown list near the search input field. Two entries in this list have special meaning: All indicates the full set of the models available for a user, and Apps restricts the search only to SAP Fiori apps that a user has access to.

The most powerful features of enterprise search become especially clear when searching within the All scope. Just by typing in a search term, which can be a (partial) ID or a combination of words, the user automatically gets an overview of related business objects and applications in the system, ranked by relevance to that user. Thus, it is not surprising that a lot of SAP S/4HANA customers use enterprise search as a kind of universal entry point into the system. On the other hand, it should also be clear that searching through the whole system has its price in terms of resource consumption. Especially in the aforementioned entry point scenario, users often are interested only in finding a specific app but spend unnecessary computing resources by searching in all search models included by default in their All scope, instead of restricting the search scope only to the applications (Apps). For this reason, there are enterprise search-specific settings in the SAP Fiori launchpad for controlling the default scope of enterprise search. The following options are available:

Make Apps the default option instead of All
The search is executed only on the apps model by default. The user must explicitly select All if necessary.

Completely remove the All scope from enterprise search
It will only be possible to search on a specific model, like Apps or Sales Orders.

Personalization of Enterprise Search

As mentioned, enterprise search ranks the search results according to the relevance for the user. The relevance calculation can be substantially improved if the enterprise search framework has the permission to collect and evaluate data about user behavior within the search scenarios. By default, data collection is configured as an opt-in option. Each user can allow the collection of data in the SAP Fiori launchpad settings. In

addition, the system administrator can change the default to opt-out using the Configure Personalized Search app. In this case, the user can still switch off data collection. SAP highly recommends that you enable the collection of user behavior data to get more accurate search results. Note that this data is not transferred to SAP. Managing Search Models SAP S/4HANA includes several additional apps for the management and adjustment of single search models to meet the specific requirements of an organization. The most important use cases covered by these apps are described in the following sections. Getting an Overview of Search Models The Display Search Models app provides an overview of the models available in the system, their status, and the authorizations necessary for using each model. It also supports activation and deactivation of the models. Adjusting the Structure of Search Models The Manage Search Models app offers a number of capabilities for adjusting single models. The user can, for example, change the appearance of fields in the search results, change the weighting factors of the fields for ranking, and exclude a field from the search scope. The following features are of special interest: Inclusion of custom fields The custom field extensibility of the search models in SAP S/4HANA works in the standard way, using the Custom Fields key user extensibility tool (see Chapter 5, Section 5.1). However, adjustment of the search settings on such fields is only possible using enterprise search modeler. Customization of search models A key user can customize both the fields of the model used for search requests and their appearance in the enterprise search UI. It’s also possible to change the rankings of the fields and adjust further search-related settings of a model. Fine-Tuning of Rankings Search models support adjustable rankings, which allows you to assign custom ranking factors to the models or to fine-tune the factors defined by the developers of the model. Using the adjustable rankings, you can, for instance, boost the relevance of your own objects, newly created objects, objects created in a particular region, and the like. For utilization of this powerful feature, SAP S/4HANA offers the special Fine-Tune Ranking app. The application allows you not only to adjust the ranking factors but also to investigate the influence of these factors in real time. In this section, we had a look at the different search scenarios supported in SAP S/4HANA and some specifics of their implementation and modelling. We have also discussed the key considerations for the appropriate configuration of the search behavior to avoid unnecessary resource consumption.

3.3

Summary

In this chapter we gave an overview of two ingredients that enable SAP S/4HANA to provide a simplified user experience: the SAP Fiori UX and the search capabilities in SAP S/4HANA. In the first section we discussed the SAP Fiori UX, starting with its principles and the adoption strategy. Next, we had a deeper look at the SAP Fiori launchpad, starting with concepts and architecture, and including topics such as supported devices, intent-based navigation, theming, and the plug-in interface. We covered SAP Fiori apps, including their different types, communication with the backend, the differences between transactional and analytical apps, and the SAP Fiori apps reference library. Finally, we looked at SAP Fiori elements apps, their architecture, and the different floorplans available, and discussed the related CDS annotations.

In the second part of this chapter we looked at search in SAP S/4HANA. We explained the difference between enterprise search and in-application search and described the search architecture in SAP S/4HANA. We discussed CDS-based enterprise search models and how they are used. Next, we discussed the options for extensibility in enterprise search, for example by adjusting the structure of search models and by fine-tuning rankings.

In the next chapter we look at functions in SAP S/4HANA that enable the intelligent ERP, including machine learning, intelligent situation handling, and analytics.

4

Intelligence and Analytics SAP S/4HANA leverages the intelligence of SAP HANA to provide embedded analytical and machine learning capabilities. SAP Analytics Cloud as well as side-by-side machine learning extend these even further.

With each business transaction, an ERP solution collects data. Over time, this data becomes a valuable asset to be leveraged for decision support, business planning, or simulation. Further, the collected data can be used to automate business processes that were executed by manual interaction steps in the past. SAP S/4HANA embeds intelligent technologies to combine transactional and analytical applications and enable truly intelligent enterprises. Especially the embedded analytics architecture of SAP S/4HANA uses the actual transactional data for reporting, planning, and simulation in real time without the delay caused by replicating the data into a business warehouse or a data lake. Intelligent situation handling enables users to deal with unexpected business situations. Finally, machine learning helps to automate business processes and prevent users from needing to perform cumbersome tasks.

4.1

Analytics

Typically, we speak of analytics if we see UI elements such as dashboards, slice and dice, or KPIs, and the data is calculated using sophisticated queries with complex aggregation and grouping rules. As an additional criterion, we call an application analytical if it is based on the specialized models and frameworks of the rich analytical foundation of SAP S/4HANA. SAP S/4HANA differentiates between embedded analytics and enterprise analytics.

Embedded analytics refers to a family of analytical applications, which from a UX perspective are embedded into the transactional UIs and which are implemented directly in SAP S/4HANA. They do not require additional infrastructure or data-replication services. Such analytical applications operate directly on the real transactional business data stored in SAP HANA and don’t depend on time-consuming data replication. For example, it is not uncommon that the replication of the transactional business data to a connected business warehouse takes 8 hours or more. Analysis and reports on this data cannot answer what happened just now. Such reporting on old data does not support business scenarios in spare parts management or retail, which require prompt decisions while still processing large numbers of transactions in a short time. Embedded analytics allows you to decide now, based on the business data stored just a millisecond ago in SAP HANA.

The value of embedded analytics becomes apparent in the following business examples, which are all supported by SAP S/4HANA:

A material planner who is responsible for the stock availability analysis for just-in-time (JIT) calls of components needs to view the mapping of requested components to JIT calls. The necessary alignment with shipping specialists, transport planners, and production planners requires access to real-time information for accurate decisions.

A meter data specialist in the utilities industry needs a real-time picture of the progress of meter reading for a scheduled billing date. Only then is it possible to detect potential issues like missing or erroneous meter readings and to trigger corrective actions in time.

A purchaser deciding which supplier to assign to a certain purchase requisition benefits from having information about the supplier’s rating, delivery performance, and turnover in the past, presented in the same UI in which the assignment is made.

A salesclerk responsible for resolving blocked orders needs criteria like the order value, the expected profit, or a prediction of the customer’s payment behavior to decide which orders to focus on. According to this information, the salesclerk may decide to increase a customer’s credit limit and unblock the order.

In all these examples, a person responsible for a particular task relies on the availability of context-dependent aggregated information for making a correct decision. Providing such information on the fly is the main purpose of the embedded analytics infrastructure in SAP S/4HANA.

In contrast to embedded analytics, the enterprise analytical applications are built and deployed independently of SAP S/4HANA on dedicated analytical platforms like SAP Business Warehouse, SAP Analytics Cloud, and SAP Data Warehouse Cloud. Enterprise analytical applications consume the data of SAP S/4HANA tenants or systems either remotely through live connections or by replicating it into their own storage. The live connection allows for analytics on real-time data, too. 4.1.1

SAP S/4HANA Embedded Analytics Architecture Overview

The core of the embedded analytics runtime architecture is the VDM, which offers a rich set of analytical models in different business areas defined as CDS views (see Figure 4.1). The analytical CDS views represent analytical queries, cubes, dimensions, and hierarchies. The ABAP application server of SAP S/4HANA includes two frameworks for analytical applications: the service adaptation definition language (SADL) and the analytic engine.

SADL translates the OData requests from the SAP Fiori apps into SQL statements on the relational VDM. On the one hand, it offers clients full flexibility for formulating queries using the corresponding OData syntax. OData V4 in particular adds many new aggregation capabilities with the new $apply system query option. On the other hand, the scope of SADL-based analytical functions is limited to what can be efficiently expressed in SQL.
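As a rough illustration of what such an aggregating OData V4 request might look like, consider the following sketch. The service path, entity set, and field names are invented for this example and do not refer to a specific released SAP S/4HANA service; only the $apply syntax itself follows the OData V4 data aggregation extension.

GET /sap/opu/odata4/sap/zc_salesquery/srvd/sap/zc_salesquery/0001/SalesOrderItem
    ?$apply=filter(SalesOrganization eq '1010')
      /groupby((SoldToParty,TransactionCurrency),
               aggregate(NetAmount with sum as TotalNetAmount))
    &$orderby=TotalNetAmount desc

Conceptually, SADL translates such a request into a single SQL SELECT with a GROUP BY on the underlying CDS view.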

Figure 4.1

Runtime Diagram of SAP S/4HANA Analytical Infrastructure

This is where the analytic engine comes into play. It is an OLAP engine based on the embedded ABAP business warehouse technology also known from SAP S/4HANA's predecessors. Internally, it operates on the star schema, which is common to OLAP engines. This star schema representation of the VDM is provided by the analytic engine runtime objects (as depicted in Figure 4.1) and is generated based on a special type of VDM artifact, analytical queries. The analytic engine offers an impressive set of analytical features that are difficult to express in pure SQL. These include advanced hierarchy handling and flexible definitions for calculations after aggregation, including exception aggregation and intelligent unit treatment, just to name a few.

These features require a special protocol for communication between clients and the analytic engine, which is called Information Access (InA). InA is an HTTP-based, SAP-internal protocol widely used by SAP analytics products. In addition to InA, the analytic engine supports OData access for SAP Fiori apps, too. However, the support for OData in the analytic engine is limited to purely analytical purposes. You can't freely combine analytical and transactional entities in one OData service.

The analytic engine makes extensive use of the SAP HANA capabilities for the most efficient execution of analytical queries. This is achieved by pushing the query execution out of the ABAP application server and down to SAP HANA whenever it makes sense from the points of view of query semantics and performance.

SAP S/4HANA Cloud includes SAP Analytics Cloud embedded, a special integrated edition of SAP Analytics Cloud, which is SAP's cloud-based analytics solution. SAP Analytics Cloud can be used with all SAP S/4HANA editions, on-premise or in the cloud, and offers superior analytical functionality, including planning and data acquisition from various systems.

The embedded edition of SAP Analytics Cloud is limited to only one live connection, to its hosting SAP S/4HANA Cloud tenant, and offers rich and sophisticated functionality for the implementation of analytical dashboards (for details, see Section 4.1.2). The SAP S/4HANA Cloud subscription includes the embedded edition of SAP Analytics Cloud, which is automatically deployed and configured during tenant provisioning.

To better understand how SAP Analytics Cloud accesses the SAP S/4HANA data, the SAP Analytics Cloud infrastructure can be divided into two main components:

The story runtime, which is executed in the browser (depicted as the SAP Analytics Cloud dashboard in Figure 4.1)

The SAP Analytics Cloud backend, which serves as a container for specific metadata, including connectivity and dashboard (story) definitions

Note that the SAP Analytics Cloud backend is not involved in the actual data processing during query execution. The queries are processed completely in the browser. As a consequence, even in the on-premise deployment, the SAP S/4HANA data does not leave the on-premise network.

4.1.2

Embedded Analytical Applications

Embedded analytical applications in SAP S/4HANA are based on a set of analytical UX patterns. In this section, we describe the most important analytical UX patterns and the SAP S/4HANA application types designed for the efficient implementation of these patterns. In Chapter 9, Section 9.10, we elaborate on how these patterns are used in sales. Analytical applications in SAP S/4HANA support the intent-based navigation of SAP Fiori UX (see Chapter 3, Section 3.1.3) and therefore can be connected easily to each other and to other types of applications.

Dashboard

Dashboards provide an overview of various pieces of information that are relevant for a specific objective or business process. Dashboards are based on multiple analytical queries that are linked through shared filters and dimensions. In SAP S/4HANA, the dashboard implementation is based on embedded analytics powered by SAP Analytics Cloud. There is a rich set of standard dashboards shipped with SAP S/4HANA, like the Treasury Executive Dashboard (SAP Fiori ID F4316), Group Financial Statements (SAP Fiori ID F4326), or the Purchasing Spend Dashboard (SAP Fiori ID F4517). In addition, customers can create their own dashboards from scratch or by using the standard content as a template.

Multidimensional Reporting

Multidimensional reporting offers the possibility to slice and dice data accessible in a single query for detailed analysis. The standard approach in SAP S/4HANA for implementing this analytical pattern is by using the grid of the Web Dynpro development environment. SAP Fiori apps such as Customer Returns (SAP Fiori ID W0139), Profit Centers—Plan/Actual (SAP Fiori ID F0926), and Goods Movement Analysis (SAP Fiori ID W0055) are based on this pattern. In SAP S/4HANA on-premise, this pattern is also offered with design studio templates, which are not depicted in Figure 4.1.

Hybrid Transactional-Analytical Applications

Hybrid analytical applications combine the analytical information related to a particular business object, often in a master/detail representation, with operations available on the corresponding object type. In SAP S/4HANA, such applications are called analytical list pages (ALPs). Good examples of this pattern include Monitor Purchase Order Items (SAP Fiori ID F2358), Sales Order Fulfillment Issues (SAP Fiori ID F0029A), and Gross Margin—Presumed/Actual (SAP Fiori ID F3417).

Key Performance Indicators

KPIs are highly aggregated metrics that can be placed on an SAP Fiori overview page using KPI tiles. This is a special kind of analytical application based on the SAP Smart Business framework. In addition to the standard UX capabilities of the KPI tile, like context-dependent colors and change indicators, SAP Smart Business supports generic drilldown for detailed analysis. These capabilities are demonstrated in Quotation Conversion Rates (SAP Fiori ID F1904), Off-Contract Spend (SAP Fiori ID F0572), and Payment Statistics (SAP Fiori ID F0693A).

4.1.3

Modeling Analytical Artifacts

How are analytical models organized in the VDM? In the common model, one or more query views are built on top of a cube view (see Figure 4.2). The dimensions of the cube are defined by dimension views, which refer to one or more hierarchy or text views.

Figure 4.2

Organization of Analytical Models in VDM
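Before looking at each artifact in detail, the following strongly simplified ABAP CDS sketch shows how a cube view and an analytical query on top of it might be declared. All view, table, and field names are invented for illustration; productive VDM views carry many more annotations.

@AbapCatalog.sqlViewName: 'ZISLSCUBE'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Sales cube (illustrative)'
@Analytics.dataCategory: #CUBE
define view ZI_SalesCube
  as select from zsales_item as Item
  association [0..1] to I_Product as _Product
    on $projection.Product = _Product.Product
{
  key Item.sales_document      as SalesDocument,
  key Item.sales_document_item as SalesDocumentItem,

      // Dimension: usable for grouping and filtering
      @ObjectModel.foreignKey.association: '_Product'
      Item.product              as Product,

      Item.transaction_currency as TransactionCurrency,

      // Measure: summed up by default when the cube is queried
      @Semantics.amount.currencyCode: 'TransactionCurrency'
      @Aggregation.default: #SUM
      Item.net_amount           as NetAmount,

      _Product
}

@AbapCatalog.sqlViewName: 'ZCSLSQRY'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Sales query (illustrative)'
@Analytics.query: true
define view ZC_SalesQuery
  as select from ZI_SalesCube
{
  // Dimension placed on the rows axis of the query result
  @AnalyticsDetails.query.axis: #ROWS
  Product,

  TransactionCurrency,

  // Measure shown on the columns axis
  @AnalyticsDetails.query.axis: #COLUMNS
  NetAmount
}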

Cube View

A cube view, annotated with @Analytics.dataCategory: #CUBE, is the center of the model, gluing together fact data and related dimensions. For better maintainability of big cubes, the fact data can be structured in a separate view marked with @Analytics.dataCategory: #FACT, which then serves as a basis for the cube. (We didn't include it in the diagram for the sake of clarity.) Note that each field of a cube view is considered either a measure (a field suitable for aggregation) or a dimension (a field suitable for grouping). The corresponding assignment is controlled by the @Aggregation.default annotation.

Dimension View

A dimension view, annotated with @Analytics.dataCategory: #DIMENSION, provides additional information about a dimension included in the cube. The fields of the dimension view that are not explicitly included in the cube view can be used only as visual attributes but are not available for grouping and filtering. It is worth mentioning that a dimension view can also be used as a source for an analytical query. Normally it makes sense to do so when analyzing the number of entities comprising a dimension that possess specific characteristics.

Hierarchy View

A hierarchy view is annotated with @ObjectModel.dataCategory: #HIERARCHY and specifies a parent-child hierarchy to be used with a dimension. Multiple hierarchies can be assigned to the same dimension. Most of the central business entities in SAP S/4HANA, such as cost center (VDM view I_CostCenter) or product (VDM view I_Product), provide corresponding hierarchy views for meaningful structuring of information in the applications.

Analytical Query View

Analytical queries are special metadata objects interpreted by the analytic engine and implemented as VDM views annotated with @Analytics.query: true. As mentioned, analytical queries provide access to the rich set of OLAP functions offered by the analytic engine. For this purpose, they offer a special set of annotations under the @AnalyticsDetails domain. For example, the @AnalyticsDetails.formula annotation makes it possible to define restricted and calculated measures based on the other measures and dimensions defined in the same query. This is not possible using the standard CDS view syntax.

Another important usage of analytical queries is to restrict the scope of a cube. Cubes in SAP S/4HANA typically include all dimensions and measures relevant for a specific business area, so they tend to offer a broad scope of analytical data. Analytical queries defined on such cubes ensure the focus on a specific business perspective by limiting the set of available measures and dimensions to only those relevant in their context.

4.1.4

Analytics Extensibility

Analytical applications in SAP S/4HANA provide an efficient way to visualize and process analytical information based on the existing models. However, unleashing the full power of these analytical capabilities requires powerful tools for the development of custom models and applications. SAP S/4HANA includes a whole family of analytical key user tools together with their corresponding object types and repositories (see Figure 4.3).

Figure 4.3

Design Time of Analytics Infrastructure in SAP S/4HANA

Access to all these tools is granted to users with the SAP_BR_ANALYTICS_SPECIALIST role. The users assigned to this role are often called key users. All key user tools are integrated into the standard SAP S/4HANA lifecycle management routines (see Chapter 5, Section 5.1.8).

CDS-Based Extensibility Tools

The tools described in this section are included in all SAP S/4HANA editions. In the on-premise edition, the analytical CDS views can also be directly edited with the ABAP Development Tools (ADT). With key user tools and CDS extensibility, it is important to keep in mind that only released views are generally available for custom extension scenarios. In SAP S/4HANA Cloud, the key user tools restrict access to released views only. In SAP S/4HANA on-premise, it is generally possible to use any CDS view for custom development using ADT. Still, using nonreleased views should be avoided because nonreleased SAP CDS views may be changed at any time in incompatible ways.

View Browser and Custom View Builder

The View Browser application allows you to find the CDS views suitable to use as a basis for your specific tasks and extensions. This is the main entry point for searching for a view suitable for a desired extension and examining its structure. The creation of custom CDS views is possible with the Custom View Builder. In the analytical context, this tool is used to create custom cubes and dimensions.

Custom Analytical Queries

The Custom Analytical Queries application offers the creation of analytical queries for the analytic engine. As explained earlier, analytical queries support a set of specific OLAP annotations not available for the other types of CDS views. Therefore, this special tool is provided to utilize this functionality. A particularly useful utility lets you define user-specific date functions to be used as parameters for analytical queries. This utility is called Manage Date Functions and allows defining functions like NEXT2WEEKS or LASTDAYOFCURRENTYEAR, which can be very helpful for the specification of relative dates and intervals in analytical queries.

Manage KPIs and Reports

The SAP Fiori app Manage KPIs and Reports is the central entry point for creating custom analytical applications in SAP S/4HANA. It allows you to create KPIs together with evaluations and drilldowns; an evaluation represents a more specific variant of a KPI with additional filter capabilities, and a drilldown defines a more detailed page that opens when you click on a KPI tile. A KPI tile is a widget that can be added to the SAP Fiori launchpad overview page. Furthermore, Manage KPIs and Reports lets you create and manage SAP Analytics Cloud stories to be used with the embedded edition of SAP Analytics Cloud in SAP S/4HANA Cloud. If used with the separate SAP Analytics Cloud, it allows you to include existing stories from SAP Analytics Cloud in the SAP Fiori app in SAP S/4HANA.

KPI tiles offer a very powerful instrument to create a comprehensive overview of different aspects of a business area directly in the user's SAP Fiori launchpad overview page. However, although the KPIs are cached, massive usage of KPIs can create substantial load on the tenant, depending on the complexity of the queries and the number of active users. Therefore, you should decide which KPIs are needed on the SAP Fiori launchpad overview page and choose a meaningful cache refresh time for them.

SAP Analytics Cloud Story Designer

SAP Analytics Cloud Story Designer is available in both SAP Analytics Cloud and the SAP Analytics Cloud embedded edition for the creation of SAP Analytics Cloud stories. In SAP Analytics Cloud embedded, a story must first be generated in the Manage KPIs and Reports application before it becomes available for editing.

4.1.5

Enterprise Analytics Applications

Enterprise analytics is a collective term for analytical applications that are not embedded into SAP S/4HANA but access data potentially from multiple sources, including SAP S/4HANA. Enterprise analytics is set up using SAP Business Warehouse, SAP BW/4HANA, SAP Analytics Cloud, and, recently, SAP Data Warehouse Cloud.

For enterprise analytics scenarios, we differentiate between live access and data replication. Live access is currently supported by SAP Analytics Cloud only and uses the InA protocol. Live access from SAP Analytics Cloud is similar to the access used by the embedded edition of SAP Analytics Cloud. The difference is that with enterprise analytics live access, connectivity and user management must be explicitly managed by the customer's IT department.

For the scenarios requiring data replication, SAP S/4HANA provides support for the SAP Business Warehouse extractors. In addition, a new framework for CDS-based extractors has been implemented to support a generic extraction approach over an OData V4-based protocol. This new framework is called Cloud Data Integration (CDI). CDS-based extraction and CDI are described in Chapter 6, Section 6.8.1.

Data Protection and Privacy Regulations and Data Replication

When using data replication, one has to consider the implications related to data protection rules such as the European General Data Protection Regulation (GDPR; see also Chapter 7). For the enterprise analytics scenarios based on SAP products, SAP provides all necessary tools to ensure that these rules can be fulfilled with the data replicated from an SAP S/4HANA system. For more information on this topic, refer to the corresponding documentation of the related products.

4.2

Machine Learning

Improved processing power, better algorithms, and the availability of big data are facilitating the implementation of machine learning (ML) for infusing intelligence into back-office processes and providing an intelligent ERP solution. SAP S/4HANA's underlying in-memory database, SAP HANA, increases speed, combines analytical and transactional data, and brings innovation with embedded machine learning capabilities. Thus, machine learning is natively integrated into SAP S/4HANA and can be used across an entire organization to optimize business operations, improve employee job satisfaction, and create better customer services. With conversational AI, SAP S/4HANA provides a natural language experience, which changes the way users interact with the system by enabling hands-free applications.

However, when incorporating machine learning capabilities into SAP S/4HANA, various substantial challenges had to be solved, like the following:

How to integrate machine learning systematically into the business processes for ease of consumption. Machine learning models have been created by data scientists for decades. However, often those models resided in special tools, were consumed by experts only, and therefore added limited value.

How to make machine learning enterprise-ready. In the context of SAP S/4HANA, infused machine learning functionality must be enterprise-ready. This comprises qualities like compliance, lifecycle management, data and process integration, workload management, and performance.

The solution architecture explained in the next section provides answers to the described challenges. But first we introduce some basic machine learning-related terminology. In machine learning, a desired function is realized by training a mathematical model with sample data so that the model learns to produce the desired outputs. The trained model is then used to compute outputs—for example, predictions or classifications—from actual input data. This is called inference.

4.2.1

Machine Learning Architecture

The architecture for machine learning in SAP S/4HANA has two flavors (see Figure 4.4):

Embedded machine learning, in which the machine learning application runs in the SAP S/4HANA backend

Side-by-side machine learning, using SAP Data Intelligence

Figure 4.4

SAP S/4HANA Machine Learning Architecture

Embedded machine learning fits use cases like forecasting, categorization, and trending that can be solved with traditional algorithms like regression, clustering, or time-series analysis. Usually those algorithms do not consume much memory or CPU time. Thus, they can be implemented within the SAP S/4HANA backend, close to the application data for training the models and to the business processes that consume the results. The embedded machine learning architecture is based on the machine learning capabilities of SAP HANA, which provides the necessary algorithms as part of the SAP HANA predictive analytics library (SAP HANA PAL) and the SAP HANA automated predictive library (SAP HANA APL).

Use cases such as image recognition, sentiment analysis, and natural language processing require deep learning algorithms based on neural networks. For model training, these algorithms usually demand a huge volume of data and graphics processing unit (GPU) time. Such scenarios are implemented side by side with SAP S/4HANA in the SAP Data Intelligence solution, for several reasons:

SAP Data Intelligence provides a scalable infrastructure for machine learning training and inference based on GPU technology.

It provides state-of-the-art libraries like TensorFlow and Scikit-Learn.

The performance of SAP S/4HANA is not affected by training and inference jobs, so the transactional business processes don't suffer.

Many of the relevant scenarios work with data such as images, audio, text documents, historical logs, and external data. Such data is typically not stored in SAP S/4HANA but in the business data lake provided by SAP Data Intelligence.

User interfaces for natural language interaction with SAP S/4HANA are enabled by SAP Conversational AI. This is a self-learning solution using machine learning functionality to gain knowledge based on historical data and experience. Machine learning requires additional visualization capabilities on the user interface—for example, for illustrating confidence intervals or forecasting charts. Thus, to embed machine learning capabilities in the user interfaces, corresponding intelligent SAP Fiori elements are used.

4.2.2

Embedded Machine Learning

The embedded machine learning architecture is based on CDS views and uses the machine learning capabilities provided by SAP HANA (see Figure 4.5). The algorithms for embedded machine learning can be performance-intensive because they typically must process high volumes of application data. Thus, for performance optimization the algorithms must be processed close to the application data. SAP HANA contains the predictive analytics library (PAL) and the automated predictive library (APL), which offer statistical and data mining algorithms. The functions provided by these libraries can be called from SAP HANA database procedures that are written in SQLScript. The algorithms require application data as input for model training. This data can be read from application tables or from the SQL views that are created on the database level for the CDS views of the VDM.

The trained models are exposed to business processes by wrapping them with CDS views for machine learning. These CDS views are based on ABAP classes, which contain ABAP-managed database procedures (AMDPs) that call the trained machine learning model in SAP HANA. The CDS views for machine learning can be combined with other VDM CDS views and can then be exposed to the consumers. By consuming machine learning models through CDS views, existing content (for example, VDM views) and concepts such as authorization, extensibility, or UI integration are reused. This results in a simple and very powerful solution architecture. The inference outcomes are integrated into business processes and provided to the right person, in the right place, and at the right time. For the vast majority of our customers, this embedded architecture already makes SAP S/4HANA an intelligent solution without the need for side-by-side machine learning, especially when the prediction algorithm can be found automatically by the embedded PAL/APL engine.
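How the wrapping of a trained model by a CDS view can look is sketched below: a CDS table function is implemented by an AMDP method in an ABAP class. All names are invented for this example, and the actual application of the trained PAL/APL model is reduced to a comment; the snippet only illustrates the structural pattern, not a delivered SAP implementation.

@EndUserText.label: 'ML scores (illustrative)'
define table function ZTF_SlowMovingScore
  returns
  {
    clnt    : abap.clnt;
    product : abap.char(40);
    score   : abap.dec(5,2);
  }
  implemented by method zcl_slow_moving_model=>get_scores;

CLASS zcl_slow_moving_model DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    " Marker interface that allows AMDP implementations in this class
    INTERFACES if_amdp_marker_hdb.
    CLASS-METHODS get_scores FOR TABLE FUNCTION ztf_slowmovingscore.
ENDCLASS.

CLASS zcl_slow_moving_model IMPLEMENTATION.
  METHOD get_scores BY DATABASE FUNCTION FOR HDB
                    LANGUAGE SQLSCRIPT
                    OPTIONS READ-ONLY
                    USING zstock_history.
    -- In a real implementation the trained PAL/APL model would be
    -- applied here; this sketch simply returns a constant score.
    RETURN SELECT mandt AS clnt,
                  matnr AS product,
                  0.00  AS score
             FROM zstock_history;
  ENDMETHOD.
ENDCLASS.

A consumption view of the VDM can then select from ZTF_SlowMovingScore and combine the score with other business data before exposing it, for example, to an SAP Fiori app.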

Figure 4.5

Embedded Machine Learning Architecture

In Chapter 9, Section 9.9 we describe how sales has implemented embedded machine learning scenarios for predictive analytics according to this architecture. 4.2.3

Side-By-Side Machine Learning Architecture

SAP Data Intelligence provides a data lake for business data. Thus, application data can be extracted from SAP S/4HANA for the training of machine learning models. As illustrated in Figure 4.6, pre- and postprocessing of the application data are based on the pipeline engine, which offers graphical programming for creating pipelines. The pipeline engine orchestrates complex data flow pipelines and is based on scalable infrastructure provided by SAP Cloud Platform, Kubernetes environment. Based on the application data available, data scientists perform exploration and feature engineering to define machine learning models. For this, common data science tools such as Jupyter Notebook and Python (see https://jupyter.org) are supported. For deep-learning scenarios, SAP Data Intelligence provides a GPU infrastructure.

Figure 4.6

Side-by-Side Machine Learning Architecture

To implement machine learning use cases, applications must define machine learning scenarios and model pipelines. SAP Data Intelligence organizes each machine learning use case by the artifact machine learning scenario. This contains all development entities that are required for the implementation of a specific machine learning use case. Inference and training processes are developed as pipelines comprising sequential and parallel tasks. In particular, for each machine learning scenario, a training pipeline is provided that receives the training data from SAP S/4HANA and processes it to train the algorithms for the specific use case. Structured data is handled by the pipeline operator for CDS views. This pipeline operator extracts application data from SAP S/4HANA based on CDS views for training algorithms. The training and inference pipelines are exposed by REST services. SAP S/4HANA applications invoke those remotely and integrate them into the business processes and user interfaces. Thus, machine learning capabilities are provided as built-in functionality. On the SAP S/4HANA side, the intelligent scenario is the corresponding artifact to the machine learning scenario of SAP Data Intelligence. The intelligent scenario is a design-time artifact that represents a machine learning use case and contains metadata like the name and description of the use case. In particular, it encompasses the ABAP class that is implementing the consumption API of the machine learning model.

The machine learning lifecycle management framework ensures uniform integration and operation of side-by-side machine learning scenarios. For this, the framework provides generic functionality for training, deployment, and monitoring of side-by-side machine learning models. It enables harmonized integration of machine learning capabilities into SAP S/4HANA business processes by supplying standardized interfaces that the ABAP machine learning logic class must implement. 4.2.4

Machine Learning in SAP S/4HANA Applications

Numerous machine learning use cases have been delivered for SAP S/4HANA. Those use cases follow technical patterns like embedded and side-by-side machine learning, as well as the following machine learning application patterns:

Matching
Assigns relationships and detects similarities and anomalies in a given dataset. Manual matching is very time-consuming for users, but intelligent systems can significantly speed up matching decisions by using machine learning methods. The system can present one or more strategies and their qualities to link similar objects. Users then only need to approve or reject the suggestions or adjust them to their needs. For developing matching patterns, commonly used algorithms include XGBoost, multilayer perceptron, k-means, k-nearest neighbors, and neural networks.

Recommendation
Proposes datasets or actions based on the current context. Intelligent systems can help users by recommending appropriate content or suggesting an action or input the user may prefer. Content, input, and solution recommendations are the common recommendation types. Typical machine learning algorithms used in this context are social analysis, XGBoost, multilayer perceptron, text analysis, and recurrent neural networks.

Ranking
Distinguishes between relevant and less relevant datasets of the same type in relation to the current context. Items in a group are ranked by comparing criteria that are relevant for the user's business context, such as an amount, priority, or score. We differentiate between a ranking that uses an available value and a ranking based on a calculated machine learning score. Typical machine learning algorithms used in this context are XGBoost, k-means, Gaussian mixture model, k-nearest neighbors, and neural networks.

Prediction
Predicts future data and trends based on patterns identified in past data, taking into account all potentially relevant information. Intelligent systems based on predictive models significantly reduce the cost required for companies to forecast business outcomes, environmental factors, competitive intelligence, and market conditions. Parametric and nonparametric classes of predictive models are differentiated. Typical machine learning algorithms used in this context are regression, random forests, decision trees, and neural networks.

Categorization
Assigns datasets to predefined groups (classes). It also discovers new groups (clusters) in the datasets, such as grouping customers into segments for appropriate product offerings, targeted marketing, or fraud detection. Categorization is a complex task, for which intelligent systems can help increase the automation level by applying machine learning algorithms like classification, clustering, XGBoost, k-means, and neural networks.

Conversational AI
Interacts with the system based on natural language conversation. Being able to have a conversation with a digital assistant for business processes is a key part of the user experience for an intelligent application. Conversational AI technology understands typical natural language patterns to query for business entities using various parameters, to look up a specific business entity by name or ID, to retrieve the value of an attribute of a specific business entity, or to create simple new entities, including line items.

The strategic direction at SAP is to provide a uniform concept and framework for the implementation of each pattern. Thus, the machine learning application patterns can be applied as reusable building blocks by development teams to accelerate the implementation of machine learning use cases. To explain all machine learning applications delivered by SAP S/4HANA would go beyond the scope of this book, but we list the top 10 machine learning applications in Table 4.1.

Use Case | Key Machine Learning Task | Application Pattern | Technical Pattern
Intelligent approval workflow (business process: procure-to-invoice) | Increase efficiency of the purchase requisition process by automatically identifying and approving workflow requests by machine learning, considering historical data | Recommendation | Side-by-side machine learning
Slow moving stock (business process: make-to-order) | Identify and predict the inventory items that are moving slowly, or will move slowly, to help production planning and procurement | Prediction | Embedded machine learning
Predict arrival of stock in transit (business process: stock-in-transit) | Predict shipment dates for each goods movement to allow business users to take action to manage delivery delays | Prediction | Embedded machine learning
MRP material exception (business process: make-to-order) | Enable the material planner to be directly informed about supply elements, such as purchase orders or production orders, which are not needed anymore | Recommendation | Embedded machine learning
SAP Cash Application (business process: invoice-to-cash) | Ability to clear incoming payments by matching them with the corresponding receivables, using machine learning to automate this labor-intensive process | Matching | Side-by-side machine learning
Intelligent accruals (business process: record-to-report) | Improve cash flow planning by faster and more accurate calculation of accruals and deferrals | Ranking | Side-by-side machine learning
Intelligent goods receipt/invoice receipt (GR/IR) account reconciliation (business process: invoice-to-pay) | Financial account reconciliation to improve the accuracy of the financial statement and help ensure compliance with corporate rules | Recommendation | Side-by-side machine learning
Stock in transit (business process: make-to-order) | Predict the delay in stock that moves between different plants and storage locations in order to display items for which the planned delivery date significantly differs from the predicted delivery date | Prediction | Side-by-side machine learning
Image-based buying (business process: procure-to-invoice) | Search results in cross-catalog search and then display the best matching items, per their similarity scoring, based on the image-recognition machine learning service running on SAP Data Intelligence | Categorization | Side-by-side machine learning
Contract consumption (business process: source-to-contract) | To avoid contracts running out of consumption, the date of the contract's full consumption is predicted using machine learning algorithms | Prediction | Embedded machine learning

Table 4.1

Top 10 Machine Learning Applications Delivered by SAP S/4HANA

4.3

Intelligent Situation Handling

Business processes may be well defined, but not all of them run smoothly and without interruption from start to end. SAP has invented intelligent situation handling to help business users deal with unexpected situations. In this context, a situation is the coincidence of a set of business object updates of which the business user is typically not aware. Intelligent situation handling is an example of how SAP S/4HANA sets a software standard for management by exception. It recognizes issues in business processes requiring attention. Using responsibility management functionality, situations are proactively displayed to the responsible users, with all relevant information and actions in one place. Situation statuses, user actions, and attribute values of the pertinent objects are tracked, thereby forming a data context. Over time, this results in a comprehensive data basis for the use case defined by a situation type—for both advanced analysis and input for machine learning and automation. Urgent and important issues are identified automatically and indicated to the right groups of users to speed up business. Users can act immediately, and fewer issues are missed. Users get intelligent support to make the right decisions and continuously optimize business processes.

4.3.1

Example: Contract is Ready as Source of Supply

Let's look at a simple process in operational procurement. It starts with a requestor ordering some products in her shopping cart. From the shopping cart, a purchase requisition is created. In the next step, an operational purchaser assigns a source of supply to this purchase requisition. In our example, the source of supply can be a contract with a supplier, defining a negotiated price and delivery conditions. From the purchase requisition, a purchase order to this supplier is created based on the available contract. The supplier delivers the products, and after the goods receipt, the invoice received from the supplier is paid.

During such a business process, a variety of issues and problems may arise. For example, it can happen that there is no valid supplier contract for an ordered product. Sourcing products without a contract can lead to a higher price and to suboptimal delivery conditions. Thus, a strategic purchaser is asked to negotiate a contract with a supplier—usually for multiple products—and enter the contract data into the system. This is communicated to the operational purchaser, who searches for the matching purchase requisitions and assigns the negotiated contract as source of supply. The ordering process may continue, and the product is delivered based on the contract.

Communication about a new contract for all covered products with the respective purchasers, their search for matching purchase requisitions, and the assignment of the source of supply may be tedious and error-prone. This is a situation we might better solve using intelligent situation handling. Using the Contract is Ready as Source of Supply situation template, the available contract is signaled case by case automatically to the responsible operational purchasers, using notifications including references to the matching purchase requisitions. Clicking on a notification directly leads to the object page of a purchase requisition, with a highlighted situation message—Contract is ready as source of supply for this purchase requisition—and contract data in a Related Information section. The proposed Assign Contract action immediately leads to the assignment of the contract as source of supply and thus to resolution of the situation.

Technological Background

SAP delivers situation templates for predefined use cases in the different application areas (see Figure 4.7). A situation template contains the link to the technical basis, including a reference to the CDS view of a trigger object and to a second CDS view of an anchor object. In some use cases, trigger and anchor objects can be the same. The trigger CDS view contains a query, which prefilters the objects according to the respective use case. The situation handling framework reads situation instances from the trigger object CDS view and assigns them to the anchor object. The association between the trigger object and the anchor object is defined in the CDS view of the trigger object. In the Contract is Ready as Source of Supply example, the new contract is the trigger object and a matching purchase requisition is the anchor object. The situation instance is assigned to the purchase requisition (on the item level), with a reference to the trigger object—the contract.

Situation templates can be understood as context-specific proposals to power users to solve issues in a dedicated process step in a business domain. To use a situation template, a business process configuration expert copies the situation template into a situation type. The expert gives the situation type a name and can specify filter conditions to sharpen the use case—for example, to create situation instances for a certain purchasing group only. Further, the expert can modify the texts that business users receive with notifications and situation messages, including attributes. Lastly, the expert specifies responsibility definitions that select the right users to be informed and chooses relevant notification options and data-tracking options. Finally, the situation type is activated. Situation types can be used to differentiate customer-specific cases from each other, based on different filter conditions on attributes of the trigger object view. In the example, an enterprise may create one situation type for plants in Europe and another situation type for plants in the Asia Pacific region to use different texts and assign different responsibilities, or differentiate by purchasing groups.

The situation template defines the type of triggering, which can be based either on an event or on a technical background job ("batch job"). Every time the evaluation of a situation is triggered, the trigger object view is queried, and the result is additionally filtered with the conditions. All items in the result set are potentially new situations. Using the anchor object, intelligent situation handling evaluates if there is already a situation instance of the specific situation type or if a new situation instance needs to be created. The condition evaluation is part of a complex decision process and leads to situation instances being created, updated, or deleted.

Our example is event-based. As soon as a new contract is created or an existing contract is updated, an event is triggered by the supplier contract business object and the corresponding situation condition is evaluated. If notifications are enabled in the situation type, the situation engine sends notifications for new situation instances to the users responsible. The users are determined by calling the responsibility management service, based on the definitions in the situation type and attributes from the anchor object view. If monitoring is switched on in the situation type, situation instance activities are tracked. Such activities can include the creation, update, or deletion of situation instances or user activities like navigation from a notification to the object page or the selection of a proposed action. If monitoring is activated, updates of situation instances always trigger an event. If a data context definition is available for a situation template, a data context can be written for status changes of the situation instance. The data context contains attributes of the trigger and anchor object views or associated views, which are usually relevant to the user deciding how to solve a situation or are used as parameters for such an action. In the given example, the contract, purchase requisition, supplier, material, amounts, prices, and so on may be part of the data context. The situation handling framework signals the creation of a data context by an event, and it may be fetched using the Business Situation API. The data context is provided as a JSON document.
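To make the data context more tangible, the following fragment indicates what such a JSON document might roughly contain for the contract sourcing example. The structure and field names are invented for illustration; the actual content is determined by the data context definition of the situation template and is retrieved through the Business Situation API.

{
  "situationType": "Z_CONTRACT_READY_AS_SOURCE",
  "anchorObject": {
    "purchaseRequisition": "10000421",
    "purchaseRequisitionItem": "00010",
    "material": "MZ-FG-C990",
    "requestedQuantity": "100",
    "plant": "1010"
  },
  "triggerObject": {
    "purchaseContract": "4600000137",
    "supplier": "17300001",
    "netPriceAmount": "12.50",
    "currency": "EUR"
  }
}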

Figure 4.7

Architecture of Situation Handling in SAP S/4HANA

4.3.3

Intelligent Situation Handling Concept

In SAP S/4HANA, the situation handling framework recognizes issues requiring attention based on situation templates and application data (see the left side of Figure 4.8). Responsible users are derived with responsibility management, using attribute values from the involved business objects; these users are notified using notifications in the SAP Fiori launchpad. Navigation from the notification leads to an object page or a situation page containing related information and solution proposals.

In addition, situations are indicated to all users who have access to the affected business object (the anchor object), with indications in lists and on the respective object page.

Figure 4.8

Intelligent Situation Handling: Overall Concept

After the evaluation process, the situation handling engine triggers an outgoing event using the business event framework to send a business situation event that informs subscribed components of detected, updated, or resolved situations. These events—one situation instance signaled after the other—can be used to start an intelligent automation procedure using rules, for example, on SAP Cloud Platform. These rules can invoke SAP S/4HANA APIs to automate the processing of the situation and signal the automated processing back to the situation handling framework (see the right side of Figure 4.8). Together with the event generated by the situation handling framework—if defined in the respective template and if activated—a data context file containing the values of attributes from the involved business objects is created and provided in the API. This data context can be retrieved as a parameter for intelligent automation, and it can be stored in a data collection.

The more data that is captured in the data collection, the stronger the basis for a detailed analysis to improve the business process and to sharpen the situation handling conditions if required. As soon as a large amount of structured data is collected, data science processes can be used for more sophisticated analysis and as an input to machine learning. The resulting machine learning models can in turn be evaluated within the rules of intelligent automation and can lead to better precision and to a larger number of situations being handled automatically.

4.3.4

User Experience

Business users have similar needs when dealing with issues requiring their attention—for example, their business situations. The user experience concept for dealing with situations in business domains consists of four components:

Observe
Users want to be informed about the situation early.

Orient
Users need to understand the context for decision-making—that is, the business impact.

Decide
Users need to identify an appropriate solution.

Act
Users apply the correct solution.

All these aspects are considered in the UX concept for situation handling, which is based on the SAP Fiori design principles (see Chapter 3, Section 3.1).

Situation Indication

Situations may be indicated in a variety of places across applications. To indicate whether one or several objects in a list are affected by a situation, a situation indicator is displayed next to the affected object. The situation pop-over is an enhancement of the situation indicator and is available in selected use cases. It allows users to learn more about the situation without having to navigate to the corresponding details page. To indicate that an object is affected by a situation, a dedicated SAP Fiori object page section is used (see Figure 4.9). The situation section 1 contains a situation message strip 2 with core information about the business situation, and it includes an option to close the situation. In a subsection for related information, data from other objects that is relevant for the current situation may be displayed 3. Solution proposals may be available for resolving the issue 4. These can be simple actions or more complex solution proposals. If users choose to close the situation, they can select from the following options: Resolved, Obsolete, or Invalid. The situation indication will disappear immediately, and the feedback provided by the user is sent to the situation handling framework for monitoring and subsequent analysis.

Figure 4.9

Situation Indication

Situation Notification

A user can be notified about a situation through the channels supported by the notification framework. These include notifications on the SAP Fiori launchpad or sent via email. Email notifications can be activated individually by each user in the SAP Fiori launchpad configuration. The My Situations app is shown in Figure 4.10. It displays the open situations that are relevant for the user 1, each of them represented by a situation line item 2. The user has the option to close the situation 3, to search the situation list 4, and to navigate to the situation page 5. The app tile on the SAP Fiori launchpad shows the number of open situations 6.

Figure 4.10

My Situations App

Progressive Disclosure

Progressive disclosure is an interactive design pattern used to avoid overwhelming the user with too much information at once. Because the footprint on the UI should be kept reasonable, the technique integrates into the design system (see Figure 4.11). A solid information architecture is the foundation of progressive disclosure. It allows the application of this pattern on several levels. The first level (XS) contains only the situation indicator. The second level (S) adds the title and description. The third level (M) contains the situation preview with the status and processor. The final level (L) is the situation page. It contains all the information about the issue, including related information and solution proposals.

Situation Page

Depending on the requirements of a use case, a dedicated UI, the situation page, may be provided. Contrary to the situation message embedded in the object page with a focus on a particular object (such as a purchase order item), the situation page focuses on the problem. Figure 4.12 shows an example of a situation page.

Figure 4.11

Progressive Information Disclosure (Conceptual Depiction)

Figure 4.12

Situation Page (Conceptual Depiction)

The header of the situation page 1 contains situation data and the status of the situation. The related information section 2 contains data about the affected objects to help users understand the business impact. The system may support users with specific recommendations for resolving a situation 3. 4.3.5

Use Case Examples

With situation handling in SAP S/4HANA, predefined situation templates for different application areas are delivered. The following three examples may give you an impression of possible and available use cases.

Financial Accounting: Goods Receipt/Invoice Receipt Deviation Exceeds Threshold

The goods receipt/invoice receipt (GR/IR) account reconciliation process is an exception-handling process for all purchase order items with differences between goods receipts and invoice receipts. The process is usually triggered by general ledger accountants as part of fiscal period-closing activities. When the goods receipts and invoice receipts match, the clearing run clears the open financial items on the GR/IR accounts. But if there are differences in quantities and amounts between the posted deliveries and the invoices, these open financial items need to be processed manually.

General ledger accountants or managers in general ledger accounting naturally want to know about the company's business transactions, especially when large deviations in account balances or on purchasing documents occur. Previously, they would have to check these deviations manually, which is a time-consuming and error-prone process. The general ledger accountant has to work through a huge list of purchasing document items to prioritize the backlog. Happily, this kind of check can now be automated: situation handling can check for discrepancies and then inform the responsible parties that they need to take a look and resolve the issues.

Research and Development: Project Budget Threshold Exceeded

Project budgeting and budget availability control can monitor the consumption of budgets assigned to projects and prohibit budget overruns in time. With situation handling, the project financial controller is informed immediately about budget overruns so that they can analyze the root cause, plan additional budget allocations, or cancel budget-consuming activities. This leads to increased transparency, reduced manual effort, and an acceleration of corrective measures. Action taken may start with a detailed analysis of cost postings and the overall project financial status. Often, collaboration with responsible users, project managers, and project stakeholders is required to resolve the underlying issues. Budget changes or budget transfers may be carried out or requested. Sometimes, it may be required to temporarily lock the project to prohibit further cost postings.

Sourcing and Procurement: Quantity Deficit in Supplier Delivery

Purchasing materials that are critical to the business are activated for supplier confirmations. If the supplier's current delivery quantity is lower than initially confirmed, notifying the responsible purchaser helps customers to overcome the impact of the delay. Situation handling automatically alerts the responsible purchaser about a quantity deficit for a material delivery, which enables an immediate reaction and prevents late delivery. The purchaser may contact the current supplier by phone or email to find a solution. Alternatively, the volume of an existing purchase order with another supplier can be increased or a new purchase order can be created; a list of proposed alternative suppliers for the same material is provided.

4.3.6

Intelligent Situation Automation

Situation handling offers a continuous improvement process from manual processing to high-quality, human-supported handling of situations, including automated handling of situations based on rules, up to machine-learning-based recommendations and automation. When situation handling is activated, users are proactively supported with notifications, situation indications, and proposed actions. Attribute values are captured together with data about user behavior and user feedback. An analysis of the data captured can provide insights that lead to business process improvements. When machine learning is used to calculate predictions or recognize patterns, situation handling can be used to enhance the scenarios, to surface the results, and to propose actions. An example for procurement is the predicted contract consumption, which can indicate if a supplier contract is consumed faster than expected.

Together with business experts, users with an analytical background may derive rules from business knowledge and data analysis for rule-based automation of the concerned process steps. If required, intelligent robotic process automation may be involved, using the information from the situation type and the captured attribute values from the data context. All data captured may be stored in persistent data storage, such as a data lake, and used for advanced analysis. As soon as a sufficient amount of data has been collected—depending on the algorithms used—the data can be processed by a data scientist and used as input for machine learning with SAP Data Intelligence. The resulting machine learning models can be evaluated with the data from individual situation instances and provide input for the automation of more sophisticated cases.

Machine learning can demonstrate its strengths in cases in which situations occur frequently and enough qualitative data is available—especially in complex or nonobvious cases. When rules are obvious, well-known, or can be derived from analytics—and in cases in which data availability is limited—rules play an important role. And in some cases, a hard transition from one behavior to another is required at a certain point in time—for example, when regulatory changes are coming.

With situation handling, the manual processing of issues requiring attention can go hand in hand with automated processing and machine learning. It can be expected that the automation quota starts small and can be increased step by step with an increasing sophistication of rules and with data availability using machine learning. In cases in which the results of rules or machine learning do not indicate an automatic resolution of a situation (for example, when reaching a threshold, for compliance reasons, or if the confidence is not high enough), the responsible user is asked to take over using situation handling and to drive the resolution manually.

4.3.7

Message-Based Situation Handling

A special variant of situation handling, message-based situation handling, has been implemented for mass data processing, which is usually highly automated and includes sophisticated error handling. However, there can be issues that need to be resolved by human users. Usually, such errors or warnings are written as messages into a log, like an application log, and evaluated by an administrator. The idea is to transform groups of messages that may be created when a certain problem occurs into situations and to directly inform the users responsible, based on attribute values. Instead of writing into a log, the application passes the messages to message-based situation handling using an API.

Message-based situation handling allows users to cluster multiple messages and create a situation template for a problem pattern, such as an Account Locked template. From an action repository, a technical user may choose actions that help to resolve the underlying problem. All situations that result from such a batch process will appear in the My Situations—Message-Based app for the responsible users. A user can review the situation details on a situation page together with proposed actions for processing the situation. Using the proposed actions, the users responsible may instantaneously resolve the situation—which speeds up the process significantly. Alternatively, the situation can be closed without action. For example, one use case for message-based situation handling has been realized for contract accounting (FI-CA) in SAP S/4HANA finance (see Chapter 14, Section 14.7).

4.4

Summary

With advanced analytical capabilities, embedded and side-by-side machine learning, and intelligent situation handling, the SAP S/4HANA architecture provides the foundation to create intelligent ERP applications. We have seen in this chapter that the VDM with its CDS views provides the data access layer not only for transactional processing and UI interaction, but also for the analytics and intelligence frameworks. Enriched by metadata, the CDS views become cube views in analytics, anchor views for situation handling, or views for machine learning. Embedded analytics and embedded machine learning do not require data replication, as they access the business data directly in SAP S/4HANA while using the analytical and machine learning functions of SAP HANA. Enterprise analytics as well as side-by-side machine learning provide additional capabilities.

Intelligent situation handling helps to deal with exceptions in business processes. It identifies exceptional situations and informs the responsible business user. Intelligent situation handling can further use historical data to automate such exception processes.

In the next chapter we describe how SAP S/4HANA can be extended.

5

Extensibility

In addition to business configuration, there are several ways to make SAP S/4HANA fit the requirements of an enterprise or organization. Key user extensibility enables power users to extend SAP S/4HANA, whereas developers may build cloud-native side-by-side extensions.

For enterprise software, there are many reasons for the demand to make the software fit an organization: to extend the scope, to extend the reach, to innovate, to optimize, or to integrate into a given landscape. Not too long ago, extending SAP ERP systems was tightly coupled with the technology platform they were based on, namely the SAP NetWeaver Application Server. Software engineers used this application server to develop and run extensions, mainly using enhancement spots that were baked into the technical architecture. This approach worked well for the use cases it was meant for—running the core and extensions as one single, seamless system. Because all SAP ERP components ran on the same application server and shared the same database instance, extensions were developed in this manner. While this brought simplicity in development, it led to challenges—for example, in handling upgrades. Often, the results were complex systems with many hard-to-handle interconnections between core and extension functionalities. With SAP S/4HANA, this was set to change with the introduction of a new, well-defined extensibility concept that comprises two dimensions: in-app extensibility and side-by-side extensibility. The guiding principles for both are to be loosely coupled, but tightly integrated. In this chapter, we introduce key user extensibility and cloud-native side-by-side extensions as the two ways of extending SAP S/4HANA’s capabilities. You’ll gain a basic understanding of extension scenarios and how to build them.

5.1

Key User Extensibility

Technically, key user extensibility belongs to in-app extensibility. In-app extensibility means that the extensions are made by using predefined enhancement points inside the core application. Both the core and the extension run on the same server and share the same database instance. However, the key differentiator is that the extension is nonintrusive. That is, it doesn’t modify the core objects, but only sits on top, following well-defined extension principles. This is different for the traditional in-application extensibility in ABAP, which refers to all the coding activities that IT departments do in SAP S/4HANA on-premise and the SAP Business Suite applications. Besides these technical aspects, key user extensibility is designed for market requirements: in the cloud, project decisions and implementations happen much faster than in on-premise software. Also, decisions are made by line of business (LoB) units rather than by IT departments. Consequently, extensibility designed for the cloud addresses key users from LoBs as the main personas.

In SAP S/4HANA, a key user is the person who is empowered to adapt an application to the needs of a business, using standard tools provided by SAP. Therefore, whether the user wants to add custom logic to an existing business object, enhance a standard data structure, or make field-level changes to the user interface, she or he does it in a predefined, standard manner. This ensures that all enhancements are done using standard tools, following standard rules. By contrast, in traditional extensibility, enhancements are more or less an open field for developers. With this approach, developers with technical skills are required only in complex extension projects, for expert-oriented tasks such as database design, integration, or coding (Section 5.2). In this chapter, we describe key user extensibility tools and their architectures, as well as the qualities and stability criteria they need to fulfill and the requirements of simplicity and robustness derived from them.

5.1.1

Stability Criteria for Extensibility

Besides its richness of features and the necessary flexibility for nondevelopers, key user extensibility needs to fulfill stability criteria. It’s important to ensure that extensions are future-proof, simple to maintain, and easy to operate. This applies both for on-premise and cloud deployments, but for the cloud, it’s essential to guarantee that the SaaS can be operated smoothly. The most important stability criteria, which must be enforced by the technical implementation, are reliable data interfaces, simplicity, and decoupling.

Reliable Data Interfaces

The first stability criterion is reliable data interfaces. This means that data interfaces—such as APIs, code exits, or VDM views—need to be stable across releases, not only technically, but also from a semantic perspective. Therefore, only a subset of all software artifacts of a SaaS product is exposed to SAP customers and partners to be extended, called, or referenced. These artifacts are part of the public extension model. Within the in-app extensibility key user tools, only the released software artifacts are visible and ready to be extended or to be used by extensions. All other objects are either hidden or only visible for information purposes. Besides the restrictions to key user tools and side-by-side extensibility, this is the major difference between SAP S/4HANA Cloud and the on-premise version from an extensibility perspective. Figure 5.1 shows how artifacts that are released for extensibility can be used. In SAP S/4HANA Cloud, you can only extend and use released artifacts. In the on-premise version, the traditional extensibility options of the ABAP development environment are available. They make it technically possible to extend and even modify everything, resulting in a high coupling between SAP software and custom extensions, which can lead to lower upgrade stability.

Figure 5.1

Extensibility in SAP S/4HANA On-Premise and Cloud

The released objects of the public extension model have underlying stability contracts for a specific purpose, which are documented for SAP customers and partners. The objects may be released for use inside the ABAP platform (for example, for reading a CDS view or using an ABAP class), external use (for example, for a public API), or for extension. In the SAP development process, the stability of these objects is enforced by technical measures. The basis of all contracts is that no released artifacts can be deleted or have their behavior changed. The same applies for their public elements. For in-app extensibility, several types of objects are released in the public extension model. The most important ones are as follows:

- Business contexts define part of an application that can be extended by custom fields or business logic. Each business context points to one business object node that is extensible.
- ABAP Business Add-Ins (BAdIs) allow you to plug custom code into SAP’s ABAP code.
- ABAP classes can be called as local APIs from custom code, such as custom BAdI implementations.
- Data types are used as interfaces for released BAdIs and API classes or for use in type declarations within custom code.
- CDS views can be used as data sources in ABAP code or when you define custom CDS views.

Simplicity

Simplicity is the second of the criteria for stable extensions. Extensibility tools for the key users from LoB units must be easily consumable. Technical objects (OData services, SAP Fiori UIs, custom code, view extensions and associations, and database changes) are generated by tools without further interaction with the tool user. This guarantees cloud-like stability and allows key users to create extensions without the need to understand the technical details.

Loose Coupling between Extensibility and SAP Software

The third stability criterion is loose coupling between customer extensions and the SAP software. This means that the following rules are fulfilled:

- Software updates by SAP can be applied without the need to change the custom code.
- Extensions must not block any SAP software updates.
- A clear separation between the SAP core and extension objects is given, both logically and technically.
- Custom extensions must access SAP S/4HANA resources via publicly released APIs only.
- Modifications of SAP code are neither allowed nor possible.

In SAP S/4HANA, this decoupling is guaranteed for custom artifacts created with key user tools. At runtime, the SAP-provided parts and the custom parts work together in a stable and controlled way. The lifecycles of the design-time artifacts, however, are strictly separated for SAP artifacts and custom artifacts. This way, the loose coupling allows extensions to benefit from the SAP S/4HANA capabilities without interfering with the lifecycle of SAP objects.

5.1.2

Principles of In-App Key User Extensibility

Before we describe the different types of in-app key user extensibility, we need to discuss the principles behind them. For that purpose, we cluster key user extensions into the following categories:

- Content adaptation, such as for print forms, analytical reports, and UI layouts
- Structural extensibility, such as for creating custom CDS views, field extensions, or custom business objects
- Business logic extensibility, such as for checking an order before it’s posted
- Integration extensibility, such as for adding a custom field to an integration API

All these extensibility categories require implementation steps that have an impact on technical artifacts across the complete SAP S/4HANA stack. The key user tools generate the necessary technical artifacts. Depending on the extensibility pattern, the following software artifacts are generated, most of them hidden from the key user: ABAP Dictionary appends, CDS extend statements, CDS views, data elements and domains, custom database tables, custom ABAP code, OData service extensions, custom OData services, UI layout changes, and custom UIs. SAP S/4HANA key user tools are integrated tools that support all the extensibility categories listed previously. They provide a stable and robust way to create complex extensions and hide the technical complexity from the user—following the principle of simplicity. Ahead, we explore the most important extensibility use cases in more detail: field extensibility, integration extensibility, custom business logic, custom business objects, and custom CDS views.

5.1.3

Field Extensibility

In the area of structural or data model extensibility, in-app field extensibility is the most important capability. Field extensibility means that as an SAP customer or partner you can add additional attributes to data in SAP applications. Examples include new fields in invoice headers or purchase order items. In most cases, extension fields are persisted in the same database table as standard fields and have the same default behavior and analytical possibilities as SAP-delivered fields. Key user in-app extensibility in SAP S/4HANA uses SAP object types and SAP object node types to describe the structure of extensible applications. SAP object types and SAP object node types describe the fundamental structures of SAP S/4HANA applications in an implementation-independent way. SAP object types are a catalog of SAP business object definitions, using a common representation that is independent of the framework or technology with which these objects are implemented. Business objects are typically hierarchical structures— consisting of, for example an order header and order items. For SAP object types, the elements of this hierarchy are described by SAP object node types. SAP object types are used not only by key user extensibility but also by other frameworks—for example, for defining business events (see Chapter 6, Section 6.7). To make a specific SAP business object node type extensible—for example, a sales order item—SAP developers must define a business context that points to this SAP business object node. This way, the SAP business object node is marked as extensible and available in key user extensibility tools. A business context not only identifies an extensible SAP object node type. It also includes information about what can be extended for this node type (for example, which CDS views, APIs, or UIs), and it has technical information that tells the key user extensibility tool how to implement the extension technically. For adding custom fields to a specific SAP business object node, the business context has to be assigned—for example, the information about which tables are involved and what internal steps need be performed to implement the extension. When a key user defines an extension field, all necessary software artifacts are generated by the extensibility tool. The extensibility tool enhances the assigned CDS views and database tables with that new field in a modification-free way. Chapter 8, Section 8.1.5 explains in detail how the product master has enabled field extensibility according to the described architecture. In most cases, field extensions apply to all architecture layers (such as the database, business logic, and UI), but isolated extensions are possible as well. Calculated attributes, for example, are not persisted in the database. For integration scenarios, external APIs also need to reflect extensions with custom fields. To allow field mapping along business process lines and between application layers, extensible applications consider these generated extension fields in a generic way so that those fields behave like native SAP fields. In addition, extension fields are visible in other key user tools, like custom CDS views or analytical tools. As key users work in an application-oriented way rather than an API- or code-oriented way, field extensibility can be launched directly from the application user interface. The key user field extensibility tool knows the relation of application user interfaces to
business contexts and thus can reduce the complexity of extensibility to a few simple steps:

1. Definition of field properties like data type, field length, or code list
2. Definition and translation of labels, tooltips, and code list texts
3. Definition of the parts of the application in which the extension field must be considered: print forms, mail templates, further UIs, or APIs
4. Selection of a business process, called a business scenario, which should be completely extended (including field mapping at runtime)

Together with the definition of extension fields, key users can adapt the SAP Fiori UI layout of the extended application, a feature called UI runtime adaptation. Adapting SAP Fiori UIs to business needs and personal needs is a strong desire of users. Hence, business users can personalize SAP Fiori UIs for their own usage and, for example, hide or move elements, add fields, and change labels. Beyond that, key users can do these UI adaptations for the whole system/tenant and all its users. Technically, this mechanism works modification-free, which means it does not overwrite the SAP standard software. UI runtime adaptation is available to all users assigned to the key user business role. The traditional SAP UI technologies support extensibility as well. SAP GUI user interfaces show extension fields dynamically in a specific tab when enabled for key user extensibility. Web Dynpro for ABAP user interfaces can be adapted by the floorplan manager. To give key users an overview of the extensibility capabilities of SAP S/4HANA, launching field extensibility is not restricted to the application-bound way described earlier. The extensibility cockpit is a discovery tool that allows key users to explore extensible applications and their extensibility capabilities (such as extensible CDS views or APIs). Browsing through the connected model of software artifacts, key users can extend applications. If discovery isn’t required, the field extensibility key user tool can be directly called from the SAP Fiori launchpad via the Custom Fields and Logic tile.
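To make the generated artifacts a bit more tangible, the following is a minimal sketch of the kind of CDS view extension the tool creates behind the scenes when an extension field is added. The base view, append name, and field name are assumptions for illustration only, not the output of a concrete generation run.

// Sketch of a generated CDS view extension for a key user extension field.
// All names (base view, append, field) are illustrative assumptions.
@AbapCatalog.sqlViewAppendName: 'ZXSOITEMEXT'
extend view I_SalesOrderItem with ZX_I_SalesOrderItemExt
{
  zz_deliverypriority as ZZ_DeliveryPriority
}

A matching ABAP Dictionary append adds the underlying field to the database table, so the extension field flows through the persistence, the CDS views, and the generated OData services without modifying SAP objects.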

5.1.4

Integration Extensibility

Equally important as UI extensibility is integration extensibility. Extensibility is an essential feature in cross-system integration. It helps to integrate an extended application into highly organization-specific system and application landscapes. This is especially important for the standardized business processes in cloud software. In SAP S/4HANA, integration extensibility provides the following capabilities:

- Adding custom fields to SAP S/4HANA standard APIs (like OData services, SOAP messages, IDocs, or BAPIs). This feature is natively supported by key user field extensibility, and the necessary code artifacts are generated as well.
- Exposing SAP S/4HANA data via custom APIs. For this, you can use key user tools to first create a custom CDS view and then create a remote-enabled service on top of this view. You can expose not only SAP core fields but also custom fields.

Key users may want to enrich the custom fields in APIs, as well as custom APIs, with custom business logic—which leads to the next extensibility option.

5.1.5

Custom Business Logic

Behavioral extensibility comprises every kind of flexibility that changes the behavior of business processes. In that sense, behavioral extensibility complements structural or data model extensibility. In SAP S/4HANA, behavioral extensibility is realized by the custom business logic tool. Key users can leverage it for the following use cases:

- Creation of their own calculations for SAP fields and custom fields. Examples include calculation algorithms for pricing or taxes, case-specific default values, or calculations that determine whether a specific process step needs to be executed.
- Validation, which means any kind of custom checks for SAP fields and custom fields. Validation includes simple value checks (such as checking against code values of a dropdown list) or more sophisticated checks that consider several attributes or states, possibly even external resources. Examples include approvals or a stricter credit limit check before continuing with the next process step.
- Mapping, which means custom fields are mapped in a custom way. This happens between the architecture layers of an application, both inside business processes and in the API-based outside communication of an application. Examples include mapping of internal data representation to the representation expected by business-to-business (B2B) standards or mapping to business-specific engines (such as pricing) that are not relevant for all extension fields.

Technically, custom business logic extensibility utilizes application-specific code exits in the form of BAdIs. These are stable interfaces foreseen by SAP for dedicated extension purposes, which are directly called by the SAP application logic. Developers can implement these interfaces with their own ABAP code. Data transfer into the custom code happens via IMPORTING parameters; data transfer back to SAP business logic occurs through CHANGING parameters. Some BAdIs also support a filter mechanism that allows several BAdI implementations to exist for one interface. At runtime, the filter criteria specify the implementation to be called. To be available in custom business logic, a BAdI must fulfill several requirements: it must be released for cloud usage, allow multiple implementations, and have simple and stable data types. In that case, the BAdI is called a cloud BAdI. The terms cloud BAdI and released for cloud usage must not be misunderstood. They originate from the fact that custom business logic was initially introduced with the cloud in mind, but these terms do not imply that these BAdIs are available in SAP S/4HANA Cloud only. Implementations of cloud BAdIs can be developed in a web editor for ABAP based on SAP Fiori, which offers a user experience that is appropriate for key users. This web editor supports easy exploration of APIs, code completion, documentation for key users, syntax checks and syntax highlighting, and integration with code tracing (which is an alternative to debugging, allowing the tracking of customer-defined variables). It also supports the creation of custom reuse classes so that custom code can be reused in the implementations of several cloud BAdIs.
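To illustrate the pattern, here is a minimal sketch of what the body of such a cloud BAdI implementation could look like. The BAdI interface, method, parameter names, and the CDS view read in the SELECT are assumptions for illustration, not a concrete SAP-delivered BAdI; in the key user web editor, typically only the method body is written, while the surrounding implementation class is generated.

METHOD if_zz_salesorder_check~check_before_save.

  " Data arrives through IMPORTING parameters (here: salesorder) and is
  " passed back through CHANGING parameters (here: messages).
  " In restricted ABAP, reads are limited to released CDS views.
  SELECT SINGLE CreditLimitAmount
    FROM zz_i_customercreditlimit        " assumed released CDS view
    WHERE Customer = @salesorder-soldtoparty
    INTO @DATA(lv_credit_limit).

  " Validation: raise an error message if the order exceeds the credit limit.
  IF salesorder-totalnetamount > lv_credit_limit.
    APPEND VALUE #( messagetype = 'E'
                    messagetext = |Order value exceeds the credit limit| )
           TO messages.
  ENDIF.

ENDMETHOD.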

To ensure the stability of the extensions and the decoupling from the SAP software lifecycle, the cloud BAdI editor restricts what you can do in a BAdI implementation. First, the cloud BAdI editor restricts object access to publicly released SAP objects. Second, the ABAP language version used has a restricted scope of language statements (called restricted ABAP). Data can be selected from released CDS views only and written only via released ABAP APIs. Direct database operations are not allowed at all in custom business logic. In addition, no influence on execution in parallel tasks and no transaction handling is possible (no commit or rollback), no dynamic programming or code generation is supported, and obsolete ABAP language statements have been removed. These restrictions mean that ABAP can be used for extending business logic even in SAP S/4HANA Cloud, while still ensuring robustness, security, and data consistency. Custom business logic is a powerful and flexible tool. However, SAP’s priority is on avoiding the need for custom code as much as possible. This is reflected in field extensibility, which does not require any custom code for the standard use cases. Exposure of custom fields in APIs does not require custom business logic. The key user tool ensures that data is appropriately mapped between APIs and the application. In the same way, the tool performs validation against customer-defined code values behind the scenes. Thus, custom business logic is tailored for use cases with a higher complexity, such as integration scenarios triggering a business process via a call to an external service, maybe by calling an API of a custom application running on SAP Cloud Platform. If the requirements are even more complex, an integration middleware product such as SAP Cloud Platform Integration service is recommended. Using such a middleware product, it is possible to realize more complex extensibility patterns, like data transformation, splitting and combining of messages, message reduction, data enrichment, or filtering and distribution.

5.1.6

Custom Business Objects

It is possible not only to enhance SAP applications with new attributes using field extensibility, but even to add custom business objects for self-contained custom applications. Custom Business Objects is also the name of the separate key user tool which enables key users to manage any kind of data that is needed in an extension application or process logic. Custom business objects can be modeled as a hierarchy of nodes—for example, with a root node for an order and child nodes for the line items. Each business object node contains a set of type definitions with custom fields. As these are custom business objects, they are independent of SAP-defined business contexts and can have associations to other custom business objects. You can also create a loosely coupled one-to-many relationship from the custom business object to SAP business objects. For this, you define a field of the association to an SAP application type. Loosely coupled means that the data type, value help, and navigation for the chosen SAP business object are supported, but the custom business object is not bound to the transactional behavior of the SAP object. Deleting the SAP object instance does not result in deletion of the corresponding custom business object instances.

In contrast to field extensibility, custom business objects lead to new tables in the database. For each custom business object, a set of additional software artifacts is created. This includes CDS views; generated ABAP classes with APIs for Create, Read, Update, Delete (CRUD) operations; OData services; and an SAP Fiori application. This way, data of custom business objects can be created and modified through an application UI or remotely from other applications (see also side-by-side extensions, Section 5.2). The key user tool for custom business objects also allows defining custom code lists that contain values for dropdown lists, as well as for facts or dimensions in analytical scenarios. The standard logic for simple CRUD operations on your custom business object is provided by the infrastructure. However, you can implement certain predefined operations for custom object-specific checks, mappings, or validations. This is done with mechanisms of business logic extensibility (Section 5.1.5).

5.1.7

Custom CDS views

The Virtual Data Model (VDM; see Chapter 2, Section 2.1) uses CDS entities to define the simplified and harmonized, business-oriented, and semantically enriched data model of SAP S/4HANA. Thus, CDS views are the basis for extending the data model, custom analytics, and consuming data in custom logic or side-by-side extensions. With the Custom CDS Views key user tool, you can build your own CDS view based on publicly released SAP CDS views. This CDS view can read data in the combination and granularity needed or with required aggregations, for example. The tool supports joining several CDS views, calculations, and aggregations. Custom CDS views can be exposed as OData services in the key user extensibility tool. When building custom CDS views, you can select from released SAP CDS views to start. Released CDS views exist for all important business objects. If required, you can also extend these SAP views with new fields as described earlier, and then build your custom views on top. Like ABAP Dictionary structures, CDS entities and their metadata are extensible via built-in extension options. These are automatically used and orchestrated by the key user tools. In SAP S/4HANA on-premise, CDS views can also be extended manually. This is expressed with the EXTEND VIEW statement using the Eclipse-based ABAP Development Tools.
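For illustration, the following is a minimal sketch of a custom CDS view defined on top of a released SAP interface view. The view name, SQL view name, and selected fields are assumptions for illustration rather than the output of the key user tool.

@AbapCatalog.sqlViewName: 'ZCSOITEMAMT'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Sales order item amounts'
define view Z_C_SalesOrderItemAmounts
  as select from I_SalesOrderItem    // released SAP CDS view (assumed)
{
  key SalesOrder,
  key SalesOrderItem,
      Material,
      @Semantics.amount.currencyCode: 'TransactionCurrency'
      NetAmount,
      TransactionCurrency
}

A view like this could then be exposed as an OData service or reused as the data source of further custom views, as described above.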

5.1.8

Lifecycle Management

Besides functional aspects, lifecycle management of SAP customer and partner extensions must consider stability qualities—which is an important criterion in the case of SAP S/4HANA Cloud. All key user tools are used by users with the key user role in the quality system or in the development system, depending on whether a two-system landscape or a three-system landscape is used for development. Lifecycle management enables key users to transport their extensions to a quality system for testing (in the case of a three-system landscape) and later to the production system for productive use.

The transport behavior differs between SAP S/4HANA Cloud and on-premise. In the on-premise case, key user extensions are captured by normal transport orders and are handled following the usual practices. Thus, manual, “traditional” extensions can be mixed with key user extensions, according to the individual need. In SAP S/4HANA Cloud, key users create extensions in the test and configuration tenant. From there, they are then transported into the production tenant. This transport of key user extensions can be done by the key users using the Adaptation Transport Organizer (ATO), without the need to interact with administrators from the SAP cloud operations unit. It is important that extensions continue to run after each upgrade or update of SAP software without any further activity. ATO helps to ensure this stability. It automatically enforces the completeness and integrity of the transports and guarantees that transports of custom artifacts and SAP maintenance activities do not happen at the same time. Sometimes you want to transport extensions between different SAP S/4HANA system landscapes—for example, landscapes of different organizations. This is supported by the template approach. With the template approach, it’s possible to collect and package key user extensions and import this package into the quality system of another SAP S/4HANA system landscape. There the extensions can be adapted to individual needs (hence the name template approach), tested, and transported to productive systems. To avoid naming collisions, a namespace concept is used. This approach is mainly intended for SAP partner solutions sold to several SAP customers. In 2020, these solutions have to be “one-shot” solutions, meaning that further imports with corrections or additional functionality are not yet supported.

5.2

Side-by-Side Extensions

As we discussed, in-app extensibility is standardized but has limitations in terms of functional coverage. But your business knows no bounds: this should be true also for the software you trust to run your business. When you are facing such requirements, side-by-side extensions provide your playing field. In this approach, the functionality is developed on SAP Cloud Platform and has tight integration with both SAP S/4HANA on-premise and SAP S/4HANA Cloud. This is a unique proposition: leverage your current investments in SAP S/4HANA, including existing in-app extensions, and build additional functionality on top, using state-of-the-art cloud technologies. Some of the typical side-by-side scenarios are as follows:

- Custom user interface: As the name suggests, you can develop your own web-based user interfaces based on the SAPUI5 library and available patterns therein, following SAP Fiori UX design guidelines. The UI application thus developed can be deployed on SAP Cloud Platform. The application itself can be based on a standard or custom OData service.
- Custom backend application: In this scenario, you develop a full-blown application using the technology of your choice (such as ABAP, Java, or JavaScript), with a custom user interface on top of it. Your application can consume OData services exposed by SAP S/4HANA and leverage existing SAP S/4HANA functionality. One very convenient way to do so is through the SAP Cloud software development kit (SDK; see http://developers.sap.com/topics/cloud-sdk.html) libraries, discussed later in Section 5.2.5.

The real power lies in the fact that nothing stops you from extending objects in the entire technical stack—from database structures or tables to application logic, all the way to the user interface. Furthermore, as stated before, in-app and side-by-side extensibility can be used in combination.

5.2.1

Introduction to Cloud-Native Applications

The term cloud-native is broadly used to identify practices and technologies that enable building and operating state-of-the-art applications on modern cloud infrastructure. These applications are often built based on the microservice architecture style. Teams behind cloud-native applications typically practice DevOps—a culture where all team members work together to develop, deploy, and operate the software. This situation, where the boundary between developing and operating software no longer exists, enables teams to continuously package and deliver productive software to consumers at any point in time. This section introduces microservices, DevOps, continuous delivery, and their merits for building side-by-side extensions to SAP S/4HANA.

Microservices

“Software is eating the world”, as stated by Marc Andreessen. To stay ahead of their competitors, successful enterprises need to build up modern IT that enables them to differentiate and maximize efficiency. As a result, business applications need to fulfill a range of increasingly challenging qualities, which have a major impact on application infrastructure, design, and operation:

- Availability: Business continuity needs to be assured around the clock. Downtimes, no matter if planned or unplanned, are not accepted anymore. Therefore, applications need to use appropriate infrastructure and software design patterns that facilitate high availability.
- Scalability and elasticity: Usage intensity and data amounts vary over time and between businesses. Therefore, applications need to be designed in a scalable manner so that they can deal with different usage intensities and data amounts. By elastically scaling software components to the current demand, the user experience can be maximized while at the same time driving down costs due to increased resource efficiency.
- Resilience: All complex systems will eventually face issues during their operation. Therefore, it’s crucial to build systems that anticipate failures and ensure that the user experience stays intact as much as possible, even under harsh conditions.
- Evolvability: Market conditions change, and technology evolves quickly. Accordingly, the long-term success of business applications is bound to the ability to quickly introduce new features and continuously modernize internal structures with minimal toil.

Combining modern cloud-native infrastructure and application design approaches helps to achieve the qualities we discussed for SAP S/4HANA extensions. Cloud-native infrastructure, such as Cloud Foundry or Kubernetes, enables us to make applications available around the globe and to continuously update them with zero-downtime deployments that do not lead to a reduction in service availability. Building applications based on a microservice architecture helps unlock even more cloud benefits. Microservices are relatively small, distributed parts of an overall software system, connected to each other via well-defined APIs. Each of them is built and operated independently. Successfully building a microservice-based SAP S/4HANA extension critically depends on a suitable decomposition of requirements into microservices. Proper boundaries can be derived from a mix of various factors:

- Business domain boundaries: Microservices are well-suited to decompose large, complex application domains into smaller, decoupled units using design techniques such as domain-driven design.
- Organizational boundaries: High organizational and technical coupling slows down development velocity. Therefore, the developers working on a software artifact should be limited to a small team. Conway’s law states that organizations tend to design systems that mirror their own communication structure (see http://www.melconway.com/Home/Committees_Paper.html). Following Conway’s law, microservices enable a tight alignment of organizational structure and technical system architecture. Due to the decoupled nature of microservices, teams can conduct major changes with high confidence, achieving a high degree of evolvability.

- Scaling boundaries: Microservices enable fine-tuning the application architecture to observed usage patterns. Components that experience a high load can be deployed and scaled separately. This helps to achieve elasticity and low costs since overprovisioning can be avoided. Microservices typically come with a low baseline of resource consumption, which further reduces costly overhead.
- Technology boundaries: In the space of available programming languages and technology stacks, there is no silver bullet for all problems. Therefore, it’s essential to achieve a good fit among the problem to solve, the technology available, and the skills within the team. In a microservice architecture, teams can build services with their favored technologies without being forced into a predetermined software stack.

Even the smallest side-by-side extension just consisting of one service that integrates with SAP S/4HANA embodies a distributed system. Eventually, at some point in time, things will go wrong in the communication between the extension and SAP S/4HANA—for example, because of network issues or system downtime. To mitigate this issue, cloud-native applications need to be designed explicitly for failure. Suitable design techniques and technologies help us to build systems that are resilient against temporary problems in their involved (micro)services. Examples include decoupling via asynchronous message queues and the implementation of resilience patterns such as bulkheads, circuit breakers, or caches.

DevOps and Continuous Delivery

Unleashing the potential of cloud-native SAP S/4HANA extensions requires not only new technologies but also an adjusted engineering culture. Studies have shown that adopting DevOps helps organizations to be significantly more successful than those that stick to traditional development approaches when moving to the cloud (Forsgren et al., 2019 State of DevOps Report, https://services.google.com/fh/files/misc/state-of-devops-2019.pdf). The idea behind DevOps is to bring the development and operations of software close together. DevOps is not a rigid methodology but rather a set of three principles that should guide the practices of development teams:

- A high flow of work
- Early feedback
- Continuous learning and experimentation

Working software only creates value once it reaches the hands of its end users. Therefore, software organizations must ensure a high flow of work from idea through engineering to the deployment of new versions. Teams should focus on small value-adding changes and work on them from end to end until they are ready to be released. On this path, teams need to actively build in feedback loops that help to catch problems and errors at a point where they are still cheap to fix. For development, this implies the heavy use of automated testing. This helps to establish a safety net within the application that allows you to make high-impact changes with high confidence. Only by taking quality automation investments seriously can teams keep up a high flow of work in the long term. Last but not least, DevOps teams should continuously challenge the status quo. As the cloud-native technology ecosystem evolves, new approaches and technologies might arise that help to address challenges in a better manner. Innovative experiments and continuous improvements, also via codified learnings, should be regular activities of DevOps teams.

A key practice of DevOps teams is continuous delivery. This is the development, testing, and deployment of software in short cycles that are carried out in a reliable and fully automated manner. The process of delivering a software change is automated by a continuous delivery pipeline. It automatically builds the new software version and runs test suites, static code checks, and security validations. Finally, if all steps went well, it deploys the new version. In other words, a continuous delivery pipeline embodies the codified knowledge of the software validation and delivery process of a team. Compared to manual processes, it is extremely fast, cheap, and reliable. The SAP Cloud SDK offers support for continuous delivery through a containerized continuous delivery pipeline and corresponding infrastructure. With the SAP Cloud SDK continuous delivery pipeline, teams can establish a fully functional cloud-native development setup from day one of a project. An example of such a continuous delivery pipeline is shown in Figure 5.2.

Figure 5.2

Continuous Delivery Pipeline of SAP Cloud SDK

The SAP Cloud SDK pipeline helps to efficiently build and deliver cloud-native extensions while ensuring quality at all times. The SAP Cloud SDK continuous delivery pipeline is fully open source. More information can be found at https://github.com/SAP/cloud-s4-sdk-pipeline.

5.2.2

SAP Cloud Platform and Programming Models

SAP Cloud Platform is a platform-as-a-service (PaaS) offering, which provides application development, deployment, and operation capabilities. On SAP Cloud Platform, you can build cloud-native applications that integrate with on-premise and cloud systems and extend their functionality without any disruption to the stable core. SAP Cloud Platform offers the following environments for development and operation of your cloud-native services or applications:

- SAP Cloud Platform, Cloud Foundry environment
- SAP Cloud Platform Extension Factory, Kyma runtime

Let’s discuss these further.

5.2.3

Cloud Foundry Environment

This environment brings you the Cloud Foundry application runtime, which is an open-source application platform offered by the Cloud Foundry Foundation (https://www.cloudfoundry.org/foundation). SAP Cloud Platform, Cloud Foundry environment comes with buildpacks that provide a number of technology options for development, such as Java, Node.js, or “bring-your-own-language” options. The Cloud Foundry environment is offered across many data centers, covering major geographies of the world.

5.2.4

Kyma Runtime

SAP Cloud Platform Extension Factory, Kyma runtime lets you build cloud-native applications based on Kubernetes (https://kubernetes.io), an open-source container management solution for managing, scaling, and automating deployment of your container-based applications. You can combine serverless functions, microservices, and custom container images in your development. The Kyma runtime also has several useful built-in capabilities, such as an event bus or a service mesh, which you can use in your application. Three of the main components of the Kyma runtime are: the application connector, which lets your application connect with a given Kubernetes cluster; the service catalog, via which you can expose your APIs and events and consume services offered by hyperscalers (Google Cloud Platform, Microsoft Azure, Amazon Web Services); and the serverless option to develop application extensions, in which functions are triggered by events or API calls. The SAP Cloud SDK supports both runtime environments: SAP Cloud Platform, Cloud Foundry environment and SAP Cloud Platform Extension Factory, Kyma runtime. In the following section, we describe how you can use the SAP Cloud SDK together with SAP S/4HANA.

5.2.5

Integrating with SAP S/4HANA Using the SAP Cloud SDK

Let’s briefly recall the cloud-native qualities from Section 5.2.1: availability, elasticity, evolvability, scalability, and resilience. A good programming model and platform makes it easier for applications to score high on these qualities. SAP Cloud Platform, with its platform services offerings, does a lot of heavy lifting to bring these cloud qualities to life. Still, it’s left to the application to implement the services and interfaces offered by SAP Cloud Platform. This results in lots of boilerplate code, which is an antipattern to efficient programming. This is where the SAP Cloud SDK comes into the picture. SAP Cloud SDK offers libraries to ease the development of cloud applications on SAP Cloud Platform. These
libraries communicate with other SAP solutions and services, such as SAP S/4HANA Cloud, SAP SuccessFactors solutions, and OData services, just to name a few (see Figure 5.3).

Figure 5.3

Extensions for the Intelligent Enterprise: Role of SAP Cloud SDK

In other words, SAP Cloud SDK is your one-stop shop to overcome recurring programming challenges while building side-by-side extension applications on SAP Cloud Platform. At the same time, SAP Cloud SDK is fully compatible with the SAP Cloud Application Programming Model (see https://cap.cloud.sap/docs/), which is SAP’s recommended approach to developing enterprise-grade services and applications on SAP Cloud Platform. You can download the SAP Cloud SDK at https://developers.sap.com/topics/cloud-sdk.html. The SAP Cloud SDK comes with two variants—one for Java and one for JavaScript/TypeScript—and provides libraries, project templates, and a continuous delivery pipeline that you can use immediately. Now, let’s touch on some of the details of how SAP Cloud SDK makes development and delivery of extension applications on SAP Cloud Platform so much more convenient. We’ll consider a simple SAP S/4HANA Cloud side-by-side extension application that displays business information that it fetches from an SAP S/4HANA Cloud system in real time, by consuming the respective public OData API, published on SAP API Business Hub (https://api.sap.com; see also Chapter 6, Section 6.2). At a minimum, this application will need to perform the following actions, in addition to its own local application and data-fetching logic:

- Connect with SAP S/4HANA Cloud
- Log in to the tenant with the right authentication keys
- Consume the needed public API with the required authorizations in place

Let’s see how the SAP Cloud SDK helps in achieving this.

Connectivity

SAP Cloud Platform provides the destination service, which your app can use to reach an SAP S/4HANA Cloud tenant (or any other cloud service or system). To use the destination service, you have to carry out the following steps:

1. Retrieve login credentials from environment variables stored in the destination service.
2. Using these login credentials, generate a JSON web token (JWT) through the SAP Cloud Platform user account and authentication (UAA) service. Add this JSON web token to the HTTP request header for authentication.
3. Formulate the service endpoint you wish to call and send the HTTP request.

That is, to call an external system, you construct the HTTP request yourself, adding the respective headers and security information manually. In addition, you also need to perform all tasks related to the handling of the HTTP request, such as building the request URL, establishing the connection, handling the response, and parsing it. With the SAP Cloud SDK, however, you can get the same result in a single statement by using the DestinationAccessor class of the SDK, which gives you an instance of the SAP Cloud Platform destination service:

final ErpHttpDestination destination = DestinationAccessor
    .getDestination("MyErpSystem")
    .asHttp()
    .decorate(DefaultErpHttpDestination::new);

Now you can use this destination instance in your OData call without the need to carry out the manual steps stated previously.

Authentication

Every application needs to be secure. SAP Cloud Platform’s Cloud Foundry environment helps you make your application secure by providing an application router (AppRouter) that does the following:

- Serves as the single entry point for your application, which can comprise multiple microservices. Any incoming request needs to be abstracted from the complexity arising from the several microservices at play. AppRouter does just that by acting as a reverse proxy and dispatching incoming requests to the right microservice.
- Manages the authentication flow for the application by initiating an OAuth2 flow with the extended services for user account and authentication (XSUAA) service of SAP Cloud Platform, Cloud Foundry environment. The XSUAA is an SAP-specific extension of Cloud Foundry’s UAA service to handle user authentication and authorization with SAP’s standards in place. To achieve this, the XSUAA service fetches a JSON web token that embodies the user’s credentials and the application scopes he or she possesses. For a detailed explanation, see the tutorial available at https://developers.sap.com/tutorials/s4sdk-secure-cloudfoundry.html.

With the SAP Cloud SDK, you can achieve much more. For example, using the getCurrentToken method of the AuthTokenAccessor class, you can get the current authorization token, which further contains the JSON web token from the authorization header of the HTTP request.

Similarly, the getCurrentTenant method of the TenantAccessor class gives you access to the current tenant from different sources, such as the JSON web token in the authorization header or the bound XSUAA instance. If it fails to do so, it provides elegant exception handling. In summary, using SAP Cloud SDK libraries gives you convenient access to a number of XSUAA parameters that you would have had to fetch manually otherwise. For further information, visit https://developers.sap.com/topics/cloud-sdk.html.

API Consumption

Now that the user knows where to go (Destination) and has been successfully authenticated (AppRouter), she or he needs to get the required information for further processing. To achieve this, SAP S/4HANA Cloud provides public APIs through the SAP API Business Hub. Let’s now discuss how the SAP Cloud SDK makes consumption of these APIs far simpler than otherwise would be the case. We’ll look at the Business Partner OData API (API_BUSINESS_PARTNER) as our example. You can find it at the SAP API Business Hub at https://api.sap.com/api/API_BUSINESS_PARTNER/resource. To access this API in your cloud application, you need to get the exact API semantics and traverse through the exposed entities and attributes. Programmatically, this may pose a challenge as you have to switch between the API description and the programming environment, with frequent manual context switching. To enable easy API consumption with type-safe and fluent programmatic access, SAP offers the SAP Cloud SDK, which comes with API-specific libraries. These libraries are class representations of these APIs that you may import in your application and use directly in a type-safe manner. In our example API (API_BUSINESS_PARTNER), the corresponding library class is DefaultBusinessPartnerService, also known as the SAP Cloud SDK VDM, which implements the BusinessPartnerService interface. This and all other library classes, representing all public APIs of SAP S/4HANA, are available as standard in the SAP Cloud SDK. Now that we’ve taken a brief look at the value that the SAP Cloud SDK brings to the development of side-by-side extensions, let’s bring it all together in the next section.

Build the Application Logic for Resilience

You’ve seen the connectivity, authentication, and API consumption capabilities of the SAP Cloud SDK. Now let’s put them into action. Recall how you accessed the SAP Cloud Platform destination service through the SAP Cloud SDK libraries:

final ErpHttpDestination destination = DestinationAccessor
    .getDestination("MyErpSystem")
    .asHttp()
    .decorate(DefaultErpHttpDestination::new);

You may now use this destination to retrieve information from SAP S/4HANA through the SAP Cloud SDK VDM directly (DefaultBusinessPartnerService). However, connections to remote systems are often unstable. Many things can go wrong between your request for data and the response, such as temporary system unavailability or broken network nodes. In such situations, we always recommend making your call in a resilient manner. In short, resilience is the ability of your application to elegantly handle failure and not go down when one of the downstream processes fails to respond in a timely manner. With the SAP Cloud SDK, you can wrap your calls into resilient ones by using the ResilienceDecorator class. To do so, start by configuring your resilient call using the ResilienceConfiguration class:

final ResilienceConfiguration myResilienceConfig = ResilienceConfiguration
    .of(BusinessPartnerService.class)
    .isolationMode(ResilienceIsolationMode.TENANT_AND_USER_OPTIONAL)
    .timeLimiterConfiguration(ResilienceConfiguration.TimeLimiterConfiguration.of()
        .timeoutDuration(Duration.ofMillis(10000)))
    .bulkheadConfiguration(ResilienceConfiguration.BulkheadConfiguration.of()
        .maxConcurrentCalls(20));

In this configuration, among other configuration choices, we have specified a timeout duration of ten seconds, with no more than 20 concurrent calls. You can adapt these parameters according to the needs of your application. Next, we encapsulate the actual call to the BusinessPartnerService in a new method called run, like this:

private List<BusinessPartner> run() {
    return businessPartnerService
        .getAllBusinessPartner()
        .select(BusinessPartner.BUSINESS_PARTNER,
                BusinessPartner.LAST_NAME,
                BusinessPartner.FIRST_NAME,
                BusinessPartner.IS_MALE,
                BusinessPartner.IS_FEMALE,
                BusinessPartner.CREATION_DATE)
        .execute(destination);
}

Now, with myResilienceConfig, we call a chosen method of the ResilienceDecorator class (here we use the executeSupplier method):

final List<BusinessPartner> businessPartners = ResilienceDecorator
    .executeSupplier(this::run, myResilienceConfig,
        e -> { return Collections.emptyList(); });

Note the use of the run method, which makes the actual call to the BusinessPartnerService VDM library. A word about the BusinessPartnerService: it follows the builder pattern and works with the execute method that takes the given destination as input. For simplicity, we have omitted statements for class imports, requisite variable declarations, and exception handling. For a complete example, visit https://developers.sap.com/tutorials/s4sdk-resilience.html.

In practice, you can achieve much more. For example, you can add various query options while fetching the data, like this:

.filter(BusinessPartner.BUSINESS_PARTNER_CATEGORY.eq(CATEGORY_PERSON))
.orderBy(BusinessPartner.LAST_NAME, Order.ASC)
.top(200)
.execute(destination);

Now, with the businessPartners object, you can perform all CRUD operations, including deep insert. In addition, if you have a custom OData service, you can use the VDM generator of the SAP Cloud SDK to generate your OData service-specific library. For Java projects, this means a new Maven dependency with some configuration. Thereafter, you generate your library with a simple mvn clean install command. The only input you need is the API descriptor file in EDMX format. To see end-to-end instructions, please visit https://blogs.sap.com/2018/04/30/deep-dive-10-with-sap-s4hana-cloud-sdk-generating-java-vdm-for-s4hana-custom-odata-service. Once your library has been generated, you may use it as you would use any other library available in the standard SAP Cloud SDK.

5.3

Summary

The power of SAP S/4HANA can be further enhanced by developing specific functionality on top of it, known as extensions, without disrupting the stable core. In this chapter, we discussed how SAP S/4HANA treats extensibility as a top priority in both of its deployment options (cloud and on-premise), by providing flexibility and yet standardization in extension development. Key user extensibility is meant for quick enhancements within the application scope, whereas side-by-side extensibility offers you high flexibility and lets you develop an extension application the way you like. Based on your requirements, you can choose what works best for you, including a combination of the two. For side-by-side extensibility in particular, using the SAP Cloud SDK takes a lot of development complexity away, giving you an opportunity to focus more deeply on your application logic instead.

To give you some guidance for what type of extensibility to apply in a given case, we offered some criteria to consider. There are situations in which key user extensibility fits best. It is the first choice when the focus is on simplicity, context-aware extensions, and tight integration. Here are some examples:

- Same transactional context: The extension should be part of a transaction within SAP S/4HANA, and the integration into SAP S/4HANA will ensure transactional integrity and low latency. Examples include an additional check before creating a sales order or determining a currency exchange rate at the time of posting an accounting document.
- Tight data integration: SAP data and custom data should be deeply integrated with each other to leverage the full power of SAP HANA for data retrieval and reporting. An example would be analytics with customer-defined dimensions.
- Data exposure and consumption: An extension can add custom APIs to SAP S/4HANA or extend existing SAP S/4HANA APIs—for example, to expose data from SAP S/4HANA in a new combination, in a new aggregation, or as extended with custom data. This could be done for integrating third-party applications or for exposing data from SAP S/4HANA to a side-by-side application. Adding or extending SAP S/4HANA APIs can, of course, be done with key user extensions only.

However, there are cases in which tight integration is not required, and the custom application works in a mostly self-contained way. In these cases, side-by-side extensibility is the better choice. Typical use cases in which side-by-side extensibility should be preferred include the following:

- You want to utilize the benefits of loose coupling—for example, independent scalability or completely separated lifecycles. You also want the freedom to use a different architectural style, technology stack, or programming language, and to use open-source software, for example.
- You want to extend the reach beyond the typical users of an ERP system—for example, via customer portals, mobile applications, consumer applications, or applications for specialists.
- You want to complement and combine data and business services from different backend systems.
- You want to connect an extension to several applications that are built on different technology stacks.

It’s difficult to define common rules for when to use key user extensibility and when to use side-by-side extensibility. Very often, the choice will depend on the details of your project, the skills of your team, the history of existing extensions, and the project roadmap. In many cases, a combination of both techniques will be the best choice: implement the parts that need to be close to the SAP S/4HANA applications or their data as in-app extensions (for example, to expose data), and implement the rest as side-by-side extensions on SAP Cloud Platform. In the next chapter, we look into the different integration techniques available for SAP S/4HANA.

6

Integration

This chapter gives an overview of the integration technologies that can be used to integrate SAP S/4HANA with other applications and services. This includes interface technologies, cloud to on-premise communication, integration middleware, event-based integration, and data integration. We also discuss the simplified communication management in SAP S/4HANA Cloud.

Software applications, and especially ERP solutions, live in a connected world. The ability to integrate ERP into business processes that span multiple software applications, even applications owned by different organizations, is a basic requirement.

6.1

SAP S/4HANA Integration Interface Technologies

In SAP S/4HANA, the entire set of integration technologies of the ABAP technology stack, such as SOAP services, OData services, IDocs, BAPIs, and RFCs, is available. This section provides a brief overview of these technologies and discusses the strategy for interfaces in SAP S/4HANA.

6.1.1 OData Services

OData APIs are resource-oriented APIs that allow you to query and modify data models with entities that are connected by navigation relationships. OData services expose their metadata in the entity data model (EDM) format. OData entities and their elements can be annotated with additional metadata. For more on how to create OData services with the ABAP RESTful application programming model, see Chapter 2, Section 2.2.2. OData services in SAP S/4HANA are currently synchronous services, in which the result of the operation is returned immediately in the body of the HTTP response. For example, when the client sends a POST request to create an entity instance, the service creates the new instance in the server and then returns its representation with the response. SAP is working on the option to also use asynchronous OData services in the future.
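To make the request/response pattern concrete, the following sketch queries and creates entities through an OData V2 service from Python. It is a minimal illustration only: the host, the credentials, and the choice of the API_SALES_ORDER_SRV service and its fields are assumptions for the example, not a prescribed integration approach; consult SAP API Business Hub for the actual service definitions.

```python
# Minimal sketch: reading and creating entities through an SAP S/4HANA OData V2 service.
# Host, credentials, service path, and payload fields are illustrative placeholders.
import requests

BASE = "https://my-s4-system.example.com/sap/opu/odata/sap/API_SALES_ORDER_SRV"
AUTH = ("COMM_USER", "secret")  # basic authentication with a communication user

# Read: OData query options such as $filter, $select, and $top are passed as URL parameters.
resp = requests.get(
    f"{BASE}/A_SalesOrder",
    params={"$filter": "SoldToParty eq '10100001'", "$top": "5", "$format": "json"},
    auth=AUTH,
)
resp.raise_for_status()
for order in resp.json()["d"]["results"]:
    print(order["SalesOrder"], order["TotalNetAmount"])

# Write: the call is synchronous, so the created entity comes back in the response body.
# The ABAP server expects a CSRF token that is fetched with a prior GET request.
token_resp = requests.get(f"{BASE}/A_SalesOrder", params={"$top": "1"},
                          auth=AUTH, headers={"x-csrf-token": "fetch"})
csrf = token_resp.headers["x-csrf-token"]
create = requests.post(
    f"{BASE}/A_SalesOrder",
    json={"SalesOrderType": "OR", "SoldToParty": "10100001"},  # required fields depend on the service
    auth=AUTH,
    headers={"x-csrf-token": csrf, "Accept": "application/json"},
    cookies=token_resp.cookies,
)
print(create.status_code)
```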

6.1.2 SOAP Services

SOAP services are services that use the HTTP protocol and the XML-based SOAP message format. The operations provided by the SOAP service and the schemas for input and output data are specified using the Web Services Description Language (WSDL). In the ABAP environment, SOAP services can be defined in the Enterprise Services Repository or in the backend repository of the ABAP application server. For custom services, there is also the option to generate SOAP services from existing ABAP function groups and BAPIs. In SAP S/4HANA, SOAP services are typically used for asynchronous, reliable, and transactional communication, with one-way messages that have no response. Think of a buyer system that sends an order to a supplier system as an asynchronous message.

The buyer system sends the message without waiting for a response, and the reliable messaging infrastructure ensures that the message gets delivered, possibly at a later point in time. If the supplier system wants to send an order confirmation, it sends it as another one-way message through an asynchronous service provided by the buyer system. Synchronous SOAP services are used only in exceptional cases.
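For those exceptional synchronous cases, a SOAP service can be called directly from its WSDL description. The sketch below uses the Python zeep library; the WSDL URL, the operation name, and the credentials are placeholders and do not refer to a specific SAP S/4HANA service. Asynchronous one-way messages, the normal case in SAP S/4HANA, are typically sent through middleware rather than called like this.

```python
# Minimal sketch: a synchronous SOAP call with zeep. WSDL location, operation name,
# and credentials are illustrative placeholders.
from requests import Session
from zeep import Client
from zeep.transports import Transport

session = Session()
session.auth = ("COMM_USER", "secret")  # basic authentication

client = Client(
    "https://my-s4-system.example.com/sap/bc/srt/wsdl/some_service?wsdl",  # assumed URL
    transport=Transport(session=session),
)

# The WSDL describes the operations and the schemas of their input and output data;
# zeep exposes each operation as a method on client.service.
result = client.service.QueryCodeList(LanguageCode="EN")  # hypothetical operation
print(result)
```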

6.1.3 Remote Function Call

The traditional mechanism for remote communication is the remote function call (RFC), SAP's protocol for calling a remote-enabled ABAP function module. As an alternative to communication based on the native RFC protocol, it's possible to perform RFCs over WebSocket, especially if only HTTP connectivity is available. Several types of RFCs exist. The synchronous RFC is executed at the time it is called and the output is received immediately. The transactional RFC and queued RFC can be used for asynchronous reliable communication. See the SAP documentation for ABAP at help.sap.com for details.

6.1.4 BAPIs

Business Application Programming Interfaces (BAPIs) are stable, documented interfaces that are defined as methods of object types that are listed in the Business Object Repository (BOR). Technically, BAPIs are implemented as RFCs. BAPIs are typically called as synchronous RFCs so that the calling party immediately receives the output.
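The following sketch shows what such a synchronous BAPI call can look like from Python with the pyrfc library, the Python bindings for the SAP NetWeaver RFC SDK. The connection parameters and the choice of BAPI_COMPANYCODE_GETLIST are illustrative assumptions, not a recommended integration pattern.

```python
# Minimal sketch: calling a BAPI as a synchronous RFC with pyrfc.
# Host, system number, client, and credentials are placeholders.
from pyrfc import Connection

conn = Connection(
    ashost="s4host.example.com",  # application server host
    sysnr="00",                   # system number
    client="100",
    user="RFC_USER",
    passwd="secret",
)

# The call executes immediately; tables and structures of the function module
# are returned to the caller as Python lists and dicts.
result = conn.call("BAPI_COMPANYCODE_GETLIST")
for company in result["COMPANYCODE_LIST"]:
    print(company["COMP_CODE"], company["COMP_NAME"])

# BAPIs that change data do not commit on their own; a writing scenario would
# finish the logical unit of work with an explicit call to BAPI_TRANSACTION_COMMIT.
conn.close()
```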

6.1.5 IDoc

An intermediate document (IDoc) is a standard SAP format for asynchronous message exchange between systems. IDocs standardize the data that is exchanged, independent of the method used for data transfer. IDocs can be transferred via transactional RFCs, via HTTP using an XML encoding, or via SOAP over HTTP. IDocs are used in Electronic Data Interchange (EDI) between organizations and for asynchronous messages in an Application Link Enabling (ALE) scenario within an organization. ALE uses IDocs for asynchronous communication that changes data and synchronous BAPI calls for reading data.

6.1.6 SAP S/4HANA API Strategy

While traditional interfaces such as RFC, BAPI, and IDoc are supported in SAP S/4HANA, the SAP strategy is to focus on SOAP and OData APIs for new development. SOAP services will be used for asynchronous messages (where IDocs would have been used in the past), and OData services will be used for synchronous operations on resources (where BAPIs would have been used in the past). SAP has already applied this strategy for SAP S/4HANA Cloud. Here SOAP services and OData services are already the preferred API technologies and SAP’s strategy is to not implement new BAPIs, RFCs, and IDocs for the cloud. SAP has released a controlled set of scenarios with traditional interface technologies (IDoc, BAPI, RFC) also for SAP S/4HANA Cloud. This was done to allow SAP customers to leverage existing

and implemented integration scenarios. In these scenarios, RFC communication between cloud and on-premise systems in both directions is enabled by Cloud Connector (Section 6.5).

6.2

SAP API Business Hub

SAP API Business Hub is SAP's public directory of public APIs and various other types of content—for example, integration content, business events, CDS views, and reference business processes. SAP API Business Hub is available at https://api.sap.com. In the context of SAP S/4HANA integration technology, the following content is especially interesting:

- The catalog of public SAP S/4HANA APIs (SOAP and OData). APIs that are not listed here are not released by SAP as public APIs. Examples of nonpublic APIs are application-local services that are called by the user interface of an SAP S/4HANA application. Such services are not listed on SAP API Business Hub, have no compatibility contract, and can be changed or removed at any time.
- Predefined integration content for SAP Cloud Platform Integration Suite.
- The catalog of SAP S/4HANA Cloud business events that can be consumed through SAP Cloud Platform Enterprise Messaging.

For SOAP APIs, you can find the online reference documentation and download the WSDL definition. For OData APIs, you can browse the API specification online and download the specification in OpenAPI format and Entity Data Model format. Further details include, for example, information about supported authentication methods and about scope items that need to be activated for using the API, as well as links to business documentation in the SAP help portal. From SAP API Business Hub, you can try out OData APIs by calling a shared SAP S/4HANA Cloud tenant provided by SAP. You can even try out the APIs against your SAP S/4HANA Cloud instance after you have configured the required credentials in SAP API Business Hub. Released BAPIs and IDocs are not published on SAP API Business Hub.
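As a hedged illustration of the try-out option, the sketch below calls an OData API against the shared sandbox tenant from Python. The sandbox base URL, the API path, and the APIKey header used here are assumptions for the example; the "Try Out" view of each API on https://api.sap.com shows the exact values and lets you generate the key.

```python
# Minimal sketch: trying out an OData API against the SAP API Business Hub sandbox tenant.
# Sandbox URL, API path, and header name are assumptions; check the API page on api.sap.com.
import requests

API_KEY = "<your API key from api.sap.com>"
SANDBOX = "https://sandbox.api.sap.com/s4hanacloud"

resp = requests.get(
    f"{SANDBOX}/sap/opu/odata/sap/API_BUSINESS_PARTNER/A_BusinessPartner",
    params={"$top": "3", "$format": "json"},
    headers={"APIKey": API_KEY},
)
resp.raise_for_status()
for bp in resp.json()["d"]["results"]:
    print(bp["BusinessPartner"], bp.get("BusinessPartnerFullName", ""))
```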

6.3

Interface Monitoring and Error Handling with SAP Application Interface Framework

In this section, we discuss the core concepts of SAP Application Interface Framework, an infrastructure for interface monitoring, error handling, and processing in ABAP-based systems. In SAP S/4HANA Cloud, all asynchronous APIs of the ABAP backend are integrated with SAP Application Interface Framework for monitoring and error handling. This means that for these interfaces SAP already provides the necessary configuration for SAP Application Interface Framework. For SAP S/4HANA on-premise, this is not the case for most applications. However, as an on-premise user, you can configure interfaces to be monitored and processed by SAP Application Interface Framework, and you can develop your own interfaces with the framework. Using SAP Application Interface Framework in an on-premise system requires an additional license.

The different interface technologies of the ABAP application server come with different tools for monitoring and error handling. These tools are technology-specific and operate on a technical level, so they might be hard to understand unless you are a technology expert. SAP Application Interface Framework provides uniform monitoring, alerting, and error handling, which operates on a functional level. Users can, for example, correct business-level errors without having to deal with the underlying technology.

Figure 6.1 shows the conceptual architecture of SAP Application Interface Framework. For asynchronous inbound and outbound calls, the supported interface technologies are SOAP services, IDocs, and RFCs. For asynchronous outbound calls, OData is supported as an additional interface technology. For synchronous OData calls, only error monitoring is provided.

Incoming calls are processed by the corresponding integration interface runtime of the ABAP application server—for example, the web service runtime. When SAP Application Interface Framework is configured to be used for a specific interface, the interface runtime first registers the call with the monitoring component of the framework. The framework is now aware of the call and can show it in the monitoring application. Then the interface runtime calls the actual application-specific request processing. The application processes the call and writes progress and errors to the application log. When processing is finished or fails, control is returned to the interface runtime. In case of errors, the interface runtime logs the error information in its log. Then the interface runtime again calls SAP Application Interface Framework to tell it the status of the call. The framework updates the status in its internal monitoring data and writes additional log messages in case of errors. Monitoring can be configured to trigger alerts so that users get notified about errors.

Figure 6.1

SAP Application Interface Framework

With the monitoring functionality of SAP Application Interface Framework, users can see the calls with their status and details about errors that might have occurred. SAP Application Interface Framework reads the details from the logs and displays them according to configuration provided by the application. When an error occurs on the business level, users can try to resolve it—for example, by providing some missing data or business configuration. Via configuration, applications can define direct links to business applications in which such corrections can be made.

Asynchronous messages are stored in queues by the interface runtimes. SAP Application Interface Framework can read the message payload from the different interface runtime queues and display it to the user in a user-friendly way. Users can edit the message payload in a form-based, easy-to-use editor, and SAP Application Interface Framework can write the edited version back into the interface runtime. Finally, when the problem is solved, the framework can tell the interface runtime to process the queued messages again. The flow for outbound messages and requests is similar to the one described for inbound communication: before and after sending, SAP Application Interface Framework monitoring is informed, and if sending fails, users can try to fix the problem and trigger reprocessing.

So far, we've discussed the monitoring and error-handling capabilities of SAP Application Interface Framework. However, this framework can do more. It can intercept the processing of requests and messages—for example, to perform value mappings, structure mappings, checks, and so on. This can be specified for each interface as processing configuration for SAP Application Interface Framework. For inbound calls with SAP Application Interface Framework processing configured, the interface runtime doesn't call the application-specific request processing but calls the request processing in SAP Application Interface Framework instead. The framework can now perform checks and mappings, based on interface-specific configuration. Value mapping could be used, for example, if the calling application consistently sends the wrong code value in a certain field of a message. SAP Application Interface Framework then calls the request processing of the application with the (potentially mapped) input data. For outbound calls, the interface runtime calls SAP Application Interface Framework processing before sending the request, so that it can apply mappings and checks. SAP Application Interface Framework returns the mapped request, which is then sent by the interface runtime. Independent of request processing, monitoring is still called for registering and status updates, if configured.

The SAP Application Interface Framework processing capabilities are not only available for processing interface calls. They can also be called by applications in other contexts as a reuse component for mappings, checks, and so on.

6.4

Communication Management in SAP S/4HANA Cloud

SAP S/4HANA Cloud comes with a new approach for configuring communication with external systems and services, called communication management. Its goal is to simplify and streamline the configuration of communication with other systems across the different interface technologies. This section discusses the concepts of communication management in more depth for two reasons: first, because it's new and architecturally interesting; and second, because it's a good example of simplification and improved usability in SAP S/4HANA Cloud. Communication management in SAP S/4HANA Cloud is based on the following principles:

- The same unified communication management concept is used across the different types of technical interfaces (such as OData, SOAP, and IDoc).
- Users configure connectivity to other systems on a high level of abstraction. The underlying interface technology runtimes and their configuration mechanisms are hidden from the user. Required technical artifacts are generated behind the scenes—for example, destinations, or logical ports for SOAP services. This not only improves usability but also is a prerequisite in a controlled SaaS offering in which the user has no access to the underlying tools and infrastructure.
- Communication with other systems is managed for a group of inbound and outbound interfaces that are required for a meaningful scenario. Assume that a specific business process requires the exchange of several types of information in both directions, involving different interfaces. With communication management, the required configuration is managed in one place and not individually for each interface.

The information model in Figure 6.2 shows the elements of communication management and their relationships on a conceptual level. The elements at the top of the diagram are created at design time, typically by SAP development. The bottom section shows the elements that are specified by the user at configuration time.

Figure 6.2

Communication Management in SAP S/4HANA Cloud

6.4.1 Communication Scenario

The communication scenario is defined at design time. It groups the inbound and outbound services that are required to realize a meaningful goal on the business level. As an example, consider the communication scenario for integration with Ariba Network through the SAP Ariba Cloud Integration Gateway solution. This communication scenario groups the services that are needed so that SAP S/4HANA as the buyer system can exchange information with a supplier on Ariba Network. There are, for example, outbound services for sending documents such as purchase orders and payment advice notes to suppliers, as well as inbound services for receiving documents such as order confirmations and invoices from suppliers. Each inbound and outbound service in a communication scenario has a service type that indicates the interface technology. Inbound services can have, for example, OData V4, OData V2, SOAP, IDoc, or RFC service types. Example outbound service types include SOAP, IDoc, REST, RFC, or SMTP (email). Services with different types can be combined in the same communication scenario. The communication scenario also defines the supported authentication protocols for inbound and for outbound communication. Available inbound authentication methods include basic authentication with a username and password, X.509 certificates, and OAuth. Available outbound authentication methods include basic authentication, X.509 certificates, OAuth, or no authentication. A communication scenario is like a template that can be instantiated by an administrator to configure the communication with a concrete remote system. For that, the user must specify the technical connection details for the remote system and the authentication

credentials to be used for inbound and outbound communication. This information is specified with the help of three concepts: communication system, communication user, and communication arrangement.

6.4.2 Communication User

When an inbound service is called, the caller needs to be identified and authenticated. Once the caller is authenticated, SAP S/4HANA Cloud can check whether the caller is authorized, and it will know what permissions apply when the call is processed. In communication management, you define the identities for authentication of inbound services by creating a communication user, a special kind of technical user. These users do not represent real human users but are technical identities for system-to-system communication. Communication users can be authenticated with a username and password or with certificate-based authentication.

6.4.3 Communication System

Before a communication scenario for a specific remote system can be instantiated, you must make this system known as a communication partner and provide the technical information needed for the communication. For that, an object called a communication system needs to be created. A communication system represents a remote system that can call inbound services or be the target of outbound service calls. When a communication system is created, connection attributes need to be specified—for example, the hostname, OAuth2 endpoints, or Cloud Connector details, if required. For inbound communication, one or more communication users need to be assigned to the communication system. It's possible to assign the same communication user to different communication systems. For outbound communication, you create outbound users that belong to the communication system. These are not users in SAP S/4HANA but credentials for calling into the remote system. Each outbound user has a specific authentication method. Depending on the authentication method, different details are stored with the outbound user—for example, the username and password, OAuth client credentials, or a client certificate.

6.4.4 Communication Arrangement

Before SAP S/4HANA Cloud can communicate with the remote system, you need to create a communication arrangement, which instantiates a specific communication scenario for a specific communication system. The communication arrangement must specify one inbound user and one outbound user (if there are inbound or outbound services). These users must be assigned to the communication system as inbound or outbound users, and their authentication method must be supported for inbound or outbound communication in the scenario. When the communication arrangement is created, the system automatically creates the necessary technical communication settings and artifacts that are required internally by the different communication runtimes. As shown in Figure 6.2, a communication scenario can contain a role for inbound communication. When you create the communication arrangement, this role is assigned to the inbound communication user.

This ensures that the necessary authorization permissions are in place when an inbound service is called with the credentials of this communication user.

6.4.5 Calling Inbound Services with User Propagation

Usually an inbound service call is authenticated with the credentials of the inbound communication user specified in the communication arrangement. The inbound call is then processed with the identity and permissions of this technical communication user. We can say that the service is executed on behalf of the trusted remote system, and not on behalf of an individual business user. This fits typical process integration and data distribution scenarios, for example. In some cases, however, the inbound service must be executed on behalf of a specific human user who is logged in to the remote system. In this case, the identity of the logged-in user is propagated from the remote system to SAP S/4HANA Cloud and mapped to a business user in the system, and the service is executed with the permissions of that business user.

In SAP S/4HANA Cloud, user propagation is done with the OAuth 2.0 SAML bearer assertion flow. It roughly works like this:

- The remote system has a SAML assertion that proves the identity of the logged-in user to SAP S/4HANA Cloud. SAP S/4HANA Cloud trusts this SAML assertion because it was issued and signed by an identity provider for which the user has configured trust as part of the communication system data.
- Before making the inbound call, the remote system uses this SAML assertion to request an OAuth access token for the inbound service from the OAuth authorization server of SAP S/4HANA Cloud.
- SAP S/4HANA Cloud validates the token request and maps the asserted user identity to a local business user. To the remote system, SAP S/4HANA Cloud returns an access token, which certifies the identity of the business user and the permission to call the inbound service.
- The remote system uses this token to authenticate the inbound service call, which is then executed with the identity and permissions of the business user.

In this scenario, the communication user credentials are not needed for the actual inbound service call, but they contain the username and password needed for requesting the token from the OAuth authorization server.
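The following Python sketch illustrates the two calls that the remote system makes in this flow. It is a simplified illustration under stated assumptions: the token endpoint path, the OAuth client, the scope name, and the inbound service URL are placeholders; the concrete values come from the communication arrangement and the OAuth configuration of the SAP S/4HANA Cloud tenant.

```python
# Minimal sketch of the OAuth 2.0 SAML bearer assertion flow (RFC 7522) used for user
# propagation. Token endpoint, OAuth client, scope, and service URL are assumptions.
import base64
import requests

TOKEN_URL = "https://my-tenant.example.com/sap/bc/sec/oauth2/token"     # assumed endpoint
SAML_ASSERTION = b"<saml2:Assertion ...>...</saml2:Assertion>"          # issued for the logged-in user

# Step 1: exchange the SAML assertion for an access token. The request itself is
# authenticated with the OAuth client credentials configured for the scenario.
token_resp = requests.post(
    TOKEN_URL,
    auth=("OAUTH_CLIENT_ID", "client_secret"),
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:saml2-bearer",
        "assertion": base64.urlsafe_b64encode(SAML_ASSERTION).decode("ascii"),
        "scope": "API_SALES_ORDER_SRV_0001",  # assumed scope name for the inbound service
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Step 2: call the inbound service with the token; the call now runs with the identity
# and permissions of the propagated business user, not of a technical user.
resp = requests.get(
    "https://my-tenant.example.com/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder",
    params={"$top": "1"},
    headers={"Authorization": f"Bearer {access_token}"},
)
print(resp.status_code)
```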

6.5

Cloud Connector

Cloud Connector is a component that is installed on a computer in the on-premise network to enable communication between cloud applications and on-premise applications. You can use Cloud Connector, for example, to enable communication between SAP S/4HANA Cloud and on-premise systems or between custom applications on SAP Cloud Platform and SAP S/4HANA on-premise, or for connecting SAP S/4HANA on-premise with cloud services such as SAP Cloud Platform Enterprise Messaging or the SAP Cloud Integration service. Cloud Connector supports HTTP and non-HTTP protocols, such as RFC or access to an SAP HANA database via ODBC/JDBC.

6.5.1 Cloud Connector Principles

Cloud Connector acts as a reverse proxy between the on-premise network on one side and SAP S/4HANA Cloud or SAP Cloud Platform on the other side. Communication happens through a secure and encrypted tunnel connection, which is established by Cloud Connector as an outgoing connection (see Figure 6.3). Through this tunnel, you can have inbound calls from the cloud without the need to open any ports in your on-premise firewall. In addition, Cloud Connector enables non-HTTP communication in both directions.

Figure 6.3

Cloud Connector

Administrators must manage the resources that are accessible in the on-premise systems via Cloud Connector. Administrators can also restrict which cloud applications

may use the connector by setting up an allow list of trusted cloud applications. Calls to on-premise systems can be authenticated with basic authentication, using the username and password of a technical service user, or with user propagation. With user propagation, the calling cloud application provides a token from a SAML2 identity provider to prove the identity of the user. The target system maps this identity to a local user and executes the call with the permissions of this user. To enable the successful propagation of this identity, two trust relationships must be configured. First, Cloud Connector must trust the identity provider that issued the token; second, the target on-premise system must trust Cloud Connector.

6.5.2 RFC Communication with SAP S/4HANA Cloud

Cloud Connector enables native RFC calls in both directions: cloud to on-premise and on-premise to cloud. It’s possible, for example, to make RFC calls from an on-premise system to SAP S/4HANA Cloud. For that, you create a service channel in your Cloud Connector, which allows RFC calls to your SAP S/4HANA Cloud tenant. Note that you cannot call arbitrary RFC-based interfaces in an SAP S/4HANA Cloud tenant. Communication with SAP S/4HANA Cloud requires a predefined communication scenario (Section 6.4). SAP has released only a controlled set of communication scenarios with traditional RFC-based interfaces (see also Section 6.1.6).

6.6

Integration Middleware

SAP's strategy for integrating SAP applications is based on direct communication between the applications through aligned APIs. However, there are cases in which you may prefer to use communication mediated by dedicated integration middleware. Integration middleware is useful, for example, when you want to integrate third-party products, when you want to design your own integration flows, when you want to route messages to different receiving systems, or when you want to manage complex mappings centrally.

For the cloud, SAP's recommended integration middleware is the SAP Cloud Platform Integration service, which is available as a part of SAP Cloud Platform Integration Suite. This is a multitenant-enabled service that runs on SAP Cloud Platform and can be used both for cloud-to-cloud integration and for integrating cloud and on-premise applications (using Cloud Connector if required). Figure 6.4 illustrates how SAP Cloud Platform Integration service works. Messages from senders are processed by SAP Cloud Platform Integration service and routed to receivers. Senders and receivers are connected via adapters, which implement the communication protocols used by senders and receivers. Sender adapters convert incoming data from the external protocol of the sender to an internal message format, which can then be processed. The final outbound messages are converted from the internal format to the protocol of the receiver by a receiver adapter. Incoming data can be pushed by the sender (for example, with an HTTP request) or can be pulled by SAP Cloud Platform Integration service at regular intervals. SAP Cloud Platform Integration service has adapters for various protocols, such as SOAP, OData, HTTP/REST, AMQP, IDoc via SOAP, RFC (client only), email, JMS, AS2, AS4, cXML for SAP Ariba solutions, and many more.

With the Open Connectors capability in SAP Cloud Platform Integration Suite, you can integrate a variety of third-party APIs. The Open Connectors capability is based on the API integration technology offered by an SAP partner, Cloud Elements Inc. With Open Connectors, you can access third-party APIs from many cloud services in a common way, with standardized authentication, error handling, pagination, and data structures.

Figure 6.4

SAP Cloud Platform Integration Service

The processing and routing of messages is described by integration flows (also known as iFlows). iFlows describe how messages coming from senders are processed and distributed to receivers. iFlows are visually modeled using the Business Process Model and Notation (BPMN) standard, with design tools that are integrated into the web-based user interface of SAP Cloud Platform Integration service. iFlows describe processing and routing of messages as a graph of steps. Examples of supported steps include the following:

- Mapping of message structures
- Value mappings
- Filtering of parts of messages
- Modifying message content
- Message encoding and decoding (ZIP, Base64), encrypting and decrypting
- Converting message formats (for example, between JSON, XML, and CSV)
- Digital signing and signature verification
- Calling external services (asynchronous or synchronous)
- Data enrichment, merging content with data from an external service
- Splitting of complex messages
- Persisting messages
- Routing messages to one or many receivers
- A custom processing step implemented by a script in Groovy or JavaScript

iFlows are executed by the SAP Cloud Platform Integration service processing runtime, which is based on the Apache Camel open-source integration framework. SAP provides predefined integration content, which is published on SAP API Business Hub and available in the web user interface of SAP Cloud Platform Integration service for use in your tenant. For scenarios in which SAP products are directly integrated through APIs, SAP offers integration content as a starting point for setting up your custom integration with SAP Cloud Platform Integration service. In these cases, the SAP-provided content just routes the messages through but can be used as the basis for your enhancements and extensions.

SAP's well-known integration middleware for on-premise deployment is SAP Process Integration technology. SAP Process Integration supports several installation options, containing Java-based and ABAP-based components. The Java-only flavor of SAP Process Integration is known as advanced adapter engine extended. Advanced adapter engine extended supports connectivity through adapters for various protocols, mapping of message structures and values, and routing of messages based on routing rules. It also includes Enterprise Services Repository and the integration directory, for modeling and configuring integration content. SAP Process Orchestration is an offering that includes advanced adapter engine extended together with SAP Business Process Management software and the SAP Business Rules Management component.

With release 7.5, advanced adapter engine extended can also execute integration content created with SAP Cloud Platform Integration Suite on-premise.

6.7

Event-Based Integration

Sometimes you want to notify other systems or services about business events that happened in SAP S/4HANA—for example, when a business partner was created, when a request for quotation was published, or when a sales order was changed. Important use cases include side-by-side extensions that need to be informed of changes in SAP S/4HANA. Here, events complement traditional APIs. By publishing events, other systems can be notified without the need to set up direct communication between the involved systems. Without events, the information either would have to be pushed to the target systems with API calls or pulled by the target systems with periodic API calls to the source system. This would require, for example, that the systems know the communication details and credentials of the other system; in addition, high volumes of messages to many receivers would create an additional load for a source system with many target systems. Synchronous API calls would even require the receiving side to be online when the event is sent.

Therefore, events are typically distributed using a specialized messaging service that enables asynchronous communication and supports the publish/subscribe pattern. The source system publishes event messages to the messaging system, from which the messages are distributed to one or more receivers. This way, the source system does not need to know about the identity and technical specialties of the receiver systems, and the routing of high volumes of messages is offloaded to the messaging service. Messaging services have standardized messaging protocols and message queues to decouple senders and receivers. SAP S/4HANA uses SAP Cloud Platform Enterprise Messaging as the service for publishing and delivering events. Before we explain how business events are defined and published in SAP S/4HANA, we first give a brief overview of SAP Cloud Platform Enterprise Messaging concepts.

6.7.1 SAP Cloud Platform Enterprise Messaging

SAP Cloud Platform Enterprise Messaging is a fully managed, scalable cloud service on SAP Cloud Platform. Applications can send and receive messages via the standard AMQP 1.0 and MQTT protocols, via a proprietary REST API, and via webhooks. When an application wants to publish a message on SAP Cloud Platform Enterprise Messaging, it addresses the message to a topic, which classifies the type of event. Technically, a topic is a string with segments separated by slashes, which reflects a hierarchy of topics. The SAP S/4HANA system with system ID ABC could, for example, use the topic S4/ABC/BO/BusinessPartner/Created to publish event messages about business partners created in this system. Consuming applications can directly listen to topics in SAP Cloud Platform Enterprise Messaging. In that case, they get all messages that are published while they are online and listening. To get messages that are sent while the consumer is not connected, SAP Cloud Platform Enterprise Messaging also supports storing messages in queues. Each queue is meant to be consumed by exactly one receiving application. A queue can be subscribed to topics so that messages published to the topic are stored in the queue. For the subscription, you can use topic patterns—for example,

S4/ABC/BO/BusinessPartner/*—to get all types of events about business partners. By

subscribing several queues to the same topics, you can distribute messages to several queues and thus to several receiving applications. SAP Cloud Platform Enterprise Messaging is a multitenant service, in which customers get their own tenant. You can view your SAP Cloud Platform Enterprise Messaging tenant as a separate virtual message bus, to which your applications can connect in order to communicate with each other. For that, each application must be registered at the message bus as a messaging client. Each messaging client has a set of credentials and a configuration. In the client configuration, you can specify access rules that restrict the topics it can send to or the queues it can consume. In addition, a messaging client has a set of security credentials that are used to authenticate with SAP Cloud Platform Enterprise Messaging, using the OAuth client credentials flow. SAP Cloud Platform Enterprise Messaging provides a web application in which administrators can view client configurations and manage queues and subscriptions. In this application, you can view the catalog of events that connected systems can publish. To get this information, the application calls an event discovery service provided by the source system.
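To make the queue concept more tangible, the sketch below shows a generic AMQP 1.0 consumer built with the python-qpid-proton library that reads messages from a queue subscribed to a business partner topic. The endpoint, credentials, and queue name are placeholders; a real SAP Cloud Platform Enterprise Messaging client additionally authenticates with the OAuth client credentials flow and may use a WebSocket transport, both of which are simplified away here.

```python
# Minimal sketch: consuming event messages from a queue via AMQP 1.0 with python-qpid-proton.
# Endpoint, credentials, and queue name are illustrative placeholders; OAuth-based
# authentication of SAP Cloud Platform Enterprise Messaging is simplified to user/password.
from proton.handlers import MessagingHandler
from proton.reactor import Container

class QueueConsumer(MessagingHandler):
    def __init__(self, url, queue):
        super().__init__()
        self.url = url
        self.queue = queue

    def on_start(self, event):
        # Open the connection and attach a receiver link to the queue.
        conn = event.container.connect(self.url, user="messaging-client", password="secret")
        event.container.create_receiver(conn, self.queue)

    def on_message(self, event):
        # Each message was routed into the queue via its topic subscription,
        # for example S4/ABC/BO/BusinessPartner/*.
        print("received:", event.message.body)

Container(QueueConsumer("amqps://enterprise-messaging.example.com:5671", "my-bp-queue")).run()
```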

6.7.2 Business Events Architecture in SAP S/4HANA

Figure 6.5 illustrates the architecture for publishing business events from SAP S/4HANA. SAP S/4HANA applications raise events when business objects are changed.

Figure 6.5

Publishing Business Events from SAP S/4HANA

For these application events, SAP S/4HANA uses SAP Business Workflow as a general-purpose event infrastructure, independent of workflows. When an event is raised, the workflow event manager of SAP Business Workflow calls registered event consumers asynchronously using transactional RFC calls. This decouples the application from the event processing. The business event handling component has registered itself as an event consumer for the local application events that are configured to be published as business events. When business event handling is called for such an event, it translates it into a business event and forwards it to the enterprise event enabling component. Enterprise event enabling publishes the event messages to SAP Cloud Platform Enterprise Messaging using the MQTT over WebSocket protocol. The format for event messages sent by SAP S/4HANA follows the CloudEvents specification (https://cloudevents.io). Note that the event messages do not contain the complete data of the affected business object instances. They typically contain their identifiers and a limited, selected set of data for event filtering. If the receiving application needs more information, it must call the corresponding SAP S/4HANA APIs to get the details about created or changed instances. To make this easier, the same field format is used in CloudEvents and in the OData API services.
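The following sketch shows how a receiving application might combine both sides: it accepts an event pushed by SAP Cloud Platform Enterprise Messaging via a webhook and then calls back into an SAP S/4HANA API for the full details. The payload field names, the API path, and the credentials are illustrative assumptions; only the general shape (a CloudEvents-style envelope carrying identifiers rather than full object data) follows the description above.

```python
# Minimal sketch: webhook consumer for business events plus a follow-up OData read.
# Payload fields, API path, and credentials are illustrative placeholders.
from flask import Flask, request
import requests

app = Flask(__name__)
S4_API = "https://my-s4-system.example.com/sap/opu/odata/sap/API_BUSINESS_PARTNER"
AUTH = ("COMM_USER", "secret")

@app.route("/events/business-partner", methods=["POST"])
def handle_event():
    event = request.get_json()                             # CloudEvents-style JSON envelope
    bp_id = event.get("data", {}).get("BusinessPartner")   # assumed payload field
    if bp_id:
        # The event only announces the change; read the details through the API.
        details = requests.get(f"{S4_API}/A_BusinessPartner('{bp_id}')",
                               params={"$format": "json"}, auth=AUTH)
        print(event.get("type"), bp_id, details.status_code)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```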

6.7.3 Business Events in SAP S/4HANA

Let's now explain in detail what a business event is. A business event in SAP S/4HANA is an event that can be published externally, using SAP Cloud Platform Enterprise Messaging. It reflects a change to a business object—for example, a product or a production order. Different business events can be defined for each business object type. Technically, business events are defined as events of SAP object types. SAP object types were introduced with the goal of providing a well-defined catalog that describes business artifacts as business object types (such as sales order or product), with meaningful and aligned names, and independent of the way the business objects are implemented. It is the task of the business event handling component to map workflow events to business events of SAP object types. SAP object types are also used for other purposes. In Chapter 5, Section 5.1.3, we explain that they describe the structure of extensible applications in the context of extensibility. To get an idea of what public events are available for which business objects in SAP S/4HANA, you can browse the catalog of events at the SAP API Business Hub, under Events in the Content Types category. Like public APIs, SAP lists and documents all officially released business events on SAP API Business Hub.

6.7.4 Event Channels and Topic Filters

Enterprise event enabling supports multiple outgoing channels for business events. The option to define several event channels makes it possible for the same SAP S/4HANA system to connect to different SAP Cloud Platform Enterprise Messaging tenants or to the same SAP Cloud Platform Enterprise Messaging tenant using different configurations.

The attributes of a channel include the HTTP destination for the SAP Cloud Platform Enterprise Messaging endpoint; the OAuth credentials for the messaging client used for connecting; and communication parameters such as the quality of service level (message delivery guaranteed or not), the number of reconnect attempts, and the reconnect wait time. Another important attribute of a channel is its topic space, a channel-specific prefix that is added to the topics of all event messages that are published through this channel. For each channel, you must specify which event types will be published through this channel. This is done by defining one or more topic filters for the channel. When enterprise event enabling gets an event from business event handling, it publishes it through all channels that have a matching topic filter. For SAP S/4HANA on-premise, the channels and the required HTTP destination must be created by an administrator. In SAP S/4HANA Cloud, this is done based on a communication scenario for business event integration. You must create a communication system that holds the details for connecting to SAP Cloud Platform Enterprise Messaging. You also must create a communication user for the inbound event discovery service, which SAP Cloud Platform Enterprise Messaging calls to get the catalog of events. When you create the communication arrangement, you must specify the channel name, and you can provide additional parameters. The HTTP destination and the channel are created automatically when the communication arrangement is activated. After the communication arrangement is created, you can create the topic filters for the channel with an SAP Fiori app.

6.8

Data Integration

Process integration is about building distributed business processes. The exchanged messages typically connect steps of a business process across different systems or tenants and trigger the next process step on the receiving side. Creating a sales order, for example, may trigger several sales-related process steps in the receiving system, such as delivery and billing. For data integration, this is different. Here the data is transferred by a generic mechanism for various purposes, such as analytics, machine learning, or simply synchronizing two data stores in a replication scenario. When sales orders are replicated into a data warehouse or into a data lake, this doesn't trigger any business process steps, such as sales and delivery activities.

6.8.1 CDS-Based Data Extraction

Although there are several technologies available for data integration, SAP positions CDS-based data extraction as the strategic option. This section gives an overview of CDS-based data extraction in SAP S/4HANA, which works very well in on-premise and cloud environments. It is used, for example, for feeding data from SAP S/4HANA into SAP Business Warehouse and into SAP Data Intelligence. CDS-based data extraction is a nice example of how CDS views and the VDM are used throughout the SAP S/4HANA architecture. CDS extraction views provide a general data extraction model, which is made available through several channels for the different data integration scenarios.

CDS-Based Extraction Overview

CDS-based data extraction is implemented based on CDS views that are marked as extraction views with special CDS annotations (@Analytics.dataExtraction.enabled). These extraction views are typically basic views or composite views in the VDM (see Chapter 2, Section 2.1.3). During extraction, the database views generated from the CDS extraction views are used to read the data to be extracted. Extraction can be done in two ways: full extraction extracts all available data from the extraction views, whereas delta extraction means that after a full extraction, only changes are extracted from then on. Delta extraction requires that the CDS extraction view is marked as delta-enabled (@Analytics.dataExtraction.delta.*).

Figure 6.6 shows the architecture of CDS-based data extraction for data integration with SAP Business Warehouse and with SAP Data Intelligence. As shown in Figure 6.6, there are several APIs through which external systems can extract data based on CDS views: The operational data provider (ODP) framework is part of the infrastructure for analytics and for extraction of operational data from SAP business systems such as SAP S/4HANA. Among other functions, it supports data extraction with corresponding APIs and extraction queues. In an on-premise scenario, systems such as SAP Business Warehouse can call an RFC provided by the ODP framework in SAP S/4HANA. In SAP S/4HANA Cloud, the same API can be used via a SOAP service, which basically wraps the RFC.

CDS-based data extraction is also possible through the cloud data integration (CDI) API. CDI provides a data-extraction API based on open standards and supports data extraction via OData services. As shown in Figure 6.6, this option is also built on top of the ODP framework. The CDI consumer operator in SAP Data Intelligence can be used to extract data from SAP S/4HANA through the CDI OData API. The ABAP CDS reader operator is an alternative option for extracting data into SAP Data Intelligence. It uses an RFC via WebSocket to extract data from the ABAP pipeline engine in the SAP S/4HANA system. The ABAP pipeline engine is the component that can execute ABAP operators that are used in data pipelines in SAP Data Intelligence, the ABAP CDS reader operator being one of them.

Figure 6.6

CDS-Based Data Extraction

For full extraction, the ODP framework and ABAP pipeline engine read all currently available data records from the extraction views.

Delta Extraction with the Change Data Capture Framework

For delta extraction, the architecture depicted in Figure 6.6 uses trigger-based change data capture, a concept known from SAP Landscape Transformation. In SAP S/4HANA, this is implemented by the change data capture engine. For trigger-based delta extraction, database triggers are created for the database tables that underlie the extraction views. Whenever records in these tables are created, deleted, or updated, corresponding change information is written to change data capture logging tables. As mentioned earlier, the CDS extraction views must be annotated accordingly to indicate that they support this option. For complex views involving several tables, the view developer must also provide information about the mapping between views and tables as part of the definition of the CDS extraction view. For simple cases, the system can do the mapping automatically. Delta extraction with the change data capture engine has some limitations, which require that the views are not too complex. Delta-enabled extraction views must not contain aggregates or too many joins, for example.

Change data capture for CDS-based delta extraction works as follows: A change data capture job is executed at regular intervals to extract changed data from application tables. For extraction, the change data capture engine uses the logging tables to determine what was changed and the database views generated from the CDS extraction views to read the data. The change data capture job writes the extracted data to the ODP queue of the ODP framework, from which it can be fetched via APIs as described previously. As already explained, the Cloud Data Integration API is built on top of the ODP framework and therefore uses the same mechanism for delta extraction. Delta extraction is also supported by the ABAP pipeline engine. For delta extraction, it calls the change data capture engine, which then reads the data based on logging tables and the database views generated from the CDS extraction views.

Before the introduction of the trigger-based change data capture method, delta extraction was available only based on timestamps. This requires a timestamp field in the data model to detect what has changed since a given point in time. With trigger-based change data capture, this restriction on the data model is not needed. In addition, timestamp-based delta extraction requires more effort at runtime, especially for detecting deletions. CDS-based delta extraction with timestamps is implemented by the ODP framework. It is still available, but SAP recommends using trigger-based change data capture for new developments.

6.8.2 Data Replication Framework

The data replication framework (DRF) is a local business object change event processor that decides which business object instances will be replicated to defined target systems. It is implemented in ABAP and is part of the SAP S/4HANA software stack. Its architecture is shown in Figure 6.7. SAP S/4HANA business applications can register local change events of their business objects at DRF and connect corresponding outbound interfaces of their business objects. In a productive SAP S/4HANA landscape, integration experts can define a replication model that defines, for a given business object, the filter conditions and target systems for replication. At runtime, the application informs DRF about a business object create or change event—for example, the creation of a sales order. DRF checks the filter conditions for sales orders. If the result is that this object will be replicated, DRF sends the business object instance via the given outbound interface—for example, a SOAP service—to the defined target system(s). DRF always sends complete business object instances. The integration follows a push mechanism: business object change events initiate the downstream transfer of the business object instances to the defined target systems. DRF is complemented by the key mapping framework and the value mapping framework to support nonharmonized identifiers and code lists.

Figure 6.7

Data Replication Framework

SAP S/4HANA also uses DRF as an event trigger for master data distribution with SAP Cloud Platform Master Data Integration service, which we discuss in the next section.

6.8.3 Master Data Integration Services

Cloud architecture and operation models rely on distributed business services. A prerequisite for this is the exchange and synchronization of common master data objects among these business services. SAP has created the SAP Cloud Platform Master Data Integration service, a generic master data distribution engine. Business applications and services create and store master data in their local persistence. They use the master data integration service to distribute the master data objects and ongoing updates. The master data integration service does not process application-specific logic, but it performs some basic validations of the incoming data—for example, schema validations. The master data integration service uses master data models, which are specified by CDS views. The models do not follow the object model of one application, but an SAP-wide aligned domain model. When creating or changing master data, the application calls the asynchronous change request API of the master data integration service. The master data integration service validates the incoming data and writes all accepted changes to a log. Applications interested in this master data object use the log API to read the change events. They can set filters to influence what master data they want to get. Filtering can be done on the instance level (to specify which master data records they want) and even on the level of fields to be included. As of now, the master data integration service supports the employee and cost center master data objects, which are shared between SAP S/4HANA Cloud and SAP SuccessFactors Employee Central. The exchange of further master data objects is planned to fully support the intelligent enterprise.

6.9

Summary

In this chapter, we explored the technical options for integrating SAP S/4HANA with other systems or services. We gave an overview of the available interface technologies and introduced SAP API Business Hub as the central directory for APIs and other integration-relevant content. Next, we discussed the concepts of monitoring and error handling with SAP Application Interface Framework. The communication management framework in SAP S/4HANA Cloud simplifies the way communication with other systems is configured. Communication between SAP cloud applications and on-premise applications is enabled by Cloud Connector. Mediated communication with middleware is required especially for integrating third-party systems. This can be done with SAP Cloud Platform Integration or with SAP Process Orchestration. SAP S/4HANA also supports event-based integration. We discussed how SAP S/4HANA can publish business events via SAP Cloud Platform Enterprise Messaging. Finally, we looked at data integration. In the next chapter, we discuss data protection and privacy requirements and how they can be fulfilled with SAP S/4HANA.

7

Data Protection and Privacy

Several data protection and privacy regulations forbid processing and storing of personal data unless specific conditions are met. SAP S/4HANA includes tools and technologies which enable companies to run their business processes in a compliant way.

Protection of personal data is a must-have in most countries. More than that, it is a human right, introduced in the 19th century in the US as the "right to be left alone". Nevertheless, the willingness to invest in security and data protection is still not a core virtue in many companies. This is surprising, especially if you want to believe the slogan that data is the oil of our century.

On May 25, 2018, the European Union's General Data Protection Regulation (GDPR) went into effect. This legislation of the European Union regarding the protection of personal data affects most countries in the world. In other regions, states, and countries as well—such as California in the United States, Brazil, India, Russia, and South Africa—legislation related to the protection of personal data has been established or is on its way. Some of these acts are based on the traditional global approach to the protection of personal data as of 1980, agreed upon in the Recommendation of the Council Concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, from the Organisation for Economic Co-operation and Development (OECD). Effectively, some technical principles of the protection of personal data have been legally established for more than 40 years, and the question arises: What has changed with GDPR? Technically, not too much, but GDPR has introduced fines of up to 4% of the annual turnover of the company, and it seems that the fines have become the game changer. Central reporting of fines in Europe has not been established yet, but a nongovernmental offering, the GDPR Enforcement Tracker (www.enforcementtracker.com), provides a comprehensive overview of such fines. As of April 2020, the highest fine given was nearly 205 million euros.

At SAP, data protection has long been relevant in the design of products. This chapter introduces a logical approach for data protection and generally relevant features in SAP S/4HANA.

7.1

Compliance Baseline

The data protection and privacy (DPP) baseline is simple: it is forbidden to process personal data unless you can prove a justifiable reason. The baseline is valid not only for a complete set of personal data, such as a business partner record, but also for single pieces of data, like the birthday of a customer. As a basic principle of DPP, processing personal data is forbidden unless a justifying reason is provided. At any stage of processing, you must be able to document for what purpose you are processing the data and how this purpose is legally justified.

7.2

Definitions and Principles

To understand the concepts, you need to have an overview of relevant terms and definitions. According to Article 4 No. 1 GDPR, personal data is data "relating to an identified or identifiable natural person ('data subject') who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person." To keep it simple, personal data is any data providing the ability to identify a natural person and any data linked to a natural person. A common example of how easy it can be to identify a person in most companies is the equal career example: if you have only the information about career level and gender of employees, you are able to identify single women on board level; adding the attribute age, you will likely be able to identify most women on the first level below the board. The definition of personal data is quite broad, as it includes all such attributes. The definition also contains the term data subject, referring to the natural person who can be identified. The data subject is the most important bearer of rights in data protection. The data subject is the person who must be protected and is the starting point for any risk evaluation.

The next quite broad definition is the term processing, which, according to Article 4 No. 2 GDPR, refers to "any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction." Simply put, more or less anything you can technically do with personal data is included. Storage is processing. Keeping data for audit purposes only, for example, is processing.

Consent—often discussed like it's a magical spell—turns out in practice to create some remarkable challenges. According to Article 4 No. 11 GDPR, consent is "any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her." Article 7 GDPR provides more details and requirements. Technically, the withdrawal of consent is a huge challenge: personal data processed based on consent must be deleted more or less immediately after the consent has been withdrawn if no other legal reasons exist for granting the lawfulness of the processing.

The principles for the processing provided in Article 5 GDPR are as follows:

- Lawfulness, fairness, and transparency
- Limitation of purpose: Only personal data relevant for the predefined purpose will be processed, and only as long as it is relevant for that certain purpose.
- Data minimization: Personal data must be limited to the necessary scope.
- Accuracy: Personal data must be kept correct and up to date.

- Storage limitation: Nonrequired personal data must be deleted.
- Integrity and confidentiality: The data has to be safeguarded.

According to Article 6 GDPR, processing of personal data is lawful only if:

- the data subject has provided consent,
- the processing is required for the performance of a contract,
- the processing is required to fulfill legal obligations,
- the processing is required in the public interest,
- the processing is required to safeguard a vital interest, or
- the processing is based on a legitimate interest.

In Article 9 Par. 1 GDPR, special categories of personal data are defined as "personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation." In addition, Article 10 GDPR restricts the processing of "personal data relating to criminal convictions and offences" to rare cases under official control. At SAP, both the named data and personal data tied to bank account data and national identifiers are considered sensitive personal data requiring additional protection.

7.2.1

Basics in SAP S/4HANA

It should be obvious that ERP software can provide a range of features to help SAP customers become and remain compliant with GDPR, but also that there are some clear boundaries. This starts with the reasons for processing the personal data. In ERP software, the basis for processing data should regularly be as follows:

- Most importantly, the performance of contracts and likely related legal obligations, such as providing evidence for tax reporting
- Processing based on consent in some sales scenarios, as well as in some voluntary HR scenarios

Other aspects such as the existence of public, vital, or legitimate interest must be clearly documented outside SAP S/4HANA. The SAP governance, risk, and compliance solutions provide support here (see Chapter 14, Section 14.11). The most relevant end-to-end processes in SAP S/4HANA are based on contracts. Figure 7.1 illustrates a basic example of such a process.

Figure 7.1

Simplified Process (Lehnert et al., 2018)

The process starts with the collection of data that is quite obviously personal data: the customer master data. Step by step, more data is linked to the customer master and becomes personal data for that reason: the red suit offered to Mr. Doe and the pink suit Mr. Doe finally purchased become personal data, as well as the information that Mr. Doe paid five weeks after he was dunned. In such a process, what is required to achieve compliance with GDPR? The processing of the data in the example is obviously based on the execution of a contract and related precontractual steps. This must be documented organizationally. The next relevant organizational measure is to provide the data subject with the required transparency into the processing. How this must happen is regulated in Articles 12, 13, and 14 of the GDPR. Simplified, the data subject has to be provided with comprehensive information about what data is undergoing processing, for what purposes the data is processed, applicable retention policies, and what data subject rights are given.

7.2.2

Data Subject Rights

Figure 7.2 shows the data subject rights. The first data subject right is the right to receive information, before or when the processing starts, as a kind of logical baseline: if the data subject doesn't know what's happening, where, and why, how could the data subject interfere at all? This is called the right of prior information.

Figure 7.2

Data Subject Rights

The next important data subject right is information access, which enables the data subject to get information on the personal data that is undergoing processing, including the purposes and applicable retention periods. SAP S/4HANA supports this requirement with the information retrieval framework (IRF). The framework is a feature that collects all personal data in the SAP S/4HANA instance, and it's possible to link the data to a purpose. The information retrieval framework needs to be configured according to business needs. The next data subject right is a fundamental restriction on automated decisions about a data subject. Article 22 GDPR sets limitations for automated decisions and defines the data subject's rights in more detail if automated decisions are taken. In all applications in SAP S/4HANA in which automated decisions are supported, it's possible either to run the process with manual user intervention or to overrule the automated decision. The portability right provides the data subject with the option to get a copy, in a machine-readable format, of all personal data that has been provided, according to Article 20 GDPR. This requirement is also supported by the information retrieval framework, which offers a download capability. The erasure of personal data is a key requirement of data protection. In Article 17 GDPR, the right to be forgotten is defined, regulating deletion of data on the request of the data subject. This is indicated in Figure 7.3. In addition to erasure, Article 17 includes related requirements, such as the notification of former recipients of such data. The principles already discussed in Article 5 provide the context for when personal data needs to be deleted and how this has to be handled.

Figure 7.3

Right to Be Forgotten and Context (Lehnert et al., 2018)

In SAP S/4HANA, the deletion of personal data is supported by SAP Information Lifecycle Management (SAP ILM), which allows you to set up automated blocking and deletion procedures that handle all data in relation to the legal reason for the processing and applicable retention periods. The approach is based on the fact that most personal data in ERP software is related to contracts. As long as there is a contractual need to keep the data, including the need for financial or tax audits, the related personal data has to be kept, even if the data subject requests the deletion. But as soon as the named reasons are gone, the data must be deleted. The logical challenge of deletion arises from the process shown in Figure 7.1: the data must remain consistent in terms of the named audits, so deletion has to follow the process logic shown, but in the opposite direction.

Article 18 GDPR defines the right of restriction. The most likely use case in an ERP application is one in which a data subject wants to use the data in a legal case and requests the company to keep the data until the legal case is closed. This is also supported by SAP ILM. The last data subject right is that the data has to be kept accurate and current. The data subject can also request the correction of personal data according to Article 16 GDPR. In SAP S/4HANA, the correction of personal data is supported by change capabilities; to support this requirement more systematically, the SAP Master Data Governance application is the solution of choice.

7.2.3

Technical and Organizational Measures

In several articles of the GDPR, the need for technical and organizational measures is stated. It is also required that the safeguards are appropriate to cover the risk for the data subject. Article 32 GDPR deals with the security of processing and states: "Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk." SAP S/4HANA provides several technical measures accordingly, discussed in the following sections.

Physical Access Control

Physical access control prevents unauthorized persons from gaining access to data-processing systems in which personal data is processed or used. In SAP S/4HANA on-premise, it is the sole responsibility of the corresponding IT department to ensure physical access control. For SAP S/4HANA Cloud, SAP as the operator ensures physical access control for the devices and infrastructure under its control.

Authentication

Authentication describes secure procedures to enable system access based on personal authentication. Several mechanisms exist in SAP S/4HANA to provide secure authentication. SAP S/4HANA Cloud uses the Identity Authentication service of SAP Cloud Platform (see Chapter 17, Section 17.1.2).

Authorizations

Authorizations are procedures differentiating which data can be accessed and in which mode (e.g., display, post, delete). SAP S/4HANA on-premise uses the well-known ABAP authorization concept (see Chapter 17, Section 17.1.1). Access to SAP S/4HANA Cloud is restricted by role-based authorizations (see Chapter 17).

Disclosure Control

Disclosure control describes the ability to document all access to personal data, with logging features to monitor which users accessed which data. In SAP S/4HANA, the read access logging (RAL) capability is provided within the license. RAL comes with a default configuration in SAP S/4HANA Cloud and with an example configuration in SAP S/4HANA on-premise; both cover SAP's definition of sensitive personal data. SAP UI logging offers additional benefits but requires an additional license.

Change Control

Change control enables users to prove what personal data has been changed by which user. SAP S/4HANA provides suitable change logging capabilities.

Transmission Control

Transmission controls are procedures and safeguards for the transmission of personal data, such as encryption during transmission. SAP S/4HANA offers the following as standard:

- Encrypted communication
- Protecting communication using the Unified Connectivity (UCON) Framework

Additional functionalities to safeguard and monitor transmission of personal data are delivered with SAP Enterprise Threat Detection, SAP governance, risk, and compliance solutions, and SAP Data Custodian (see also Chapter 14, Section 14.11).

Job Control

The data controller must ensure that the data processor is following the data controller's instructions and guidelines. This organizational task has some technical aspects like the system audit, but also organizational and contractual aspects. The Audit Information System of SAP S/4HANA on-premise provides a comprehensive range of audit functionality. SAP S/4HANA Cloud fulfills several certifications (see Chapter 21, Section 21.3).

Availability Control

Procedures like backup, disaster recovery, and business continuity must be in place to keep personal data available. SAP operations creates regular backups of SAP S/4HANA Cloud tenants. SAP S/4HANA on-premise includes relevant features to set up these procedures.

Encryption

Encryption means encoding a message or information in such a way that only authorized parties can access it. Encryption capabilities are delivered as standard for communications and databases.

Pseudonymization

Pseudonymization is about changing data in such a way that the data subject is not identifiable without using separately kept additional information. Pseudonymization and anonymization (which is irreversible) are hard to achieve. There is no single feature to apply them, but some procedures such as test data pseudonymization are available for SAP S/4HANA Cloud. In ERP-like solutions, most processing is based on contracts and only contractually relevant data is processed, so pseudonymization does not provide additional value. A completely different discussion would center on pseudonymization in data lakes.

Data Separation

Personal data collected for a specified purpose must be separated from personal data collected for other purposes. This depends on the design of the organizational structure, the processes, and the master data. A detailed description is available in GDPR and SAP: Data Privacy with SAP Business Suite and SAP S/4HANA (Lehnert et al., 2018).

7.3

Summary

Hopefully we've made the point that any processing of personal data has to be based on a purpose that has a legal justification. On a quite generic level, we showed what SAP S/4HANA offers to support an SAP S/4HANA customer in becoming compliant with GDPR. To apply the functionality, you need to be clear on the risks. The most common misunderstanding here is to think in terms of the risks for the processor of the personal data; in fact, we're talking about the risks for the data subject. Beyond any legal argument, consider what information typical humans like to share with people they do not know, and implement safeguards accordingly. Read access logging on a postal address, for example, won't be necessary in most cases. In some countries, however, even personal data like sexual preferences has to be collected in some areas to avoid discrimination, and sexual preferences are obviously something that should be better protected than a postal address. You have to define an adequate level of protection and how you ensure data subject rights. We have now completed the part about the architecture foundation of SAP S/4HANA. In the next part of the book, we present the architecture of important business application functionalities in SAP S/4HANA, starting with selected central master data such as the product master and business partner.

Part II   Application Architecture

The core task of an enterprise resource planning (ERP) application is to manage the resources of an enterprise, namely material, money, ideas, and people. SAP S/4HANA is SAP's enterprise resource planning software. It supports all core business processes and functions required by today's enterprises and embraces the following core capabilities (see Figure II.1):

- Sales covers all business processes for selling products and services, starting from sales inquiry and quotation management, through sales order processing and sales contract management, to returns and refund management, and billing.
- Service operations is about providing customer services using field service, in-house repair, or interaction center service and the corresponding service order and contract management.
- Sourcing and procurement is used to purchase external goods and services that are required to keep operations of an enterprise running. This includes providing decision support for strategic buying, managing purchase orders and contracts, as well as verifying incoming invoices.
- R&D and engineering supports project management and product lifecycle management when creating new products.
- Manufacturing enables production planning and execution to manufacture products.
- Supply chain covers the planning, fulfillment, regulation, and tracking of supply chain activities, especially warehouse and transportation management.
- Asset management is about planning, scheduling, and executing maintenance activities for assets.
- Finance records, classifies, and summarizes the financial transactions of a company for financial accounting, payables and receivables management, but also financial planning and analysis. In addition, it provides treasury functions to ensure proper cash flow and liquidity. Closely related to finance are governance, risk, and compliance capabilities for risk management, corporate governance, and regulatory compliance.
- Human capital management covers the processes related to the people working for an organization and includes organizational management, personnel administration, payroll, personal development, and time management.
- Localization to country-specific regulations as well as industry-specific enhancements (for example, for the professional services, chemical, or retail industries) make the software fit the needs of each organization.

The sheer size of the functional scope of SAP S/4HANA prevents us from covering it completely in this book. In addition, the description of application architecture partly demands a different structure compared to a pure explanation of the functional capabilities listed above.

Figure II.1

SAP S/4HANA Capabilities

For engineering, application architecture defines how best to implement a certain business functionality while meeting the functionality-specific quality requirements (like throughput or flexible configuration) as well as general SAP S/4HANA qualities like uniform key user extensibility or real-time analytics, which are covered in the first part of this book. When discussing the application architecture of SAP S/4HANA, we begin with the central master data objects product master and business partner, as they are reused by all applications (see Chapter 8). There we describe the basic architecture of bill of materials and variant configuration, too (see Chapter 8, Section 8.2), which (from a functional capability perspective) is part of R&D and engineering. Next, we'll explore the application architecture of sales, service, and sourcing and procurement. A key quality of SAP S/4HANA is that all these operational business processes (sales, service, procurement, but also production) are closely integrated with the management of material flows (logistics) and money flows (finance). It is the task of application architecture to ensure this interlinkage and guarantee consistent logistics and financial data. This is why we have collected the multiple logistics application architecture topics into Chapter 12: Logistics and Manufacturing. For the supply chain capability, we picked the application architecture of extended warehouse management as an example (see Chapter 13) and cover further aspects like inventory management in Chapter 12: Logistics and Manufacturing.

Note

For those interested in SAP Transportation Management, you'll find insights into its application architecture in Transportation Management with SAP by Bernd Lauterbach, Stefan Sauer, Jens Gottlieb, Christopher Sürie, and Ulrich Benz (available at www.sap-press.com/4768).

Chapter 14: Finance, Governance, Risk and Compliance covers the application architecture of the big functional blocks of accounting, payables and receivables management, and treasury. Then we move on to SAP S/4HANA for central finance and the specific architecture that makes it possible to operate the SAP S/4HANA Finance solution at corporate level, connected to local instances of SAP S/4HANA Finance for subsidiaries. Finally we close out the chapter with an overview of the SAP governance, risk, and compliance solutions and their architecture. We conclude this part of the book with a look at localization in SAP S/4HANA in Chapter 15 by explaining the architecture of two interesting applications: advanced compliance reporting and the SAP Document Compliance solution. As of now, the application architecture of human capital management included in SAP S/4HANA on-premise has not changed since SAP ERP. For more details, we recommend The Architecture of SAP ERP by Jochen Boeder and Bernhard Groene (Tredition, 2014). In SAP S/4HANA on-premise, the human capital management capability is part of the compatibility scope, as the strategic human capital management solution of SAP is the cloud solution SAP SuccessFactors Employee Central. From an application architecture perspective, we cover R&D and engineering and asset management partly with bill of materials (covered in Chapter 8, Section 8.2) and field service (covered in Chapter 10, Section 10.2). However, these are areas of innovation and the applications are in the process of redesign. At the time of publication, they have not implemented a final architecture we could share publicly in this book. Altogether, we cover the application architecture of all components colored dark grey in Figure II.1. SAP S/4HANA has built-in integration with the cloud solutions SAP Ariba, SAP Fieldglass, SAP SuccessFactors, and SAP Concur (see Figure II.1). We refer to these integrations when we describe the corresponding SAP S/4HANA application. The integration with SAP SuccessFactors Employee Central, for example, is addressed in Chapter 8 when we discuss business partner master data, and the integration with SAP Ariba is covered in Chapter 11 about sourcing and procurement.

8

Master Data

Master data is the core data of an enterprise. In this chapter we discuss selected, important types of master data in SAP S/4HANA. We give an overview of business partner master data, which captures information about persons or organizations that are of interest for your business, and product master data, which manages various aspects of the products you are purchasing, selling, or producing. We also discuss master data that describes how products are characterized, composed, produced, and configured.

When an organization serves its customers, it typically works with a number of stakeholders: these can be sales partners or suppliers, employees or temporary workers, manufacturing partners or consulting companies. All provide some input to the company’s operations, which is then transformed by the organization into an output desired by its customers. In doing so, the organization can purchase, manufacture, and sell a variety of products or services. This is the core around which all business is conducted, and the objects representing this core are the product master and business partner master data objects in SAP S/4HANA. In this context, the term master data describes business objects on which business transactions are performed. The business partner and product are the most prominent ones, but there are others, such as organizations and accounts.

8.1

Product Master

The material master has always been an important element of central master data in SAP’s ERP software. The significance of the material master is evident from the fact that it’s the central source of information about all materials that an organization procures, produces, and keeps in stock. It holds information about how a material is used in various processes, such as sales, procurement, planning, storage, accounting, and so on. For example, it holds the sales price, minimum order quantity, and name of the sales department responsible for a material. For procurement, it defines the responsible purchasing group, which can be used later when creating a purchase order. It also defines the over- and underdelivery tolerances and the purchase order unit. For storage, it defines the packaging dimensions and storage conditions. To support financial accounting, it stores the valuation and costing data. The material master also has additional flavors that were originally built for various specialized SAP solutions, such as SAP for Retail solutions and SAP Supply Chain Management (SAP SCM). Thus, retail article master was built to support the needs of the retail industry more comprehensively and therefore provides several retail-specific views and the ability to maintain various types of articles, such as single articles, prepacks, sales sets, displays, generic articles, and so on. Furthermore, a material in SAP SCM supports processes such as extended warehouse management (EWM) and advanced planning and optimization (APO) that include demand planning, production planning, scheduling-dependent requirements from within the supply network, and so on. SAP S/4HANA follows the principle of one as a key driver of simplification (see Chapter 1, Section 1.2.2). This means that it offers a single solution for any given

business process. Therefore, in SAP S/4HANA the various flavors of material master have been simplified into a single integrated product master. Let's consider the example of maintaining and displaying material master data. SAP ERP provided three central transaction codes for this purpose: Transactions MM01 (Create Material Master), MM02 (Change Material Master), and MM03 (Display Material Master). Transaction MM01 is also used to extend existing material master data with new organizational data, such as new plant or sales data. There are separate transaction codes for maintaining retail article master data, such as Transactions MM41, MM42, and MM43. SCM/APO tools such as Transaction /SAPAPO/MAT1 are used to maintain advanced planning fields. If the material is relevant for EWM, the relevant warehouse data can be maintained using Transaction /SCWM/MAT1. As noted, the various representations of material master and its extended functionality have been simplified into a single integrated product master application in SAP S/4HANA. For example, various retail article types such as a single article, generic article, and structured article can be maintained from within the Manage Product Master Data SAP Fiori app. This app provides the required integrations to support this functionality, such as a variant matrix to configure generic article variants and bill of materials (BOM) integration as required by the structured article. Although having a single integrated product master application simplifies the end user's experience, it increases the complexity of the data model itself. Let's explore how product master data works in SAP S/4HANA.

8.1.1

Product Master Data Model

A new Manage Product Master Data SAP Fiori app has been provided in SAP S/4HANA. This is a draft-enabled application, which means that the entered field values remain persisted in temporary storage and are not lost in case of disconnection from the network or due to closing the application (for details on the draft feature, see Chapter 2, Section 2.2.1). Draft enablement is an important feature in the product master, especially because it may not be possible to fully define product master data in a single step. Defining a new product is often an innovation within an organization, so it is only appropriate that the entered data be stored in a temporary draft state without impacting real business processes, until it is ready to be activated. Let’s look at the data model of a product master in SAP S/4HANA. The product business object is modeled in a tree-like structure, with the root node of its CDS model being Product (see Figure 8.1).

Figure 8.1

Product Master Data Model in SAP S/4HANA

Before a product can be used in business processes, its general behavior is defined using some basic attributes stored in the product root node. These basic attributes include the product type, which has an influence on the user interface facets and fields that become available for input; the base unit of measure, which is the overall unit of measure in which stock is managed for this product; its dimensions; packaging data; whether the product is classified as dangerous from environmental, health, and safety perspectives; and so on. The data related to a product’s general attributes is still stored in database table MARA to maintain compatibility with SAP ERP. Several Units of Measure (UoMs) can be defined for a product. Apart from the base unit of measure, multiple alternative units of measure can also be defined for use in various processes. For example, purchasing can use a different UoM (order unit) than sales (sales unit). The UoM in which products are issued from a warehouse (unit of issue) can be different from the UoM in which products are managed in warehouse management (WM unit). For managing inventory, the system converts the quantities entered in the WM unit to the base unit of measure. So, if a product is normally managed using pieces as the base UoM, but several pieces are contained in a box, it may be more appropriate to define a WM unit that is more suitable for warehouse management purposes. A conversion factor is defined from each alternative UoM to the base UoM. Therefore, the base UoM must be carefully chosen such that this conversion results in a simple decimal. Data related to UoMs is stored in database table MARM. Products can be uniquely identified by the International Article Number, also known as European Article Number (EAN) or Global Trade Item Numbers (GTIN). These numbers refer to a unit of measure or type of packaging, such as a pack of 10 pieces or a box.

For each unit of measure defined for a product, it's possible to assign one or more EANs/GTINs. The data related to EANs/GTINs is saved in database table MEAN. The elements of the product master data model influence several other downstream functions in SAP S/4HANA. Therefore, it's important to talk about some of these nodes and attributes in the context of other areas that use them. A comprehensive discussion of all these would be beyond the scope of this book, but it's worth explaining several important areas in the following sections.

Sales

Product master data defines the structure of the sales areas responsible for a product. It also influences important sales functions such as price calculation by defining classifications for country-specific taxes; defining grouping terms such as pricing group, rebate group, and pricing reference product; defining the application of discounts and promotions with reference to product hierarchies; and so on. Also, it defines quantity stipulations such as the minimum quantity that a customer may order. Sales-relevant attributes can be maintained both for the complete product and for a specific sales area, which is a combination of sales organization, distribution channel, and division. The sales organization is the organizational unit responsible for the sale of certain products, whereas the distribution channel defines the way in which products or services reach the customer, such as wholesale, retail, or direct sales. A division is a way of grouping similar products. For example, if a sales organization sells food and nonfood products through both retail and wholesale distribution channels, then each distribution channel could be further split into food and nonfood divisions. A product is always assigned to just one division. From the point of view of sales, the use of divisions allows sales to be organized around groups of similar products. This allows the people in a division who process sales orders and service customers to specialize within a manageable area of expertise. A sales area is also associated with a delivering plant that delivers inventory to it. The plant that is provided here is the plant that will be defaulted into sales documents. The data related to the sales area is saved in database table MVKE, though some sales-relevant attributes, such as those for packaging and shipping data, are stored in table MARA.

Purchasing

Product master data stores information to support the procurement processes. The purchasing group available at the plant level identifies the buyer responsible for the procurement of a product or a class of products. The value entered here could be defaulted in the purchasing documents, and this is the medium through which contacts with the supplier are maintained. An important attribute to mention is the purchasing value key defined at the product level. It provides a useful way of automating communication with the supplier. This key defines the reminder days, tolerance limits, shipping instructions, and order acknowledgment requirement of the product being procured. These values are defined in customizing and can be proposed in purchasing documents.
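To make the table split described above more tangible, the following ABAP sketch reads the sales organizations and distribution channels maintained for one product by joining the general data table MARA with the sales data table MVKE. It is a minimal illustration only; the product number is invented, and a real application would typically access the data through released APIs or CDS views rather than selecting from the tables directly.

```abap
REPORT zdemo_product_sales_data.

START-OF-SELECTION.
  " List the sales areas (sales organization/distribution channel) for
  " which sales data of a product has been maintained.
  " FG126 is a purely illustrative product number.
  SELECT a~matnr, a~mtart, s~vkorg, s~vtweg
    FROM mara AS a
    INNER JOIN mvke AS s
      ON s~matnr = a~matnr
    WHERE a~matnr = 'FG126'
    INTO TABLE @DATA(lt_sales_data).

  cl_demo_output=>display( lt_sales_data ).
```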

Inventory Management

Inventory management deals with the management of product stock on a quantity and value basis. It includes planning, entry, and documentation of all goods movements. In the product master, attributes that support inventory management are mainly available in product storage data, plant, and EWM. The product storage data allows defining attributes such as the temperature and storage conditions, container types for storage and shipment, and shelf life data. At the plant level, there is information such as the unit of issue of goods and cycle counting categories. Cycle counting is a physical inventory procedure in which products are counted at regular intervals during a fiscal year. At the plant level, products can be grouped with regard to their logistical handling, such as fragile, dangerous, bulky, or liquid. This is then used in the calculation of working loads such as placement into stock and picking. In EWM, the system uses the put-away control indicator and stock removal control indicator, maintained in the product master, to influence the put-away strategy and stock-removal strategy. This controls the determination of the storage bin and the optimal picking bin. Put away refers to the process of moving a product from a received shipment and putting it into warehouse storage. Each time a warehouse task is created, the system reverts to put-away and stock-removal strategies. During goods receipt, the system uses put-away strategies to utilize the available warehouse capacity by automatically determining suitable storage bins for the new products. During goods issue, the system uses stock-removal strategies to determine the optimal picking bin.

Finance

It's essential that data about product inventory valuation be accurately recorded in the organization's financial accounting system as it impacts strategic decision-making processes. The organizational level at which the product is valuated is called the valuation area, and this is usually a plant. This makes sense because production and procurement techniques, applicable taxes, and overhead costs can vary from one plant to another. The product master has several attributes that determine how the inventory is valuated. For example, the price control indicator controls whether inventory valuation is done using a standard price or moving average price. A standard price is a constant price at which a product is valuated without taking goods movements and invoices into account. The moving average price is a price that changes due to goods movements and the entry of invoices and is calculated by dividing the value of the product by the quantity of product in stock. It's automatically recalculated by the system after each goods movement or invoice entry. The valuation class allows the stock values of products of the same product type to be posted to different general ledger accounts, or stock values of products of different product types to be posted to the same general ledger account. The decision about how a product is valuated is an actual accounting function. The valuation class just helps direct inventory value to the appropriate general ledger account. The structure of your general ledger itself is defined by the financial controller or accountant (see Chapter 14, Section 14.2).
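As a small worked example of the moving average price logic just described, the following sketch recalculates the price after a goods receipt. The quantities and values are invented, and the real valuation logic in SAP S/4HANA covers far more cases (price units, currencies, invoice verification, and so on); this only illustrates the value-divided-by-quantity principle.

```abap
REPORT zdemo_moving_average_price.

" Current stock: 100 pieces with a total value of 1000.00 (10.00 each).
DATA gv_stock_qty   TYPE p LENGTH 13 DECIMALS 3 VALUE '100.000'.
DATA gv_stock_value TYPE p LENGTH 15 DECIMALS 2 VALUE '1000.00'.

" Goods receipt: 50 pieces purchased at 12.00 each (600.00 in total).
DATA gv_receipt_qty   TYPE p LENGTH 13 DECIMALS 3 VALUE '50.000'.
DATA gv_receipt_value TYPE p LENGTH 15 DECIMALS 2 VALUE '600.00'.

DATA gv_moving_average TYPE p LENGTH 15 DECIMALS 2.

START-OF-SELECTION.
  gv_stock_qty   = gv_stock_qty   + gv_receipt_qty.
  gv_stock_value = gv_stock_value + gv_receipt_value.

  " Moving average price = total stock value / total stock quantity.
  gv_moving_average = gv_stock_value / gv_stock_qty.   " 1600.00 / 150 = 10.67

  cl_demo_output=>display( |New moving average price: { gv_moving_average }| ).
```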

8.1.2

Product Hierarchy

Organizations can offer a huge variety of products in sales processes. It's often necessary and helpful to organize products in so-called product hierarchies, which allow structuring the products along multiple levels, for example for pricing or analytical purposes. So instead of analyzing sales data or doing price planning and maintenance for hundreds of products individually, you can perform investigations or actions on a higher level. Product hierarchies play a significant role in sales. For example, a product hierarchy is used within price maintenance and calculation. This means that the product hierarchy level can be added as a field in the pricing condition table to define the validity of a price or discount. The hierarchy level, along with the validity period, is an input to the pricing application in all relevant documents, such as the sales order, billing document, and so on. Users can also use a product hierarchy in sales embedded analytics. The product hierarchy is not a new concept and is also available in SAP ERP, but in SAP S/4HANA Cloud product hierarchies have been designed and implemented from scratch with a completely new architecture to introduce new features. To achieve this, the information about hierarchy assignment is no longer stored as part of the product data model. Instead, the information about product assignment to a hierarchy is stored within hierarchy runtime tables. Externalizing the persistence of hierarchy assignment outside the product data model helps to achieve a very important objective, which is introducing the concept of time dependency in the hierarchy. Time dependency is very useful given that certain pricing conditions, such as discounts or promotions, are usually applied to a given set of products only for a limited time period. Using a validity period, product assignments to hierarchies can be defined in a flexible manner, and changes over a period of time can be easily maintained. Another important enhancement in the new product hierarchy in SAP S/4HANA Cloud is that there is no restriction on the number of hierarchy levels. This is a useful improvement over SAP ERP product hierarchies, in which a maximum of three hierarchy levels are available by default. Users sometimes found this to be insufficient. The Manage Product Hierarchies SAP Fiori app shows the product hierarchy ID and hierarchy version information (see the left side of Figure 8.2). This information is also displayed on the right-hand side above the hierarchy tree structure. Also displayed are the validity from and to dates. Note that the validity periods of the various hierarchy versions belonging to the same hierarchy ID must be nonoverlapping.

Figure 8.2

Manage Product Hierarchies App

Looking at the tree structure on the right side, the difference from the SAP ERP material product hierarchy field is more evident. Whereas a product hierarchy in an SAP ERP material was an 18-character attribute in tables MARA and MVKE, product hierarchy information in SAP S/4HANA Cloud is not an attribute in the product data model and is displayed as a tree structure in a separate SAP Fiori app. Also note that products can only be assigned at the leaf nodes (lowest-level nodes) in this hierarchical tree structure.
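The following ABAP sketch illustrates the idea of time-dependent hierarchy assignment in a deliberately simplified form: a product is assigned to a leaf node only for a validity period, and the assignment that applies on a key date is determined at runtime. The table layout, node names, and dates are invented for illustration and do not reflect the actual hierarchy runtime tables.

```abap
REPORT zdemo_hierarchy_validity.

TYPES: BEGIN OF ty_assignment,
         product    TYPE string,
         node       TYPE string,
         valid_from TYPE d,
         valid_to   TYPE d,
       END OF ty_assignment,
       ty_assignments TYPE STANDARD TABLE OF ty_assignment WITH EMPTY KEY.

START-OF-SELECTION.
  " Product FG126 moves from one hierarchy node to another at year end.
  DATA(gt_assignments) = VALUE ty_assignments(
    ( product = 'FG126' node = 'SUITS_CLASSIC' valid_from = '20230101' valid_to = '20231231' )
    ( product = 'FG126' node = 'SUITS_PREMIUM' valid_from = '20240101' valid_to = '99991231' ) ).

  DATA(lv_key_date) = CONV d( '20240315' ).

  " Pick the assignment that is valid on the key date.
  LOOP AT gt_assignments INTO DATA(ls_assignment)
       WHERE valid_from <= lv_key_date AND valid_to >= lv_key_date.
    WRITE: / |{ ls_assignment-product } belongs to { ls_assignment-node } on { lv_key_date DATE = ISO }|.
  ENDLOOP.
```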

8.1.3

Data Migration

Migrating data is one of the first activities that an organization performs when it begins using SAP S/4HANA. In the context of a product master, new migration objects have been provided that allow an organization to efficiently handle the data-migration process for product master data and for product hierarchies. For migration of a product master, two migration objects are available, both on-premise and in the cloud: one to create new products in SAP S/4HANA, and the other to extend existing products with new organizational data. The signatures of these migration objects accept the complete product data model structure, along with parameters to support field extensibility. As for migration extensibility, these data migration objects support all the extensibility business contexts made available for the product master in the extensibility registry of SAP S/4HANA (for more on in-app extensibility, see Chapter 5, Section 5.1). With this, field extensibility is supported during migration for components such as product basic data, plant, storage location, sales data, valuation, and so on. For migration of product hierarchies, two migration objects have also been provided in the cloud: one to create the product hierarchy node structure and the other to assign products to the hierarchy. There are some prerequisites to meet before product hierarchy data migration can be executed. First and foremost, the migration of product master data should already have been completed. This should be obvious because before you assign any products to a hierarchy those products need to exist in SAP S/4HANA. The other prerequisite is that the hierarchy node values must already have been configured in SAP S/4HANA using a self-service configuration provided for this purpose (for details, see Section 8.1.6). If you are wondering why two migration objects are needed to migrate hierarchies, there is a good reason for it. The hierarchy node structure normally doesn't change often. However, it's common for new products to be created and assigned to the hierarchy. Therefore, having two separate objects ensures that the node structure is kept unchanged and stable when the user only wants to assign new products to the hierarchy.

8.1.4

Product SOAP Service API

To replicate product master data between multiple systems, the asynchronous ProductMDMBulkReplicateRequestMessage SOAP service has been provided. It is based on the data replication framework (DRF; see Chapter 6, Section 6.8.2). The entire process of determining which business object will be replicated to which target system, through which interface, and when is managed by DRF. Note that DRF strictly applies a push mechanism. A prerequisite is to have an outbound interface for the SOAP service. Once the interface has been registered in DRF by creating an outbound implementation, a replication model can be created. In SAP S/4HANA on-premise, the replication model and target systems are configured using Transaction DRFIMG. In SAP S/4HANA Cloud, the replication model and target systems in DRF are derived from the communication arrangement and communication system, and there are communication scenario SAP Fiori apps to maintain them (for details on communication scenarios, see Chapter 6, Section 6.4). An important topic in the context of data replication is filter objects and filters. A filter object defines the selection criteria used to determine the data objects to be replicated. It sequentially combines one or more filters. Maintenance of the selection criteria is done by the master data steward. A filter carries out the comparison of a given set of objects against the maintained filter criteria. It returns the list of objects that match the filter criteria. Explicit filters (simple and complex) and segment filters are available in the product SOAP service. Explicit filters are configured explicitly by the user:

- Simple filters are defined for attributes on a single entity root table, such as the MATNR, MATKL, and MTART fields of table MARA. The evaluation of simple filters is generic in the way that it can be enhanced easily just by adding another attribute to the filter using append technology (no code change necessary).
- Complex filters are not directly related to the entity root table but need to be evaluated by certain function modules or methods, for example for selected nodes of the article hierarchy or merchandise category hierarchy. The semantic interpretation of complex filters is coded using the corresponding APIs. To enhance complex filters, code changes are necessary.

Segment filters are special filters that generally do not limit the number of objects. Rather, they are used to exclude parts (segments) of the business object from replication.
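To give a feel for what a simple filter expresses, the following sketch applies selection criteria on fields of the entity root table MARA as a range condition. This only illustrates the idea of narrowing down the products to be replicated; it is not the DRF implementation itself, and the material type used is just an example.

```abap
REPORT zdemo_simple_filter.

DATA gt_type_filter TYPE RANGE OF mara-mtart.

START-OF-SELECTION.
  " Example criterion: consider only products of material type FERT.
  gt_type_filter = VALUE #( ( sign = 'I' option = 'EQ' low = 'FERT' ) ).

  SELECT matnr, mtart, matkl
    FROM mara
    WHERE mtart IN @gt_type_filter
    INTO TABLE @DATA(gt_products_to_replicate).

  cl_demo_output=>display( gt_products_to_replicate ).
```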

8.1.5

Product Master Extensibility

Organizations deploying SAP S/4HANA often require adding custom fields to the product data model, and such fields need to be supported from end to end: in the database tables, in the user interface, in data migration objects, and in integration APIs. This activity is usually performed by a key user, who has additional authorizations to adapt artifacts like UIs, views on services, processes, and so on to the needs of their assigned user group (for details, see key user extensibility, Chapter 5, Section 5.1). Extensibility business contexts are provided for product master data in the SAP S/4HANA Extensibility Registry. Field extensibility for product general data, plant and storage location data, sales data, valuation data, and so on is enabled through the PRODUCT, PRODUCT_PLANT, PRODUCT_SALES, PRODUCT_STORAGELOC, and PRODUCT_VALUATION extensibility business contexts, among others. The corresponding database tables and ABAP Dictionary (DDIC) structures include a so-called extension include (that is, a DDIC structure with only one dummy field) to provide a stable anchor point for the field extension. The ABAP logic in the application generically transfers extension fields between structures with MOVE-CORRESPONDING. The corresponding CDS views provide a so-called extension include association to an extension include view. This extension include view is a CDS view on the database table containing only the key fields and—later—the custom fields from the extension include. The extension include CDS view acts as a stable anchor point for CDS view extensions and makes the extension fields accessible on the extensible CDS views through the extension include association. The custom fields are added to the extension include view as soon as they are added to the persistency. The key user can use the Custom Fields and Logic app in the SAP S/4HANA Fiori launchpad to add custom fields to the required extensibility business contexts. This automatically makes these custom fields available for adaptation in the UI, and automatically extends the database tables, CDS views, OData services, data migration objects, and CDS-based enterprise search. Extensibility may also be required in the business logic; for this, several BAdIs available for the material master in SAP ERP are also supported for the product master in SAP S/4HANA. Also, several BAdIs have been provisioned that can be implemented by key users in a cloud environment.
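The generic transfer of extension fields mentioned above can be pictured with a small sketch: MOVE-CORRESPONDING copies all identically named components, so a custom field added through the extension include simply travels along. The structures and the ZZ_COLOR field are invented for illustration; real extension fields are added to the DDIC structures by the extensibility tooling.

```abap
REPORT zdemo_extension_fields.

TYPES: BEGIN OF ty_db_record,
         product   TYPE string,
         base_unit TYPE string,
         zz_color  TYPE string,   " illustrative custom field from the extension include
       END OF ty_db_record,
       BEGIN OF ty_service_record,
         product  TYPE string,
         zz_color TYPE string,
       END OF ty_service_record.

START-OF-SELECTION.
  DATA(ls_db) = VALUE ty_db_record( product = 'FG126' base_unit = 'PC' zz_color = 'RED' ).
  DATA ls_service TYPE ty_service_record.

  " All identically named fields, including the custom field, are copied
  " without any field-specific code.
  MOVE-CORRESPONDING ls_db TO ls_service.

  cl_demo_output=>display( ls_service ).
```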

8.1.6

Self-Service Configuration

As we discussed earlier, product master data is used across various functional areas within SAP S/4HANA. This is also why it needs to be highly configurable. Of the large number of configuration options, we'll explain a set of central ones:

- Defining the format for product IDs: This is where the length and display template for a product ID are defined. With SAP S/4HANA, the domain of the product ID has been extended and can have a maximum length of 40 characters. The lexicographical indicator defines the way numeric product IDs are stored in the database.
- Defining attributes of product types: Here you define the product types. Whenever a product master record is created, it must be assigned to such a product type. Further, you can configure information such as the user departments—purchasing, sales, accounting, and so on—for which data of this product type is relevant. The price control—standard or moving average price—that is used to valuate the product stock of a given type is another important piece of information that is configured here.
- Defining attributes of EANs/UPCs: In this configuration, the International Article Number (EAN) categories are defined along with their properties, such as the number range object and interval assignment, check-digit algorithm, EAN length, and so on.
- Defining valuation classes: A valuation class is a grouping for products with the same account determination. When a user creates a product, she or he must enter its valuation class in the accounting data. The system uses the values configured in this customizing to check whether the entered valuation class is allowed for the product type.
- Configuring nodes for product hierarchies: The node names that are consumed within the Manage Product Hierarchies SAP Fiori app are configured here. These node names define the structure for all product hierarchies to be created in the application.

These are just a few important ones out of the many configuration options available for product master data in SAP S/4HANA.

8.2

Bill of Materials, Characteristics, and Configurations

This section gives an overview of the concepts of the bill of materials (BOM), classification system, and variant configuration. These are central functions for managing the structure and characteristics of products. These concepts are known from SAP ERP and remain important in SAP S/4HANA. SAP S/4HANA comes with the advanced variant configurator, a new integrated variant configurator optimized for the SAP HANA database.

8.2.1

Bill of Materials (BOM)

A product can be purchased or it can be produced. If it is produced, a BOM will list the raw materials and assemblies from which the final product is created. The structuring of a product can vary in different phases of the product lifecycle. During the design phase, for example, the engineering team can structure a product in one way, and for production the product can be structured in another way to suit manufacturing on the factory lines. For this reason, there can be an engineering BOM and a production BOM for the same product. A BOM is a complete, formally structured list of items that make up a product or an assembly that is used as a part of a product. BOMs can exist for products, but also for other types of objects, such as sales orders. For our purposes, we focus on BOMs for products. The BOM for a product is additional master data linked to the product, but it isn't part of the product master itself. Figure 8.3 presents a simplified conceptual information model for the product BOM. For products that are produced by the organization, we have a BOM that consists of the BOM header and the item list. The BOM header contains properties such as the assignment to a plant or the validity period of the BOM. Assigned to each BOM header is a list of BOM items. In our simplified model, each BOM item refers to a product. As we will discuss later, the item list of a configurable product can contain placeholder items that are replaced by a concrete product based on configuration. In addition to the product's ID, the BOM item also stores the quantity and unit of measure.

Figure 8.3

Bill of Materials: Simplified Conceptual Model

BOM items can be simple items or assemblies. Simple items are raw materials or parts purchased from suppliers. Assemblies are produced products that have their own BOMs. This way, a nested hierarchical product structure can be represented. You can construct the complete hierarchical structure of a product by recursively replacing the assemblies with their BOMs. This process is called BOM explosion. In production, it's sometimes required to define an assembly as an intermediate product that does not physically exist in stock, but only during production. Such assemblies are called phantom assemblies. As Figure 8.3 shows, there can be multiple BOMs for the same product. One reason is that different BOMs may be used depending on the usage—for example, an engineering BOM and a production BOM. Furthermore, there can be BOMs for different alternative ways to produce the same product—for example, depending on the plant where it is manufactured. In general, you need to identify a BOM by specifying the BOM usage (such as production) and the BOM alternative (identified with a number).
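The following ABAP sketch shows the recursive principle of a BOM explosion on an invented single-level BOM table. It is purely conceptual: the real BOM persistence in SAP S/4HANA also deals with headers, usages, alternatives, validity periods, phantom assemblies, and much more.

```abap
REPORT zdemo_bom_explosion.

TYPES: BEGIN OF ty_bom_item,
         parent    TYPE string,
         component TYPE string,
         quantity  TYPE i,
       END OF ty_bom_item,
       ty_bom_items TYPE STANDARD TABLE OF ty_bom_item WITH EMPTY KEY.

DATA gt_bom TYPE ty_bom_items.

CLASS lcl_explosion DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS explode
      IMPORTING iv_product  TYPE string
                iv_quantity TYPE i DEFAULT 1
                iv_level    TYPE i DEFAULT 0.
ENDCLASS.

CLASS lcl_explosion IMPLEMENTATION.
  METHOD explode.
    LOOP AT gt_bom INTO DATA(ls_item) WHERE parent = iv_product.
      DATA(lv_total) = ls_item-quantity * iv_quantity.
      WRITE: / |Level { iv_level }: { ls_item-component } x { lv_total }|.
      " If the component is itself an assembly, it has BOM items of its own.
      explode( iv_product  = ls_item-component
               iv_quantity = lv_total
               iv_level    = iv_level + 1 ).
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  " Invented two-level product structure of a bicycle.
  gt_bom = VALUE #( ( parent = 'BIKE'  component = 'FRAME' quantity = 1 )
                    ( parent = 'BIKE'  component = 'WHEEL' quantity = 2 )
                    ( parent = 'WHEEL' component = 'SPOKE' quantity = 32 ) ).

  lcl_explosion=>explode( iv_product = 'BIKE' ).
```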

8.2.2

Classification System

A product master data record does not describe all the characteristics of the product itself—for example, the power, voltage, current, and noise of a power supply. Different products can have completely different characteristics. The classification system is a flexible mechanism for defining such characteristics without changing the software or the database structure.

With the classification system, you define characteristics of products and you classify the products using these characteristics. Products of the same class share the same characteristics. The classification system supports class hierarchies with subclasses that inherit properties from their superclass. Screws, for example, could have the characteristics length, diameter, head shape, and drive. To describe them, a user would define a screw class. Whenever the company maintains a new type of screw as a product, this new product is assigned to the screw class, and the actual values for the characteristics can be specified. For example, screw S04476 might have the following characteristic values: length = 60 mm, diameter = 5 mm, head shape = round, and drive = square. Technically, the classification system can be viewed as a dynamic data model that allows you to dynamically assign custom properties without extending or modifying the underlying database tables. The classification system is not only used for products, but also for other objects, such as BOMs, documents, batches, customers, or suppliers. Figure 8.4 presents the information model of the classification system. Each object has an object type that indicates whether it is a product, a BOM, a work center, and so on. Objects can be assigned to classes. Each class has a class type that restricts the object types of the objects that can be assigned to the class. Our screw class has the class type 001, which is reserved for products (materials). Our object S04476 is of the material object type and therefore can be assigned to the screw class.

Figure 8.4

Classification System Information Model

Classes can be arranged in class hierarchies with inheritance of characteristics from one or multiple superclasses. Dependencies can be created between the characteristics. For example, a company can restrict the length of star-type screws to 10 mm.
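The screw example can be condensed into a small data sketch: a classified object carries, for its class, a set of characteristic-value pairs. The structures below are invented to illustrate the idea only; the real classification data model is considerably more elaborate.

```abap
REPORT zdemo_classification.

TYPES: BEGIN OF ty_char_value,
         characteristic TYPE string,
         value          TYPE string,
       END OF ty_char_value,
       ty_char_values TYPE STANDARD TABLE OF ty_char_value WITH EMPTY KEY,
       BEGIN OF ty_classification,
         object TYPE string,   " e.g. a product number
         class  TYPE string,   " e.g. the screw class
         values TYPE ty_char_values,
       END OF ty_classification.

START-OF-SELECTION.
  " Screw S04476 assigned to the screw class with its characteristic values.
  DATA(ls_screw) = VALUE ty_classification(
    object = 'S04476'
    class  = 'SCREWS'
    values = VALUE #( ( characteristic = 'LENGTH'     value = '60 mm'  )
                      ( characteristic = 'DIAMETER'   value = '5 mm'   )
                      ( characteristic = 'HEAD_SHAPE' value = 'round'  )
                      ( characteristic = 'DRIVE'      value = 'square' ) ) ).

  cl_demo_output=>display( ls_screw ).
```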

8.2.3

Variant Configuration

Some products have multiple variants that can be selected based on configuration. A well-known example is a car, for which the customer configures the color, engine type, power, seat material and color, and so on. A variant configuration allows you to define such configurable products. Based on selected configuration values, the system

determines a concrete instance of the configurable product, with the resulting BOM and the resulting routing. The routing is needed for production and specifies the sequence of steps that are needed to create the product. In SAP S/4HANA a new integrated variant configurator has been built, which is optimized for the SAP HANA database: the advanced variant configurator (AVC). It is included in the solutions SAP S/4HANA for advanced variant configuration and SAP S/4HANA Cloud for advanced variant configuration. The advanced variant configurator is the successor of the traditional variant configurator. The core of the AVC is a completely new configuration engine, which has been developed together with Fraunhofer Institute Kaiserslautern. It is based on the award-winning constraint solver Gecode. The AVC comes along with an SAP Fiori-based user interface. It is embedded in the make-to-order process for single-level and multilevel models. Furthermore, it includes a new SAP Fiori-based simulation environment, which can be used to check whether a new or changed variant configuration model works correctly. The advanced variant configurator uses the same master data and the same storage of configuration results as the traditional configurator. In SAP S/4HANA on-premise, both configurators are available. Here it's possible to have configuration models for the traditional configurator and the AVC in parallel. In SAP S/4HANA Cloud, only the advanced variant configurator can be used. The master data used by the variant configuration is based on the following elements:

- Variant classes
- BOMs and routings
- Configuration profiles
- Object dependencies
- User interface/grouping

Figure 8.5 shows a conceptual information model with the elements of variant configuration and their relationships.

Figure 8.5

Elements of Variant Configuration in SAP S/4HANA

8.2.4

Variant Classes

Variant configuration uses the classification system to define the configuration parameters and their allowed values. As you can see in Figure 8.5, each configurable product is assigned to a variant class. A variant class is a class of a specific class type that is reserved for variants. The characteristics of a variant class are the configuration parameters for which the values are selected during the configuration process. For example, for a configurable car, the CARS variant class could be defined, with enginetype and version characteristics. For each characteristic, the allowed characteristics values are defined—for example, electric and petrol engine types and standard, premium, and sport versions. When a car is ordered, the customer needs to choose from these values. The result is the actual configuration of the ordered product. It contains the values of the characteristics of the variant class. Note that there is a significant difference between how classes are used in configuration and how they are used in classification. In classification, the characteristics values are specified when the product is assigned to a class. This happens as part of master data maintenance. For variant classes, this is different: the characteristics values need to be specified for every instance of the configurable product that is sold.
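The CARS example can again be made concrete with a small sketch: the variant class defines which values are allowed per characteristic, and each ordered car carries its own set of chosen values, which can be checked against the allowed ones. Names and the check logic are purely illustrative.

```abap
REPORT zdemo_variant_class.

TYPES: BEGIN OF ty_char_value,
         characteristic TYPE string,
         value          TYPE string,
       END OF ty_char_value,
       ty_char_values TYPE STANDARD TABLE OF ty_char_value WITH EMPTY KEY.

START-OF-SELECTION.
  " Allowed values of the CARS variant class.
  DATA(gt_allowed) = VALUE ty_char_values(
    ( characteristic = 'ENGINETYPE' value = 'electric' )
    ( characteristic = 'ENGINETYPE' value = 'petrol'   )
    ( characteristic = 'VERSION'    value = 'standard' )
    ( characteristic = 'VERSION'    value = 'premium'  )
    ( characteristic = 'VERSION'    value = 'sport'    ) ).

  " Configuration chosen for one concrete ordered car.
  DATA(gt_configuration) = VALUE ty_char_values(
    ( characteristic = 'ENGINETYPE' value = 'electric' )
    ( characteristic = 'VERSION'    value = 'premium'  ) ).

  LOOP AT gt_configuration INTO DATA(ls_choice).
    IF line_exists( gt_allowed[ characteristic = ls_choice-characteristic
                                value          = ls_choice-value ] ).
      WRITE: / |{ ls_choice-characteristic } = { ls_choice-value } (allowed)|.
    ELSE.
      WRITE: / |{ ls_choice-characteristic } = { ls_choice-value } (not allowed)|.
    ENDIF.
  ENDLOOP.
```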

8.2.5

Super BOM

Different variants of a product are usually produced from different parts. Variant structures can be defined by creating a super BOM. A super BOM contains the union of all possible BOM items across all variants. The variant items can be optional or items that need to be selected from a set of choices. On the left side of Figure 8.6, you can see a fragment of the simplified super BOM for the CAR configurable product.

Figure 8.6

Representing Choice of Parts in BOM

The BOM contains two items for two possible engines, one for the PETROLENGINE product and one for the ELECTRICENGINE product. Both items have mutually exclusive selection conditions. The selection conditions refer to the enginetype characteristic of the CARS variant class to which CAR is assigned. When a car is configured with a concrete engine type, the corresponding item is selected. Like super BOMs, super routings can be defined to reflect variants in the steps for producing a product.
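A compact sketch of this selection logic: each super BOM item may carry a condition on a characteristic, and only items whose condition matches the chosen configuration survive the explosion. The condition format and the evaluation are invented for illustration; the real object dependency syntax and engine are described in the following sections.

```abap
REPORT zdemo_super_bom.

TYPES: BEGIN OF ty_super_bom_item,
         parent         TYPE string,
         component      TYPE string,
         characteristic TYPE string,   " empty = unconditional item
         required_value TYPE string,
       END OF ty_super_bom_item,
       ty_super_bom TYPE STANDARD TABLE OF ty_super_bom_item WITH EMPTY KEY,
       BEGIN OF ty_char_value,
         characteristic TYPE string,
         value          TYPE string,
       END OF ty_char_value,
       ty_configuration TYPE STANDARD TABLE OF ty_char_value WITH EMPTY KEY.

START-OF-SELECTION.
  " Super BOM of the configurable CAR: the two engine items exclude each other.
  DATA(gt_super_bom) = VALUE ty_super_bom(
    ( parent = 'CAR' component = 'CHASSIS' )
    ( parent = 'CAR' component = 'PETROLENGINE'   characteristic = 'ENGINETYPE' required_value = 'petrol' )
    ( parent = 'CAR' component = 'ELECTRICENGINE' characteristic = 'ENGINETYPE' required_value = 'electric' ) ).

  " Configuration chosen for one concrete ordered car.
  DATA(gt_configuration) = VALUE ty_configuration(
    ( characteristic = 'ENGINETYPE' value = 'electric' ) ).

  LOOP AT gt_super_bom INTO DATA(ls_item) WHERE parent = 'CAR'.
    IF ls_item-characteristic IS INITIAL
       OR line_exists( gt_configuration[ characteristic = ls_item-characteristic
                                         value          = ls_item-required_value ] ).
      WRITE: / ls_item-component.   " CHASSIS and ELECTRICENGINE are selected
    ENDIF.
  ENDLOOP.
```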

8.2.6  BOM with Class Items

Instead of including all selectable items in a super BOM, a BOM can contain items that are configurable placeholders for variable components. This option is shown on the right side of Figure 8.6. Here an additional ENGINES variant class is defined, which also has an enginetype characteristic. The PETROLENGINE and ELECTRICENGINE products are assigned to this variant class, and their enginetype characteristics are set to petrol or electric, respectively. In the BOM for the CAR product, a single item exists for the engine, which refers not to a product but to the ENGINES variant class. Such a BOM item is called a class item. During BOM explosion, the class item is replaced with a concrete product of this class.

8.2.7  Variant Configuration Profiles

The variant configuration profile contains settings that control the interactive configuration process and the configuration user interface, and it may contain dependencies that help assure consistency and derive characteristic values automatically. If a configurable product is to be configured with the advanced variant configurator, the Processing Mode attribute must be set to Advanced Variant Configuration.

8.2.8  Object Dependencies in Variant Configuration

We already mentioned that object dependencies can be defined for classes and characteristics. In classification, this is used to support the specification of consistent values when objects are assigned to classes. For variant configuration, the same dependency mechanism is used to define the dependencies between different elements of the configuration model. Object dependencies are used in variant configuration to ensure consistency, to restrict the allowed values of one characteristic based on the choice made for another one, and to determine the actual structure of products based on the characteristic values selected in the configuration.

As shown in Figure 8.5, there are different types of object dependencies: preconditions, selection conditions, procedures, and constraints. The diagram also shows which types of dependencies can be assigned to which entities of the configuration model. A special object dependency syntax exists to write the source code of conditions, procedures, and constraints. When the object dependency is released, the source code is compiled into an internal representation that is easier to evaluate at runtime.

In some cases, there are differences between the traditional configurator and the advanced variant configurator regarding the allowed syntax or the engine behavior during high-level configuration. For example, in the advanced variant configurator, new syntax has been introduced and the handling of decimals is more precise. To specify which configurator should use an object dependency, the Processing Mode attribute must be set to Classical or to Advanced Variant Configuration.

8.2.9  User Interface and Grouping

If configuration models contain many characteristics, you need a way to specify how they should appear on the user interface. In the traditional configurator, this can be done by defining a user interface, which is an optional part of the configuration profile. In the advanced variant configurator, a new characteristic group master data object has been introduced. Characteristics can be arranged into characteristic groups, which are displayed as tabs. The sequence of the tabs and the sequence of characteristics inside a tab can be defined, too. Unlike the user interface of the traditional configurator, a characteristic group can be reused in several configuration profiles.

8.2.10  Extensibility

If the dependency logic cannot be expressed with the object dependency syntax, it is possible to extend the logic with ABAP implementations. In the traditional configurator, variant functions can be used as part of object dependencies. In the advanced variant configurator, BAdIs are offered as an extension option, which can be used to modify the results before and after the engine does its calculations.

8.2.11  High-Level and Low-Level Configuration

The interactive configuration done during sales is called high-level configuration. The results of the high-level configuration are the characteristic values that were selected by the user or derived from user input. These resulting characteristics are stored in a set of tables called the configuration database and are used by subsequent processes. Figure 8.7 shows the architecture of the advanced variant configurator for high-level configuration. As you can see, the advanced variant configurator has a part that is implemented in ABAP and a part that is implemented in C++ as an application function library for SAP HANA. The C++ part runs directly in the SAP HANA database and uses its capabilities to ensure fast data access. The C++ part calculates the allowed values based on the user input and the object dependencies in a performant way. For that, it embeds the open-source constraint-solving library Gecode (https://www.gecode.org).

Figure 8.7  High-Level Architecture of the Advanced Variant Configurator

When the BOM and the routing are exploded for a configurable product, the actual BOM items and routing activities are selected. This selection process—without user interaction—is called low-level configuration. It uses the characteristic values determined during high-level configuration as configuration parameters and evaluates the object dependencies assigned to BOM and routing items. Low-level configuration uses dependencies of type selection condition to decide whether a component or an activity is chosen. Procedures are used to set additional values, such as the quantity of a BOM item. Dependencies assigned to characteristics, characteristic values, and configuration profiles are not evaluated in low-level configuration; they are relevant only during high-level configuration. The advanced variant configurator uses the same low-level configuration as the traditional configurator.

8.2.12  Embedded Analytics for Classification and Configuration Data

In the context of embedded analytics, a flexible solution is available for real-time analysis of classification and configuration data. A user can analyze, for example, which characteristic value combination has been sold most often. The basis of this concept is the VDM (see Chapter 2, Section 2.1) and the embedded analytics architecture (see Chapter 4, Section 4.1.2), which are based on CDS views. The VDM views that are relevant here represent classification and configuration data and the related business objects that provide the context in which the configuration was done (see Figure 8.8).

Figure 8.8  Embedded Analytics for Configuration Data

CDS views for classification and configuration have characteristics as fields. Because these characteristics are created by the user as master data, CDS views for classification and configuration cannot be provided by SAP. Therefore, SAP has introduced a concept in which CDS views for classification and configuration data can be generated by the key user. Figure 8.8 shows how it works.

In step 1, the key user runs report BSCL_CLASS_VIEW_GENERATION, providing a class name as input, to generate base CDS views of data category CUBE with fields that represent the characteristics. The content of these views is the configurations or classifications that have been created in the context of a business object like a sales order or a batch. Furthermore, the configuration-related views contain a special Product Configuration field, which represents an identifier for a configuration. A similar field is part of the CDS view of the business object (for example, a sales order item) for which the analysis will be done. This field can be used to join these two CDS views (step 2) so that the resulting CDS view contains fields from both the configuration and the business object. In the case of classification, the classification object key is used for joining instead of the product configuration field. The resulting combined CDS view is of data category CUBE. In step 3, an analytical query CDS view (with data category QUERY) is defined on top of the combined CDS cube view. This analytical query view can be consumed using several tools, such as the Analysis Path Framework (APF) service or SAP Analytics Cloud. More details can be found in SAP Note 2490167 (Generating CDS Views for Classification/Configuration in SAP S/4HANA Cloud).
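
To make the join in step 2 concrete, the following ABAP SQL sketch combines a generated configuration cube view with a sales order item view via the Product Configuration field. The view and field names (zc_carconfiguration, i_salesorderitemcube, enginetype) are illustrative assumptions, and in the productive approach the combination is modeled as CDS views rather than as an ABAP SELECT.

```abap
" Illustrative only: view and characteristic field names are assumptions.
SELECT item~salesorder,
       item~product,
       cfg~enginetype,                     " generated characteristic field
       SUM( item~netamount ) AS netamount
  FROM i_salesorderitemcube AS item
  INNER JOIN zc_carconfiguration AS cfg
    ON item~productconfiguration = cfg~productconfiguration
  GROUP BY item~salesorder, item~product, cfg~enginetype
  INTO TABLE @DATA(lt_sales_by_engine).
```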

8.3  Business Partner

A business partner is a person, organization, group of persons, or group of organizations in which a company has a business interest. The business partner is the legal entity with which a business transaction is performed. Business partners are classified into three types:

- Person: A business partner of type person is an individual who is performing a defined role in the organization for which the individual works. Example: An employee or contact person is a business partner of type person who works for or is a contact person for an organization.
- Organization: A business partner of type organization represents a company with which a business relationship is established. Example: SAP SE is a business corporation with which other organizations conduct business.
- Group: A business partner of type group allows you to map complex organizational structures of a business partner, like joint holder or cooperative society. Example: DSAG (the German-speaking SAP user group) is an organization of type group because it represents a group of companies using SAP.

In SAP S/4HANA, the business partner is the leading business object and single entry point to centrally maintain the business partner, customer, and supplier (formerly known as vendor) master data. Although the business partner is the leading object in SAP S/4HANA, the business partner master data is synchronized with customer and supplier data for the common entities to provide a smooth and non-disruptive evolution of the data model. The customer and supplier as known from SAP ERP continue to coexist in SAP S/4HANA as an extension to the business partner model. The customer and supplier master data of SAP ERP have several limitations, which the business partner data model has overcome (for details about the differences between SAP ERP and SAP S/4HANA, see SAP Note 2265093, S4TWL Business Partner Approach). In comparison with the customer and supplier data model of SAP ERP, the business partner data model in SAP S/4HANA has the following advantages:

- It's possible to maintain multiple addresses in a business partner record that can be used in a specific business context—for example, ship-to addresses and sold-to addresses. One of the addresses maintained in the business partner is flagged as the standard address and is transferred by default to the customer/supplier entity pertaining to the business partner. However, the customer/supplier data model only supported maintenance of a single address, which was a limitation in SAP Business Suite and resulted in increased data footprint. As part of data model extensions, new tables are planned to be introduced in customer and supplier entities to support additional business partner addresses.
- The business partner supports a large number of business roles, such as customer, supplier, contractor, employee, shareholder, internet user, and owner. Instead of having multiple business partner instances, one legal entity is stored as one business partner with different roles. For example, a company that acts as a customer as well as a supplier is stored as one business partner with two business partner roles. This reduces data redundancy and footprint because organizations do not have to create multiple customer/supplier master data objects pointing to the same legal entity.
- The business partner supports time dependency. At the sub-entity level, such as for business partner roles, addresses, relationships, and bank data, users can define a validity period.
- Multiple business partners can be linked with each other using the business partner relationship. For example, it's possible to maintain contact person relationships between business partners. One of the business partners could be an organization and the other partner could be a contact person for the organization.
- The business partner supports both business-to-business (B2B) and business-to-consumer (B2C) processes with the organization and person business partner types. Whereas the former customer/vendor data model predominantly described an organization (B2B), the business partner looks at the individual human being, too—the organization's counterparts in B2C processes.

8.3.1  Architecture of Business Partner Master Data

At the center of the business partner data model are the common attributes stored in table BUT000 (see Figure 8.9). The business partner type differentiates between organization and person. The business partner general data of a person includes attributes such as first name, last name, title, gender, birthdate, and language. Attributes such as organization name, organization title, legal entity (like a private or public company), and industry sector describe an organization.

Figure 8.9  Business Partner Data Model

A business partner can have different business roles based on the process in which the business partner is involved. Business partner roles are stored in table BUT100. The complete business partner data model is exposed as a CDS view in the VDM.

The customer role FLCU01 is used to maintain the customer general data and sales area data for a business partner. The customer financial accounting role FLCU00 is used to maintain the customer company code details for a business partner. The supplier role FLVN01 is used to maintain the supplier general data and supplier purchasing organization details for a business partner. The supplier financial accounting role FLVN00 is used to maintain the supplier company code details for a business partner.

Consider an example in which the business partner is a company—SAP SE, enabling the digital transformation for an automobile organization. In this case, we create SAP SE as a business partner of type organization with the supplier role. We also create a business partner of type person—Mr. Krishnan, with the contact person role—and establish a contact person relationship with the automobile organization. The business partner role could be a customer, supplier, employee, contact person, prospect, sold-to party, ship-to party, or payer. The business partner roles are assigned to a role category. When a user creates a business partner of role customer or supplier, the business partner maintenance triggers the customer-vendor integration (CVI), which automatically creates corresponding customer or supplier instances to be used in sales and procurement processes.

The bank details of a business partner, like the bank account number, bank account name, bank country, bank key, and International Bank Account Number (IBAN), are stored in table BUT0BK. The business partner payment card model (table BUT0CC) is used to support the maintenance of credit card master data details, like card type, validity, card number, and issuer. The business partner identification model (table BUT0ID) is used to maintain the business partner identity, like the social security number (SSN), driver's license number, or passport details. The business partner identification supports maintenance of the business partner identification number, validity, and country. The business partner communication model is used to maintain the address-specific and address-independent business partner communication data, like telephone number, mobile number, fax number, email, and website URL.

The business partner address model is used to maintain the address details for the business partner. It's possible to create multiple addresses and classify them using an address usage. The address usages also have a defined validity. For one business partner, one selected address is the default or standard address. The address of a business partner can be of three types:

- Organization/company address
- Private address
- Contact person address (workplace address)

Addresses for business partners are maintained in table ADRC. The architecture of the business partner functionality in SAP S/4HANA is shown in Figure 8.10, including customer-vendor integration and the integration with external systems.
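
As a minimal illustration of how these tables relate, the following ABAP SQL sketch reads the central attributes from BUT000 together with the assigned roles from BUT100. The TYPE field coding noted in the comment reflects the common BUT000 convention, but treat the exact field names and values as assumptions to verify in your system.

```abap
" Minimal sketch: central business partner attributes plus assigned roles.
SELECT bp~partner,
       bp~type,               " business partner category (assumed: 1 = person, 2 = organization)
       role~rltyp AS bp_role  " e.g. FLCU01, FLVN01
  FROM but000 AS bp
  LEFT OUTER JOIN but100 AS role
    ON role~partner = bp~partner
  INTO TABLE @DATA(lt_partner_roles).
```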

Figure 8.10  Architecture of Business Partner Application

Employee Role Business Partner Integration

Master data is the heart of digital business, and different applications require consistent master data across business processes. As part of one domain model, it is imperative to have one uniform view of the master data that is shared by applications in a distributed environment. The SAP Cloud Platform Master Data Integration service enables customers to share consistent master data across multiple cloud application tenants (see also Chapter 6, Section 6.8.3).

It isn't possible to directly create a business partner with the employee role in SAP S/4HANA; it can only be replicated from an HR application. The employee data is fed from SAP SuccessFactors Employee Central or another human resource information system, like the SAP ERP Human Capital Management (SAP ERP HCM) solution, into SAP Cloud Platform Master Data Integration, where it can be consumed by other systems. The employee master data from SAP SuccessFactors Employee Central provides employee (worker) data to the SAP Cloud Platform Master Data Integration service, which is then consumed as business partner master data in SAP S/4HANA. To create an employee as a business partner in SAP S/4HANA, we need to have the following configuration in place:

- The employee business partner role (BUP003) and supplier role (FLVN00) should exist in SAP S/4HANA. The supplier role is used for processing payments to an employee.
- The workforce private address type (HCM001) must be configured in business partner address determination to maintain the employee's private address data.
- The identification types HCM001 and HCM028 have to be configured when replicating the employee master data into SAP S/4HANA.

The ABAP report /SHCM/RH_SYNC_BUPA_FROM_EMPL can be used to replicate the SAP ERP HCM employees to the SAP S/4HANA business partner data. For more information, refer to SAP Note 2340095 (S4TWL—Conversion of Employees to Business Partners).

Time Dependency in Business Partners

Certain business partner data is time-dependent, which means it has its own validity periods. For the following business partner entities, validity periods can be defined:

- Roles: To control time dependency, the business partner roles (table BUT100) have fields that specify the start and end of the validity of the role (fields VALID_FROM and VALID_TO), as illustrated in the sketch after this list. A business partner could be a customer from 02/01/1999 to 04/03/2020. After April 3, 2020, the business partner is no longer a valid customer associated with the organization.
- Address: A business partner has a billing address (table BUT020) at the Walldorf location from 03/01/2020 to 12/31/9999. If the business partner moves to the Karlsruhe location on 06/01/2020, the billing address validity changes. Time dependency for address usage is stored in table BUT021_FS.
- Bank details: You must control the validity of the bank details (table BUT0BK) because a business partner could no longer have an active account in a bank. For that, table BUT0BK also has fields to indicate the start and end of the validity.
- Payment cards: Similar to bank details, it's possible to have different payment cards (table BUT0CC) associated with a business partner, and these payment cards have their own validity periods.
- Relationships, including contact person relationship: It's possible to control the validity period at the relationship level (table BUT050). Consider, for example, a person who is the contact person for the German sales business unit in an organization. Effective 06/24/2020 the person gets promoted and is now the contact person for the entire European business unit. In the system this person is reflected as a business partner with two contact person relationships, one for the German unit with validity from 01/01/2018 to 06/23/2020 and one for the European unit with validity from 06/24/2020 to 12/31/9999.
- Identification numbers: It's possible to have different business partner identifications (table BUT0ID) over a period. For example, perhaps a business partner has a passport that is valid only for 10 years. This can be represented using the validity fields in the table.
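
The following ABAP sketch assumes that the validity fields behave like the VALID_FROM and VALID_TO fields named above; in the physical BUT100 table they are timestamp-like, so real code may need a date conversion. It selects only the roles that are valid at the current point in time.

```abap
" Sketch only: reads the roles of all business partners that are valid right now.
DATA lv_now TYPE timestamp.
GET TIME STAMP FIELD lv_now.

SELECT partner, rltyp, valid_from, valid_to
  FROM but100
  WHERE valid_from <= @lv_now
    AND valid_to   >= @lv_now
  INTO TABLE @DATA(lt_active_roles).
```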

8.3.2  SAP S/4HANA System Conversion Scenarios

A prerequisite for moving from SAP ERP to SAP S/4HANA is the usage of CVI in SAP ERP. The customer and supplier master data objects have to be synchronized to business partners before the system conversion process starts (see Figure 8.11). It’s recommended but not mandatory to have the same business partner ID and customer/supplier ID. In SAP S/4HANA Cloud, the business partner and customer/supplier have the same ID.

Figure 8.11  Conversion Scenarios

During the preparation phase, the prechecks are performed in SAP ERP to identify the steps required to make the system compatible with the conversion process. All customers and suppliers with a deletion flag can be archived and need not be converted to business partners. The following tools are offered to ease the system conversion process:

- Transaction CVI_COCKPIT is the single point of entry to access all the required conversion tools. It also helps to check the current state of the system before conversion.
- Transaction CVI_PRECHK is used to check the consistency of the master data. The tool provides in-line and mass edit capabilities to fix the inconsistencies in the customer and supplier master data.
- Transaction BP_CVI_IMG_CHK is used to check the consistency of the customization before performing the system conversion from SAP ERP to SAP S/4HANA.
- The SAP Readiness Check tool can be used to check the current state of the SAP ERP system to determine the effort that would be required to perform the system conversion. The SAP Readiness Check tool provides information about the SAP HANA sizing required, custom code analysis, the volume of customer/vendor records that are inconsistent, and which records need to be synchronized with business partners.

During the synchronization phase, the customer and supplier master data is synchronized to business partner master data. The synchronization can be performed both from the customer/supplier transaction for individual master data records or from Transaction MDS_LOAD_COCKPIT.

During the conversion phase, the SAP ERP system is upgraded to SAP S/4HANA. The conversion process is first tested using productive data in a quality assurance or preproduction environment, followed by running it in the production system.

During the postprocessing phase, after the successful initial load of the business partner and system conversion process, the system is configured to make the business partner the leading business object. Refer to SAP Note 2713963 for more details on the system conversion process and to access the SAP S/4HANA Cookbook Customer/Vendor Integration.

SAP Business Partner, Customer, and Supplier Data Model

The data model in Figure 8.12 provides details about how the business partner data is synchronized with the customer/supplier data for common entities like general data, address, and bank.

Figure 8.12  Business Partner and Customer/Supplier Data Model

8.3.3  Data Protection and Policy

Data protection and privacy regulations apply to the business partner record, which stores—especially for person-type business partners—quite a lot of personal data (see also Chapter 7 about data protection and privacy). The business partner functionality uses SAP Information Lifecycle Management (SAP ILM) to calculate residence and retention periods for a business partner instance. SAP ILM informs all applications registered for the business partner instance if a residence or retention period ends. If all applications confirm the end of purpose, SAP ILM blocks the business partner. As a consequence, the business partner application prevents viewing and maintaining this business partner instance. Finally, SAP ILM archives or deletes the business partner.

8.3.4  Extensibility

Often, enterprises using the business partner application want to extend certain entities with custom fields, which are required to support their individual business processes. The business contexts in the following list can be used to extend the business partner, customer, and supplier entities with custom fields using the key user extensibility tool (for details, see Chapter 5, Section 5.1):

- BP_CUSTVEND1: This business context is used to extend the business partner model (table BUT000).
- CUSTOMER_GENERAL: This business context is used to extend the customer model (table KNA1).
- CUST_COMPANYCODE: This business context is used to extend the customer company code model (table KNB1).
- CUST_SALES: This business context is used to extend the customer sales area model (table KNVV).
- SUPPLIER_GENERAL: This business context is used to extend the supplier model (table LFA1).
- SUP_COMPANY: This business context is used to extend the supplier company code model (table LFB1).
- SUP_PURORG: This business context is used to extend the supplier purchasing organization model (table LFM1).

In addition, the following cloud BAdIs are offered to the customer to validate the extension fields added:

- CMD_VALIDATE_BP: This BAdI can be used to validate the custom fields extended in the business partner general data entity.
- CMD_VALIDATE_CUSTOMER: This BAdI can be used to validate custom fields extended in the customer general data, company code, and sales area data entities.
- CMD_VALIDATE_SUPPLIER: This BAdI can be used to validate custom fields added to the supplier general data, supplier company code, and supplier purchasing organization data entities.
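
As an indication of what such a validation typically looks like, here is a heavily hedged ABAP sketch. The class, method, and parameter names are invented for readability and do not reflect the actual signatures of the CMD_VALIDATE_* BAdIs; the pattern is simply to inspect the custom field values passed in and return a message if a value is not acceptable.

```abap
" Hypothetical validation logic; names and signature are illustrative only.
CLASS zcl_validate_customer_ext DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    METHODS validate_custom_fields
      IMPORTING iv_loyalty_tier  TYPE string     " assumed custom field
      EXPORTING ev_error_message TYPE string.
ENDCLASS.

CLASS zcl_validate_customer_ext IMPLEMENTATION.
  METHOD validate_custom_fields.
    CLEAR ev_error_message.
    IF iv_loyalty_tier IS INITIAL.
      ev_error_message = 'Loyalty tier must be maintained for this customer.'.
    ENDIF.
  ENDMETHOD.
ENDCLASS.
```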

8.3.5  Business Partner APIs

In SAP S/4HANA, integration with the business partner functionality is supported using the following APIs:

- SOAP APIs, for asynchronous business partner master data replication with the SAP S/4HANA system using the data replication framework (DRF; see Chapter 6, Section 6.8.2)
- OData APIs, for synchronous CRUD operations
- IDoc, a format used widely with SAP ERP; it is supported for compatibility reasons only and is not the preferred communication mode for integrating with SAP S/4HANA

The OData service can also be extended with new custom extension fields using key user extensibility (see Chapter 5, Section 5.1).

8.4  Summary

In this chapter we gave an overview of selected important master data in SAP S/4HANA. We started with the product master, its data model, and its various aspects that affect functions and processes in many areas of SAP S/4HANA. We also discussed product master APIs, extensibility, and data migration. In the next section, we explained the data that specifies how products are assembled, characterized, and configured. Here we covered bills of material, classification, and variant configuration, including the new advanced variant configurator in SAP S/4HANA. In the third section we gave an overview of business partner master data, which represents persons or organizations in different roles such as customer, supplier, or employee. We discussed the architecture and data model of business partner master data, including aspects such as time dependency and employee data integration. We looked at conversion scenarios for business partner data from SAP ERP to SAP S/4HANA and at customer-vendor integration. We briefly discussed data protection aspects and gave an overview of extensibility options and APIs for business partner master data. We have now introduced important master data that is used in many areas of SAP S/4HANA. One of these areas is sales, which we cover in the next chapter.

9  Sales

The sales process consists of many individual steps, involving users in different organizational areas and requiring seamless integration and monitoring capabilities to ensure customers get what they're asking for: reliable processing and delivery of goods and services as agreed upon in the sales order or contract.

In this chapter, we focus on the company-internal sales processes for selling products and services. In Section 9.1, we provide an architectural overview of the main components, business objects, and integration points. Section 9.2 focuses on the general structure of the data model, and Section 9.3 describes authorizations in sales. Then, Section 9.4 through Section 9.9 describe in more detail the main processes, such as sales inquiry and quotation management, sales order processing, sales contract management, returns and refund management, and billing. In Section 9.10, we emphasize how SAP S/4HANA Sales leverages the in-memory capabilities of SAP HANA to provide analytical insights in real time and how machine learning is used for predictions about potential processing delays or issues. Finally, in Section 9.12, we describe the integration with SAP and third-party systems based on asynchronous SOAP or synchronous OData APIs.

9.1  Architecture Overview

SAP S/4HANA Sales covers a wide variety of sales processes, offering very flexible business configuration for integration with procurement, logistics, and finance. When companies sell physical products, there are different possibilities: products are either sold from stock or, if products are not available, they can be produced, procured, or organized from another warehouse (stock transfer).

The sales process within SAP S/4HANA can start with a sales inquiry, followed by a quotation, which then can be used as a reference to create a sales order (see Figure 9.1). Sales orders can also be created based on long-term agreements such as sales contracts or scheduling agreements. Sales contracts are outline agreements to provide products in certain quantities or for a certain value within a certain time. Scheduling agreements specify the delivery of a product in a certain quantity at regular intervals. Sales order processing includes the calculation of prices by the pricing component and checking the availability of ordered products using the available-to-promise (ATP) check.

Figure 9.1  Overview of Sales in SAP S/4HANA

When a sales order includes products that aren't available in stock, this can lead to in-house production or procurement from third-party vendors. Master data, such as business partner and product master data, and organizational units such as sales organization, distribution channels, and divisions, play important roles during sales processing. Outbound deliveries are created to get the products shipped to the customer as promised in the sales order.

Billing usually represents the final step within the sales process. It creates billing documents, which are sent to customers to invoice them for products or services that have been sold. In addition to billing of sales order-related deliveries, there are also other billing scenarios, including omnichannel convergent billing (see Section 9.9) or the creation of preliminary billing documents. Finally, billing documents are posted to financial accounting, resulting in the creation of corresponding Universal Journal entries (see Chapter 14, Section 14.2).

Sales monitoring and analytics features are available across the core sales business processes. Analytical CDS views reading directly from the SAP HANA database tables allow real-time aggregation and calculation of key performance indicators for sales managers. Machine learning regression algorithms allow predictions of future sales potential or processing delays.

9.2  Sales Documents Structure

The sales process chain in SAP S/4HANA Sales spans inquiries, quotations, sales order and contract management, returns and claims processing, and billing. Typically, each step in this process results in a corresponding business document of a certain document category:

- Sales inquiries and sales quotations during the presales process
- Sales contracts and sales orders during contract and order management
- Customer returns or credit memo requests in returns management
- Billing documents in the billing phase

All these sales documents have a similar data structure, so they share many database tables. While each document category has specific characteristics, all sales documents typically share the following characteristics (see also Figure 9.2):

- Consist of one document header and one or several document items
- Have one or more business partners assigned as partner functions: sold-to party, ship-to party, payer, or contact person
- Have items with a request for certain products or services to be delivered at certain points in time and for a certain price
- Have predecessor and successor documents, which represent the process flow of documents
- Have texts or notes, assigned on the header or item level

The specific characteristics in the data model and in the processing logic of the different sales documents are defined by certain document categories and document types on the header and item levels. Each sales document header has exactly one sales document category and one sales document type. Sales document categories are used to categorize different sales, billing, and delivery document types. Examples of sales document categories are inquiry, quotation, order, scheduling agreement, contract, returns, delivery, invoice, and credit memo request. Sales document categories are defined by SAP.

Figure 9.2  Structure of Sales Documents

Sales document types are used to define the processing and data flow of sales documents with regard to processing steps like pricing, shipping, billing, and so on. Examples of sales document types that are provided as part of best practices are standard order, debit memo request, free-of-charge delivery, returns, and value contract. For each sales document type, you can define which item categories are allowed in documents of this type. The item categories are used to define, on the item level, the processing method with regard to pricing, shipping, billing, availability check, and so on.

Sales document types and item categories are key configuration elements to control the processing variants possible in SAP S/4HANA Sales. SAP offers predefined best practices configuration settings for these configuration entities and also provides fine-tuning capabilities to define your own variants in processing certain types of sales documents and item categories. The ID of a sales document can either be defined by the end user when creating a sales document in the UI, or be determined automatically based on number ranges. The corresponding number range intervals can be defined for each sales document type.
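
To make the header/item structure and the role of document type and item category tangible, here is a small ABAP SQL sketch that reads one sales document with its items. It assumes the classic sales document tables VBAK (header) and VBAP (items), which back sales documents in SAP S/4HANA; the hardcoded document number is only a placeholder.

```abap
" Sketch: one sales document header with its items, document type, and item categories.
DATA lv_sales_document TYPE vbak-vbeln VALUE '0000012345'.   " placeholder ID

SELECT hdr~vbeln AS sales_document,
       hdr~auart AS document_type,
       itm~posnr AS item_number,
       itm~matnr AS product,
       itm~pstyv AS item_category
  FROM vbak AS hdr
  INNER JOIN vbap AS itm
    ON itm~vbeln = hdr~vbeln
  WHERE hdr~vbeln = @lv_sales_document
  INTO TABLE @DATA(lt_sales_document).
```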

9.3  Authorizations

Authorizations for processing of sales or billing documents are defined based on authorization objects, which allow you to distinguish read or write authorizations on the level of sales document types, billing document types, or sales organizations. To support administrators in defining the needed authorizations for business users, SAP S/4HANA Cloud offers business role templates—for example, sales manager, internal sales representative, returns and refund clerk, and billing clerk. As described in Chapter 17, each business role template contains a set of business catalogs, which defines the set of applications that will be accessible in the SAP Fiori launchpad for a user with this role and the authorizations needed to use these applications. An administrator can create specific business roles based on the provided business role templates and assign these business roles to business users as required. To grant the necessary authorizations tailored to the responsibilities and tasks of a certain user, the administrator can restrict the authorizations for, say, sales order processing or billing on the level of the document types or sales organizations for which a user is authorized to create, change, or display sales documents. In addition, a user can get specific additional authorizations for processing master data such as business partners or products or for specific processes such as accessing payment cards or pricing conditions.

9.4  Sales Inquiries and Sales Quotations

Sales inquiries are requests from customers for certain goods or services. An internal sales representative can create a sales inquiry that captures the required products and quantities. Sales quotations, in turn, are responses to sales inquiries or requests for quotation from customers. They represent legally binding offers to customers for the supply of goods or the provision of services under specific conditions. Sales quotations can be created with reference to an inquiry or from scratch. If the customer accepts the offer, then the sales quotation can be converted to a sales order. Sales quotations have a certain validity.

SAP S/4HANA Sales uses the in-memory capabilities of SAP HANA to detect customers that have not yet placed orders in response to quotations that are expiring soon so that you can start follow-up activities proactively. Here are two examples of how low quotation conversion rates are shown to the user:

- Sales managers can use real-time, embedded analytical applications to monitor quotations with low conversion rates expiring soon. This is based on analytical queries and CDS views, which read the corresponding sales quotation data directly from the database (see the analytics architecture in Chapter 4, Section 4.1).
- SAP S/4HANA Sales uses the Intelligent Situation Handling framework (see Chapter 4, Section 4.3) to notify sales representatives about quotations with low conversion expiring soon (see Figure 9.3). The situation instances are displayed to the user within the SAP Fiori object page of the sales quotations and also are visible in the notifications inbox or in the My Situations app. Key users can use the situation template SD_Expiring_Quotation and configure situation types to define conditions for when situation instances will be created—for example, if a quotation expires in the next seven days and its conversion rate is equal to or lower than 10%.
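
Purely as an illustration of the example threshold above (the actual condition is configured declaratively in the situation type, not coded), the logic amounts to a check like this:

```abap
" Illustrative restatement of the example condition from the text.
DATA lv_quotation_valid_to TYPE d VALUE '20251231'.
DATA lv_conversion_rate    TYPE p LENGTH 5 DECIMALS 2 VALUE '8.50'.

DATA(lv_days_to_expiry) = lv_quotation_valid_to - sy-datum.
IF lv_days_to_expiry <= 7 AND lv_conversion_rate <= 10.
  " a situation instance based on SD_Expiring_Quotation would be raised
ENDIF.
```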

Figure 9.3  Use of Intelligent Situations Management in Sales Quotations

9.5  Sales Order Processing

Sales orders are the main document category in the sales process and are created when a selling company and a customer have come to an agreement on goods or services to be delivered for a certain price at a certain time. A sales order can be created from scratch or by referencing a purchase order, a quotation, a sales contract, or another sales order. If a sales order is created with such a reference, data such as customer and product data is derived from the predecessor document. Additional data can be copied from predecessor documents, depending on a business configuration called copy control. If a sales order is created from scratch, then a customer in the role of a sold-to party and the products and quantities to be sold must be specified during the order creation.

As explained in Section 9.2, the sales order document is structured with a header and items and typically has business partners, texts, and prices on the header or item level. Each sales order item that contains products to be delivered in an outbound delivery also includes at least one schedule line. The item can have several schedule lines when the ordered quantities are split into several partial deliveries at different times or ship-to locations. The characteristic of a schedule line is determined using the schedule line category. The sales document type defines which kind of sales order is created and which item categories (such as standard item or free goods item) are allowed in this sales order. The item category then determines subsequent process steps such as price determination, checking the availability of the requested product, and partner determination to identify involved partners. To determine the price of products or services sold within a sales order, the pricing engine (see Section 9.11) is typically called to perform the calculation.

If the requested products are in stock, sales order processing informs outbound delivery processing to deliver the products to the ship-to location. If the products need to be procured from a third-party vendor, sales order processing creates a purchase requisition, which is converted to a purchase order and sent to the vendor. The vendor then either provides the material to the selling company or ships the goods directly to the customer and bills the selling company (in a drop shipment process). If the products ordered by the customer need to be produced in-house—such as in a make-to-order scenario—the production process is triggered by an MRP run (see Chapter 12, Section 12.5.3).

Delivery processing prepares the shipment of the ordered products to the ship-to location by creating the outbound delivery document. While creating the delivery document, the shipping point is determined. Delivery processing also triggers picking of the ordered products from stock, packing, and finally shipping them. These steps are documented in the delivery document. Delivery processing initiates goods issue processing, which will create goods issue documents so that the reduced quantity is reflected in the inventory (see Chapter 12, Section 12.5.1).

9.6  Sales Contracts

Sales contracts represent long-term agreements with customers to sell goods or services under certain conditions. Sales contracts are valid for a certain time period. SAP S/4HANA includes the following categories of sales contracts:

- A master contract is a document to group sales contracts. The master contract contains the general terms that apply for all contracts grouped under this master contract.
- A quantity contract is an agreement to order a certain quantity of a product during a specified period. The contract contains basic quantity and price information, but no schedule of specific delivery dates or quantities.
- A value contract is a contractual agreement with a customer that contains the materials and services that they may receive within a certain time period and up to a target value. A value contract can be restricted to certain products.

A customer fulfills a sales contract by issuing sales orders to the contract; this is called a release order. When a release order is created, the application checks whether:

- the released products conform to the rules of the value contract,
- the release order is within the contract's validity period, and
- the released value (in the value contract) or the released quantity (in the quantity contract) exceeds the remaining open value or the remaining open quantities in the contract.

A release order referencing a contract updates the released value or quantity in that contract. The sales contract fulfillment rate indicates the percentage of the released values in release orders in relation to the target value of a sales contract. An internal sales representative or a sales manager can use the Sales Contract Fulfillment Rate analytical app to analyze how well the contracts in their sales area are being fulfilled. This helps to identify contracts that need to be renewed soon, as well as contracts that are not being fulfilled as expected.
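
As a minimal sketch of the value-contract check and the fulfillment rate just described (the variable names and numbers are purely illustrative; in the system these checks are part of the sales contract application logic):

```abap
DATA lv_target_value   TYPE p LENGTH 15 DECIMALS 2 VALUE '100000.00'.  " contract target value
DATA lv_released_value TYPE p LENGTH 15 DECIMALS 2 VALUE '65000.00'.   " value already released
DATA lv_new_release    TYPE p LENGTH 15 DECIMALS 2 VALUE '40000.00'.   " value of the new release order

IF lv_released_value + lv_new_release > lv_target_value.
  " the new release order exceeds the remaining open value of the value contract
ENDIF.

" fulfillment rate = released value in relation to the target value (here 65%)
DATA lv_fulfillment_rate TYPE p LENGTH 7 DECIMALS 2.
lv_fulfillment_rate = lv_released_value * 100 / lv_target_value.
```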

9.7  Sales Scheduling Agreements

Sales scheduling agreements represent long-term agreements with customers that outline the overall expected quantity to be delivered to the customer over a specific period of time. The processing of scheduling agreements is depicted in Figure 9.4. To release a quantity of the product, the customer sends in sales scheduling agreement releases, referred to as delivery schedules, at regular intervals. Delivery schedules are documents sent by customers to release a certain quantity of one or more products outlined in the sales scheduling agreement. Usually, the customer (represented as the buyer system in Figure 9.4) sends delivery schedules through Electronic Data Interchange (EDI) at regular intervals to request the release of a quantity of the outlined products. The system then automatically creates new (or updates existing) delivery schedules in the sales scheduling agreement, and already delivered schedule lines are stored in the schedule line history. As soon as outbound deliveries are created from the schedule lines of sales scheduling agreements, this is written into a table to keep track of the resulting outbound deliveries for each schedule line.

Figure 9.4  Processing of Sales Scheduling Agreements

9.8  Claims, Returns, and Refund Management

Customers can return a product if it doesn't fit their expectations or is damaged. In complaint and returns processing, customer returns are created whenever a customer sends back goods. Customer returns typically are created with reference to a preceding document, such as sales orders, sales contracts, contract release orders, or billing documents. When a customer return is based on a complaint, the returned goods can be posted for inspection to the warehouse. The inspection can result in one of the following activities:

- Approval of the complaint and creation of a credit memo
- Approval of the complaint and creation of a free-of-charge subsequent delivery
- Rejection of the complaint

Advanced returns management provides some extended features for returns processing, such as recording of the product inspection results, deciding what the logistical follow-up activity should be, refunding customer returns with a credit memo, delivering replacement products free of charge, and monitoring the process.

9.9  Billing

Billing provides functions for a variety of billing scenarios, including omnichannel convergent billing, invoice list processing, and the creation of preliminary billing documents. Usually, billing is scheduled to run automatically, but billing documents can be created manually, too. After billing documents are created, they are posted to financial accounting. A billing document is created with reference to a predecessor document—typically a sales order, a debit memo request, or an outbound delivery document, but it also can be another item of the billing due list, such as a billing document request. A billing document request is a request object that can contain itemized billing data from internal and external sources. Billing document requests that contain data from external sources are referred to as external billing document requests. External billing document requests represent billing data that has been received from an external source, such as an integrated system, like SAP Subscription Billing or SAP Sales Cloud solutions, or a manually imported spreadsheet containing billing data. Convergent billing describes the process of billing different types of sales documents together to create a single customer invoice for sales orders, deliveries, or professional services simultaneously. Its architecture is depicted in Figure 9.5.

Figure 9.5  Convergent Billing

Multiple billing due list items—which can have differing billing types—are collected into a single customer invoice. By default, convergent billing tries to converge as many reference documents as possible into as few billing documents as possible. Whether multiple reference documents can be converged into a single, collective billing document depends on whether the values of certain header and item fields (including business partner fields) of the reference documents are identical. Those items of the billing due list that cause different billing document header field values are split into separate billing documents.
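
The convergence rule can be pictured as grouping the billing due list by a split key built from those header-relevant fields. The following ABAP sketch illustrates the idea; the chosen fields (payer, billing type, currency) are illustrative and not the complete set of split criteria.

```abap
TYPES: BEGIN OF ty_due_item,
         reference_document TYPE c LENGTH 10,
         payer              TYPE c LENGTH 10,
         billing_type       TYPE c LENGTH 4,
         currency           TYPE c LENGTH 5,
       END OF ty_due_item.

DATA lt_billing_due_list TYPE STANDARD TABLE OF ty_due_item WITH EMPTY KEY.

" Every group of items that agrees in the split key becomes one billing document.
LOOP AT lt_billing_due_list INTO DATA(ls_item)
     GROUP BY ( payer        = ls_item-payer
                billing_type = ls_item-billing_type
                currency     = ls_item-currency )
     INTO DATA(ls_split_key).
  " create one billing document for the group represented by ls_split_key
  LOOP AT GROUP ls_split_key INTO DATA(ls_member).
    " add ls_member-reference_document as an item of this billing document
  ENDLOOP.
ENDLOOP.
```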

9.10  Sales Monitoring and Analytics

Sales monitoring and analytics provides analytical features across the sales business processes, from quotations and sales contracts to sales orders, including their fulfillment, to invoices. The analytical applications are based on the sales-specific CDS views of the VDM and follow the embedded analytics architecture (see Figure 9.6 and Chapter 4, Section 4.1).

Multidimensional reports such as Incoming Sales Orders – Flexible Analysis and Sales Volume – Flexible Analysis provide key figures and the possibility to drill down or filter to view detailed information for selected sales organizations, products, material groups, sold-to parties, sales document types, and so on. The sales overview provides an overview of sales data including open sales quotations, open sales orders, blocked credit memo requests, customer returns, and customer information. The data displayed in the overview page depends on the roles assigned to the user and can be filtered in the user settings.

SAP Smart Business–based analytical apps for sales managers show key performance indicators and provide flexible drilldowns to gain insights into sales situations, ranging from conversion rates of quotations and contracts to incoming sales orders and back orders to delivery performance and sales volume. SAP Smart Business is a framework for developing or configuring analytical applications tailored to the needs of business users for checking KPI values and performing analyses (see also Chapter 3, Section 3.1 and Chapter 4, Section 4.1).

SAP S/4HANA Sales also provides analytics embedded in transactional apps to support the user in focusing on the management of exceptions, for example the SAP Fiori app Sales Order Fulfillment Issues. The app uses the real-time analytical capabilities of SAP HANA to track the processing status of sales orders regarding their fulfillment, and the same application provides actions to resolve issues, to ensure that sales orders can be fulfilled on time.

Figure 9.6  Sales Analytics Using VDM

As shown in Figure 9.6, the analytical apps for sales embedded in SAP S/4HANA are based on CDS views that read the requested data in real time from the transactional database tables (mainly sales documents, billing documents, and delivery documents). The in-memory capabilities of SAP HANA make data replication to a dedicated data warehouse obsolete for many analytical use cases. The corresponding CDS views are based on the VDM (see Chapter 2, Section 2.1), which can also be used by SAP S/4HANA customers for extensions by building their own analytical views and queries using key user tools such as Custom CDS Views or Custom Analytical Queries. The VDM for sales contains specific CDS views (the sales analytics core layer in Figure 9.6), which provide calculated attributes (such as SalesVolumeNetAmount and IncomingSalesNetAmount) to the cube and query views.

Sales functionality also includes predictive analytics SAP Fiori apps. Examples are Predicted Delivery Delay, Sales Performance – Predictions, and the app Quotation Conversion Rates – Valid / Not Completed with its predict quotation conversion rates feature. These apps use embedded machine learning based on the predictive analytics integrator (PAI; see Figure 9.7). Training of the machine learning model is done based on CDS views with data from past sales documents, delivery documents, goods issues, and master data. For example, the Predicted Delivery Delay app can predict whether the deliveries for sales orders will be on time or how much delay can be expected based on historical data. Training of the model is done with a CDS view with past data to compare the planned goods issue date of all deliveries for which goods issue is completed with the actual goods movement date of the corresponding deliveries. The CDS view uses a table function and the techniques of ABAP-managed database procedures (AMDPs) to consume the SQLScript procedures exposed by the previously trained machine learning model of PAI, according to the machine learning architecture described in Chapter 4, Section 4.2.
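
To give a feel for the AMDP part of this pattern, the following heavily simplified sketch shows an AMDP class that implements a CDS table function (the DDL definition of the assumed table function ZTF_PREDICTED_DELIVERY_DELAY is not shown). The SQLScript body is only a placeholder; a real implementation would consume the scoring procedure generated by PAI for the trained model, whose name and signature are system-specific.

```abap
CLASS zcl_delivery_delay_amdp DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.
    " implements the assumed CDS table function ZTF_PREDICTED_DELIVERY_DELAY
    CLASS-METHODS predict_delay FOR TABLE FUNCTION ztf_predicted_delivery_delay.
ENDCLASS.

CLASS zcl_delivery_delay_amdp IMPLEMENTATION.
  METHOD predict_delay BY DATABASE FUNCTION FOR HDB
                       LANGUAGE SQLSCRIPT
                       OPTIONS READ-ONLY.
    -- Placeholder result; a real implementation consumes the PAI-generated
    -- scoring procedure/function for the trained model here.
    return select '0000000001' as salesorder,
                  0            as predicteddelayindays
             from dummy;
  ENDMETHOD.
ENDCLASS.
```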

Figure 9.7  Overview Analytics, Including Predictive Analytics, in SAP S/4HANA Sales

9.11  Pricing

A typical sales order or quotation represents an agreement between a selling company and a customer to deliver products or services for a certain price. The different factors that influence a price are called price elements—for example, prices, taxes, surcharges, or discounts. The framework providing very high flexibility to set up pricing rules is called the condition technique (see Figure 9.8). It allows you to define pricing master data (condition records persisted in condition tables) and rules for how to use it. How the price for an item is calculated and which master data are used is based on pricing procedures, condition types, and access sequences.

The pricing procedure defines the calculation sequence by specifying the order of condition types and subtotals. Subtotals are intermediate results of the price calculation, such as the sum of discounts or total value. A condition type is a representation of a price, a discount, and so on. It controls how the condition value (price, discount) is calculated. For most condition types, there can be different condition records for a combination of price-relevant attributes (such as customer, customer group, and product). A condition record is valid for a specified time period. An access sequence is a search strategy that the condition technique uses to find valid condition records during pricing. The access sequence consists of one or more accesses. Their sequence defines which condition records have priority over others. The accesses specify in which table to look first, second, and so on, until the system finds a valid condition record.

When a sales order is created, sales order processing determines the pricing procedure dependent on parameters like sales organization, sales document type, and customer. Each sales order has exactly one pricing procedure. When a sales order item is created, sales order processing sends a request to the pricing application. The pricing application reads the relevant customizing data, such as condition types and access sequences used in the pricing procedure. In a second step, the price condition records that are relevant for the sales order are determined using the condition access reuse layer. In the last step, the pricing result is calculated according to the logic defined in the pricing procedure. The result of the price calculation is stored within the sales order document. When a sales order is billed, typically no new price condition determination (except taxes) takes place, but the price result is copied from the sales order and a price recalculation is done in the billing document—for example, to adjust for quantity changes.
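
The idea of an access sequence, trying condition tables from the most specific to the most generic and stopping at the first valid condition record, can be sketched as follows. The internal tables and field names are illustrative only and do not correspond to the actual pricing condition tables.

```abap
TYPES: BEGIN OF ty_condition_record,
         customer   TYPE c LENGTH 10,
         product    TYPE c LENGTH 18,
         valid_from TYPE d,
         valid_to   TYPE d,
         amount     TYPE p LENGTH 11 DECIMALS 2,
       END OF ty_condition_record.

" Two illustrative "condition tables": customer/product prices and generic product prices.
DATA lt_customer_product_prices TYPE STANDARD TABLE OF ty_condition_record WITH EMPTY KEY.
DATA lt_product_prices          TYPE STANDARD TABLE OF ty_condition_record WITH EMPTY KEY.

DATA(lv_pricing_date) = sy-datum.

" Access 1: the most specific condition record (customer + product).
READ TABLE lt_customer_product_prices INTO DATA(ls_price)
     WITH KEY customer = 'CUST1' product = 'CAR'.

IF sy-subrc <> 0
   OR ls_price-valid_from > lv_pricing_date
   OR ls_price-valid_to   < lv_pricing_date.
  " Access 2: fall back to the generic product price.
  READ TABLE lt_product_prices INTO ls_price WITH KEY product = 'CAR'.
ENDIF.
```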

Figure 9.8  Architecture of Pricing

9.12  Integration

SAP S/4HANA provides APIs for sales functionality to connect business processes across systems and integrate with applications outside SAP S/4HANA. The APIs in SAP S/4HANA Cloud are based on either OData or SOAP, whereas in SAP S/4HANA on-premise, traditional SAP interfaces such as RFCs, BAPIs, or IDocs are also provided to support integration with SAP on-premise systems using existing interfaces (for details on the API technologies, see Chapter 6, Section 6.1). In SAP S/4HANA, OData APIs are used mainly for synchronous calls, in which the caller receives an immediate response to its HTTP request, whereas SOAP APIs are used for asynchronous, message-based communication.

For sales, there are OData and SOAP APIs for its main business objects, such as quotations, sales orders, sales orders without charge, sales contracts, customer returns, billing documents, billing document requests, and customer invoices. In SAP S/4HANA Cloud, all application interfaces of the ABAP backend are integrated with SAP Application Interface Framework (AIF; see Chapter 6, Section 6.3) for monitoring and error handling. For data integration with SAP Business Warehouse or SAP Data Intelligence, specific CDS-based extraction views are used.

The integration of sales orders and customer returns with external buyers allows suppliers and buyers to exchange documents using web services based on the SOAP protocol (see Figure 9.9). For integration with buyers' external SAP systems based on IDocs, the SAP Cloud Platform Integration service is used to map SOAP services to the data exchange format used. SAP S/4HANA Cloud can also connect with buyers that use Ariba Network through the SAP Ariba Cloud Integration Gateway.

Figure 9.9  Integration Architecture of SAP S/4HANA Sales

Several integration scenarios exist for other systems on the seller side. For third-party systems, the previously mentioned OData or SOAP APIs are offered. For integration with other SAP solutions, such as SAP Commerce Cloud, SAP Marketing Cloud, SAP Subscription Billing, SAP Cloud for Sales, or SAP Fieldglass solutions, corresponding integration packages and best practices content packages are offered; these are described on the SAP API Business Hub. For example, the integration of SAP S/4HANA with SAP Commerce Cloud allows you to create or change master data such as business partners, prices, and products in SAP S/4HANA Cloud and use the data replication framework to replicate them to the corresponding SAP Commerce Cloud solution. An order can also be placed in the web shop and sent to SAP S/4HANA Cloud for order fulfillment.

9.13 Summary

We have now described the main business processes and business objects of sales in SAP S/4HANA and how they are used to record the multiple steps of the end-to-end sales process. We started with an architecture overview that showed the main components and data flows. Next, we discussed the data model for business objects in sales, which is based on a common abstraction, the sales document. We introduced the various business objects, such as sales inquiries, sales quotations, sales contracts, sales orders, and sales scheduling agreements. We also gave an overview of the architecture for pricing and for billing, with a focus on convergent billing. We showed how SAP S/4HANA applies machine learning, embedded analytics, and situation handling to enable intelligent sales processing, proactive prediction of issues, and smart management of exceptions. In the next chapter, we cover service operations in SAP S/4HANA, which manages the offering and execution of services.

10 Service Operations

The offering and execution of services, often combined with sales, plays an increasing role in many business areas. In this chapter, you'll discover how service operations are modeled in SAP S/4HANA and integrated with other components, from planning to financials.

In this chapter, we introduce service operations in SAP S/4HANA. Service operations comprise all processes for order management, commercial aspects, and execution of services for customers. In Section 10.1, we give an overview of the main components, business objects and integrations, which are then explained in subsequent sections. In Section 10.2, you’ll learn about the important business objects and processes in service operations, and in Section 10.3 we introduce the main master and organizational data that are relevant for service operations. In Section 10.4, we explain how the business objects are implemented in service operations. Here you’ll learn about the business transactions framework and the data model for business objects. Finally, integration with other SAP S/4HANA applications and with external applications is discussed in Section 10.5.

10.1 Architecture Overview

Let’s start with an overview of the service operations architecture on the business level, with its main components, business objects and integrations (see Figure 10.1). A customer’s request for the execution of a service is captured in a service order. The execution of the order is documented in a service confirmation. In the in-house repair process, the service is executed in house in a service center, and the process is coordinated through an in-house repair. Service contracts are used by a company to agree on a regular execution of services with the customer. Finally, solution orders allow the bundling of one-time services and periodic services together with the sales of physical items. The processes associated with these objects are explained in detail in Section 10.2. Integration to other internal SAP S/4HANA processes like pricing, billing, and procurement is accomplished using internal APIs, whereas public OData and SOAP APIs are available for remote integration and are also used for integration to the SAP Field Service Management solution. These integration aspects are explored in Section 10.5.

Figure 10.1 Service Operations Overview

All business objects of service operations are modeled within a common business transactions framework, which allows for flexible modeling at design and runtime. The persistence of the business transactions has been optimized for the SAP HANA database. SAP Fiori is the leading UI technology in SAP S/4HANA Service Operations, but the traditional Web Client UI is still used for a couple of applications. Their look and feel, however, has been adapted to the SAP Fiori design to offer a seamless experience to users.

10.2 Business Objects and Processes in Service Operations

In this section, we introduce the main processes and business objects of service operations. The main processes that represent the heart of service operations are as follows:

- Field service: A service is executed at a customer site—for example, installation or repair of a machine.
- In-house repair: Repair of a device is performed in a service center.
- Service contract: A service (like maintenance of a machine) is executed and billed periodically, according to a service contract.
- Solution business: A complete solution, comprising physical goods, one-time services, and periodic services, is sold to the customer.

The most important aspects of these processes are highlighted in the following sections.

10.2.1 Field Service

Field service captures the business process in which a service technician is onsite with a customer and executes a service. The classical example is the installation, maintenance, or repair of a machine or device. In SAP S/4HANA, the customer's request for the service is captured in a service order, which is the central business object in service operations. The individual services to be executed are maintained as line items within the service order. The fulfillment of the services is documented in a service confirmation, which is a separate document, created as a subsequent transaction to the service order. The service confirmation contains the actual work spent and the service parts consumed for the service. The service order is usually priced and finally billed to the customer. There are basically two options: in the fixed-price scenario, the service order is priced and billed to the customer, whereas in the resource-related scenario, the working time and service parts consumed are billed to the customer; that is, the service confirmation is the billing-relevant transaction. Figure 10.2 shows the detail page of a service order. Prior to the service order, a service quotation can be issued to the customer. The quotation contains an offer of the services with a price and a validity period. If the customer accepts the quotation, a service order is created with reference to the quotation. SAP S/4HANA on-premise also features service requests, lean documents without items that allow for quick capture of a request.

Figure 10.2 Service Order Detail Page

10.2.2 In-House Repair

The repair of a device can also be performed in house in a service center of the service provider company. This is the in-house repair process, and in SAP S/4HANA it starts with an in-house repair transaction, which serves as the anchor object for the process and essentially captures the list of devices that have been sent in by the customer. In a precheck step, for each device an employee in the service center determines whether it should actually be repaired, or whether an alternative action should be taken: send the device back, provide a loaner device, or issue a service quotation for the repair. In the case of a repair, a service order is created as a follow-up transaction to the in-house repair. Such service orders are called repair orders.

10.2.3 Service Contracts

A common business model is the periodic execution of services, such as monthly or yearly maintenance of machines. In SAP S/4HANA Service Operations, this process can be modeled with a service contract that has a validity period and contains line items representing the services to be executed on a regular basis. The service contract also holds a periodic billing plan that determines the individual billing period—for example, every three months on the 15th.

10.2.4 Solution Business

Many companies are moving away from the one-dimensional business model of purely selling physical goods or services and toward a model in which they offer bundles or solutions to their customers. For example, an imaginary Computing Center solution could comprise physical goods (servers), one-time services (installation and test), and periodic services (maintenance and backup). This type of service is covered by the solution order, which can contain different types of items (sales, service, and service contract) and bundle them. In a solution order, you can bundle single products individually for one customer. Upon release of a solution order, individual operative documents (sales order, service order, service contract) are generated, which trigger the execution of the order. As depicted in Figure 10.1, this happens with the help of the data exchange manager, which we discuss in Section 10.5.1.

10.2.5 Interaction Center

In on-premise SAP S/4HANA, service operations functionality additionally offers the interaction center, a component designed for the direct interaction of service agents with customers over the phone. The interaction center features telephony integration, dispatching and routing of incoming calls, a unified inbox search for agents that comprises all types of business objects (including workflow items), a comprehensive interaction history for each customer, and email response management. An important scenario that makes use of the interaction center is the shared service scenario that is part of the finance functionality.

10.3 Master Data and Organizational Model

This section discusses master data objects and organizational units that are central for service operations.

10.3.1 Business Partner

Service operations uses the business partner object of SAP S/4HANA throughout all processes (see Chapter 8, Section 8.3). Business partners are assigned to business transactions, such as service orders, in various roles—like sold-to party, ship-to party, bill-to party, payer, employee, or service technician. For employees, the business partner representation of an employee in SAP SuccessFactors Employee Central can be used (see Chapter 8, Section 8.3.1). A business partner used in any of the sold-to party, ship-to party, bill-to party, or payer roles requires a customer master record. Attributes needed in transaction processing (such as payment terms) are derived from the corresponding customer master data on the sales area level.

10.3.2 Service Products

Service operations uses the SAP S/4HANA product master (see Chapter 8, Section 8.1) to model the services that are required by and executed for a customer. A service product is defined by the service product type group assigned to the material type in customizing. The selection of views in the product master is naturally limited for a service product; for example, attributes for storage or production are not required. A few service-specific attributes, such as the usual duration of the service or the response profile, can be maintained independently of the sales area on the client level. In addition to individual service products, it's also possible to define product bundles that consist of a list of individual products, possibly of different types. Product bundles are used in the solution business, as explained in Section 10.2.4. The definition and use of predefined product bundles are only available in SAP S/4HANA on-premise.

10.3.3 Organizational Units

Service operations reuses the organizational concepts from sales operations: sales organizations, distribution channels, and divisions. In SAP S/4HANA on-premise, it's also possible to define a hierarchy of service organizations and service organizational units.

10.3.4 Service Teams

In SAP S/4HANA Cloud, it’s possible to define service teams as flexible groups of employees. Service teams are defined in responsibility management, a reuse component within SAP S/4HANA that allows for flexible definition of members and their responsibilities. There is a dedicated service team category and a configurable team type. Customers can assign certain attributes to a team, such as a storage location for service parts or a postal code interval for the customers that the team would serve. The

responsible service team can automatically be determined in a service order, for example. 10.3.5

Technical Objects

Service operations reuses the technical objects from plant maintenance. Equipment objects and functional locations can be used as reference objects in service orders, service confirmations and service contracts. The equipment objects are used to model the damaged devices in the in-house repair process.

10.4 Data Model and Business Transactions Framework

All transactional business objects in service operations are modeled and implemented with a single common framework. The framework was formerly known as the one order model because it was carried over and adapted from the corresponding framework within SAP Customer Relationship Management (SAP CRM). Concerning the representation of the business objects, we distinguish between the design time and runtime of the model, called the business transactions framework, on the one hand, and the persistence on the other hand. These two layers are explained in the next sections.

10.4.1 Business Transactions Framework

The business transactions framework of service operations follows a highly modular design principle. A business object is composed of several components. We distinguish the following types of components:

- Root components: For the header and item, we have the root components ORDERADM_H and ORDERADM_I. These contain general attributes that are required for all business objects, such as the object type, transaction type, transaction ID, item number, created and changed timestamps, and so on.
- Header and item extensions: Additional attributes that have a one-to-one relationship to the header are arranged in header extensions. We distinguish simple (or flat) extensions, with only one record per header, and complex (table-like) extensions. Similarly, there are (simple and complex) item extensions. For example, PRODUCT_I is a simple item extension that contains attributes of the product, such as the unit of measure or product groups, whereas SCHEDLIN is a complex item extension for the schedule lines of an item.
- Sets: Attributes that can be assigned either on the header or item level are arranged in sets. These can again be simple or table-like: the PRICING set is a simple set, whereas PARTNER is a complex set. If a set is assigned to the header, the attributes are valid for all items—unless an item has its own set assignment, such as when an item has a different ship-to party.

10.4.2 Data Model

The data model of the service business transactions has been optimized for the SAP HANA database. Every business object (such as service order, service confirmation, in-house repair, and so on) has exactly one root database table with the naming convention CRMS4D_XXXX_H, where XXXX is a four-character acronym for the business object. Different business objects can also share the same database table. The assignment between business object and acronym is defined in table CRMS4C_ACRONYM. Likewise, the individual types of items (such as service item, service confirmation item, expense item, service contract item, and so on) are also modeled as business objects, and each item business object has one root table assigned (naming convention CRMS4D_XXXX_I).

Complex components of the data model are stored in separate database tables, such as CRMS4D_PARTNER or CRMS4D_REFOBJ. Figure 10.3 shows the schematic representation of the data model with the primary key fields of the tables (the client field is omitted).

Figure 10.3 Data Model of Service Business Transactions

Dedicated attributes stored in the complex sets have been added redundantly to the header and item root tables for performance reasons. For example, the IDs of the most important partners are also available in the header and item tables, in order to avoid a JOIN operation during search or in an analytics application. The same applies to the most important statuses and dates. For the same reason, selected header fields that are the most relevant for business operations (like the process type or the posting date) have been added redundantly on the item level to avoid a JOIN between the item and header tables.

10.4.3 Transaction Type and Item Category

Central concepts for all business transactions within SAP S/4HANA Service Operations are the transaction type and item category. For each business object, it's possible to define different transaction types in customizing to represent different types of this business object. For service orders, for example, there could be standard service, repair service, and inspection service types. In a similar fashion, the item category allows you to differentiate types of items for a given item business object. The set of attributes that can be maintained for the transaction or the item (such as partners or statuses) and the process control of the transaction or item (such as pricing or billing relevance) can be controlled by the customizing for the transaction type or the item category. For example, with different item categories for service items, it's possible to control whether the service is billed with a fixed price or according to resource consumption.

10.4.4 Common Functions for Service Transactions

All business transactions within SAP S/4HANA Service Operations use a couple of reuse components within SAP S/4HANA. The corresponding functions can be configured in customizing for the transaction type and/or the item category. Let's discuss the most important of these concepts in the following sections.

Status Management
Each header or item can carry one or several binary statuses. We distinguish system statuses set by the system (for example, Released or Contains Errors) and user statuses defined in customizing.

Partner Functions
Every business transaction needs business partners with different partner functions, such as sold-to party, ship-to party, employee responsible, and executing service technician. The allowed partner functions, along with a rule to determine the partner in a transaction, are grouped in a partner determination procedure assigned to the transaction type or item category.

Transaction History
The relations of a business transaction (for example, a service order) to a predecessor transaction (for example, an in-house repair) or a successor transaction (for example, a service confirmation) use the common framework of binary relationships in the ABAP platform.

Pricing
The business transactions within service operations reuse the pricing functionality from sales (see Chapter 9, Section 9.11). This flexibility, including the definition of condition types, access sequences, and pricing procedures, can also be used for SAP S/4HANA Service. Similarly, service contracts allow for maintenance of price agreements, which are also shared with SAP S/4HANA Sales.

Advanced Variant Configuration
The advanced variant configurator is a new configuration engine that allows you to define configurable attributes, their values, and interdependencies for a product (see Chapter 8, Section 8.2.3). You can use it to configure service products within a service order. This function is currently only available for service items, and only in SAP S/4HANA on-premise.

10.4.5 Virtual Data Model

The VDM (see Chapter 2, Section 2.1) for service operations starts with the basic CDS views for the individual object types:

- I_ServiceDocument is the basic view for the header fields of a service document, which comprises service orders, service confirmations, service contracts, and business solution orders. This is basically a wrapper around the database table CRMS4D_SERV_H. The view is equipped with associations to the basic views of all referenced master data or customizing entities. For example, there is an association to the I_BusinessPartner basic view for all fields containing a business partner ID, as well as an association to I_Currency for the transaction currency.
- I_ServiceDocumentItem is the basic view for the item fields of a service document, which comprises service items, service parts and expense items, service confirmation items, service contract items, and, for business solution orders, also sales items. This is basically a wrapper around the database table CRMS4D_SERV_I.

The I_ServiceDocumentEnhcd and I_ServiceDocumentItemEnhcd views represent an enhanced view of the data model for service transactions. The root is the basic view of the header or item, equipped with associations to the items (for the header view), the reference objects, the transaction history, and the custom fields. On top of the Enhcd views, different consumption views have been established that form the basis for SAP Fiori apps and analytical reports. For in-house repairs, we have the separate I_InHouseRepair and I_InHouseRepairItem basic views.
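
As a rough sketch of how a custom consumption view could be layered on top of these basic views, consider the following ABAP CDS definition. The element names selected from I_ServiceDocument are illustrative placeholders and must be checked against the released view definition; only the layering principle matters here.

@AbapCatalog.sqlViewName: 'ZCSERVDOCOVW'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Demo consumption view on service documents (sketch)'
@VDM.viewType: #CONSUMPTION
define view ZC_ServiceDocumentOverview
  as select from I_ServiceDocument
{
      // The element names below are illustrative placeholders; check the
      // released basic view for the actual field and association names.
  key ServiceDocument,
      ServiceDocumentType,
      SoldToParty,
      TransactionCurrency
}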

10.4.6 Public APIs

For most business objects within SAP S/4HANA Service Operations, public APIs are available and published on the SAP API Business Hub. As explained in Chapter 6, Section 6.1, OData services are used for synchronous APIs, whereas SOAP services are used for asynchronous messaging. Here is an overview of the currently available APIs (with the supported operations for OData and the supported messages for SOAP):

- Service confirmation: OData (create, change, read); SOAP (create/change, confirmation, notification)
- Service contract: OData (read); SOAP (create/change, confirmation, notification)
- Service order: OData (create, change, read); SOAP (create/change, confirmation, notification)
- Service quotation: OData (create, change, read)
- In-house repair: OData (create, change, read)

Note that the functional scope of the APIs doesn't cover every single feature available for the corresponding business object. However, SAP is continuously working on enhancements of the APIs.

10.5 Integration

Service business transactions integrate with other components of SAP S/4HANA in various ways. The most important integrations are those with financial accounting for cost controlling of service orders and contracts, with billing, with procurement (of service parts), with the time sheet (for service confirmations), and with SAP Field Service Management for planning and scheduling of service orders. Details of these integration scenarios are explained in the next sections.

10.5.1 Data Exchange Manager

The transfer of service transactions to subsequent processes within SAP S/4HANA, such as billing or procurement, is achieved by a small component called the data exchange manager. This engine is invoked every time a business transaction is created or changed. Depending on their status, the transaction items are forwarded to the individual components of the follow-up processes. Usually, internal APIs of the follow-up components are called to create or change the follow-up document. This is done in a synchronous mode (except for the solution order, for which the coupling is less tight). The following handlers have been registered for the data exchange manager:

- Accounting: The accounting handler creates an account assignment object for cost collecting and controlling. In SAP S/4HANA Cloud, dedicated account assignment objects for service orders and confirmations and service contracts have been introduced; in SAP S/4HANA on-premise, internal orders are used as account assignment objects.
- Billing: Service orders and confirmations are forwarded to billing as soon as they have been completed. The data exchange manager creates a billing document request, which flows into the billing due list and is later picked up in a collective billing run and converted to a billing document. For service contracts billed periodically, the process is slightly different: a periodic job generates the billing document requests, depending on the billing period defined in the billing plan of the service contract.
- Procurement: Depending on the configuration of the item category, there is the option to procure service parts or services. In this case, a purchase requisition or a purchase order is created through the data exchange manager.
- Sales: Within the solution business, as described in Section 10.2.4, a combined solution can also comprise a physical product that is shipped to the customer through the sales functionality of SAP S/4HANA. For those items, the data exchange manager creates a sales order, which is then forwarded to delivery processing.
- Time sheet: Upon completion of a service confirmation, the actual work times recorded by the executing service technician are forwarded to the time recording functionality.
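
To visualize this dispatch principle, here is a purely conceptual ABAP sketch of handler registration and dispatch. It is not SAP's actual implementation; the interface, class, and type names are invented for illustration only.

* Conceptual sketch only (invented names, not SAP code): follow-up
* handlers register themselves and are called for each relevant item
* whenever a service transaction is saved.
REPORT zdemo_data_exchange.

INTERFACE lif_follow_up_handler.
  METHODS handle_item IMPORTING item TYPE string.
ENDINTERFACE.

CLASS lcl_billing_handler DEFINITION.
  PUBLIC SECTION.
    INTERFACES lif_follow_up_handler.
ENDCLASS.

CLASS lcl_billing_handler IMPLEMENTATION.
  METHOD lif_follow_up_handler~handle_item.
    " In the real component, an internal billing API would create a
    " billing document request for the completed item here.
  ENDMETHOD.
ENDCLASS.

CLASS lcl_data_exchange_manager DEFINITION.
  PUBLIC SECTION.
    METHODS register IMPORTING handler TYPE REF TO lif_follow_up_handler.
    METHODS on_save  IMPORTING items   TYPE string_table.
  PRIVATE SECTION.
    DATA handlers TYPE STANDARD TABLE OF REF TO lif_follow_up_handler
                  WITH EMPTY KEY.
ENDCLASS.

CLASS lcl_data_exchange_manager IMPLEMENTATION.
  METHOD register.
    APPEND handler TO handlers.
  ENDMETHOD.
  METHOD on_save.
    " Dispatch every saved transaction item to all registered handlers.
    LOOP AT items INTO DATA(item).
      LOOP AT handlers INTO DATA(handler).
        handler->handle_item( item ).
      ENDLOOP.
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  DATA(manager) = NEW lcl_data_exchange_manager( ).
  manager->register( NEW lcl_billing_handler( ) ).
  manager->on_save( VALUE #( ( `Service order item 10 (completed)` ) ) ).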

10.5.2 Backward Integration

Service transactions that initiate follow-up processes need to receive selected information from those processes. Typically, status changes to the follow-up documents or subsequent process steps triggered by the follow-up documents are communicated to the origin transaction, in order to ensure full process visibility, but also to control certain functions within the service transaction. For example:

- When a billing document is created for a service order, the billing component transfers the ID of the billing document, along with the billed quantities and values, to the service order. A link between the service order and the billing document is created, and the billing status of the service order changes to Completely Billed.
- A purchase order has been created from a service order for a service part that needs to be procured externally. When the goods receipt for the purchase order is posted, the ID of the material document, along with the received quantity, is passed to the service order. The service technician can be informed that they can continue with the service execution.
- In a bundle business, a physical product is sold together with a service—for example, a machine and its installation. When the goods issue has been posted for the delivery created for the physical product, the solution order is updated. A link to the material document is created, and the goods issue quantity and value is captured on the item. This can be used to schedule the subsequent installation of the machine.

The transfer of process-relevant data from the follow-up documents to the original service transaction is also referred to as backward integration. Technically, this integration uses a similar technique as forward integration—that is, the creation or change initiated by the service transaction. A dedicated exit upon saving the follow-up transaction is used to receive the transactional data, extract the relevant change information, and transfer it to the service transaction. A standard internal API is used to update the service transaction with the relevant data (link to the follow-up document, quantities and values, and status).

10.5.3 Integration with SAP Field Service Management

SAP Field Service Management is a front-office component that allows resource management, dispatching, and scheduling of service orders. A service order entered in SAP S/4HANA Service Operations—either through the UI or through a public API—can be replicated to SAP Field Service Management upon release through the corresponding SOAP notification service. The front-office service team lead will then plan, assign, and dispatch the service order to a technician. The technician receives the assignment on their mobile device and reports the time spent and parts consumed. Upon completion of the work, a service confirmation is generated in SAP S/4HANA through the corresponding SOAP service. The service confirmation is finally released for billing.

10.5.4 User Interface Technology

As for all other SAP S/4HANA areas, SAP Fiori is the leading UI technology for service operations. However, it's worth pointing out that some applications use the Web Client UI as a legacy UI technology. The Web Client UI is a framework in the SAP S/4HANA foundation layer that has been developed on top of the BSP framework and that had already been deployed in SAP CRM. From a look and feel perspective, Web Client UI applications have been harmonized with native SAP Fiori apps to a large extent. The UI elements used and the arrangement of UI controls like buttons are the same as in SAP Fiori. Also, navigation from and to native SAP Fiori apps is supported in a seamless manner. The SAP Fiori launchpad can contain tiles representing SAP Fiori apps and tiles with Web Client UI applications behind them in a unified manner. The end user will hardly notice a difference. There are some slight differences regarding the personalization and configuration options of the individual pages.

10.6 Summary

We have highlighted the key business processes of service operations—field service, in-house repair, service contracts, and solution business—and the involved business objects. We also introduced the master data and the organizational model for service operations. We explained the ideas behind the data model and introduced the concepts of the business transactions framework, the common framework on which all business transactions in service operations are modeled. The service operations processes have numerous integration touchpoints with other components of SAP S/4HANA, like billing, accounting, and procurement. Here we explained how the data exchange manager enables integration with other parts of SAP S/4HANA, and we covered the integration with SAP Field Service Management. Finally, we discussed UI technologies and showed that all service operations UIs are seamlessly integrated into the SAP Fiori launchpad through SAP Fiori and Web Client UI applications. In the next chapter, we cover the procurement solution in SAP S/4HANA. We will discuss procurement processes such as direct and indirect procurement, explain the architecture for central procurement, and address topics such as analytics and intelligent procurement.

11 Sourcing and Procurement

SAP S/4HANA Sourcing and Procurement is the powerful one-stop shop covering all purchasing aspects required by enterprises. It comes with strong low-/no-touch capabilities for ordering materials and services, catalog-based shopping for employees, and centralization of procurement operations. Transactional SAP Fiori apps, machine learning use cases, and situations support the execution of the purchasing process.

This chapter covers the application architecture of the SAP S/4HANA Sourcing and Procurement solution. Sourcing and procurement processes are used when external goods and services are required to keep the operations of an enterprise running or to provide external knowledge to the enterprise—for example, for product innovations or the discovery of new market segments. A major task of the SAP S/4HANA procurement functionality is to minimize purchasing costs while ensuring a stable supply and compliance with the purchasing standards defined within the enterprise. The solution serves a variety of needs:

- Usage by multiple personas
- Support for different procurement scenarios with purchasing of direct (production-relevant) and indirect materials or services
- Tight integration into processes across SAP S/4HANA functionality such as sales, production planning, financials, and logistics
- Access using transactional SAP Fiori apps, APIs, or analytical applications
- Integration with SAP Ariba and SAP Fieldglass solutions
- Enablement of a centralized purchasing department

In this chapter, we start with an architectural overview and introduce the major functional blocks of the SAP S/4HANA Sourcing and Procurement solution, including central procurement (Section 11.1). We also show how SAP S/4HANA Sourcing and Procurement interacts with other SAP S/4HANA applications. We introduce the business objects as the technical building blocks underlying the procurement processes. Then we describe two sample procurement processes to show how different personas—user roles or groups—collaboratively execute the business process (Section 11.2). The next section features a detailed look at the architecture of a single business object enabled for the SAP S/4HANA core technologies SAP Fiori and OData (Section 11.3). Key parts of the procurement strategy include central procurement (Section 11.4) and integration with SAP Ariba and SAP Fieldglass solutions (Section 11.5). Central procurement centralizes purchasing operations and connects to multiple backends of different types. It can also be used as a central integration point for SAP Ariba and SAP Fieldglass solutions in a legacy system landscape. Then we describe details of the analytical capabilities (Section 11.6). We end with a brief view of the innovation technologies used by SAP S/4HANA Procurement—for example, supporting the insight-to-action, notification-based working style with the introduction of situations and machine learning-based support for decision-making (Section 11.7).

11.1 Architecture Overview

Figure 11.1 shows the architecture of procurement in SAP S/4HANA and the side-by-side integration with central procurement. Within SAP S/4HANA, the procurement solution is embedded between goods/service-requesting upstream applications such as sales and production planning (the top of Figure 11.1) and downstream applications for financials and logistics (the bottom of Figure 11.1). Requesting parties generate purchasing demands by creating purchase requisitions. In technical terms, the purchase requisition is the procurement API for inbound purchasing requests, which inform the enterprise's purchasing departments about a demand to procure a product (that is, a material or a service). Requests are either created in a fully automated way or created manually by end users. It is a task of the purchasing department to identify and assign the optimal sources of supply to the purchasing requests; a source of supply denotes a supplier in conjunction with purchasing conditions agreed upon with the supplier. This mapping of requests to sources of supply leads to a bundling into purchase orders sent out to suppliers, typically after successful completion of an approval workflow. This straightforward purchasing process is called operational procurement. The definition of rules for source of supply assignment allows you to run the process in a low-/no-touch mode. Maximizing the automation level of repetitive procurement tasks is important for reducing purchasing spend and decreasing time to delivery—that is, the processing speed of purchasing requests. A maximum level of automation gives the purchasing department the time to focus on the high-value strategic tasks of sourcing and contracting.

For source of supply assignments, purchasing contracts and the purchasing info record master data object can be used. The purchasing info record serves as a source of information for the purchaser and is specific to a combination of supplier and product (material or service). It can be created either manually or automatically from transactional documents such as purchase orders or supplier quotations. The info record allows the purchaser to quickly determine which vendors have previously offered or supplied a product, in conjunction with detailed information such as pricing conditions. Source lists and quota arrangements come into play if a product can be obtained from various sources of supply. Quota arrangements are used to allocate quotas to each individual source of supply and specify the distribution of requests among the different sources of supply. The quota arrangement can be assigned to a validity period. Source lists allow you to define which suppliers can be considered to procure a specific product, therefore acting as a product-specific list of allowed suppliers.

Figure 11.1 Architecture of Procurement in SAP S/4HANA
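
To make the quota arrangement idea concrete, here is a hedged ABAP sketch with invented data. It applies the quota rating formula commonly described for SAP quota arrangements—(quota-allocated quantity + quota base quantity) / quota—and assigns the requirement to the source of supply with the lowest rating; the exact formula and tie-breaking rules should be verified against the SAP documentation.

* Hedged sketch with invented data: distribute a new requirement using a
* quota arrangement. Rating = ( allocated quantity + base quantity ) / quota;
* the supplier with the lowest rating receives the requirement.
REPORT zdemo_quota_arrangement.

TYPES: BEGIN OF ty_quota_item,
         supplier  TYPE c LENGTH 10,
         quota     TYPE p LENGTH 8  DECIMALS 0,
         allocated TYPE p LENGTH 13 DECIMALS 3,  " quota-allocated quantity
         base_qty  TYPE p LENGTH 13 DECIMALS 3,  " quota base quantity
         rating    TYPE p LENGTH 13 DECIMALS 3,
       END OF ty_quota_item,
       ty_quota_items TYPE STANDARD TABLE OF ty_quota_item WITH EMPTY KEY.

START-OF-SELECTION.
  DATA(quota_items) = VALUE ty_quota_items(
    ( supplier = 'SUPPL-A' quota = 60 allocated = 500 base_qty = 0 )
    ( supplier = 'SUPPL-B' quota = 40 allocated = 300 base_qty = 0 ) ).

  DATA requirement_qty TYPE p LENGTH 13 DECIMALS 3 VALUE '100.000'.

  " Quota rating = ( allocated quantity + base quantity ) / quota.
  LOOP AT quota_items ASSIGNING FIELD-SYMBOL(<item>).
    <item>-rating = ( <item>-allocated + <item>-base_qty ) / <item>-quota.
  ENDLOOP.

  " The new requirement is assigned to the source with the lowest rating.
  SORT quota_items BY rating ASCENDING.
  DATA(winner) = quota_items[ 1 ].

  " Book the assigned quantity so the next requirement sees updated ratings.
  ASSIGN quota_items[ supplier = winner-supplier ] TO FIELD-SYMBOL(<win>).
  <win>-allocated = <win>-allocated + requirement_qty.

  cl_demo_output=>display(
    |Requirement of { requirement_qty } assigned to { winner-supplier }| ).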

Operational procurement relies on the availability of appropriate sources of supply. Strategic procurement with sourcing and contracting deals with the provisioning of this source of supply base. Sourcing is used for collecting new quotes. For this purpose, requests for quotations (RFQs) for the relevant products are sent to a list of potential suppliers. A supplier replies with a quotation. The purchaser can then decide which quotation to award and create a follow-on document such as a contract or a purchase order with the supplier. A contract is created with a supplier for specific products or product groups and describes the terms and conditions that an enterprise has negotiated with the supplier, such as pricing and quantity. It contains neither a delivery schedule nor a concrete ordering request. A scheduling agreement is a modified form of a contract that also describes the details of a delivery schedule. It sends out call-offs to the supplier in accordance with this schedule without the need to create purchase orders, and thus acts as a combination of contract and purchase order. In contrast, call-offs from contracts are always realized by creating purchase orders. For all mentioned documents, approval workflows can be defined.

The handling of the inbound delivery and fulfillment process depends on the type of the procured product. A goods receipt document is created in inventory management, referencing either the purchase order or the scheduling agreement, and tracks their fulfillment (see Chapter 12, Section 12.5.1). The goods receipt and the purchase order or scheduling agreement both serve as the basis for invoice verification, which validates the invoice sent by the vendor. If the verification is successful, the payment is made to the vendor, which is triggered by payables management in SAP S/4HANA Financials (see Chapter 14, Section 14.6). For procured services, service entry sheets are created.

The purchasing process can either be operated in SAP S/4HANA, or partially or fully delegated to a central procurement system (Section 11.4) or to SAP Ariba and SAP Fieldglass solutions (Section 11.5.1).

11.2 Procurement Processes

In this section, we provide a view of procurement functionality via two sample processes. If there is a specific revenue in the company's books that a procured good or service can be directly assigned to, the process is called direct procurement. Typical examples of direct procurement processes include the following:

- A manufacturer buys semifinished material to produce a specific finished good.
- A company orders goods or services from a supplier to be provided to one of the company's customers, who pays the company for the goods or services.

Indirect procurement is broader in scope; the ordered good or service is not used for exactly one specific revenue. Typical examples include office supplies, maintenance, and repair operations.

11.2.1 Direct Procurement

As stated, one example of the direct procurement process is a procured semifinished material needed for the production of a finished material, which is later sold to a customer. Such a process is shown in Figure 11.2.

Figure 11.2 Ordering Process for Production-Relevant Raw Materials, Spanning Multiple Personas/User Roles

The process starts in production planning in SAP S/4HANA, is executed in procurement, and continues in logistics and finance. An inventory manager determines via a material requirements planning (MRP) run which goods need to be procured from a supplier for the production processes. SAP S/4HANA then records this demand in a purchase requisition, which is later used to determine a source of supply. Once the suppliers, prices, and delivery dates have been determined, SAP S/4HANA sends a purchase order to the supplier. In response, the supplier sends the goods to the specified warehouse of the enterprise, where the goods are put into storage locations and a goods receipt is created. Later, the supplier sends an invoice to get paid for the delivered goods.

11.2.2 Indirect Procurement

Indirect procurement is the sourcing of all goods and services for a business to enable it to maintain and develop its operations. One typical scenario for an indirect procurement process is the ordering of consumable goods. This purchasing process is mainly driven by the requestor persona, wherein a casual user fetches an item from their electronic procurement catalog, which is then automatically turned into a purchase order—after an optional approval workflow. The request can be automatically invoiced after the successful goods receipt confirmation of the employee. This sort of basic requisitioning flow can either be executed within SAP S/4HANA alone, or it can be carried out in conjunction with the guided buying capability of the SAP Ariba Buying solution. Guided buying comes with a consumer-grade look and feel and supports the requestor with a range of additional features such as spot buy, access to marketplaces such as eBay, support for buying policies of a company’s procurement department, and the ability to use electronic ordering forms. Regardless of the choice of user interface, the final purchase requisition in an indirect procurement process is then typically accounted to a cost object in SAP S/4HANA for consumption and thus integrated into follow-on controlling and finance processes.

11.3 Architecture of a Business Object in Procurement

Business objects are the building blocks and structuring elements for implementing business processes in procurement. They represent real-world objects and are the technical representation of a semantically rich model of the business domain. Each business object has its own lifecycle with identifiable instances. It bundles functionality in the form of business logic (rules and behavior) and data, and it consists of hierarchically organized entities. For example, the purchase order business object contains entities for the header (the root entity), item, account assignment, and delivery schedule. Figure 11.3 depicts an architectural view of a business object. The main component is the business object core, where the domain logic is implemented. The decoupling of this layer from the consumption layer ensures that user interface enhancements and the introduction of new consumption or communication channels can be performed safely without impacting the core domain layer.

Figure 11.3 Architecture of a Business Object in Procurement

SAP S/4HANA Sourcing and Procurement provides modern UIs based on SAPUI5 and SAP Fiori elements, in combination with the SAP Fiori design guidelines (see Chapter 3, Section 3.1). Traditional UIs based on Dynpro technology complement the modern UIs and provide access to the full scope. They offer the option of a smooth transition from SAP ERP to SAP S/4HANA on-premise. A stateless enablement layer is introduced with the application infrastructure, comprising the SAP Gateway technology, Service Adaptation Definition Language (SADL), and the Business Object Processing Framework (BOPF), to implement the OData protocol as a RESTful communication channel and to interact with the business object layer (see Chapter 2, Section 2.2). It supports interactive transactional OData-based SAP Fiori apps by continuously saving all UI changes in a draft of the document under work. The draft is saved in the backend and is accessible from multiple devices. This approach enables a usage scenario in which an end user starts a document on the desktop and finalizes it on a mobile device. In addition, the application infrastructure is used to define stable APIs for application-to-application communication, complementing the SOAP-based web services.

11.4 Central Procurement

Central procurement is a hub solution for centralizing and streamlining the purchasing needs of a company. It's especially relevant for customers operating a multibackend legacy system landscape. In this section, we'll focus on the key architectural aspects of central procurement. The interested reader can find a detailed description of central procurement in the E-bite Introducing Central Procurement with SAP S/4HANA (Ashlock, Paschert, Schuster, 2018, https://www.sap-press.com/4800).

Without central procurement, procurement operations would need to be executed locally in the backend. Such a decentral approach would make it difficult to aggregate spend volumes across the organization, leveraging scale effects on pricing. Also, central procurement reflects the organizational setup in an enterprise, in which the purchasing department is typically a centralized organization, and it helps the department operate in a multibackend system landscape. Central procurement centralizes the following:

- The workplace of professional purchasers, by providing applications for sourcing and contracting in a single system. It allows companies to centrally set up the purchasing departments' organizational structure, assign purchasing responsibilities with responsibility management, and define approval workflows using the built-in business workflow capabilities.
- Employee self-services for requisitioning.
- Use of related SAP solutions, including Ariba Network and SAP Fieldglass solutions, by providing a centralized integration (see also Section 11.5).
- Analytics, by allowing for real-time analytics with spend KPIs (see also Section 11.6).
- An innovation roadmap for the modernization of a company's purchasing solution, such as by leveraging the situations concept and machine learning (see also Section 11.7).

Figure 11.4 gives an overview of the architecture of central procurement. In a landscape with a hub system and connected backend systems, process execution transitions from one system to the other. This transition step in a procurement hub scenario comes along with the replication of transactional documents for requisitioning and ordering. For consistency reasons, it's important to define a leading system if documents live in two different contexts. Here we call documents in the hub system central if these documents are leading, and replicated if a backend is the leading system for the document. Documents replicated from a backend are purchase requisitions and purchase orders. For replicated purchase requisitions, the sourcing process can be executed in central procurement. It starts with improved capabilities for rule-based assignment of sources of supply and the Manage Purchase Requisition Centrally SAP Fiori app for manual assignment. Once this assignment is done, a purchase order is created directly in the corresponding source system. Replication of purchase orders can be used for a centralized approval process, centralized output management, and analytics, for example.

Figure 11.4 Architecture of Central Procurement

To define specific purchasing needs, purchase requisitions can also be created centrally. One important aspect of central procurement is that it is decoupled from the SAP S/4HANA applications for financials and logistics. Central procurement does not directly communicate with these applications. Instead, the integration is done via the procurement functionality in the respective backends, which is already integrated with other SAP S/4HANA applications such as financials and logistics (see Figure 11.1). Specifically, the purchase requisition acts as the inbound API and the purchase order acts as an outbound API. These two document types (or business objects) are the entry and exit points of the hub and backend interactions.

11.4.1 Backend Integration

Integration with backends is at the core of central procurement. An integration solution must handle a number of different aspects, such as which kinds of APIs to use, how to handle different types of backends, how to handle distributed master data, and how to perform UI integration. Integration is available with SAP S/4HANA, SAP S/4HANA Cloud, and SAP ERP as backend systems. The on-premise systems require a dedicated integration add-on—HUBS4IC for SAP S/4HANA and HUBERPI for SAP ERP—to act as an adapter. This add-on approach offers two major advantages: First, as a separate software component, the add-on has its own lifecycle independent of the lifecycle of SAP ERP, so the constant evolution of the integration over time is reflected in an update of the adapter only. Second, it's a minimally invasive approach, which is important for the stability of a legacy system landscape grown over time: installing and updating an add-on software component carries much less risk and testing effort than frequent upgrades of the whole SAP ERP system.

For the data exchange between the central system and the backend systems, synchronous OData APIs or asynchronous SOAP messages are used, depending on the interaction use case. Flexibility in the connection with multiple backend types is realized in the central system with an abstraction of the connectivity in the form of a driver concept. Individual drivers can be defined to adapt to the technical capabilities of the different backend types. This is especially relevant when connectivity to SAP ERP based on older SAP NetWeaver releases is required. For end users, it's important to see the data of the original documents. For this purpose, central procurement provides a UI integration in which the SAP Fiori launchpad-based UI of central procurement allows for navigating to backend web GUIs displaying the document in the source system.

Finally, master data handling is an important challenge in a distributed system landscape. The strategy of central procurement is not to become a player in the domain of master data governance systems. Instead, it tackles this challenge with a mix of replication, synchronous data retrieval on the fly complemented with caching, and integration with specialized master data governance systems such as the SAP Master Data Governance application. In particular, material master data is not replicated but retrieved on the fly—for example, for value helps. The hub system either assumes harmonized material master data in the backends, realized by external means, or allows you to define key mapping by using the universal key mapping service provided by SAP Master Data Governance.

11.5 APIs and Integration

This section covers the options to integrate with SAP S/4HANA Procurement. For system-to-system integration, each business and master data object of procurement provides an OData API for synchronous integration, as well as a SOAP API for message-based communication (that is, asynchronous integration). The available APIs, which include APIs for CRUD operations, are listed on the SAP API Business Hub (https://api.sap.com). In addition to these general-purpose APIs, tailored integration for SAP Ariba and SAP Fieldglass is available.

11.5.1 SAP S/4HANA Procurement Integration with SAP Ariba and SAP Fieldglass

The SAP S/4HANA Procurement integration with SAP Ariba covers four scenarios:

- Guided buying: The integration with SAP Ariba Buying provides next-level self-service requisitioning, leveraging the corresponding guided buying capability. In this scenario, requisitioners explore catalogs and create their shopping carts in SAP Ariba Buying. On submit, those shopping carts are replicated as purchase requisitions to SAP S/4HANA. There, the follow-on documents like purchase orders, goods receipts, and invoices are created. The requisitioners get updates about the follow-on documents in SAP Ariba Buying, providing full transparency of the order status.
- Sourcing: The integration with the SAP Ariba Sourcing solution allows you to request quotes and prices from suppliers to identify the best source of supply for a given demand. In SAP S/4HANA, purchasers bundle purchase requisitions into RFQs and select the suppliers that will be invited to the bidding event. In SAP Ariba Sourcing, a sourcing project is created based on the RFQ data, and the invited suppliers can submit their quotes using Ariba Network. Once the bidding event is over, a strategic buyer awards the winning quotes in SAP Ariba Sourcing and sends them to the SAP S/4HANA system to automatically create purchase orders.
- Contracts: Using the integration with SAP Ariba Sourcing, strategic buyers can negotiate and establish contracts with suppliers. Once a bidding event is over, a strategic buyer can decide to create a contract instead of a purchase order—for example, if there is a recurring demand for a certain material. In SAP Ariba Sourcing, the contract is created and negotiated with the supplier and afterward replicated to SAP S/4HANA as an operational contract.
- Commerce automation: Once connected to the Ariba Network, SAP S/4HANA Procurement can collaborate with millions of potential suppliers and automate the exchange of business documents like purchase orders and invoices.

Figure 11.5 illustrates the components involved and their communication channels. Let's highlight the most important aspects of the integration.

Figure 11.5 SAP S/4HANA Procurement Integration with SAP Ariba and SAP Fieldglass Solutions

In this integration scenario, a buyer (such as a purchaser, warehouse clerk, or accountant) interacts with SAP S/4HANA and with SAP Ariba solutions. A supplier interacts and collaborates with her or his customers (that is, buyers) via the Ariba Network. Both buyers and suppliers have their own accounts on the Ariba Network. The communication between SAP S/4HANA and SAP Ariba applications is handled by the SAP Ariba Cloud Integration Gateway, an SAP-managed middleware based on SAP Cloud Platform Integration (see Chapter 6, Section 6.6). SAP-delivered SAP Ariba Cloud Integration Gateway mapping content makes an out-of-the-box integration available to SAP customers. In addition, it provides capabilities to monitor and extend the integration scenarios, such as for field extensibility. Transactional data is exchanged via SOAP APIs (asynchronous) and OData APIs (synchronous). In the case of SOAP, SAP Ariba Cloud Integration Gateway transforms the XML data to the corresponding protocol of SAP Ariba solutions and vice versa. Integrated transactional data includes the purchase requisition, purchase order, service entry sheet, and supplier invoice business objects. For master data, the SAP Cloud Platform Master Data Integration service is used as the exchange infrastructure: SAP S/4HANA pushes and receives the master data from SAP Cloud Platform Master Data Integration, which in turn communicates with SAP Ariba Cloud Integration Gateway. In the SAP Ariba solutions, the master data is stored and managed by the SAP Ariba master data service—except for supplier master data, which is not handled by the SAP Ariba master data service but by SAP Ariba Supplier Management solutions. On the SAP S/4HANA side, there are two deployment options for the guided buying and the sourcing and contracts scenarios. Option 1 is to connect SAP S/4HANA directly with SAP Ariba Cloud Integration Gateway.

Option 2 is to connect multiple backend systems through central procurement with SAP Ariba Cloud Integration Gateway. The backend system can be SAP S/4HANA, SAP S/4HANA Cloud, or SAP ERP (from release 6.06). The master data is always replicated from the connected backend systems and not via central procurement. For SAP ERP, the master data replication works differently and is therefore not included in Figure 11.5.

SAP S/4HANA can be integrated with SAP Fieldglass solutions as well. This integration scenario covers the business process of contingent workforce procurement. Today's external workforce planning and onboarding demands agility: the enterprise's internal workforce capacity needs to be balanced with external workforces to adapt to ever-changing demands. Buyers who plan to procure external workforce can do so using SAP Fieldglass solutions. Across SAP S/4HANA and SAP Fieldglass solutions, transactional and master data are kept in sync using SOAP APIs. A customer-managed SAP Cloud Platform Integration is required to map the SAP S/4HANA and SAP Fieldglass APIs. The corresponding mapping content is predefined by SAP. Both buyers and suppliers have their own accounts for SAP Fieldglass solutions.

Going into more detail, Figure 11.6 visualizes the building blocks of the Ariba Network integration for the commerce automation scenario. One SAP S/4HANA Cloud tenant is used for procurement, interacting with an SAP S/4HANA Cloud tenant used for sales in a buyer-supplier interaction scenario with the involvement of the Ariba Network. In this scenario, buyers and suppliers collaborate by exchanging their business documents via the Ariba Network. If both the buyer and supplier run SAP S/4HANA, they can automate the procurement process from end to end by using the integration capabilities provided by SAP. The buyer and supplier can also choose to directly integrate their SAP S/4HANA Cloud tenants.

Figure 11.6 Ariba Network Integration

In SAP S/4HANA Cloud, the Ariba Network integration is realized through the following:

- Output Management
- Communication arrangements for integration with SAP S/4HANA Cloud and the SOA manager for SAP S/4HANA on-premise
- SAP Application Interface Framework

We'll explain Figure 11.6 using an example process. A purchaser creates a purchase order using the business application of SAP S/4HANA Cloud. This triggers an output to the corresponding supplier via the EDI output channel of output management (see also Chapter 18, Section 18.4). While issuing the output, the logical port (the SAP Ariba Cloud Integration Gateway endpoint URL) is retrieved from the communication arrangement framework. The logical port is passed on to the SAP Application Interface Framework (see Chapter 6, Section 6.3), where the mapping of the purchase order data to the OrderRequest SOAP XML message is performed. Once the mapping is completed, AIF calls the web service infrastructure to push the message to SAP Ariba Cloud Integration Gateway. In SAP Ariba Cloud Integration Gateway, the XML message is transformed into a commerce eXtensible Markup Language (cXML; see cxml.org) message and then pushed to Ariba Network. The cXML standard is a streamlined protocol intended for consistent communication of business documents between procurement applications, e-commerce hubs, and suppliers. On the Ariba Network, the message is routed via the buyer account to the supplier account. Here, the supplier finds the purchase order in the inbox. After reviewing the purchase order, the supplier can confirm it by creating an order confirmation on the Ariba Network. This is routed to the buyer account and pushed to SAP Ariba Cloud Integration Gateway as a cXML message. In SAP Ariba Cloud Integration Gateway, the cXML message is transformed into an XML message and then pushed to the SAP S/4HANA Cloud tenant. There, the message is processed by AIF. This finally updates the purchase order with the confirmation data.

Suppliers can choose to integrate their SAP S/4HANA Cloud instance with Ariba Network as well. The architecture on the buyer and supplier sides is almost mirrored. Following the example again, the purchase order data is forwarded from the supplier account to SAP Ariba Cloud Integration Gateway as a cXML message, SAP Ariba Cloud Integration Gateway transforms it into an XML message and pushes it to the supplier's SAP S/4HANA Cloud tenant, and from there a sales order is created. This triggers Output Management to send an order confirmation back to the buyer via SAP Ariba Cloud Integration Gateway and the Ariba Network.

11.6

Analytics

Embedded analytics is a framework that enables key users to model, create, and share views, reports, KPIs, and apps. The modeling capabilities, provided by a set of key user tools, are complemented by the delivery of standard content from SAP (for details, see Chapter 4, Section 4.1). Embedded analytics for procurement provides deep analytical insight into procurement processes by delivering multiple KPIs, analytical list pages, and monitoring apps. These analytical applications cover the breadth of purchasing business objects such as purchase requisitions, purchase contracts, scheduling agreements, purchase orders, and invoices. In SAP S/4HANA Procurement, embedded analytics makes it possible to: perform real-time operational reporting and monitoring; gain spend visibility; control purchasing spend by monitoring off-contract and unmanaged spend; evaluate supplier performance on various hard facts like price, quantity, quality, and time; and integrate a purchasing spend dashboard into SAP Analytics Cloud, embedded edition. This last feature is available in SAP S/4HANA Cloud only.

Here are some concrete examples. With the SAP Fiori app Overdue Purchase Order Items, you can analyze the delivery dates of the purchase order items sent to the supplier. The KPI compares the actual date with the delivery date on the item level and counts all purchase order items for which the delivery date has passed and which have not yet been delivered. With the Purchase Order and Scheduling Agreement Value app, you can display the purchase order net amount and the scheduling agreement amount based on a time period and dimensions such as material, supplier, and plant. In addition, you can use ABC classification views to identify important suppliers, material groups, and purchasing groups. In procurement processes, suppliers are evaluated based on various hard and soft facts. To support such a supplier evaluation, multiple supplier evaluation KPIs are provided: supplier evaluation by price, by quantity, by time, by quality based on quality notifications, and by quality based on quality inspection lots.

Furthermore, procurement analytics includes the following capabilities: APIs for supplier evaluation scores; CDS views for purchasing and off-contract spend released for custom consumption; analytics for central procurement, with analysis of global purchasing spend data, central contract consumption, and monitoring of purchase orders across a multibackend system landscape in a central procurement hub system; and CDS-based data extraction for purchase order items for use in SAP Business Warehouse.

The foundation for these capabilities is a layer of annotated analytical and transactional CDS views (cube views and dimension views) in the procurement VDM. In the UI, SAPUI5-based UI frameworks interpret the annotations to expose KPIs, analytical list pages, and monitoring applications (see Chapter 4, Section 4.1).
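As a simplified illustration of how such a KPI is derived, the following Python sketch counts overdue purchase order items the way the Overdue Purchase Order Items app is described above. Field names and sample data are assumptions; in SAP S/4HANA the KPI is computed by analytical CDS views, not by application code like this.

# Illustrative calculation of the "Overdue Purchase Order Items" KPI:
# count purchase order items whose delivery date has passed and which
# have not yet been delivered.
from datetime import date

purchase_order_items = [
    {"po": "4500000123", "item": 10, "delivery_date": date(2024, 5, 2), "delivered": False},
    {"po": "4500000123", "item": 20, "delivery_date": date(2024, 5, 20), "delivered": True},
    {"po": "4500000456", "item": 10, "delivery_date": date(2024, 4, 28), "delivered": False},
]

def overdue_items(items, key_date=None):
    key_date = key_date or date.today()
    return [i for i in items
            if i["delivery_date"] < key_date and not i["delivered"]]

print(len(overdue_items(purchase_order_items, key_date=date(2024, 5, 10))))  # -> 2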

11.7

Innovation and Intelligent Procurement

We envision the evolution to autonomous procurement as a transformation journey to increase process automation and end user guidance through integrated and intelligent systems (see Figure 11.7). In this vision, humans focus on defining the strategies and oversee system-driven execution of those strategies. Typical user tasks will change during the transformation process as the procurement application takes over more and more operational and tactical activities. Processes will be increasingly driven automatically without direct human interactions. And ultimately, the procurement application will assure compliance, support the user proactively, and free users to focus on their primary tasks. The key to autonomy in procurement lies with information and data, which provide the foundation for autonomous procurement and grow in importance as the application becomes more intelligent and autonomous.

Figure 11.7

Innovations Map for Procurement

On the way to autonomous procurement, various technologies can be used. Machine learning helps to recognize patterns in data (for example, for clustering free-text orders to propose the creation of catalog items) or to make predictions (for example, predicting delivery dates of purchase orders or when a contract will be fully consumed). Intelligent situation handling proactively informs purchasers when their attention is required; in addition, it provides related information and proposes actions. With SAP Conversational AI, SAP is opening a completely new and intuitive interaction channel with the procurement application. Combining these different technologies enables intelligent procurement via smart triggers, such as Internet of Things (IoT) sensors or machine learning, which can be consumed by situations and might lead to smart system recommendations and automations. You can find more information at the following sources: How SAP S/4HANA intelligent technologies help requisitioners (https://youtu.be/T9Wfm-Xci0M), operational purchasers (https://youtu.be/104hSZpZRAs), and strategic purchasers (https://youtu.be/t9ijHv7QLJE)

Reduce free-text items with SAP S/4HANA Procurement: https://youtu.be/4ZlydAF2MQY
Reduce off-contract spend with SAP S/4HANA Procurement: https://youtu.be/GD7FTtHvBM4 (prototype with Microsoft HoloLens: https://youtu.be/4kL2mxWIHso)
Avoid delivery deficits with SAP S/4HANA Procurement: https://youtu.be/75m0GxhcPW8
Optimize corporate purchasing with machine learning: https://youtu.be/zj3u2SJluMY
Read more information about the SAP Procurement Intelligence application: https://help.sap.com/viewer/product/SAP_S4HANA_CLOUD_INTELLIGENT_INSIGHTS

11.8

Summary

This chapter offered a whirlwind tour of the capabilities of the procurement functionality within SAP S/4HANA. We covered the integration of procurement processes into the SAP S/4HANA-wide processes while introducing the procurement business objects, and went into the details of the architecture of a business object. Then we highlighted the importance of central procurement and its integration into a multibackend system landscape, presented the integration concepts for the interaction with SAP Ariba and SAP Fieldglass solutions, and showcased the analytical capabilities implemented for procurement. We closed by presenting the current usage of innovation technologies such as machine learning and intelligent situation handling in the SAP S/4HANA purchasing solution to give a view into the vision of an autonomous procurement engine. SAP S/4HANA comes with optimized and innovative logistics functions enabled by SAP HANA, for example, live material requirements planning and real-time inventory, as we’ll see in the next chapter.

12

Logistics and Manufacturing SAP S/4HANA comes with optimized and innovative logistics functions powered by SAP HANA, for example, live material requirements planning and real-time inventory. Learn about the business objects, engines, and cross functions in manufacturing and logistics in SAP S/4HANA.

This chapter explains how SAP S/4HANA implements the core logistics and manufacturing processes. It starts with an introduction to logistics and manufacturing, reviews the organizational units used, briefly defines the master data objects required in logistics and manufacturing, and continues with the transactional business objects. Finally, calculated business objects, engines, process control, and cross functions are discussed.

We define logistics in two ways: first, as the planning, controlling, optimization, and execution of physical or logical movements of products within an organization, where a logical movement entails changing the availability of a product for a certain business process—for example, reserving stock for quality checks; and second, as a generic term that comprises procurement, material flows within production, inventory management, quality management, picking, packing, and sales and distribution. Logistics and manufacturing cover an essential part of the external procurement and sales processes and are thus external-facing, involving suppliers, customers, or third-party logistics providers. They also cover an essential part of the internal production and distribution processes, with a strong link to warehouse management.

To understand the handling of logistics and manufacturing processes in SAP S/4HANA, it’s essential to understand the semantics of the involved organizational units and business objects. Organizational units structure an enterprise organization according to business or legal requirements. Business objects either represent master data—objects that are referenced in multiple business transactions—or transactional data, objects created within the context of a business transaction.

How manufacturers configure the core logistics processes in SAP S/4HANA essentially depends on their business model. SAP S/4HANA supports the following production processes: engineered-to-order (ETO) production processes for highly individualized products; make-to-order (MTO) and make-to-stock (MTS) production processes for mass production; and wholesale or retail-like processes with no production, just sales and distribution. In real life, a manufacturer’s business processes are always a blend of these processes.

12.1

Architecture Overview

Figure 12.1 depicts the core logistics data flow, including production planning with company-internal and external procurement, production execution, inventory management, and sales. Details of the sales process are described in Chapter 9. Details of the procurement processes are described in Chapter 11.

Logistics and manufacturing deal with products and materials. Typically, these are tangible objects like machines or fluids that are assembled or produced, stored, and transported. However, logistics also deals with nontangible products such as software applications or music, if they are moved by downloading or streaming. In the context of logistics and manufacturing, we use the term material as it appears on several corresponding user interfaces. However, the business object refers to the product master (see Chapter 8, Section 8.1).

As soon as your business processes include handling of tangible or nontangible materials, SAP S/4HANA creates a material document (centered in Figure 12.1). Material documents record any material movement in the enterprise. By summing up those records, any stock at any point in time is calculated on the fly. SAP S/4HANA calculates stock in real time without using persisted aggregates. This is the cornerstone of inventory management (Section 12.5.1) and a significant difference from stock calculation in SAP ERP.

Inventory changes can be triggered by goods receipt from external procurement (see Figure 12.1, bottom, and Chapter 11 for details) or from internal production (see Figure 12.1, center). The internal production is triggered by a direct requirement element, such as a sales order, or by planned independent requirements (PIRs) created manually or automatically in SAP S/4HANA. Planned independent requirements are processed by material requirements planning (MRP), which triggers the production or external procurement of the required materials. Alternatively, material requirements planning can also be triggered by, say, a sales order in the make-to-order process.

Material requirements planning relies on several types of master data. It uses a product master and bill of materials because it needs to know how materials are composed of components and ingredients (see Chapter 8, Section 8.1 and Section 8.2). For production planning, material requirements planning also needs to know which steps are required to produce a material and what production capacities are available. For that, it uses routing and work center master data. Several algorithms are available for material requirements planning. As materials typically consist of other materials as components or ingredients, the planning algorithms are applied recursively for each included material. The results of the planning process are purchase requisitions for materials that need to be purchased or planned orders in case the components are produced internally.

Figure 12.1

Logistics Architecture Overview

Purchase requisitions are eventually converted into purchase orders to execute the external procurement, and planned orders are converted into production orders or process orders to execute the production process. In both cases the company gets new material, which SAP S/4HANA records by creating a material document that represents a goods receipt. The top of Figure 12.1 depicts the sales process, in which a sales order initiates logistics execution by creating an outbound delivery document. As the sold material decreases the inventory, goods issue processing creates a corresponding material document. Details of the sales process are explained in Chapter 9. Other inventory changes are internal movements, scrapping, or posting of physical inventory differences (see Figure 12.1, center). Note that common cross functions, such as quality management, batch management, handling unit management, serial number management, and inter/intracompany stock transport, are not shown in Figure 12.1. They are detailed in Section 12.6. Section 12.7 describes integration scenarios with external systems, such as warehouse management and manufacturing execution systems, in more detail. This section explained the flow of the core logistics processes and their links to other SAP S/4HANA processes. The next section introduces the organizational units in core logistics.

12.2

Organizational Units

The logistics and manufacturing processes in SAP S/4HANA rely on organizational units (see Figure 12.2). Across logistics, plant, storage location, and shipping point are used, whereas MRP area and production supply area are used in production planning and execution only. The organizational units have a common configuration that is shared by several processes (such as description, address, and currency) and a process-specific configuration that is used by specific business processes only (such as inventory management, batch management, and logistics execution). In logistics and manufacturing, master data and transactional business objects are created with reference to organizational units. The MRP area and production supply area organizational units have a strong relationship to master data objects—in particular, the product master—as they support the production planning process.

Figure 12.2

Organizational Units in Core Logistics

12.3

Master Data Objects

In this section, you’ll get to know the master data business objects used in core logistics and manufacturing. These objects have already been outlined in the architecture overview in Figure 12.1. In SAP S/4HANA, the product master (see Chapter 8, Section 8.1)—in logistics preferably called the material—is the main master data object used to create inventory and goods flow (see Figure 12.3). It represents a tangible product, such as a screw or a truck, or a nontangible product or service, such as a software application or a movie to be streamed. Materials are created in the product master. A material object includes views that collect business process-specific data linked to a specific organizational unit. Most important for core logistics are the plant view and the MRP views.

A bill of materials (BOM; see Chapter 8, Section 8.2.1) is a directed hierarchy of materials describing which (child) components are contained within the parent. BOMs have a time-dependent validity, a (business) usage, and an alternative, which is an additional key allowing you to define several BOMs for one parent that are valid at the same time. A routing (called a master recipe in the process industry) describes how a material is created in a plant. A routing contains a series of operations called routing steps, which represent the work on the shop floor. Routings can be nested and have a relation to one or more work centers. A work center (called a resource in the process industry) represents a machine, production line, or employee in production, where operations are carried out. It contains data on capacity and scheduling. The production version is the time-dependent relation between a routing and a BOM, which describes how a defined lot size of a material must be produced at a certain point in time.

Figure 12.3

Master Data Business Objects with Relationships

All master data objects are involved in production planning, production execution, inventory management, and logistics execution. They are referenced by the transactional business objects explained in the next section.
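The following Python sketch summarizes these master data objects and their relationships in a simplified form. The classes and attribute names are illustrative assumptions for this discussion, not the actual SAP S/4HANA data model.

# Simplified sketch of the core logistics master data relationships
# (material, BOM, routing, work center, production version).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Material:
    number: str
    description: str

@dataclass
class BomItem:
    component: Material
    quantity: float

@dataclass
class BillOfMaterial:
    parent: Material
    usage: str                # business usage, e.g. production
    alternative: str          # allows several BOMs per parent
    valid_from: date
    items: list = field(default_factory=list)

@dataclass
class WorkCenter:             # "resource" in the process industry
    name: str
    capacity_per_day_hours: float

@dataclass
class Routing:                # "master recipe" in the process industry
    material: Material
    plant: str
    operations: list          # routing steps with assigned work centers

@dataclass
class ProductionVersion:      # time-dependent link between routing and BOM
    material: Material
    routing: Routing
    bom: BillOfMaterial
    lot_size_from: float
    lot_size_to: float
    valid_from: date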

12.4

Transactional Business Objects

We start with an overview of the transactional business objects already outlined in Figure 12.1 and focus on the relationships between the business objects (see Figure 12.4). Planned orders are the result of the production planning process. They represent the planned production or procurement of a material. They can be converted into purchase requisitions if the material is to be procured externally or into production orders if the material is to be manufactured internally. Although it’s possible to customize SAP S/4HANA in such a way as to create a purchase requisition from a planned order, we recommend implementing the process flow as described and creating purchase requisitions via the production planning process. In a repetitive manufacturing process, the planned order is directly reduced by a production confirmation using the first in, first out (FIFO) approach.

Purchase requisitions represent a planned external procurement. If they are converted into a purchase order, the procurement is initiated. The procurement process ends with the goods receipt material document, either based on the purchase order or on an inbound delivery created from the purchase order (see Chapter 11, Section 11.2 for more information about the procurement processes). If a material is procured externally on a regular basis, a scheduling agreement defines the fixed conditions of the procurement process. Each external procurement of this material is initiated based on the scheduling agreement.

Production orders initiate the internal manufacturing of goods in a discrete manufacturing process, as do process orders in the process industry. Based on the routings, BOMs, and work center master data objects, the manufacturing is executed. The production process creates a production order confirmation to report the successful manufacturing of the product. The manufactured product is recorded as inventory by a goods receipt. The physical inventory itself is verified on a regular basis by physical inventory documents. In the case of a difference, posting them can create a material document.

When a product is sold, a sales order is created. When the sales process completes, it creates an outbound delivery. When the outbound delivery is posted, the delivery process creates a corresponding goods issue. Figure 12.4 shows the material document and the related business objects that are involved in procurement, sales, and production of goods. In SAP S/4HANA, all logistics processes involving the physical or logical movement of goods are recorded by material document postings. Material documents represent a journal of material movements. Section 12.5.1 explains how the journal entries are used to calculate the stock quantities. Unlike the other documents mentioned in this section, material documents are immutable. They cannot be deleted. To cancel a material movement described by a material document, the creation of a new material document is required, which compensates the movement of the original document.

Figure 12.4

Transactional Business Objects and Process Steps

The material reservation is a special kind of transactional business object. Reservations are transient business objects created by different business processes in order to indicate that a certain quantity of inventory stock will possibly be consumed in the near future by a dedicated logistics process, such as MRP (Section 12.5.3) or available-to-promise (ATP; Section 12.5.2). All transactional business object instances are linked to each other by business process steps and store a reference to their predecessor or successor business object instance.

This section digs deeper into the semantics of each transactional business object and explains the keys of the transactional business objects and their most important business semantic attributes. This additional level of detail is shown in Figure 12.5. Let’s start with the external procurement, which you see on the right-hand side of Figure 12.5. The purchase order has one key, and the business semantics on the header level are defined by the purchase order type. The business semantics of the purchase order item are defined by the purchase order item type and the account assignment category. The purchase order item type controls the procurement process, whereas the account assignment category influences the goods receipt, invoice verification, and account determination process. If the purchase order item is set for confirmation, the creation of an inbound delivery for the purchase order is mandatory for the goods receipt.

Figure 12.5

Transactional Business Objects with Business Keys and Semantic Attributes

The internal production is based on the production order (one key), with the order type defining the business semantics on the production order item (see the top center of Figure 12.5). The material document in the center of Figure 12.5 has two keys: material document number and fiscal year. The business semantics are controlled by the movement type on the item level. For example, movement type 101 is used for a goods receipt against a purchase order, and movement type 321 is used to transfer goods from quality inspection to unrestricted stock. In most cases, the items of one material document posting belong to a dedicated business process step referencing the items of one predecessor document. The movement type is determined in different ways: when creating a goods receipt for external procurement, the movement type can be selected from several options, whereas the movement type of a goods receipt for a production order as part of the production order confirmation is derived from configuration.

The outbound delivery is shown on the right side of Figure 12.5. It has one key, and the business semantics are controlled by the delivery type (such as outbound, inbound, or returns) on the item level, which directly controls the movement type of the goods issue. The physical inventory document (see Figure 12.5, bottom right) has two keys. Each physical inventory document item represents a physical inventory counting process of one material in one plant in one storage location. The process comprises the mandatory steps of counting and posting and, optionally, a recount. The movement type of the corresponding material document is hard-linked to the physical inventory document item. As explained, there are various mechanisms that determine the business semantic attributes during the flow of transactional documents in SAP S/4HANA, such as user input, configuration, coded rules, or customer enhancements.
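The journal character of material documents can be sketched as follows in Python: documents are append-only, and a cancellation is itself a new posting with a reversal movement type (102 as the reversal of 101 follows standard SAP convention, used here for illustration). Everything else in the sketch is an assumption, not the actual implementation.

# Sketch of the material document as an immutable journal: a movement is
# never deleted; cancelling it means posting a compensating document.
from dataclasses import dataclass

@dataclass(frozen=True)           # frozen = immutable, like a journal entry
class MaterialDocumentItem:
    material: str
    plant: str
    storage_location: str
    movement_type: str            # e.g. "101" goods receipt for purchase order
    quantity: float               # positive = receipt, negative = issue

journal = []                      # append-only list of material document items

def post(item: MaterialDocumentItem) -> None:
    journal.append(item)          # no update or delete, only new postings

def cancel(item: MaterialDocumentItem, reversal_movement_type: str) -> None:
    # Compensate the original movement with a new document item.
    post(MaterialDocumentItem(item.material, item.plant, item.storage_location,
                              reversal_movement_type, -item.quantity))

goods_receipt = MaterialDocumentItem("MAT-100", "0001", "0001", "101", 10.0)
post(goods_receipt)
cancel(goods_receipt, "102")      # reversal of the goods receipt
print(sum(i.quantity for i in journal))  # -> 0.0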

Now that we’ve explored the transactional business objects in the logistics functionality of SAP S/4HANA, let’s get to know a set of calculated business objects—engines—in the next section.

12.5

Calculated Business Objects, Engines, and Process Control

The instances of calculated business objects are not persisted in a database table. If business logic accesses a calculated business object instance, the corresponding algorithm calculates the values on the fly. Whether a business object is calculated or persisted is hidden from the business object consumer and is normally based on implementation considerations. Calculated business objects often represent figures whose output values depend on many input variables and need to be consumed in real time. Engines represent complex algorithms executed on a regular basis to mass-process input variables based on configuration settings to reach the next step in a business process.

12.5.1

Inventory

In previous products, such as SAP ERP, inventory was persisted in a number of database tables. In SAP S/4HANA, this is no longer the case: inventory is a calculated business object. As mentioned, the material documents serve as a journal to record any movement of material in SAP S/4HANA. Therefore, any inventory figure can be calculated from this journal for any given date. Figure 12.6 outlines the differences in the data model. Previously, inventory was a persisted business object based on a key figure model; in SAP S/4HANA, it’s a calculated business object based on an account model in the database table. This simplification improves transparency, reduces redundancy, and significantly accelerates material document posting.

Figure 12.6

Inventory Data Model in SAP ERP and in SAP S/4HANA
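To illustrate the account-model idea, here is a minimal Python sketch that derives stock figures by aggregating a material document journal, in the spirit of the MATDOC-based calculation described above. The field names, stock types, and sample data are simplified assumptions; in the real system this aggregation is performed by SAP HANA, not by application code like this.

# Stock is not stored as a key figure but derived on the fly by
# aggregating the material document journal.
from collections import defaultdict

matdoc = [  # (material, plant, storage_loc, stock_type, quantity, posting_date)
    ("MAT-100", "0001", "0001", "UNRESTRICTED", +100.0, "2024-03-01"),
    ("MAT-100", "0001", "0001", "UNRESTRICTED",  -30.0, "2024-03-05"),
    ("MAT-100", "0001", "0001", "QUALITY_INSPECTION", +20.0, "2024-03-07"),
]

def stock_on(key_date: str):
    totals = defaultdict(float)
    for material, plant, sloc, stock_type, qty, posting_date in matdoc:
        if posting_date <= key_date:             # stock at any point in time
            totals[(material, plant, sloc, stock_type)] += qty
    return dict(totals)

print(stock_on("2024-03-06"))
# {('MAT-100', '0001', '0001', 'UNRESTRICTED'): 70.0}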

A combination of logical keys identifies all the different inventory stocks. These keys are the so-called stock identifying fields (SIDs) in the material document table MATDOC (see Figure 12.6). SAP S/4HANA calculates approximately 60 different types of inventory stocks based on these SIDs. The SIDs reference organizational units like plant and storage location, which describe where the inventory belongs. Furthermore, the SIDs contain business-related attributes like InventoryStockType and InventorySpecialStockType, which describe the business process(es) the inventory belongs to. InventoryStockType describes standalone stock types, such as unrestricted-use stock or quality inspection stock (Section 12.6.2). In contrast, InventorySpecialStockType can only be used in conjunction with additional SID fields, like supplier (supplier consignment stock), customer (customer consignment stock), or sales document and sales document item (order-at-hand stock). For example, a supplier consignment stock posting has the stock type consignment, and, among other fields, the SID attribute supplier is filled in the material document. Some inventory stock types are not managed by the material documents but by other functionalities. Examples are nonvaluated goods receipt blocked stock (managed by purchase orders) and reserved stock (managed by material reservation documents).

12.5.2

Available-to-Promise

The available-to-promise (ATP) check is a service that calculates the availability of a specific product in a specific plant. The most prominent use cases are sales order

processing (including delivery creation and goods issue) and production planning (in particular, the conversion of planned orders into production orders). The result of an ATP request is a confirmation. It can be a simple confirmation of quantity and date, or it may be a time series consisting of quantity/date pairs. For example: 10 pieces of product A are requested in plant 0001 for next Monday. The ATP check can confirm 5 pieces on Monday and 3 pieces on Wednesday. The remaining 2 pieces cannot be confirmed. SAP S/4HANA uses two different algorithms for calculating a confirmation for a given requirement in an ATP check: the product availability check (PAC) and the product allocation check (PAL).

The PAC algorithm calculates product availability based on existing supply and demand elements, which are represented by business objects stored in the database, such as stocks, sales orders, or production orders. The algorithm aggregates all relevant stocks and supply elements in daily buckets in real time and compares them with the aggregated demand elements. If the cumulated supply exceeds the cumulated demand, the resulting positive ATP quantity is used to confirm the requirement being checked. Working with cumulated quantities rather than dedicated supply and demand elements ensures that the confirmations calculated by the PAC algorithm are robust and nearly immune to the minor changes that often impact transactional business objects. This degree of stability makes an ATP confirmation a reliable statement with high business value.

In contrast, the PAL algorithm is based on planning figures rather than real supply elements and can consider almost every additional attribute assigned to a demand element. For example, a certain group of products has export restrictions to certain countries. A time series with appropriate granularity, such as per month, can be created to reflect that restriction; for example, 100 pieces can be shipped each month. In addition to the previous sales-driven example, it’s also possible to model capacity restrictions such as transport and/or production capacity. The planning figures in PAL could, for example, represent the weekly transport capacity of a ship to a foreign country or the capacity of a machine that restricts the quantities that can be produced each day.

One ATP check can include both algorithms. Thus, both allocation types, sales and capacity product allocation, can be checked in a single ATP check. The sales PAL check is always performed before PAC; the capacity PAL check is performed after PAC. The requested quantities and dates of the current check are based on the result confirmation(s) of the preceding check. There can be multiple characteristics for one restriction (either a sales or a capacity restriction), and on top of that, multiple characteristic combinations that can be logically combined (with AND and OR). The restriction is called a product allocation object (PAL object), and the quantities maintained per bucket are the product allocation quantities (PAL quantities). During the ATP check for a concrete requirement, all relevant product allocation objects are identified, and the system checks whether the product allocation quantities are sufficient to confirm the request. Obviously, the confirmed quantities need to be consumed; for this reason, the PAL algorithm has dedicated persistent database tables.
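The following Python sketch illustrates the cumulative logic behind the product availability check using the example above (10 pieces requested for Monday). It is a didactic simplification under assumed data structures, not the actual PAC implementation.

# Cumulate supply and demand in daily buckets and confirm a requirement
# from the remaining positive ATP quantity, possibly split over dates.
def atp_check(supply, demand, requested_qty, requested_date):
    # supply and demand are {date: quantity} dictionaries in daily buckets.
    dates = sorted(set(supply) | set(demand) | {requested_date})
    available = 0.0
    open_qty = requested_qty
    confirmations = []
    for d in dates:
        available += supply.get(d, 0.0) - demand.get(d, 0.0)
        if d >= requested_date and open_qty > 0 and available > 0:
            confirmed = min(open_qty, available)
            confirmations.append((d, confirmed))
            open_qty -= confirmed
            available -= confirmed
    return confirmations  # whatever is left in open_qty stays unconfirmed

# Example from the text: 10 pieces requested for Monday (2024-06-03),
# 5 pieces available on Monday, a receipt of 3 on Wednesday (2024-06-05);
# 2 pieces remain unconfirmed.
print(atp_check({"2024-06-03": 5, "2024-06-05": 3}, {}, 10, "2024-06-03"))
# [('2024-06-03', 5.0), ('2024-06-05', 3.0)]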

From a business perspective, PAC decides to what extent a requirement can be confirmed based on supply. The PAL check determines whether a requirement should be confirmed from a business point of view or based on capacity restrictions. Another difference is that the first come, first served principle of PAC can be overruled by PAL to reach a fair-share-oriented approach. For example, if allocation quantities are maintained for different customers, the sales order of the first customer cannot use all available stock. Instead, the first customer can get at most the allocation quantity maintained for that customer. The remaining stock will be used to fulfill later sales orders for other customers.

Besides the two basic ATP check methods, additional features and functions can be used to manage and optimize the confirmation capabilities of ATP: supply protection (SUP), backorder processing (BOP), alternative-based confirmation (ABC), and the characteristic catalog.

Supply protection is integrated in the PAC algorithm. While ATP compares cumulated supply with aggregated demand elements, SUP works with virtual demand elements. The supply protection object defines the required safety stock. If supply is lacking, SUP prevents the confirmation of requirements that do not match the supply protection object. For requirements matching a supply protection object, ATP calculates a positive ATP quantity and confirms accordingly. With this logic, SUP allows the definition of supply protection groups with protected quantities in different time buckets. Other requirements that do not match the defined attributes of any supply protection object have to respect the complete protected quantities as restrictions. Requirements matching one supply protection object must respect the restrictions of all other supply protection objects. You can define multiple protection groups within one supply protection object; the protection groups can have different priorities, such as the customers of protection group A having priority over the customers of protection group B or C. All equally or higher prioritized groups must be respected as restricted quantities. Requirements matching a protection group of a supply protection object reduce the protected quantity according to the confirmed quantity calculated by the PAC algorithm. From a business perspective, SUP helps retain supply for high-priority groups even in times of high demand from lower-priority groups. It protects minimum quantities, whereas the PAL check defines the upper limit.

The ATP confirmation of a single order is triggered when the order is created or changed. The confirmation always considers existing confirmations of all other orders that are currently processed. Companies often want to change confirmations for multiple orders simultaneously. To do this, ATP offers mass processes in backorder processing or release for delivery. Backorder processing (BOP) is a batch job that can be configured flexibly according to the business strategy. The user can select the orders to be included in the BOP run, define the priority of the orders, and indicate special confirmation behavior. For example, the gain confirmation strategy means that assigned requirements at least

retain their confirmation or—if possible—improve. The BOP run selects orders according to attributes of the transactional business objects. Furthermore, in SAP S/4HANA, the BOP run can be combined with supply assignment (ARun), which assigns the most appropriate supply to order requirements in a supply shortage situation. ARun executes fixed pegging of supply and demand elements. BOP recomputes the confirmations of the orders quickly and updates the orders efficiently due to parallelization. Release for delivery can be used to change the confirmations for multiple orders manually, with the goal of directly creating deliveries for those orders.

Alternative-based confirmation (ABC) can be used to check for possible alternative plants and/or products with which a requirement can be confirmed. For example, the availability of a product in a requested plant is insufficient to confirm the requirement. With ABC, logistics determines all possible alternative plants automatically and substitutes the originally requested delivery plant with one or multiple alternative plants. ABC can also substitute the requested product with products that have the same form, fit, and function or with a successor product. By combining alternative plants with alternative products, you can improve the fulfillment of the customer requirement, for example, by providing more quantity or an alternative delivery date that fulfills the customer’s requirement better than the originally requested product in the originally requested plant would have. In ABC, you can define a substitution strategy that specifies whether alternative plants or products are to be determined and whether the system should prefer full confirmation, on-time confirmation, or confirmation of as much of the requested quantity as possible on the requested date. You can also configure the strategy to consider alternatives whenever an ATP check is performed, when new requirements are created, or when requirements are posted. ABC can therefore be used in the online processing of a sales order requirement—for example, when a sales order is created or changed, or even during backorder processing. ABC can be activated with a high degree of flexibility in specific situations by choosing a combination of attributes of the sales document together with process characteristics. For example, ABC can be activated for customers from Italy when an order is created.

The different ATP features (PAL, BOP, ABC, or SUP) use various attributes of the requirement and its underlying transactional business object for different purposes: within ABC, for example, attributes—more precisely, their values—can be used to determine whether ABC should be activated and, if so, how ABC should behave. In BOP, attributes and their values are used to define filter and sort criteria to determine which requirements are to be included in a BOP run. The fields from the requirement documents used within ATP are called characteristics. In addition to the fields from the requirement document, classification characteristics from variant configuration (see Chapter 8, Section 8.2) can also be used. These classification characteristics and their values belong semantically to the requirement document but are stored technically in an independent persistence. In ATP, it’s possible to group the different characteristics together in a characteristic catalog that can be defined for each process using ATP and the document type—for example, sales orders. The characteristic catalogs, together with their characteristics

and their properties, are persisted in independent database tables and can be used within the corresponding ATP applications.

12.5.3

Material Requirements Planning

Material requirements planning is an engine that calculates the net requirements of a product—that is, demand reduced by supply—based on its planned demand and planned independent requirements for a given plant. Planning uses BOMs, routings, work centers, and material masters as additional input parameters (Section 12.3). The selected MRP type determines the planning algorithm, which defines the formula to calculate the net requirements. The plant and MRP area determine the organizational units of the planning calculation. Based on the algorithm, MRP runs recursively along the low-level code of each material until all components of a product are planned.

MRP in SAP S/4HANA follows a different architecture pattern than the traditional MRP known from SAP ERP. The new MRP Live executes most of the planning directly in the SAP HANA database. This design decision speeds up the access to the various data required for an MRP run and makes the calculation very fast. MRP Live makes planning results available in real time. Nevertheless, planning runs can also be scheduled as background jobs in SAP S/4HANA. The optimized algorithms used in MRP Live are based on a strict master data setup for production planning. If the setup is incomplete, the planning switches back to the traditional MRP. SAP Note 1914010 (MD01N: Restrictions for Planning in MRP Live on HANA) explains the restrictions for each release.

For configurable products, the evaluation of the BOM, the so-called BOM explosion (see Chapter 8, Section 8.2.1), needs to be based on variant configuration object dependencies, such as constraints or selection criteria. This is called low-level configuration (for details, see Chapter 8, Section 8.2.11). To optimize the performance of MRP Live for configurable products as well, the BOM explosion and the evaluation of the related object dependencies are also executed directly on the SAP HANA database. The object dependencies are stored in a special format to be interpreted by algorithms that run directly in the SAP HANA database (see Figure 12.7).

When planning is started, MRP goes top-down through the BOM of the material to be produced. Initially, it plans the net requirements of the material to be produced. Then the planning continues on a detailed level, which means all components listed in the BOM are considered. For each component listed in the BOM, MRP creates a dependent requirement. MRP checks for each requirement whether there is enough supply of each component for the production. If not, MRP makes sure that the right quantity is available at the right time in the inventory. To do so, it creates either a planned order to produce the missing components or a purchase requisition to procure them. During MRP, material master data is evaluated. For example, the MRP views of the material master define the procurement type (in-house production or external procurement) and the goods receipt processing time. In addition, the routing and BOM master data is also used by the MRP logic.

Figure 12.7

Optimized Low-Level Configuration in SAP HANA

The MRP logic has to make sure that the same BOM material is not planned more than once. This is ensured through a planning file, which is an input queue for the MRP logic. The planning file is a record of the materials that are relevant for planning. The MRP run happens only for the materials in the planning file. Once a material is planned, its entry is deleted from the planning file. The MRP logic calculates the net requirements of a material in order to satisfy current or future stock shortages (see Figure 12.8). First, MRP reads the requirements and receipts. Next, it calculates net requirements, lot sizes, and sourcing and explodes the BOM. Finally, it creates or deletes receipts or dependent requirements. Receipts are either a planned order (in the case of in-house production) or a purchase requisition (in the case of external procurement). The dependent requirements are then processed in the next iteration. During the MRP run, SAP S/4HANA recognizes critical situations that must be assessed manually in the planning result. MRP creates exception messages that are collected and logged at every phase of the MRP run. The MRP controller can access the exception messages in the MRP Live cockpit. It’s important to understand that the result of MRP is an infinite plan, which is subsequently leveled by capacity planning (Section 12.5.8).
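To make the netting and BOM explosion loop more concrete, here is a minimal Python sketch of the steps just described: read requirements and receipts, calculate the net requirement, create a receipt element, and explode the BOM into dependent requirements. The data structures and the simple one-bucket netting are illustrative assumptions, not MRP Live’s actual algorithm.

# Didactic MRP sketch: plan every material in the planning file once,
# create planned orders or purchase requisitions for shortages, and add
# dependent requirements for BOM components.
def mrp_run(planning_file, demand, stock, bom, procurement_type):
    receipts = []
    while planning_file:
        material = planning_file.pop(0)              # planning file = input queue
        shortage = demand.get(material, 0.0) - stock.get(material, 0.0)
        if shortage <= 0:
            continue                                 # enough supply, nothing to plan
        kind = ("PurchaseRequisition"
                if procurement_type.get(material) == "external" else "PlannedOrder")
        receipts.append((kind, material, shortage))  # create a receipt element
        for component, qty_per_unit in bom.get(material, []):   # BOM explosion
            demand[component] = demand.get(component, 0.0) + shortage * qty_per_unit
            if component not in planning_file:
                planning_file.append(component)      # dependent requirement
    return receipts

print(mrp_run(
    planning_file=["BIKE"],
    demand={"BIKE": 10.0},
    stock={"BIKE": 2.0, "WHEEL": 5.0},
    bom={"BIKE": [("WHEEL", 2), ("FRAME", 1)]},
    procurement_type={"BIKE": "in-house", "WHEEL": "external", "FRAME": "external"},
))
# [('PlannedOrder', 'BIKE', 8.0), ('PurchaseRequisition', 'WHEEL', 11.0),
#  ('PurchaseRequisition', 'FRAME', 8.0)]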

Figure 12.8

MRP Run

12.5.4

Demand-Driven Material Requirements Planning

Demand-driven material requirements planning is a new planning algorithm offered in SAP S/4HANA as a separate MRP type, which reflects recent advances in production planning methodology. Demand-driven MRP is based on the assumption that a supply chain is more effective if certain critical steps within the production process are decoupled by buffering the subassembled product as stock. The prerequisite to plan with demand-driven MRP is defining the product to be buffered (buffer positioning) and the buffer sizing in the product master. Demand-driven MRP can be combined with existing planning algorithms so that within a supply chain some products are individually planned according to selected MRP types, planned ad hoc, or buffered according to demand-driven MRP. For more information about demand-driven MRP, see, for example, https://blogs.sap.com/2019/02/17/s4hana-demand-driven-mrp-ddmrpfunctionality.

12.5.5

Kanban

Kanban is a robust and simple method to continuously supply small quantities of a material to production processes based on the actual consumption. The conceptual information model of the kanban architecture in SAP S/4HANA is shown in Figure 12.9. With the kanban process, a demand source asks for replenishment from a supply source once a specific quantity of a given material is needed. The demand source is located close to and consumes material from one or more production supply areas. The replenishment process itself is modeled in a kanban control cycle. The material is

transferred from the supply source to the demand source in kanban containers, usually labeled with a printed kanban card that includes a barcode and other information.

Figure 12.9

Kanban Architecture

There are two categories of kanban control cycles: classic and event-driven. In classic control cycles, a constant number of kanban containers circulate between the supply source and the demand source. In event-driven control cycles, the demand source triggers the creation of new kanban containers once material is needed—for example, based on material requirements of soon-to-be-executed production orders. Such kanban containers are automatically deleted after one cycle, and thus the number of containers can vary in this category. Each kanban control cycle is configured to use a specific replenishment strategy from the stock transfer, external procurement, or in-house production categories: For stock transfer, the supply source could be a storage location (Section 12.2) or an embedded or decentralized warehouse managed by extended warehouse management (EWM, see Chapter 13). For external procurement, the supply source is a supplier. For in-house production, the supply source is a production process. The customizable replenishment strategies define in detail which business documents are created when the demand source requests the material. These business documents are called replenishment elements in the kanban process. Some examples of the many possible variants to create replenishment elements for a given material and requested quantity are as follows:

For a stock transfer from a storage location, the kanban control cycle creates a stock transfer reservation document. For a stock transfer from an embedded EWM warehouse, the kanban control cycle creates an EWM warehouse task (see Chapter 13). For external procurement, the kanban control cycle creates a purchase order (see Chapter 11, Section 11.2). For in-house production, the kanban control cycle creates a production order.

The lifecycle of the replenishment element and the corresponding kanban container are interwoven: setting a container to empty (classic control cycles) or requesting a new container (event-driven control cycles) creates the replenishment element automatically, so the production worker does not need to know process details. Later in the process, there are two possibilities: either the container is set to full, which automatically confirms the replenishment element, or the replenishment element is confirmed, which sets the container to full.
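A minimal Python sketch of this interplay between container status and replenishment element is shown below; the class, strategy names, and print statements are illustrative assumptions rather than the actual kanban implementation.

# Setting a container to EMPTY creates the replenishment element
# according to the control cycle's strategy; setting it to FULL
# confirms the element.
class KanbanContainer:
    def __init__(self, control_cycle, quantity):
        self.control_cycle = control_cycle          # material, supply area, strategy
        self.quantity = quantity
        self.status = "FULL"
        self.replenishment_element = None

    def set_empty(self):
        self.status = "EMPTY"
        # The replenishment strategy decides which document is created
        # (reservation, warehouse task, purchase order, production order).
        self.replenishment_element = (self.control_cycle["strategy"]
                                      + " for " + self.control_cycle["material"])
        print("Created replenishment element:", self.replenishment_element)

    def set_full(self):
        self.status = "FULL"
        if self.replenishment_element:
            print("Confirmed replenishment element:", self.replenishment_element)
            self.replenishment_element = None

container = KanbanContainer({"material": "MAT-100", "supply_area": "PSA-1",
                             "strategy": "purchase order"}, quantity=50)
container.set_empty()   # triggers external procurement
container.set_full()    # goods received, replenishment confirmed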

12.5.6

Just-In-Time Processing

Just-In-Time (JIT) supply is a process to efficiently replenish or provide discrete materials for manufacturing in the right quantity and on time. JIT supply is designed to suit simple as well as very complex scenarios; it is typical for the automotive industry with its highly configurable and complex products and its mass production. Inventory reduction is the most obvious result of JIT supply; it not only reduces costs but also saves space, which is a very limiting factor in automotive production with its high variance of components and their bulkiness. JIT entails a tight B2B integration to ensure supply to production using, for instance, kanban (Section 12.5.5) or scheduling agreements (see Chapter 11, Section 11.2).

In addition, outsourcing of production to save costs and accelerate vehicle assembly in mass production requires the supply of complex, bulky, and configuration-dependent preassembled products like cockpits, seats, engines, and axles. These are specific to each vehicle and are replenished with reference to individual vehicle planned orders or production orders, in exactly the sequence of vehicle production. That variant of the JIT process is also referred to as just-in-sequence (JIS) supply. Depending on the number of variants expected for these preassembled products, either individual material numbers are used to represent each of the variants, or the preassembled products are requested, delivered, and settled by the individual components used for preassembly. Depending on that, the vehicle BOM contains either items for the components of the preassembled products or items for the variants of the preassembled products.

The request for JIT supply is the JIT call. The processing of a JIT call is reflected by defined actions being performed. Each action performed successfully results in a new internal processing status of the JIT call. A specific action can be performed only if the JIT call has an internal processing status that allows that action to be performed. This relation of internal processing statuses, allowed actions, and resulting new internal processing statuses is configured as an action control. Multiple action controls can be maintained to suit different process variants.
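Conceptually, an action control behaves like a state machine over internal processing statuses. The following Python sketch illustrates this idea; the status and action names are invented for illustration and do not correspond to the actual JIT configuration.

# An action is only allowed if the JIT call is in a status that permits
# it; a successfully performed action leads to a new processing status.
ACTION_CONTROL = {
    # (current status, action) -> new status
    ("CREATED", "SEND_TO_SUPPLIER"): "SENT",
    ("SENT", "POST_GOODS_RECEIPT"): "DELIVERED",
    ("DELIVERED", "COMPLETE"): "COMPLETED",
}

def perform_action(jit_call_status: str, action: str) -> str:
    new_status = ACTION_CONTROL.get((jit_call_status, action))
    if new_status is None:
        raise ValueError(f"Action {action} not allowed in status {jit_call_status}")
    return new_status

status = "CREATED"
for action in ("SEND_TO_SUPPLIER", "POST_GOODS_RECEIPT", "COMPLETE"):
    status = perform_action(status, action)
    print(action, "->", status)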

In simple cases, a production operator requests the replenishment of a single component material by creating JIT calls for supply to production manually. The operator would normally do so as soon as one of the previously replenished containers for the material becomes empty, and then requests another container to fill up the material stock at the production line or work center. As the request is not specific to an individual planned order or production order, the JIT call created is of the summarized JIT call type. For controlling and simplifying such requests, the production planner prepares the JIT control cycle, specifying the material to be replenished, the production supply area as destination, the requested quantity used as default, the container expected to be used, the source of replenishment together with the replenishment strategy applied, and the replenishment lead time used to schedule the JIT calls (see also Section 12.5.5 for kanban control cycles). For each combination of plant, production supply area, and material, there can be one control cycle. To create a new JIT call, the production operator just needs to enter the control cycle number or, when barcode labels are used, scan the barcode. The requested quantity proposed is based on the container quantity specified in the control cycle. The requested date and time for the replenishment to be completed is based on forward scheduling using the replenishment lead time. Immediate requests are also possible for emergency cases.

The creation of JIT calls can also be planned and automated to relieve production operators of such tasks and to avoid peaks in requests. For production supply planning, the planning parameters must be specified in the JIT control cycles, including the planning procedure, planning horizon, safety stock, and safety time. When using consumption-based planning as the planning procedure, new JIT calls for the planned JIT control cycle are created if the current material stock at the production supply area is below the safety stock. When using demand-driven planning as the planning procedure, the planning also considers the dependent requirements from production assigned to the production supply area and a requirements date and time within the specified planning horizon. As soon as the projected material stock at the production supply area within the planning horizon—derived from the current material stock by considering all the demands and existing JIT calls still to be supplied—falls below the safety stock, a new JIT call is scheduled accordingly.

The material stock considered is either represented by the material stock at the storage location assigned to the production supply area or by the material stock at the warehouse storage bin assigned to the JIT control cycle. The use of warehouse storage bins is recommended if component materials are replenished to multiple production supply areas at the same production line and the same storage location is used for all production supply areas of that production line. Very often, the consumption posting of component materials is done much later than the physical consumption or usage in production. Reporting point confirmations reduce that time lag, but even if the reporting is done close to the physical usage, quite often the volume of components confirmed requires a collective posting decoupled from the reporting of production confirmations.
For that reason, the calculation of the physical stock of materials at the production supply area also considers reported production confirmations for materials for which a goods issue has not yet been posted. If the source of supply is an internal source, the JIT calls are used to transfer stock from a source storage location to the destination storage location, in one or two steps. If the stock is managed in a warehouse, the transfer is done using warehouse tasks created

based on the JIT calls. The JIT calls get updated from that stock transfer, and the status is set to completed when the stock is received at the production supply area, either by having the stock available in the warehouse storage bin specified in the JIT control cycle or by having the stock available at the destination storage location. If the source of supply is an external supplier, the JIT call is sent to the supplier via an EDI message. With receipt of the material delivered, the goods receipt posting updates the JIT call and posts the stock directly to the storage location of the production supply area or to the warehouse storage bin specified in the JIT control cycle. The goods receipt posting also updates the purchase scheduling agreement item assigned to the JIT control cycle in the case of external replenishment.

In complex cases of just-in-sequence processing, the JIT calls are created specifically for individual planned orders or production orders. As those are processed in a certain production sequence, the request is of type sequenced JIT call. If a preassembled product is requested by its components, a JIT-specific BOM must be maintained including all possible component materials. In terms of JIT call processing, the preassembled product is represented as a component group material. The JIT control cycle for sequenced JIT calls is maintained for the component group material. The component materials are assigned to the JIT control cycle automatically based on the JIT BOM for the component group material. For each of the component group materials, a separate sequenced JIT call is created containing all related component materials for that specific planned order or production order. For each vehicle assembled, the system creates as many sequenced JIT calls as there are individual component groups determined for its components.

Sequenced JIT calls are created some days before the vehicle assembly starts and are subject to multiple updates until then. Updates can be caused by scheduling and rescheduling of planned orders or production orders until those are fixed. Changes to the order based on a re-explosion of the BOM could also result in an update of the JIT call and a change of its components. Sequence numbers are assigned in manufacturing execution systems based on production sequencing and changed in case of resequencing. Status updates are also provided based on production confirmations. Reporting points at the start or end of production lines could trigger updates to the sequenced JIT calls. Often the start of assembly is considered the final status of the JIT call. Relevant JIT call updates are forwarded to the supplier. The supplier receives JIT calls from customers, either as summarized JIT calls or sequenced JIT calls. JIT-specific master data must be set up for processing customer JIT calls.

12.5.7

Predictive Material and Resource Planning

Operational material planning, like MRP, is executed daily. However, enterprises have to react to rising demand for their products and increase the production capacity of their plants. SAP S/4HANA provides predictive material and resource planning (predictive MRP) for such long-term preparatory planning. Typically, enterprises use predictive MRP on a monthly or weekly cycle. The planning horizon of this preparation is mid- to long-term, which means it typically spans 3 to 12 months, because the factory needs some time to implement proposed changes to the parameters. For example, adding an additional shift to increase production capacity requires an agreement with HR, and workers must be informed early enough.

Predictive MRP has its own persistence to facilitate simulation and planning. This simplified image for planning (see Figure 12.10) is based on the following data: a set of master data objects, the relationships between these objects, the top-level demand, and the existing stock for products.

Figure 12.10

Architecture of Predictive MRP

For a certain set of objects, the predictive MRP engine calculates pegging relations between product demand, product receipts, capacity demand, and some given product stock, forming a pegging network. Predictive MRP uses the pegging network to predict required resources for in-house production (resource utilization), requirements for suppliers for externally procured products (schedule line forecast), and a forecast for externally procured products without a supplier. The predictive MRP scenario consists of three steps:

1. A batch job regularly—every night or every week—analyzes the operative master data, simplifies it, and copies this simplified image to the data model of a predictive MRP scenario. The batch job also transfers aggregated document data like forecasts and stock quantities.

2. When the planner interacts with the simplified image for planning, the predictive MRP engine calculates the result in real time. The planner then adjusts settings in the scenario, analyzes the results in real time, and takes further corrective actions. This is done in iterative cycles. To allow what-if analysis, the planner can work with multiple simulation versions.

3. If the planner is fine with the adjustments, the planner releases the results, like a confirmed product forecast, changed shift models, or updated sourcing rules, back to the SAP S/4HANA operative data. After that, the operational planning is executed using the updated planning parameters, like a confirmed product forecast, shift models, or sourcing rules.

KPIs are calculated in real time to identify: the pegged top-level demand for any receipt or demand in the pegging network; the pegged top-level demand for any capacity requirement; the capacity requirement (indirectly) caused for in-house production for each top-level demand; and the supplier demand (indirectly) caused for externally procured products for each top-level demand. In principle, by traversing the pegging network, you can calculate any other KPI based on date and quantity. The time dimension is stored and processed in buckets. Buckets can be weeks or months, whereas the calculation of offsets (like production time) can be in (work or calendar) days.

12.5.8

12.5.8 Capacity Planning

The result of the MRP is a list of planned orders creating internal requirement elements (Section 12.5.3 and Figure 12.1). The entire calculation is based on an infinite planning strategy that does not take constraints of work centers, shifts, or factory calendars into account. The MRP solely plans based on the material flow. Thus, all parameters in the master data defining any durations (such as lead time) are gross times. After an MRP run, capacity planning is used to dispatch the planned orders and converted production orders to the work centers so that any work center overload is avoided. Possible means to do this are as follows:

- Checking the overload situation in the work center and changing the work center capacities (adding shifts or adjusting efficiencies)
- Rescheduling production orders or planned order operations
- Changing internal sourcing (changing the production version, transferring stock from another plant)
- External procurement (purchasing or subcontracting)

Unlike MRP, capacity planning is based on the net times defined in the master data. For efficient capacity leveling, the option to define pacemaker work centers (which have a decisive influence during production execution) or capacity buckets can be used. Different strategies for capacity leveling are supported, which include find slot and insert operation. For capacity leveling, the scheduling direction can be chosen to control whether the affected orders are scheduled backwards or forwards in time. Capacity leveling uses SAP liveCache as an efficient scheduling engine.
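
As a rough illustration of the find slot strategy, the following sketch searches weekly capacity buckets for the first bucket with enough free capacity, either forward or backward from a given start bucket. The data and function names are assumptions, and the real leveling in SAP liveCache is considerably more sophisticated.

# Illustrative "find slot" leveling in weekly buckets (assumed data, not the SAP algorithm).
capacity_per_week = {1: 40.0, 2: 40.0, 3: 40.0, 4: 40.0}   # available hours per bucket
load_per_week = {1: 38.0, 2: 25.0, 3: 40.0, 4: 10.0}       # already dispatched hours

def find_slot(required_hours, start_week, forward=True):
    """Return the first bucket (forward or backward) with enough free capacity."""
    weeks = sorted(capacity_per_week)
    if forward:
        candidates = [w for w in weeks if w >= start_week]
    else:
        candidates = [w for w in reversed(weeks) if w <= start_week]
    for week in candidates:
        if capacity_per_week[week] - load_per_week[week] >= required_hours:
            return week
    return None   # no slot found: the overload has to be resolved differently

print(find_slot(required_hours=12.0, start_week=1))                  # -> 2 (forward)
print(find_slot(required_hours=12.0, start_week=4, forward=False))   # -> 4 (backward)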

12.5.9 Production Planning and Detailed Scheduling

Production planning and detailed scheduling (PP/DS) is a tool that supports you with various optimization algorithms during production planning operations. Traditionally, PP/DS functionality required a two-system landscape with PP/DS running in a separate SAP APO server in addition to the SAP ERP system. In SAP S/4HANA on-premise, PP/DS is embedded and an integral part of logistics. It can be used to refine the planning results created by MRP Live (Section 12.5.3). The embedded PP/DS uses the SAP liveCache functionality of SAP HANA. If a material is flagged for PP/DS in the material master, it is replicated to the PP/DS object store in SAP liveCache. There, the planning and detailed scheduling operations are performed with high performance in memory.

12.6 Cross Functions in Logistics and Manufacturing

The logistics and manufacturing cross functions implement business objects that are used in multiple logistics processes. Typically, such cross functions require separate customizing and master data so that they can integrate with the standard business process. Often, dedicated cross functions are mandatory in special industry processes, such as batch management being mandatory in the pharmaceutical industry.

12.6.1 Batch Management

Batches allow subdivision of one material's stock into different groups based on common characteristics. For example, batch A of vitamin pills is produced from batch 001 of ascorbic acid with a shelf-life of three months and batch 101 of sodium carbonate with a shelf-life of six months. Some characteristics represent system fields, such as production date or shelf-life. Some characteristics are freely defined key/value pairs representing technical or physical attributes. SAP S/4HANA can track these groups and process them automatically based on their characteristics. Normally, batches are maintained on the material or material/plant level for each material (product master settings). Each material batch is identified by a unique number, and its inventory has a dedicated lifecycle starting with the batch creation and ending with the consumption of the last material batch stock. Figure 12.11 shows the batch business object and its relation to other processes in SAP S/4HANA.
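
A simplified picture of characteristic-based batch selection is sketched below. Note that batch determination in SAP S/4HANA is configured via the condition technique and selection criteria; the records and the FEFO-style rule here are only an illustrative approximation with assumed field names.

from datetime import date

# Assumed batch records with a few characteristics; not the SAP batch data model.
batches = [
    {"batch": "001", "material": "ASCORBIC_ACID", "expires": date(2025, 3, 31), "purity": 99.5},
    {"batch": "002", "material": "ASCORBIC_ACID", "expires": date(2025, 1, 31), "purity": 99.9},
    {"batch": "003", "material": "ASCORBIC_ACID", "expires": date(2024, 12, 1), "purity": 98.0},
]

def select_batch(material, min_purity, needed_by):
    """Pick the matching batch that expires first but not before the usage date (FEFO)."""
    candidates = [b for b in batches
                  if b["material"] == material
                  and b["purity"] >= min_purity
                  and b["expires"] >= needed_by]
    return min(candidates, key=lambda b: b["expires"], default=None)

print(select_batch("ASCORBIC_ACID", min_purity=99.0, needed_by=date(2025, 1, 15)))
# -> batch 002, the earliest-expiring batch that still meets both criteria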

Figure 12.11 Batch Management

12.6.2 Quality Management

Quality management provides functionality to ensure the quality of the products that are produced, procured, and sold. It is based on business objects for the prevention, detection, and elimination of defects. It enables you to establish and document business processes that meet standards like EN ISO 9000. Because of its ubiquitous nature, quality plays a role for internal processes and for the exchange with suppliers and customers. Let’s start with the primary motivation to strive for good quality—the expectation of all customers. The quality info record in sales business object allows you to assign a
quality agreement and technical delivery terms by customer and sales organization using the document management system. It also allows you to define quality inspections for a specific material, which need to be successfully passed. If they’re not, the delivery of the material can be blocked (see Figure 12.12).

Figure 12.12 Quality Management in Sales

Along with the delivery papers, the customer often demands a quality certificate that proves certain characteristic values. The data retrieval for the required quality characteristic is implemented by the quality certificate profile business object. The assignment of certificate profiles to customers, materials, or other attributes of the delivery is managed in a flexible way with the condition technique. When you procure products from suppliers, you expect good quality. The quality aspects of the relation to a supplier can be documented in the quality info record in procurement business object. This is defined per supplier, material, revision level, and plant. You can assign the quality assurance agreement to it using the document management system. It also triggers quality inspections for goods receipt or supplier source inspections, the creation of quality certificates, and the release of certain process steps in the procurement process (see Figure 12.13). Incoming certificates sent by the supplier can be assigned and managed by the quality certificate in procurement business object. Quality inspections are done to monitor quality of the products sold to customers and the products received from suppliers. The inspection lot business object is used to cover a quality inspection for a certain business process like goods receipt or delivery. How the inspection will be conducted is defined in the quality view and especially in the inspection setup business node of the product master business object.

Figure 12.13 Quality Management in Procurement

The content of an inspection is planned by using a variety of master data. The material specification, inspection plan, and (for in-production inspection) production plan business objects contain the inspection plan characteristics business object, which defines the qualities to be measured. The inspection plan characteristics can refer to the master inspection characteristic business object. The inspection method business object can be used to describe how the characteristic must be measured. It allows you to attach documents using the document management system and can be assigned to a master inspection characteristic or directly to characteristics in the inspection plan. The sample procedure, sampling scheme, quality level, and dynamization rules business objects define the selection of the sample to be controlled. The inspection point business object node defines the location where the inspection takes place. In the process industry in particular, the inspection is done on numbered samples, which are represented by the material sample business object and planned by the sample drawing procedure master data business object.

Getting a view of an element's true quality is essential to react accordingly and ensure good quality if something goes wrong. The defect business object is used to document all quality issues that occur during internal processes like quality inspection. Measures to remove the defect or for the prevention of recurrence can be documented in the quality task business object. The problem-solving process business object guides you through a more detailed analysis of the defect according to a methodology of quality management like eight disciplines problem solving (8Ds). All incidents that your customers report or that you report to your suppliers can be documented in the quality notification business object. It can also be used to execute detailed analysis of or cost assignment to defects, for which the defect can be converted into the quality notification item business object node. The failure mode and effect analysis (FMEA) is a standard approach to prevent or detect quality issues and is covered by the FMEA business object, which is built on top of audit management.

Finally, there are objects to document compliance with quality standards like the audit business object. The content of the audit is maintained in the question list business object. Audits are planned within the audit plan business object. The business objects of audit management use the generic project planning (CGPL) engine.

12.6.3 Handling Unit Management

Handling unit management is used to reflect packaging in logistics processes. Virtually representing physical packaging provides several advantages. A handling unit contains a set of products packed together. Goods movement processing is made more realistic and efficient by moving the handling unit itself—the package—instead of the included materials. In discrete manufacturing, handling units can be planned against production orders. With the corresponding output management for handling units, handling unit labels can be printed and placed on the corresponding load carrier once a produced material is
placed inside. The handling unit and its unique handling unit ID can then be used to post a produced material goods receipt. Handling unit planning can also be performed for repetitive manufacturing processes. Backflushing of materials in the case of existing handling units can then also be conducted using the respective handling unit identifier. Besides creating handling units in production processes, handling units can also be directly received from suppliers via inbound EDI. In this case, handling units are part of the inbound delivery, from which they are put into stock or directly used within production.

Although using handling units as early as production provides significant benefits, it's also possible to create handling units only at the time of shipping. In the sales order, customers can define a packing proposal, which can later be used to create handling units in the outbound delivery. Alternatively, handling units can be directly created in the outbound delivery without packing proposals from the sales order. To simplify handling unit creation and to avoid input errors, a set of packaging master data objects can be used:

- Packing instructions
- Determination records

In the packing instruction, manufacturers can define how a handling unit should be packed by providing a packaging material as a load carrier, as well as the respective materials and quantities that should be packed onto the load carrier. To avoid having to create packing instructions for a potentially vast number of materials, manufacturers can also decide to create packing instructions for reference materials. These reference materials in turn must be maintained in the product master of the materials the manufacturer would like to have packed according to the respective reference material packing instruction. The packing instruction itself cannot be found by the automatic packing functionality in the corresponding packing dialogs unless there is a corresponding determination record. The determination record, based on the condition technique known from pricing determination, creates the link between certain conditions and packing instructions. The simplest of all conditions would be the direct relation from a material to a packing instruction, which should be drawn in case this material needs to be packed. However, manufacturers have the choice to expand the conditions to include ship-to parties and various other fields provided in the field catalog of the corresponding handling unit configuration. This differentiation is beneficial as soon as one material is packed differently depending on certain conditions. For example, it's possible to provide two different packing instructions for one material, depending on whether it is shipped to customer A or customer B.

In addition to using handling units for internal process optimization, they are also used to notify ship-to parties of the exact packaging they are about to receive. Using outbound advanced shipping notification (ASN), manufacturers can ensure that the handling units, including their content, are communicated to their customers prior to the actual shipment arriving.
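
The access logic of determination records can be pictured with a small sketch. The dictionary keys and names below are assumptions, not the actual condition tables; the function first looks for the most specific condition (material plus ship-to party) and then falls back to the material-only record.

# Illustrative determination records, keyed like condition records (assumed structure).
determination_records = {
    ("MAT_100", "CUSTOMER_A"): "PACKINSTR_PALLET_A",   # material + ship-to party
    ("MAT_100", None): "PACKINSTR_CARTON",             # material only (fallback)
}

def find_packing_instruction(material, ship_to):
    """Mimic an access sequence: most specific condition first, then the fallback."""
    for key in ((material, ship_to), (material, None)):
        if key in determination_records:
            return determination_records[key]
    return None

print(find_packing_instruction("MAT_100", "CUSTOMER_A"))   # PACKINSTR_PALLET_A
print(find_packing_instruction("MAT_100", "CUSTOMER_B"))   # PACKINSTR_CARTON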

To uniquely identify which serialized material belongs to which handling unit of any given reference document (that is, outbound delivery), serial numbers of those materials reference the handling unit and are also provided in the outbound EDI message.

12.6.4 Serial Number Management

Serial numbers allow the identification and differentiation between individual items of one material. This component therefore ideally supplements the product master record, which contains all data for describing and managing a piece of material, but does not enable you to differentiate between individual items of that material. Many logistics and manufacturing processes use serial numbers. The serial number profile defines how and when serial numbers are created and assigned during these processes. The serial number profile is a set of data that defines the conditions and business processes involved when assigning serial numbers to items of materials. The serial number profile is entered in the product master record at the plant level for the material to be managed. This means that an individual profile can be assigned to a piece of material per plant. In this way, a certain material can require serial numbers at one plant and not at others. If you use different profiles for a material in multiple plants, the plants must be logistically independent of one another because it is only possible to transfer stocks from one plant to another if the profiles are the same at both.

12.6.5 Inter-/Intracompany Stock Transport

Logistics uses stock transport orders to initiate stock transfers between different plants. There are four major process variants for stock transport orders:

1. Stock transport orders without outbound delivery
This process creates stock transports without an outbound delivery between plants of the same company code or plants enabled for intercompany clearing.

2. Stock transport orders with outbound delivery via shipping
This process creates stock transports with outbound delivery and shipping documents between plants of the same company code or plants enabled for intercompany clearing.

3. Stock transport orders with outbound delivery and billing document/invoice
This process creates stock transports with outbound delivery and shipping documents and billing and invoicing, allowing free calculation of the transfer price.

4. Stock transport orders using dedicated stock in transit
This process creates stock transports that allow you to separate the transfer of quantity and the transfer of title.

Which of the process variants is used depends on the organizational assignment of the plants, on the required shipping documents, on the cost assignments, and on the timing of transfer of quantity versus transfer of title.

12.7 Logistics Integration Scenarios

As described in this chapter, logistics functions are integrated with various other functions, both inside SAP S/4HANA and externally. As two important examples, this section addresses integration with warehouse management and with manufacturing execution systems.

12.7.1 Warehouse Management

The core logistics processes like goods receipt, goods issue, physical inventory, quality management, batch management, and stock transfer described in Section 12.1 can be integrated with extended warehouse management (see Chapter 13). In addition, it's possible to set up a communication scenario in which an external warehouse management system of a third-party logistics provider is used.

12.7.2 Manufacturing Execution Systems

Manufacturing execution systems (MES) represent the interface between the production planning system (SAP S/4HANA) and the process control system in manufacturing according to supervisory control and data acquisition (SCADA). MES translate in real time the outcome of the production planning into instructions for shop floor elements of the production process, such as machines, input, workers, and transport devices, and capture the outcome of each element, sending it back to the production planning system in SAP S/4HANA. From a business perspective, the production or process order is copied and distributed to the correct destination (MES instance), using filter criteria provided by the business object. After completion of a single production order's operations or of the entire production order, the data is relayed back to SAP S/4HANA as a production confirmation (see Section 12.4). While being processed by the MES instance, internal confirmations of the production order in SAP S/4HANA are blocked. In repetitive manufacturing, the planned order is copied to the MES system. From a technical perspective, asynchronous interfaces can be used to integrate SAP S/4HANA with an MES system using the SOAP protocol (recommended) or IDoc (discouraged as legacy technology). Alternatively, a synchronous interface via OData (recommended) or RFC (legacy technology) can be established.

12.8 Summary

In this chapter we gave an overview of the architecture of logistics and manufacturing functions in SAP S/4HANA. We started with an overview of the main components and data flows. Then we introduced the organizational units such as plant, storage location, and production supply area; the master data such as BOM, routing, work center, and production version; and the transactional business objects such as production order, material document, outbound delivery, and physical inventory. Next, we covered calculated business objects and engines for logistics. Here we discussed how inventory is calculated on the fly, and we explained how the different flavors of ATP work. We described MRP, with special focus on the optimized MRP Live. We discussed demand-driven MRP, kanban processing, just-in-time supply, predictive material and resource planning, and capacity planning. Next, we gave an overview of cross functions that are used in logistics and manufacturing, but also in other areas of SAP S/4HANA. Here we introduced the concepts of batch management, quality management, handling unit management, serial number management, and inter-/intracompany stock transport. Finally, we discussed integration with manufacturing execution systems and with extended warehouse management, which is the topic of the next chapter.

13 Extended Warehouse Management

Extended Warehouse Management is recommended for real-time support of the physical activities in the warehouse when bookkeeping inventory management is not enough.

Extended warehouse management (EWM) functionality in SAP S/4HANA supports the management of material flows in a warehouse, including the physical storage of tangible products precisely down to the level of storage bins. The term tangible products comprises material quantities and handling units for packed materials like pallets. EWM supports controlling the physical movements and operations of goods within the warehouse, like unloading, receiving, put away, picking, packing, loading, and warehouse internal transfer movements. EWM integrates with enterprise-level inventory management in logistics (see Chapter 12, Section 12.4), and it also supports logical stock postings like unrestricted use stock to stock in quality inspection, which might not have any impact on the physical stock.

13.1 Architecture Overview

Companies typically run a central instance of SAP S/4HANA for enterprise-wide functions that require a holistic view of the whole company, such as material requirements planning (MRP) or transportation management (TM). EWM can be used together with these enterprise-level applications in the same system or tenant. This is called embedded EWM. EWM can also be used by deploying a dedicated system near the geographical location of the warehouse. This second option is called decentralized EWM. The usage mode can be chosen per warehouse, and both options can be used in parallel for different warehouses with different business needs. In both options, embedded and decentralized, the same core functionality is provided, but there are also some differences in the integration to enterprise-level applications due to the technical difference of having a single central system/tenant or a separate warehouse system.

Decentralized EWM serves regional or local system deployment for lowering network latency times, which are important for highly frequent user interactions with the system or if EWM controls warehouse automation hardware. This deployment option can also be used for scale-out for multiple high-throughput warehouses with high system hardware sizing requirements. Decentralized EWM is tightly integrated with enterprise-level applications on the application process level, but with loose coupling on the technical level using asynchronous messaging. The loose coupling on the technical level allows for warehouse operations independent of the availability of the system of the enterprise-level applications. In the decentralized scenario, EWM receives a workload of planned warehouse processes, such as outbound delivery orders, and these can then be processed independently in the EWM system.

Embedded EWM provides a tightly coupled process integration with other enterprise-level applications in the same system. For example, material postings invoked by enterprise-level applications post the inventory and the warehouse stock with a single database commit.

EWM functionality offers comprehensive integration capabilities to enterprise-level applications in SAP S/4HANA and SAP ERP, as well as many other SAP solutions, as depicted in Figure 13.1. In addition, a variety of interfaces for integration of third-party systems is provided.

Figure 13.1 Architecture Overview of Embedded and Decentralized SAP Extended Warehouse Management

Efficient complex warehousing requires adaptations to the specific physical processes in the warehouse. Therefore, a guiding principle of EWM architecture is openness for custom enhancements and adaptations. Several hundred extension points are provided as BAdIs. In addition, technical frameworks, which are described in Section 13.9, allow for tailored adaptations to very specific needs. A frequent example is the enhancement and adaptation of the mobile UIs, which are used on the shop floor to efficiently cope with the details of the physical process flow.

13.2 Organizational Structure

The warehouse structure is represented in EWM by the following entities. The warehouse number identifies each warehouse to allow for multiple warehouses in a single system while still separating all system user accesses and processes per single warehouse. A storage bin represents the physical space in which goods are stored. A storage bin has an identifier, which is unique within a warehouse number. Storage bins are used in physical movements of goods and their identifiers are used to guide the warehouse operatives to physical locations in the warehouse. A storage type defines a group of storage bins with equal process characteristics; the physical bin characteristics do not have to be equal. Examples of storage types are fixed bin storage, pallet storage, and bulk storage. Storage types are also used for defining intersection areas of the warehouse with adjacent business areas. Production supply areas, intersection points with automated high racks, and warehouse doors are examples of such storage types. A storage section is an optional organizational element and a subset of the storage bins of a storage type. Storage sections are used in put-away strategies and can structure the physical storage layout by separating fast and slow-moving materials. An activity area is a logical grouping of bins for a specific process activity like picking. Bins from several storage types and storage sections can be assigned to the same activity area. The activity area can be used to define a working area for a group of warehouse operatives. It influences the system guidance for the shop floor workers, as described in Section 13.7. A staging area can be defined for goods issue and receiving. In outbound processes, it’s used for staging goods for loading; in inbound processes, it’s used after unloading for staging goods for put-away. A warehouse door is assigned to a warehouse and is the physical location where goods arrive or depart from the warehouse. Doors can be grouped to a loading point. A work center is an area in the warehouse where activities like packing or value-added services are processed. Shipping and receiving offices can be assigned to a warehouse. They allow you to control privileges of system users for delivery processing. A yard can be modeled in the system by using a separate storage type. A checkpoint is assigned to a yard and is the gate to the public street through which transport vehicles enter or leave the company’s property.

13.3 Master Data

The following general master data entities are used, which in a decentralized EWM scenario are replicated from the enterprise-level SAP S/4HANA system to the decentralized SAP S/4HANA system in which EWM operates:

- Product master
- Batch master and batch characteristics
- Business partner—for example, for the ship-to consignee

For details on product master and business partner, see Chapter 8. If product safety and stewardship is used, the dangerous goods master, hazardous substance master, and phrases data elements are also replicated. The major warehouse-specific master data entities are as follows:

- EWM-related views of the product master, which are locally maintained in case of a decentralized EWM system.
- Assignment of fixed storage bins (of fixed bin storage types) to products.
- A warehouse resource is a user or piece of equipment that can execute work in a warehouse. The resource is used, among others, in system-guided processing on the warehouse shop floor (see Section 13.7).
- A warehouse processor represents an individual warehouse employee. It is a business partner master record with the processor role and is used for labor management in EWM.
- The packaging specification defines work instructions for packing of materials into handling units and can also be used to define work instructions for value-added services.
- The inspection rule defines the rules for inspection document creation and supports simple logistical checks like counting.

13.4 Stock Management

One or multiple storage locations of one or multiple plants can be assigned to a warehouse number of EWM. Then EWM manages the stock at warehouse entity levels like storage bins. Storage locations of several enterprise-level SAP S/4HANA or SAP ERP systems can be integrated into a single warehouse managed by decentralized EWM in a localized SAP S/4HANA system. For managing the stock of several plants in a single warehouse, there is the concept of a dedicated entitled to dispose organization, which is entitled to dispose warehouse stock. This is represented by a business partner master record linked to each plant. All stock-related processes in the warehouse use entitled to dispose, and among other things, it allows you to control the system privileges of users to execute a process. EWM data objects like the warehouse stock and warehouse request items contain entitled to dispose as an attribute. For example, this ensures that outbound delivery processes for a specific plant can use only the warehouse stock of this specific plant, and it’s not possible to use the material stock of other plants. Thus, multiclient warehousing in a single warehouse is possible—for example, for the different branches of a company, which are modeled by different plants. The concept of entitled to dispose can also be used independent of the plant concept of logistics and allows EWM to support warehouse processes with warehouse stock that has no related inventory in inventory management (logistics). Stock management uses storage bins, resources such as a forklift, and transportation units, which represent, for example, the load capacity of a truck. Stock can either be unpacked material quantities or handling units on stock—for example, pallets that carry material stock. Although the business concept of a handling unit is a moving, tangible asset that can be handled by a person, the technical framework of handling unit management is also used to represent bins and transportation units by using a technical handling unit instance. Because bins and transportation units do not comply with the business definition of a handling unit, these technical handling units are made invisible for business users. The technical handling unit enables you to apply capacity checks for bins, transportation units, and handling units with a single technical framework. Moreover, the technical handling unit instance of a transportation unit allows for movements of transportation units in the yard by using warehouse tasks for the technical handling unit, like in the case of a movement of a pallet inside the warehouse. The smallest material stock unit in EWM is a quant, which is a material quantity with identical stock attributes, such as bin, material, batch, entitled to dispose, owner, and stock type. One separating quant attribute is the reference of a quant to a business document. Such a separating quant attribute induces the split of stock quantities into separate stock quant records. An outbound delivery order item reference exists, for example, in the quant that represents the picked stock of this delivery item after picking. This stock quantity is well separated from all other stock quantities of the same material in the same storage bin by means of the separate quant. The document reference in the quant establishes a hard assignment of the stock to a warehouse process instance—for example, an outbound delivery order item. A similar stock assignment exists for inbound
delivery items after goods receipt posting until the final put-away step is confirmed in the system. EWM differentiates between physical stock quantities and available stock quantities during warehouse task creation. The available stock quantity is calculated by using the physical stock reduced by the stock quantities already planned to be used by other open warehouse tasks. Material postings in the warehouse create a warehouse document that records the posting on the detailed level of the warehouse—for example, the bin. Some material postings, like post goods issue, integrate with logistics and create a material posting there. Among other things, this allows for integration with financial application components. The warehouse stock type allows you to differentiate stock like stock in quality inspection from unrestricted use stock. A stock type is also assigned to an availability group, which itself is assigned to a storage location. Warehouse storage types can be assigned to an availability group such that physical movements to this storage type can create a material posting between storage locations in logistics.
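
The relationship between quants, document-assigned stock, and available quantity can be illustrated as follows. The record layouts are deliberately simplified assumptions, not EWM table structures: the quant with a delivery reference is hard-assigned and therefore not free, and open warehouse tasks reduce the remaining free quantity further.

# Assumed, simplified quant records and open warehouse tasks (illustrative only).
quants = [
    {"bin": "01-02-03", "material": "MAT_100", "entitled": "PLANT_1000",
     "stock_type": "F2", "doc_ref": None, "qty": 80},
    # Separate quant: stock already picked for a delivery item, hard-assigned via doc_ref.
    {"bin": "01-02-03", "material": "MAT_100", "entitled": "PLANT_1000",
     "stock_type": "F2", "doc_ref": "OUTB_DLV_900001/10", "qty": 20},
]
open_tasks = [
    {"bin": "01-02-03", "material": "MAT_100", "entitled": "PLANT_1000", "qty": 30},
]

def available_quantity(bin_id, material, entitled):
    """Free physical quantity minus quantities reserved by open warehouse tasks."""
    free_physical = sum(q["qty"] for q in quants
                        if q["bin"] == bin_id and q["material"] == material
                        and q["entitled"] == entitled and q["doc_ref"] is None)
    reserved = sum(t["qty"] for t in open_tasks
                   if t["bin"] == bin_id and t["material"] == material
                   and t["entitled"] == entitled)
    return free_physical - reserved

print(available_quantity("01-02-03", "MAT_100", "PLANT_1000"))   # 80 - 30 = 50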

13.5 Application Components

The EWM functionality in SAP S/4HANA consists of several application components, which are depicted in Figure 13.2. Stock management allows you to manage stock-keeping units of material quantities. Handling unit management allows you to process physical units of packaging materials, such as pallets, which can contain packed material quantities. The inventory document of physical inventory management allows for recording the counting results of physical inventory and reporting quantity differences compared to book values on the bin level. The stock differences on the bin level are then analyzed and aggregated on the storage location level to enable postings to inventory management. Warehouse order processing supports the planning and execution of work packages for a warehouse operative. Several warehouse tasks may be bundled to a warehouse order that represents the work package—for example, several picks in a single pick path or several tasks for physical inventory. Exception management allows for reporting of exceptions during the execution of warehouse tasks.

Figure 13.2 Application Components of Extended Warehouse Management

The material flow system (MFS) allows you to connect to warehouse automation such as high racks by sending and receiving telegrams. A telegram consists of a defined structure and semantics for information exchange between two communication partner
systems. The telegram integrates with warehouse task processing. For example, by sending a telegram from EWM to the automation system, an EWM move task for a handling unit can be submitted to the automation system as an order to execute this movement. The automation system then sends back a telegram for confirmation of the movement when the physical movement has been completed. Upon receiving this telegram, the move task is confirmed in EWM. Thus, EWM can control the movements in the automatic warehouse areas. Warehouse request processing is the warehouse-level overall process planning and execution control. An example of a warehouse request is the outbound delivery order. Warehouse request processing is tightly integrated with enterprise-level applications that manage the enterprise-level process, such as a logistics outbound delivery. Wave management optimizes the work package creation for warehouse operatives. A wave defines the universe of warehouse request items used in one program run for warehouse order creation. Quality inspection in the warehouse can be done during inbound delivery processing by conducting simple logistical checks like counting. For sophisticated quality checks, such as in a laboratory, an integration to the enterprise-level Quality Management (QM; see Chapter 12, Section 12.6.2) can be used. Value-added service processing is performed at a work center and can be done in conjunction with inbound, outbound, and warehouse internal stock transfer warehouse request processing. Shipping and receiving allows you to manage loading and unloading activities of transportation units that represent a load capacity, like a truck or trailer. The loading and unloading activities integrate with inbound and outbound warehouse request processing. In conjunction with handling unit management and warehouse order processing, this lets you plan, execute, and track the movement and position of vehicles and transportation units in the yard. Dock appointment scheduling allows you to manage the capacity of loading points by the creation of time slots. Appointment creation and planning is performed against the time slot that represents the loading point capacity. Appointments can be created by warehouse clerks or by carriers in a self-service user interface. Warehouse billing allows you to count and to aggregate per time period the warehouse activities done by a specific group of warehouse operatives and done for a specific client of the warehouse. It also allows for accounting of capacity consumptions per client, like the number of occupied bins. These billing measurements are the basis for billing or self-billing processes, which technically reuse application components of transportation management. Labor management enables you to measure the work performance of individual warehouse employees against defined labor standards. Performance-based wage components can be granted by integration with payroll management in SAP Human Capital Management (SAP HCM).

13.6 Monitoring and Reporting

EWM includes several components for monitoring and reporting. The warehouse monitor lets you query and display the data of all EWM business objects within a warehouse—for example, warehouse stock or warehouse tasks. It enables operational monitoring of the execution activities within one warehouse with a large set of different reports, such as reports on exceptions in task execution. The warehouse monitor supports the insight-to-action interaction pattern by allowing you to trigger various functions and activities directly from the data reports. Operational reporting of key performance indicators for a warehouse is provided by several technical tools. Among others, analytical overview pages and Smart Business KPI applications are provided, which retrieve their analytical data via CDS views (see Chapter 4, Section 4.1). EWM also leverages interfaces to SAP BW/4HANA, in which EWM-specific content allows for strategic reporting of warehouse performance across several warehouses.

13.7 Process Automation

Multistep warehouse processes are supported in an automated way by storage control, an EWM framework that is tightly integrated with warehouse task processing. The system can automatically create follow-on warehouse tasks for the next step on confirmation of the execution of a warehouse task. Storage control can be based on a layout-oriented warehouse internal routing setup for fixed movement patterns between storage types. Storage control can also use a predefined storage process of multiple business process steps, like pick-pack-load, in which a warehouse internal routing can be set up for each step. The storage process and the current process step are stored in the handling units, which are routed through the process to enable process visibility per single handling unit. System-guided work procedures for the warehouse shop floor are provided by configuration of queues to which open warehouse tasks are assigned. Among others, the activity areas of source and destination can be used for queue determination during warehouse order creation. Resources, which represent warehouse operatives or automated equipment, can be assigned to these queues and will then execute the system-guided warehouse tasks, which might have been automatically created based on storage control. Exception management allows you to report exceptions in task execution and can create an alert based on an exception. The alert enables alert monitoring by shift supervisors, as well as pushing alerts to specific users by, for example, email. Exception management also allows you to configure automatic follow-up actions by using SAP Business Workflow. Exception management can automatically adapt related business documents to the exception that occurred, such as by reducing the planned quantity of an outbound delivery order item when a pick denial exception happens. The post processing framework (PPF) is used to automate steps that are not in the scope of storage control. It’s used, for example, for automated output control (printing) and wave assignment of warehouse request items.
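
A minimal sketch of process-oriented storage control, under assumed names and a fixed pick-pack-load process, could look like this: confirming a warehouse task triggers creation of the follow-on task for the next step, until the last step of the storage process is reached.

# Assumed storage process definition and routing; a sketch, not the EWM framework.
storage_process = ["PICK", "PACK", "LOAD"]
destination_per_step = {"PACK": "WORKCENTER_PACK", "LOAD": "DOOR_01"}

def confirm_task(handling_unit, current_step):
    """Confirm a warehouse task; storage control then creates the follow-on task."""
    next_index = storage_process.index(current_step) + 1
    if next_index == len(storage_process):
        return None                    # last step of the storage process confirmed
    next_step = storage_process[next_index]
    return {"handling_unit": handling_unit, "step": next_step,
            "destination": destination_per_step[next_step]}

follow_on = confirm_task("HU_12345", "PICK")
print(follow_on)   # follow-on task for step PACK, moving the HU to the packing work center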

13.8 User Interface

EWM provides several types of user interface for access. Analytical SAP Fiori apps like overview pages and Smart Business KPI apps support the shift supervisor in operational control of the shift. Integrated enterprise-wide search in the header area of the SAP Fiori launchpad supports the main business objects of EWM. Some SAP Fiori apps are tailored to suit desktop device form factors to be used in the warehouse office and at workstations in the warehouse. Other SAP Fiori apps are tailored to suit tablet form factors of mounted devices on a moving asset in the warehouse. They can provide graphical support for warehouse operatives in processes in which barcode validations are not enough. Mobile handheld devices are supported by using browser UIs based on Internet transaction server (ITS) mobile, which provide fast end-to-end dialog performance. This is required for the highly frequent synchronous interactions with EWM to support the flow of physical work steps in real time. Voice-enabling subsystems can be integrated by using ITS mobile with its XHTML+Voice interface for EWM transactions, which are dedicated for voice interaction patterns, such as pick-by-voice transactions. ABAP WebDynpro UIs are provided as applications on the SAP Fiori launchpad, such as a shipping cockpit for the office clerk. ABAP messaging channels are used to push relevant data changes of the backend system to the cockpit UI to enable monitoring of all shipping related activities in this cockpit. SAP GUI for HTML with harmonized SAP Fiori visual design is provided in the SAP Fiori launchpad. It’s used for general-purpose UIs that do not have a distinct use case and as such cannot follow the use case-driven design paradigm of SAP Fiori apps. An example is the warehouse monitor, which provides visibility to every EWM transactional document, warehouse stock, exception, and alert, and allows for many functions and actions, and thus provides a multipurpose general cockpit used by many different personas of the warehouse.

13.9 Technical Frameworks

The radio frequency (RF) framework of EWM is used for the technical implementation of transactions for mobile handheld devices. The transactions that run on such devices can merely be standardized on the logical level, like system-guided picking. They cannot be standardized on the detailed level of steps in the flow of the physical work with the required data and screens for each of the steps. Therefore, the RF framework introduced the concept of logical transactions, such as system-guided picking, which can be assigned to business user roles and which you can adapt and configure. Among other things, this configuration influences the functional step flow and the validations that must be performed. For example, a functional step flow could be the following:

1. The creation of a pick handling unit
2. A loop of steps for picking a single material into the handling unit
3. Staging the handling unit to the outbound staging area

All these steps can belong to the same logical transaction, while each step has its own technical transaction in the system. Configured system validations could be, for example, a mandatory barcode scan of the picked material and a mandatory barcode scan of the storage bin used for picking. The RF framework also allows you to develop new logical transactions with custom-developed screens and functions, without modifying the delivered EWM application.

The warehouse monitor framework of EWM is used for the technical implementation of the warehouse monitor. The user interface of the warehouse monitor provides a menu in the form of a tree control, which consists of elements called monitor nodes. Each monitor node is either a folder for structuring according to business areas or an executable report within such a business area folder. The warehouse monitor provides a wide range of user-specific personalization capabilities. You can, for example, create your own personalized monitor nodes (called variant nodes) with predefined report selection variants that you often use. The warehouse monitor framework also offers configuration capabilities that can be used for custom adaptations for many users, like providing new nodes, node-specific functions, and user role-specific setup of content. Among other things, configuration allows for hiding features in the monitor that aren't used in a specific warehouse. Configuration also allows for custom features like adding custom monitor nodes, which could be based on a slightly adapted standard monitor node. For example, this can be used for providing a variant of the list view of the report display and the available functions in this report list view for all users in a warehouse. The warehouse monitor framework is a technical tool that supports enhancements by new monitor nodes with enhanced new reporting capabilities provided by custom development.
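
The logical transaction concept described at the beginning of this section can be pictured as a configurable step flow with per-step validations. The structures and function below are illustrative assumptions, not the RF framework API; they only show how configured validations gate each step.

# Sketch of a configurable step flow with per-step barcode validations (assumed names).
step_flow = [
    {"step": "CREATE_PICK_HU", "validations": []},
    {"step": "PICK_MATERIAL", "validations": ["scan_material", "scan_source_bin"]},
    {"step": "STAGE_HU", "validations": ["scan_destination_bin"]},
]

def run_logical_transaction(flow, scans):
    """Execute each step only if all configured validations succeeded."""
    for step in flow:
        for validation in step["validations"]:
            if not scans.get(validation):
                raise ValueError(f"{step['step']}: validation '{validation}' failed")
        print(f"{step['step']} completed")

run_logical_transaction(step_flow, {"scan_material": True, "scan_source_bin": True,
                                    "scan_destination_bin": True})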

13.10 Warehouse Automation

Warehouse automation comprises automated hardware such as high racks, conveyor belts, and autonomous guided vehicles (AGVs). EWM offers two scenarios for warehouse automation. For the first scenario, application link enabling (ALE) with IDoc interfaces is provided to integrate EWM with warehouse control unit (WCU) subsystems. The real-time routing decisions for moving goods within the automated area are fully controlled by the WCU. EWM does not store or receive data about the detailed movements in the automation area and its detailed internal structure. The WCU is often a black box and requires the involvement of the hardware vendor for any small change. In the second scenario, the material flow system (MFS) (Section 13.5) enables EWM to assume control on the WCU level. The MFS provides an interface for direct communication with programmable logic controllers (PLCs), which control the automation hardware. This interface uses ABAP push channels to provide a direct TCP/IP socket communication. As explained in Section 13.5, the MFS communicates with other components via telegrams. With the configuration of telegrams, the business semantics of a specific interface call can be defined. High-throughput telegram interchange is provided by many parallel communication channels. The MFS provides near-real-time routing decisions based on the structure of the automated area, which is modeled in EWM—for example, conveyor segments. The MFS scenario provides the ability to adapt to hardware changes by changing the structure data of the automated area and by changing routing rules in EWM. The MFS can be used to control several automation areas equipped with hardware from different vendors with a single technical concept.
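
The telegram-based interaction of the MFS scenario can be sketched conceptually as follows. The Telegram structure and field names are assumptions for illustration; real telegram layouts and their semantics are configured per PLC interface. The handler confirms the corresponding EWM move task when a confirmation telegram arrives.

from dataclasses import dataclass

# Illustrative telegram structure; not an actual MFS telegram layout.
@dataclass
class Telegram:
    telegram_type: str      # e.g. "MOVE_CONFIRMATION"
    handling_unit: str
    source: str
    destination: str

open_move_tasks = {"HU_777": {"destination": "HIGHRACK_A_0105", "status": "OPEN"}}

def on_telegram_received(telegram: Telegram):
    """Confirm the move task when the automation system reports the completed movement."""
    if telegram.telegram_type == "MOVE_CONFIRMATION":
        task = open_move_tasks.get(telegram.handling_unit)
        if task and task["destination"] == telegram.destination:
            task["status"] = "CONFIRMED"

on_telegram_received(Telegram("MOVE_CONFIRMATION", "HU_777", "CONV_SEG_01", "HIGHRACK_A_0105"))
print(open_move_tasks["HU_777"]["status"])   # CONFIRMED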

13.11 Summary

EWM covers the whole variety of warehouse types of a company. It provides comprehensive high-end functionality for complex large warehouses and also supports simple manual processes in small warehouses. In this chapter we presented an architecture overview of EWM and explained the difference between embedded and decentralized EWM. We discussed how the warehouse structure is described, gave an overview of the master data relevant for EWM, and explained the concepts of stock management. Next, we gave an overview of EWM application components, including, for example, warehouse order processing, exception management, the material flow system, shipping and receiving, and dock appointment scheduling. We covered EWM monitoring and reporting, process automation, and the different user interface options, including voice control. Finally, we looked at technical frameworks, such as the radio frequency framework and the warehouse monitor framework, and explained the principles of warehouse automation. The financial functions can be viewed as the foundation of ERP software. In the next chapter we will have a look at this central application area in SAP S/4HANA and explain the architecture of the functions for finance, governance, risk, and compliance.

14 Finance, Governance, Risk, and Compliance

Finance as well as Governance, Risk, and Compliance in SAP S/4HANA is tailored for the intelligent enterprise. Supporting steering effectiveness, process efficiency, and continuous compliance in the best possible way requires a simplified, innovative, and powerful business architecture.

From an architectural perspective, SAP S/4HANA Finance as well as governance, risk, and compliance is not just an evolution of the respective components in SAP Business Suite; it's a big leap forward due to the introduction of new technologies, such as SAP HANA, machine learning, and robotic process automation (RPA), but also due to the modernization of core business architecture concepts within finance. Totals tables have been removed, allowing users to analyze data on the lowest level of detail with incredible speed. Many line item tables in core finance have been consolidated into the Universal Journal as the single source of truth for financial accounting and management accounting. By doing this, redundancy has been eliminated and the data footprint of the solution has become smaller. Group reporting works seamlessly on top of the Universal Journal, with aligned data models between the local close and group close. Machine learning scenarios have been introduced—for example, in the area of open item clearing in order to boost automation. Predictive accounting has been introduced to deliver a new approach to handling finance data based on totally new business architectural concepts.

As you can see in Figure 14.1, finance and governance, risk, and compliance (GRC) functionalities are highly integrated into SAP's complete portfolio. SAP cloud solutions such as SAP Concur, SAP Ariba, SAP Fieldglass, and SAP SuccessFactors send their respective business documents, like expense reports, invoices, or time confirmations, automatically to SAP S/4HANA Finance. SAP S/4HANA applications like procurement, sales, service, and logistics also transfer their business process data into finance as before. SAP Cloud Platform provides services for finance business processes like multibank connectivity, digital payments, market data interfacing, or a closing cockpit. Of course, both SAP S/4HANA Finance and the governance, risk, and compliance solutions in SAP S/4HANA offer a complete set of best-in-class functionality. The respective main functional building blocks are as follows:

- Accounting and financial close help accountants record all business transactions and provide financial statement reporting according to all requested accounting principles for the single entity and the group.
- Tax and legal management help the legal and tax departments to take care of calculating all kinds of value-added tax, sales tax, and use tax and provide functionality for tax reporting in many countries of the world.
- Financial planning and analysis give the controlling departments full and detailed insight into all financial figures to steer the company effectively and in real time.
- Accounts receivable and accounts payable help the respective accountants to manage outgoing and incoming invoices and take care of the respective payments for the whole enterprise.
- Treasury management provides a full-fledged solution portfolio for treasury managers to manage working capital, payments, cash, and financial risks with integrated, real-time solutions.
- Enterprise risk and compliance (GRC) solutions allow GRC experts to effectively handle risk, control, and assurance tasks within the organization.

Figure 14.1 Overview of Finance and Governance, Risk, and Compliance in SAP S/4HANA

14.1 Finance Architecture Overview

Now let’s take a more detailed look at the architecture of SAP S/4HANA Finance and zoom into the central box of Figure 14.1. From an architectural perspective, one of the key elements is the finance interface (also called the accounting interface; see Figure 14.2). This interface is the main entry point to post data from the most important core business processes like sell-from-stock, procure-to-pay, and make-to-stock into the SAP S/4HANA Finance core. Besides posting, the interface can be used for clearing purposes, such as matching open items with incoming payments. The implementation of the accounting interface contains many check routines to guarantee consistency of data across all receiving components like tax management, accounts payable and accounts receivable (AP/AR), accounting, and so on. Besides this, the internal runtime document is enriched with data not provided directly from sender components invoking the accounting interface. Examples of enrichment are the determination of profit centers or of other account assignments according to defined rules. Check and determination routines are provided by the respective expert component (such as AP/AR, asset, or general ledger, as depicted in Figure 14.2) and applied to the internal runtime document. This guarantees reconciled data in the different database tables that are written after data enrichment and determination have been fully processed.
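
The check-and-enrich behavior of the accounting interface can be sketched conceptually. The field names, the derivation rule table, and the function below are assumptions, not the actual interface; the sketch only shows the order of steps: validate the document, derive missing account assignments such as the profit center, and then hand the enriched document over to the receiving components.

# Conceptual sketch of check and enrichment on an internal runtime document (assumed names).
derivation_rules = {("4711", "COST_CENTER_100"): "PROFIT_CENTER_A"}   # simple rule table

def post_document(journal_entry):
    # 1. Checks: the document must balance.
    if round(sum(line["amount"] for line in journal_entry["lines"]), 2) != 0.0:
        raise ValueError("Document does not balance")
    # 2. Enrichment: derive missing account assignments, e.g. the profit center.
    for line in journal_entry["lines"]:
        if not line.get("profit_center"):
            key = (line["gl_account"], line.get("cost_center"))
            line["profit_center"] = derivation_rules.get(key)
    # 3. Hand the enriched document over to the receiving components (G/L, AP/AR, tax, ...).
    return journal_entry

doc = post_document({"lines": [
    {"gl_account": "4711", "cost_center": "COST_CENTER_100", "amount": 100.0},
    {"gl_account": "160000", "amount": -100.0, "profit_center": "PROFIT_CENTER_A"},
]})
print(doc["lines"][0]["profit_center"])   # PROFIT_CENTER_A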

Figure 14.2 Core Elements of the Finance Architecture in SAP S/4HANA (ABAP Platform)

Accounting, financial planning and analysis, and group reporting are the elementary components for the financial close (local close and group close). Figure 14.2 shows some of the most important tables:

- Table BSET stores tax details such as VAT, sales tax, and use tax, needed among other reasons for a proper tax declaration.
- Table BSEG stores AP/AR open items resulting from invoices and is needed to process payments and clearings on these open items.
- The Universal Journal for actual data, table ACDOCA, is the single source of truth for all accounting components, such as the general ledger, asset accounting, and material ledger, but also for components belonging to financial planning and analysis, like overhead cost or margin analysis.
- Table ACDOCP is the Universal Journal for plan data and from a data model perspective is fully aligned with table ACDOCA.
- Table ACDOCU belongs to group reporting and stores data resulting from consolidation processes like intercompany eliminations. The group reporting interface itself is not directly part of the accounting interface.

There are a few other well-known finance components depicted. Contract accounting is used in industries like telecommunications, insurance, and high tech, where mass payments for many business partners must be processed. Contract accounting handles the respective accounts receivable processes and posts summarized data to the accounting interface; therefore, it is not directly part of the accounting interface. An important line item table in contract accounting is table DFKKOP.

The treasury management application is often operated as a separate SAP S/4HANA system (treasury workstation). This makes it possible to implement frequently needed software updates in the treasury application without changing other SAP S/4HANA applications. The treasury management application is loosely coupled with the other SAP S/4HANA Finance components and does not receive data via the accounting interface either. A rather new table is table FQM_FLOW, used for operational cash management purposes.

14.2 Accounting

The Universal Journal provides the common basis for all components of accounting and financial planning and analysis and is the heart of the new architecture of SAP S/4HANA Finance (see Figure 14.3). It provides a common persistence for transaction data: the database tables ACDOCA for actual data and ACDOCP for planning data. The harmonized data model contains all required details of the subledger processes and all reporting-relevant details, including journal entry header fields. The Universal Journal is designed to support integrated seamless reporting across the whole of accounting. You can drill down from aggregated reports like financial statements to more detailed reports in the subledgers and also to line items and journal entries to provide transparency and the full audit trail. The subledgers do not store copies of business transactions in application-specific line item tables as used to be the case in SAP ERP. The only exception is table BSEG, which is still needed as the basis for open item management. Thus, there is no more need to reconcile financial figures, such as between controlling and the general ledger, because all financial reports rely on the same line items.
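
The idea of aggregating line items on the fly instead of maintaining totals tables can be illustrated with a toy example. The records below stand in for Universal Journal line items with heavily simplified, assumed field names; the same line items serve both the aggregated report and the drill-down.

from collections import defaultdict

# Toy journal entry line items standing in for Universal Journal records (simplified fields).
line_items = [
    {"ledger": "0L", "company": "1010", "account": "410000", "amount": -500.0},
    {"ledger": "0L", "company": "1010", "account": "410000", "amount": -300.0},
    {"ledger": "0L", "company": "1010", "account": "120000", "amount": 800.0},
]

def balances_by_account(ledger, company):
    """Aggregate on the fly instead of reading a persisted totals table."""
    totals = defaultdict(float)
    for item in line_items:
        if item["ledger"] == ledger and item["company"] == company:
            totals[item["account"]] += item["amount"]
    return dict(totals)

print(balances_by_account("0L", "1010"))                        # aggregated report
print([i for i in line_items if i["account"] == "410000"])      # drill-down to line items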

Figure 14.3 Overview of Accounting and Financial Planning and Analysis

The SAP HANA database can aggregate line items very efficiently for reporting or data processing, so there is no longer a need to separate the general ledger and the various subledgers. Therefore, there is no need for totals tables in the application. Totals tables used to be a reason for loss of details in reporting because they were restricted to 16 key fields. In addition, throughput benefits from not having the bottleneck of a central update of totals records, because massive parallel processing is now possible. The Universal Journal is organized in ledgers. Ledgers allow parallel accounting according to different accounting principles, such as for the local close according to the U.S. Generally Accepted Accounting Principles (GAAP), or to the German Handelsgesetzbuch (HGB), and for corporate group reporting according to IFRS. While standard ledgers contain the complete representation of business transactions and are therefore self-contained for financial reporting, extension ledgers provide a delta layer for manual adjustments to the actual data in table ACDOCA. This can reduce the

Finance provides interfaces for posting transactional data at several levels. Postings can be triggered from outside, such as by logistics or sales or from an external system, or from inside SAP S/4HANA Finance, like a manual journal entry or one of the period-end closing process steps of the subledgers. The logic of these processes is determined by one of the subledgers inside accounting or financial planning and analysis, like the depreciation run of fixed asset accounting or the cost allocation of overhead costs. A common check and derivation logic for all attributes in the accounting interface guarantees consistency. As described, the Universal Journal-based architecture led to a harmonized and significantly reduced data model. For example, you can see in Figure 14.4 how the number of tables used for storing actuals has been reduced in SAP S/4HANA Cloud. Note that not all tables are depicted and that some of the tables are still used for non-actual data. In SAP S/4HANA on-premise, some of these tables still exist for compatibility reasons.

Figure 14.4

Harmonized Data Model in Finance (Example View for SAP S/4HANA Cloud)

14.2.1 General Ledger

The online splitter functionality, known from new general ledger accounting (New G/L) in SAP ERP, is also part of the common check and determination logic in the accounting interface. It can transfer account assignments like profit centers or segments from the offsetting items in the same posting, or from preceding process steps, to line items without account assignments, like receivables, payables, or cash positions.

This might lead to a split of line items such as AP/AR items if more than one account assignment is used for the offsetting line items. Table ACDOCA contains the full details, so it contains the split representation of the open item, while table BSEG still contains the original AP/AR open item. Operational processes like payments, clearings, or dunning are still based on table BSEG. But the general ledger valuation processes (for example, foreign currency valuation, profit center allocation) no longer update table BSEG; tables BKPF and ACDOCA are the new complete representation of all journal entries. The line item number of table ACDOCA is a six-digit field. Therefore, you can post journal entries with up to 999,999 items into the Universal Journal, as long as table BSEG is not updated, or is updated with summarization so that the three-digit line item number of table BSEG is not exceeded. A use case for an extension ledger would be the need for a slight adjustment to the data of another ledger (for example, a legal ledger) with some manual postings, such as to provide management reporting or data according to another accounting principle, if only a few general ledger accounts are affected. In this case, an extension ledger is ideal instead of setting up another full standard ledger.
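A hedged sketch of what the split looks like at the table level: the same journal entry is read once from table ACDOCA, where the receivable can appear several times (once per derived profit center), and once from table BSEG, where the original unsplit open item is kept for operational processing. The document number, company code, and field names are illustrative.

" Split representation in the Universal Journal
SELECT FROM acdoca
  FIELDS docln, racct, prctr, hsl
  WHERE  rldnr  = '0L'
    AND  rbukrs = '1010'
    AND  belnr  = '0100000001'
    AND  gjahr  = '2020'
  ORDER BY docln
  INTO TABLE @DATA(lt_split_lines).        " receivable split by profit center

" Original open item used for payments, clearing, and dunning
SELECT FROM bseg
  FIELDS buzei, hkont, wrbtr
  WHERE  bukrs = '1010'
    AND  belnr = '0100000001'
    AND  gjahr = '2020'
  INTO TABLE @DATA(lt_open_items).         " unsplit AP/AR item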

14.2.2 Fixed Asset Accounting

Fixed assets such as buildings, vehicles, or machines are an essential part of a company’s assets and its financial activities. It’s crucial to be able to rely on complete, accurate, and up-to-date data when processing or analyzing fixed assets. National regulations and international accounting standards, as well as internal reporting requirements, have always been a key driver for technical developments within the asset accounting subledger. In the past, we saw evolutionary steps from traditional asset accounting in SAP ERP to the new asset accounting for SAP S/4HANA. Real-time postings for all relevant depreciation areas and connected accounting principles were one of the main features. This was the first time that the separate asset posting tables (such as ANLC, ANEA, and ANEK) were integrated from an architectural point of view into the Universal Journal, represented by table ACDOCA. Since then, it’s no longer necessary to persist total values; instead, the values are aggregated on the fly using SAP HANA technology. As a consequence, reconciliation postings are no longer necessary in closing operations because general ledger accounting and asset accounting are reconciled anyway due to the Universal Journal. Moreover, the Universal Journal provides new options for custom extensibility. However, the new asset accounting still faced architectural boundaries, ultimately leading to functional product limitations. Examples of such limitations are depreciation areas that were only needed to reflect a parallel currency in the general ledger or limitations for the management of alternative fiscal year variants per ledger.

Now, intelligent asset accounting goes even further beyond the concepts of new asset accounting. With SAP S/4HANA Cloud 1911, data structures were already radically simplified: the new master data record tables and the customizing tables were freed from redundancies with the general ledger configuration. Consequently, it was also necessary to provide new APIs to access the corresponding asset accounting functionality. Looking at the tables in SAP S/4HANA Cloud, we see that intelligent asset accounting now uses not only actual data in table ACDOCA, but also planned values in table ACDOCP; the internal line item storage in table FAAT_DOC_IT has also been removed and fully integrated into the Universal Journal architecture. If we dive deeper into the details of the customizing structure, we see that from a technical perspective all entities defined in customizing for intelligent asset accounting are connected to customizing entities of other components, such as the general ledger and controlling. One example is the new entity of the valuation view in asset accounting, which is connected to the accounting principle of the general ledger. The settings for ledgers or accounting principles are only made once. This means that, for example, currencies defined within a ledger are only maintained once in the general ledger, without redundant configuration in the subledgers. This leads to a simplification and reduction of customizing activities on the one hand, and to a much closer connection to and higher dependency on customizing settings on the other.

From a functional perspective, the direct connection of the asset accounting and general ledger entities leads to the following benefits:

- All currencies defined in the ledger settings are fully supported by intelligent asset accounting. Currently, it’s possible to define up to 10 parallel currencies within each ledger in the general ledger, compared to only the three parallel currencies supported in the past.
- In the future, intelligent asset accounting will be able to support alternative fiscal year variants for postings and for reporting.
- Processes that have a close integration with other areas, such as settlements to assets under construction or contract and lease accounting, benefit from this architecture revision. For example, it will be possible to perform ledger-specific controlling settlements.

Intelligent asset accounting has implemented these functional improvements using CDS views and BOPF to provide the technical basis for SAP Fiori apps (for programming model details, see Chapter 2, Section 2.2.1). Currently these are as follows:

- SAP Fiori apps for master data maintenance (already available in SAP S/4HANA Cloud)
- SAP Fiori apps for posting
- Analytical SAP Fiori apps for reporting, which are enabled by SAP HANA
- A basis for the development of new apps using artificial intelligence scenarios

With intelligent asset accounting, SAP leaves most of the limitations of the existing solutions behind. The first major changes in architecture, concepts, and table structures are already in place for SAP S/4HANA Cloud. The journey will continue in future cloud releases. Intelligent asset accounting will be made available for SAP S/4HANA on-premise, too. If we called the steps from traditional to new asset accounting evolutionary, we clearly must call the changes in intelligent asset accounting revolutionary.

14.2.3 Inventory Accounting

SAP S/4HANA started to implement the vision of a clear separation of responsibilities between logistics and finance applications. Inventory management handles inventory quantities in operational processes, whereas the material ledger manages the valuation of material inventories for multiple accounting standards in parallel. This separation of responsibilities is shown in Figure 14.5.

Figure 14.5

Inventory Accounting

When operational components such as inventory management process material-related business transactions, a runtime representation of a draft accounting document is prepared and passed to the accounting interface. In this preparation step, the material ledger serves as the source for valuation. Within the accounting interface, final processing steps take place and the accounting document is persisted. The central data persistency for the material ledger is the Universal Journal; material unit costs (prices) are stored in a separate database table. Analytical reporting, such as period balance reporting or line item reporting, uses the material ledger data of the Universal Journal. Inventory accounting offers a set of valuation methods for material inventories:

- Standard price valuates the material at one price, which is used for a time interval (for example, a fiscal year) to value its material flows.
- Moving average price calculates perpetual actual costs, but the timing of material movements and subsequent costs can have an impact on results.
- Actual costing valuates inventories, work in process, and cost of goods sold at the average periodic costs.
- Balance sheet valuation valuates inventories for subsequent use in external or internal reporting in compliance with the lowest value principle.

The material ledger integrates into the Universal Journal and valuates materials according to the standard price and moving average price methods. Actual costing and balance sheet valuation are implemented separately, as both have their own task-optimized data persistencies. However, the final results are integrated again into the material ledger and the Universal Journal.

Another goal of SAP S/4HANA is to significantly increase the transactional data throughput for material movements. When SAP Business Suite was developed, it came with a lot of materialized aggregate tables for period balances, which required setting exclusive SAP software enqueue locks on materials and database locks for updates during goods movements. When processing a high transaction volume, this type of locking mechanism becomes a limiting factor. Considering that the trend in most industries is toward more detailed tracking of transactions (for example, in the context of using serial numbers), it was essential to introduce a new way of locking during goods movements. In SAP S/4HANA, the new feature of nonexclusive locking overcomes these limits by:

- Using shared enqueue locks instead of exclusive enqueue locks
- Avoiding database locks by applying an insert-only approach instead of updating materialized aggregate tables
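The following is a minimal sketch, under assumed names, of the insert-only idea: instead of reading and updating a shared totals record under an exclusive lock, each goods movement appends its own delta record, and the period balance is aggregated when it is needed. The table ZMAT_DELTA, its fields, and the importing parameters are hypothetical and only illustrate the pattern; the actual material ledger persistence differs.

" Append-only recording of a goods movement (no read-modify-write and no
" exclusive lock on the material required).
DATA(ls_delta) = VALUE zmat_delta( matnr    = iv_matnr       " material
                                   werks    = iv_werks       " plant
                                   poper    = iv_period      " posting period
                                   quantity = iv_quantity    " signed quantity
                                   amount   = iv_amount ).   " signed value
INSERT zmat_delta FROM @ls_delta.

" Period balances are derived on the fly from the delta records.
SELECT FROM zmat_delta
  FIELDS matnr, werks, poper,
         SUM( quantity ) AS total_quantity,
         SUM( amount )   AS total_value
  WHERE  matnr = @iv_matnr
    AND  werks = @iv_werks
  GROUP BY matnr, werks, poper
  INTO TABLE @DATA(lt_period_balances).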

14.2.4 Lease Accounting

SAP Contract and Lease Management for SAP S/4HANA is part of the record-to-report process and is used to abstract lease agreements, accurately manage lease modifications, and generate all lease-related financial postings within the Universal Journal. The application manages lease renewal and termination options and eliminates the possibility of “evergreen” leases (leases that have been terminated or expired but for which the lessee continues to make payments) because it generates all lease payments based on the terms and conditions of the lease agreement. The application is part of the finance functionality, so it leverages standard financial calendars, foreign currency calculations, and tax calculations. The lease contract is used to manage standard leasing processes such as managing leasing terms, tracking payment types, executing renewal/termination options, generating reminders for critical dates, calculating sales-based rent, and supporting compliance with the IFRS16 and ASC842 lease accounting standards. There are typically two roles involved in leasing: the contract specialist and the valuation specialist. A key feature in the design of SAP Contract and Lease Management for SAP S/4HANA is that both roles leverage the same master data on the contract header (table VICNCN) and within the contract terms. Authorizations restrict the data that a user can view or modify. Specific transactions are also available to execute specific leasing processes such as lease payments, valuations, service charge settlements, percentage rent calculations, and rent adjustments.

The contract specialist is primarily involved in managing the lease agreement, including the abstraction of the lease, generating lease payments, and managing the lease through its lifecycle. These lifecycle activities include managing renewal and termination options, updating the lease for negotiated price increases (index adjustments), documenting lease modifications, and generating lease payments. Business partner functionality (see Chapter 8, Section 8.3) is used to manage all contacts related to the agreement and to provide the connection to the vendor within accounts payable. Based on the terms of the lease and the defined payment conditions, cashflow values are calculated for each object and the business partner assigned to the contract. These cashflow values are used by the application to generate the lease payments that are posted into the accounts payable ledger and within the Universal Journal.

The valuation specialist manages the lease accounting entries required to comply with the IFRS16 and ASC842 lease accounting standards, as well as other industry and local lease accounting rules. A valuation rule is assigned to the contract for each accounting standard to be managed by the lease. Valuation cashflows generated for each valuation rule are available to support the disclosure reporting required by these lease accounting standards. When a valuation rule is assigned to a lease and the lease is saved, a right-of-use (RoU) asset is created within asset accounting based on the asset class defined within configuration. When the initial lease valuation postings are generated, the acquisition value of the RoU asset is posted to the asset accounting subledger for each applicable depreciation area. All planned depreciation entries are managed by the contract. As part of the periodic closing process, all accounting entries required by the lease accounting standards are posted to the Universal Journal based on the current terms and conditions of the lease agreement. SAP Contract and Lease Management for SAP S/4HANA is integrated with SAP’s real estate management application, supporting the full real estate lifecycle.

14.2.5 Service and Sales Accounting

With service and sales accounting, all relevant business transactions triggered by a service or sales transaction are recorded in financial accounting (see Figure 14.6). The accounting-relevant service or sales process step is considered the event that triggers a revenue posting. Event-based revenue recognition enables revenues to be instantly recognized and displayed when certain events occur, such as when costs are incurred and posted in a project. With event-based revenue recognition, we cover the current industry requirements for comprehensive revenue recognition functionality, in particular the expectation of an intuitive-to-use application with a simplified period-end close, while supporting the required legal functionality. Event-based revenue recognition uses the Universal Journal and has no separate persistence, so no reconciliation efforts or settlement are required. The calculated results are immediately available in the general ledger, together with the original posting. Both the revenue posting and the original posting are carried out with account assignment (market segment information). The simultaneous updating of costs and revenues with the relevant allocations is an essential prerequisite for using margin analysis (Section 14.5.4). This means that, for example, the settlement of projects as a step in period-end closing is obsolete. One important characteristic of event-based revenue recognition is the need to ensure the matching principle for every single posting. For every cost and revenue posting, the corresponding revenue recognition document is created in real time. This allows up-to-date profit and loss and profitability reporting, as well as work-in-progress (WIP) reporting.

Figure 14.6

Architecture of Service and Sales Accounting

Event-based revenue recognition supports the following sales and service processes:

- Calculating and posting real-time revenue and cost adjustments for professional services for fixed-price, time-and-material, and periodic-service customer projects
- Providing the integration of enterprise projects with sales orders, which enables the direct flow of revenue to projects and goods issues for deliveries
- Posting revenues and cost adjustments for sales orders in real time when a goods issue is posted for sale from stock
- Analyzing recognition values for service contracts based on the referenced billing plan

Principles of Event-Based Revenue Recognition

Revenue recognition postings are generated simultaneously with the source documents and directly stored in accounting. This makes it possible to provide a real-time matching principle. The entry of a source document, such as a time confirmation or expense posting, creates two separate journal entries. Figure 14.7 shows an example that illustrates the principle of two journal entries created from one source document. Here the source document is a time confirmation from the cross-application time sheet (CATS) application for a fixed-price project. From this document, two postings are created:

1. A journal entry for the initial time confirmation source document (the controlling document).

2. The matching revenue recognition journal entry; the realized revenue is calculated based on the percentage of completion (actual cost divided by planned cost) and posted to the revenue adjustment general ledger account.
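A minimal sketch of the calculation behind the second journal entry, with purely illustrative numbers: for a fixed-price project with planned costs of 8,000 and planned revenue of 10,000, a time confirmation of 2,000 yields a percentage of completion of 25% and therefore a realized revenue of 2,500.

TYPES ty_amount TYPE p LENGTH 15 DECIMALS 2.

DATA lv_planned_cost    TYPE ty_amount VALUE '8000.00'.
DATA lv_planned_revenue TYPE ty_amount VALUE '10000.00'.
DATA lv_actual_cost     TYPE ty_amount VALUE '2000.00'.   " posted time confirmation

" Percentage of completion = actual cost / planned cost (here 0.25)
DATA(lv_realized_revenue) = CONV ty_amount(
    lv_planned_revenue * lv_actual_cost / lv_planned_cost ).   " 2,500.00 revenue adjustment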

Figure 14.7

Revenue Recognition

The functionality also supports ledger-specific postings, flexible currency handling, and the drilldown from balance sheet accounts selected by profitability attributes.

Period End

The application enables a fast period close because most of the revenue recognition postings are already recorded and only adjustment and clearing postings need to be made. The period-end closing run performs the following steps:

- It clears accruals against deferrals; in some methods, a reevaluation of the complete income statement is also possible.
- It considers IFRS15-specific requirements.
- It posts the calculated differences to the accounts.

Errors that occur during real-time postings are stored in an error work list so that the primary posting is always posted.

Monitoring

To view real-time revenue recognition results and revenue postings, the process-specific SAP Fiori monitoring apps can be used. In addition, the monitoring apps support the following:

- Manual postings
- Temporary adjustments
- Showing general ledger account line item reporting

Because event-based revenue recognition covers the common IFRS15 requirements in the customer project scenario and in the sales order scenario, the SAP Fiori app Allocated Revenue displays the allocated revenue (allocated transaction price). The standalone selling price (SSP) is used as the basis for the allocation. Support of IFRS15 functionality is provided for:

- Fixed-price customer projects
- Periodic-service customer projects
- Combinations of fixed-price and periodic-service customer projects
- Sales orders

The WIP Details SAP Fiori app displays details from the time- and expense-based billing referring to a customer project. The main purpose of the app is to explain the WIP posted in the Universal Journal. The WIP detail report is based on real-time data (project billing solution) and enriched by data from the Universal Journal.

Customizing Using Self-Service Configuration User Interfaces

In SAP S/4HANA Cloud, key users can use the following self-service configuration user interfaces (SSC UIs) for fine-tuning settings:

- Source assignments define which postings participate in revenue recognition. This is controlled using the cost elements.
- Posting rules define the account assignment of revenue recognition; here users assign usages and general ledger accounts to the sources. Usages tell the system how revenue recognition should treat the amounts of a particular source.
- Using the derivation of recognition keys, you can configure which recognition key should initially be derived when a project item or sales order item is created. The recognition key determines how revenue recognition is calculated. You maintain an initial recognition key if you do not want to use revenue recognition for one of the presented scenarios.

14.2.6 Group Reporting

The group reporting component in SAP S/4HANA Finance provides functions for a highly automated corporate close process, supporting both legal and management consolidation. This includes the collection of financial and nonfinancial data from subsidiaries, process controls and data validations, automated generation of elimination postings, and preparation of consolidated financial statements. The new SAP S/4HANA accounting architecture offers an unprecedented level of detail, and group reporting has been architected to offer this detail for the corporate close process. With group reporting, the entity and corporate close are unified via the use of shared dimensionality, master data, and task control. This provides the necessary tooling for business initiatives such as continuous accounting, which embeds automation, control, and period-end tasks within day-to-day activities. Group reporting is available in SAP S/4HANA on-premise and SAP S/4HANA Cloud. It can be used as a traditional consolidation solution decoupled from the accounting process or highly integrated with it. Group reporting is designed to handle a hybrid model in which some subsidiaries use a single common accounting system and others feed their data from heterogeneous journal systems. In addition, group reporting integrates with SAP’s predictive accounting and with SAP Analytics Cloud for enhanced analytics and planning (for analytics, see also Chapter 4, Section 4.1).

Data Collection

Group reporting can load data from SAP or third-party systems as the basis of the consolidation and group close processes. The following methods are available for collecting individual financial statements and other relevant data into the consolidation system:

- Accounting data in SAP S/4HANA Finance flowing directly into group reporting by reading actual financial data from the Universal Journal
- Flexible upload from a data file
- Calling public APIs
- SAP’s group reporting data collection service

As depicted in Figure 14.8, part of the data collection and mapping function in group reporting has been implemented as a cloud-native service according to the microservice architecture on SAP Cloud Platform. It connects with SAP S/4HANA Cloud through the SAP Cloud SDK (see Chapter 5, Section 5.2) and uses the Cloud Connector (see Chapter 6, Section 6.5) to communicate with the SAP S/4HANA on-premise system.

Figure 14.8

SAP Group Reporting Data Collection

On the one hand, the data collection service is used to manually collect financial, nonfinancial, and qualitative data, as well as comments. On the other hand, the data mapping function enables business users to maintain and automate the necessary mapping between the source dimensions from external SAP or non-SAP accounting systems and the target dimensions in group reporting. Data collection includes the following apps:

- Manage Scenarios, displayed as folders in a hierarchy in the scenario structure and used to organize related folders, forms, and reports
- Define Ad Hoc Items, to use additional characteristics that do not exist in the SAP S/4HANA standard data model for financial statement items
- Define Data Reference, to use as a reference element in the report definition
- Define Forms and Reports, in which to build the structure and layout of the table grid reports and forms
- Enter Group Reporting Data, where reported data can be entered and saved back to the SAP S/4HANA group reporting system
- Define Data Mapping, which provides mapping rules between the dimensions of any external system and the target dimensions available in the SAP S/4HANA group reporting system
- Run Data Mapping, to define the mapping job that imports external reported financial data into the SAP S/4HANA group reporting system

The communication between SAP S/4HANA and SAP Cloud Platform happens through public APIs that read and write master and transaction data.

Data Preparation

Once the data collection task is finished, all the reported financial data is stored in the primary group reporting journal entry table ACDOCU. The application uses different document types to identify the type and source of reported data. During the data preparation phase, reported financial data from consolidation units is validated and translated. Here the posting level, such as group-independent postings for the local close or group-dependent postings for pairwise elimination, is introduced to categorize posting entry types in order to select data easily. The Data Monitor app is the entry point for performing the various tasks for preparing and collecting the financial reported data from the consolidation units. The tasks are predefined and ready to use, including balance carryforward, net income calculation, currency translation, validation of all collected and standardized data, and so on. The system manages the status of the tasks and thus ensures a logical sequence and consistent data.

Core Consolidation Functions

Statutory consolidation requires that the reported financial data adhere to the accounting and valuation standards of the group. Therefore, the core consolidation function provides a task to eliminate the effects of transactions from group-internal relationships (for example, interunit trading and services), as well as investment relationships between consolidation units within a consolidation group. As depicted in Figure 14.9, before entering the core consolidation process, the first step is to reconcile the two-sided interunit elimination differences according to the matching rule, without having the system post elimination entries yet. Furthermore, the intercompany reconciliation tool can also be configured to automatically correct erroneous entries for accounting documents or to directly post the standardizing entries in the group reporting system. In between tasks, the data validation function can be run at any time to check the results.

Figure 14.9

Group Reporting Core Function

An alternative way to eliminate group-internal receivables/payables and group-internal revenues/expenses is to use one of the core consolidation tasks called reclassification. Typically used for the consolidation of investments, which eliminates group-internal equity holdings, the reclassification task eliminates the investments of the higher-level units against their proportionate shares in the stockholders’ equity of the investee units that belong to the same consolidation group. Meanwhile, in some cases, it may still be necessary to post manual entries to adjust the consolidated financial data. Other tasks, like preparation for consolidation group changes and total divestiture, deal with adjustments when a consolidation unit is acquired or divested from a consolidation group. When the core consolidation tasks are finished, the final step is to run the validation task once again to validate the consolidated data before the group report can be generated. The posting results of all consolidation tasks along the consolidation process are also stored in the task log for further analysis. Currently these core consolidation tasks can be managed in the Consolidation Monitor app; similar to the Data Monitor app, the task sequence and the automated postings for each task are controlled by the individual task settings. Going forward, SAP’s target is to use a common interface to orchestrate both the entity and group close tasks using the SAP Advanced Financial Closing service on SAP Cloud Platform.

Consolidated Reports and Data Analysis

To support both legal and management consolidation while providing high flexibility in reorganizations, group reporting provides different views of consolidated data. The consolidation can be displayed as:

- a group view, which is a list of assigned consolidation units;
- a hierarchical view of the consolidation units, profit centers, and segments; or
- a combined view, in which both hierarchical views and consolidation groups are chosen (called matrix consolidation).

To perform data analysis, group reporting integrates pivot table reports with drilldown and a complete audit trail from the group consolidated results down to the document line items in accounting. With group reporting, it is also possible to analyze consolidated data in Microsoft Excel using SAP Analysis software for Microsoft Office. Group reporting provides standard financial statements delivered through SAP Analytics Cloud, such as the consolidated balance sheet, consolidated profit and loss (P&L), consolidated cash flow, consolidated P&L by function of expense, changes in equity, and comprehensive income. SAP S/4HANA Cloud offers live data connections between group reporting and SAP Analytics Cloud, which enable data analysis at the corporate level without data replication (for the integration of SAP S/4HANA Cloud with SAP Analytics Cloud, see Chapter 4, Section 4.1). Finally, to prepare financial publications combining text, graphics, and numbers for communication to internal and external stakeholders and to automate the creation of complex electronic filings required for SEC or ESEF reporting, a direct integration with SAP Disclosure Management is being built to allow users to retrieve data from group reporting directly into SAP Disclosure Management.

Integration with Financial Planning and Analysis

Plan consolidation is based on the integration between group reporting and the SAP Analytics Cloud solution for planning, SAP’s latest solution for financial planning and analysis (refer to Section 14.5.3). Unconsolidated financial data from the Universal Journal entry line items (table ACDOCA) can be transferred to SAP Analytics Cloud through live data connections or import functions. The unconsolidated actuals data is used as input data to start the planning process there. Plan data can be established by copying and adjusting actual data or based on value drivers. The planned result data can then be exported to the plan data line items (table ACDOCP) to run consolidations and compare actual and planned data at the consolidated level. The plan and actual data integration can also be seen in Figure 14.10.

Figure 14.10

Financial Planning Integration with Consolidation
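As a hedged sketch of the plan and actual integration shown in Figure 14.10, the following ABAP reads aggregated actuals from table ACDOCA and prepares corresponding plan records in the structure of table ACDOCP. The field names, the plan category value, and the idea of filling the table directly are purely illustrative; in the delivered scenario the transfer runs through the live data connections, import functions, and plan data APIs mentioned above.

SELECT FROM acdoca
  FIELDS rbukrs, racct, SUM( hsl ) AS hsl
  WHERE  rldnr = '0L' AND gjahr = '2020'
  GROUP BY rbukrs, racct
  INTO TABLE @DATA(lt_actuals).

DATA lt_plan TYPE STANDARD TABLE OF acdocp WITH EMPTY KEY.
LOOP AT lt_actuals INTO DATA(ls_actual).
  APPEND VALUE #( rbukrs   = ls_actual-rbukrs
                  racct    = ls_actual-racct
                  hsl      = ls_actual-hsl        " optionally revalued or adjusted
                  category = 'PLN' )              " plan category (example value)
         TO lt_plan.
ENDLOOP.
" lt_plan would then be handed to the plan data import (for example, the
" Import Financial Plan Data app) rather than being written to ACDOCP directly.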

Integration with Predictive Accounting

Group reporting can be integrated with predictive accounting (refer to Section 14.5.2). By linking the consolidation version to the predictive ledger in accounting, group reporting collects the predictive postings from the predictive ledger, for example, postings derived from sales orders. Predictive data can be consolidated to produce predictive consolidated results at the group level. This innovation extends the reach of continuous accounting and provides more accurate, forward-looking data for the organization’s financial performance.

14.2.7 Financial Closing

SAP S/4HANA has automated the valuation and accrual of transactional figures to a large extent as they are triggered by events from the respective operational transactions. Still, some closing tasks remain for processing at period end. One example is the foreign currency valuation, which relies on the availability of the final rates on the key date. The financial closing process requires a carefully designed set of interdependent activities, resulting in complete and accurately valuated financial data ready for external reporting. The orchestration, execution, and monitoring of a plan for such closing activities is supported by SAP S/4HANA Cloud for advanced financial closing. Advanced financial closing is a business application provided as a central cloud service. It provides services to manage, execute, and monitor cross-system task lists, in which each single task is an automatic or manual activity executed in or outside of one or multiple SAP S/4HANA systems. The financial closing architecture (Figure 14.11) has three layers:

1. A UI layer, realized as the SAP Fiori launchpad with SAP Fiori elements user interfaces supporting the different user profiles, as shown in Figure 14.11.

2. A hub layer, which implements business logic for entity registration, administration, and setup of the financial close, processing and scheduling, workflow logic such as task automation and user notification, and so on.

3. The proxy layer within the SAP S/4HANA on-premise or SAP S/4HANA Cloud financial backend systems. For SAP ERP systems, the proxy layer is deployed as a separate integration component on top of the SAP ERP stack. The proxy layer exposes the backend system to the business functionality in the hub by means of an OData or REST service (depending on the shipment) and by means of content in terms of a program register with parameter mappings, task models, and task list models.

This architecture supports multiple use cases. First, key users can create a single template to set up, define, and subsequently adjust the plan of all interdependent closing tasks across all connected financial systems. Organizational entities such as the controlling area, company codes, and plants are represented as nodes or folders in the task list. Global settings and parameters such as ledgers and fiscal year variants are represented as task list attributes or parameter types so that the parametrization of jobs to be scheduled during the financial closing process is highly automated. Second, when a task list is generated as a released instance of the template, automatic tasks such as jobs are automatically scheduled once their start time is due and their predecessor tasks are completed successfully. Depending on the notification configuration, users are notified via email and/or SAP Fiori notifications and can start processing their tasks. Completed tasks subject to a four-eye principle are automatically presented to the respective responsible users for approval.

Figure 14.11

Financial Closing Architecture

Finally, accounting managers can monitor the progress and completion of task processing to understand the status in detail. Because a closing task supports 1:n task executions, all activities and automatic and manual status changes are recorded, along with the respective result objects, such as application logs, job logs, and spools, leaving a complete audit trail.

14.3 Tax and Legal Management

Tax and legal management in SAP S/4HANA Finance is responsible for the calculation of transactional taxes and for providing the data required for legally compliant statutory reporting. Based on country-specific requirements, tax management calculates the following types of taxes:

- Value-added tax (VAT): Tax on the sale and purchase of goods or services
- Sales and use tax: Tax applicable on sales of goods or services; required in the United States
- Deferred taxes: Taxes recognized during the payment
- Withholding tax: Tax withheld during a payment that is made on behalf of the payment receiver

Income tax and corporate tax are not part of the tax management solution. Tax customizing allows you to define country-specific tax information, such as VAT tax codes, withholding tax codes, withholding tax types, tax jurisdiction codes, and tax rates. The VAT tax rate is stored in pricing condition tables in SAP S/4HANA Sales (see Figure 14.12). Tax-relevant business transactions inside SAP S/4HANA Finance, like payments, and business transactions processed outside finance, such as supplier invoices or travel cost reimbursements, call tax management for tax calculation. When calling tax management, the business transaction provides the tax-relevant amount to tax management. The tax calculation uses the pricing component from SAP S/4HANA Sales to calculate the tax amount. The tax calculation collects these amounts on the tax code and jurisdiction code levels. Taxes can also be calculated at the granularity of the tax-relevant item. Then the amounts of the tax-relevant lines are not collected, and taxes are calculated for each item. Tax management determines tax attributes, including the general ledger account for taxes, depending on the business transactions and the country-specific requirements. Tax items with information like tax attributes and tax amounts, and tax-relevant items with tax attributes, are passed to the accounting interface. The accounting interface processes the tax data and finally stores it as financial document line items (table BSEG) and as tax items (tables BSET and WITH_ITEM). This is done according to country-specific laws and follows the relevant accounting standards. In the accounting interface, tax management performs checks and determinations. The tax attributes are validated depending on the business transactions and the country-specific requirements. For specific transactions (for example, BAPI calls of the accounting interface), the tax calculation is called in order to check that the tax amount is correctly calculated.
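The following is a minimal sketch of the collection step described above: tax-relevant item amounts are multiplied by the rate and summed per tax code and jurisdiction code. The structure, the 19% rate, and the sample values are illustrative; in the system, the rate is determined via the pricing component and condition records.

TYPES: BEGIN OF ty_tax_line,
         mwskz TYPE c LENGTH 2,                    " tax code
         txjcd TYPE c LENGTH 15,                   " tax jurisdiction code
         base  TYPE p LENGTH 15 DECIMALS 2,
         tax   TYPE p LENGTH 15 DECIMALS 2,
       END OF ty_tax_line,
       ty_tax_lines TYPE STANDARD TABLE OF ty_tax_line WITH EMPTY KEY.

CONSTANTS lc_rate TYPE p LENGTH 5 DECIMALS 2 VALUE '19.00'.   " example VAT rate

DATA lt_tax TYPE HASHED TABLE OF ty_tax_line WITH UNIQUE KEY mwskz txjcd.

" Two tax-relevant lines of one business transaction (sample values)
DATA(lt_items) = VALUE ty_tax_lines( ( mwskz = 'V1' txjcd = 'DE0000000' base = '100.00' )
                                     ( mwskz = 'V1' txjcd = 'DE0000000' base = '250.00' ) ).

LOOP AT lt_items INTO DATA(ls_item).
  ls_item-tax = ls_item-base * lc_rate / 100.
  COLLECT ls_item INTO lt_tax.          " sums base and tax per tax code/jurisdiction
ENDLOOP.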

Figure 14.12

Tax Management

Depending on the business transactions and country-specific requirements, the user can enter the tax amounts and tax base amounts manually. Manual entry of a tax amount is used when, for example, the taxes of the invoice were calculated by the supplier and the system has to store and report exactly this tax amount printed on the supplier invoice prima nota. The user can also let the tax calculation automatically calculate the tax amount. Tax items can also be entered directly by the user. In this case, a tax base amount has to be entered or calculated automatically by the tax calculation. As an alternative to the internal tax calculation, the tax calculation can be delegated to the tax service provided by the SAP Localization Hub or to the external tax service interface. The SAP Localization Hub, tax service is a cloud-native service for calculating applicable country-/region-specific indirect taxes built on SAP Cloud Platform.

14.4 Enterprise Contract Management and Assembly

Enterprise contract management and assembly (E-CMA) functionality provides a suite of applications for enterprises, designed for the creation and management of legal contracts that can be integrated into core business processes. It offers a single repository for the lifecycle management of buy-side contracts, sell-side contracts, and any other legal documents, such as nondisclosure agreements (NDAs), policies, or patents, in enterprises. It allows the legal counsellors of a company to trace obligations, approvals, signatures, and responsibilities internally and externally. Its unified formats and data structures ensure that all content can be digitized and is discoverable, reusable, and adaptable. Enterprise contract management manages the legal document flow across multiple departments, makes audits easier, and allows automated reviews and approvals. The solution consists of two capabilities, which are built on different platforms but offer a seamless integration with each other (see Figure 14.13):

- Enterprise contract assembly provides authoring and configuration tools to create legal documents. It’s a service on SAP Cloud Platform, to be leveraged by other SAP applications, too.
- Enterprise contract management manages the lifecycle of legal transactions and the integration into business processes of SAP S/4HANA.

The legal content that is created is managed by legal transactions, the main business objects of enterprise contract management (see Figure 14.13). The legal transaction collects all information required to create legal content, such as attachments, tasks, entities, contacts, documents, linked objects, dates, and so on. The context that is assigned to each legal transaction reflects a particular business scenario or circumstance and defines how the legal transaction has to be processed and what information has to be provided. Enterprise contract management provides an overview of all the critical legal transactions, contexts, and documents.

Figure 14.13

Architecture of Enterprise Contract Management and Assembly

The enterprise contract assembly service provides authoring and configuration tools that enable the legal counsel to author and assemble legal content by using the following:

- Templates and text blocks as elements to reuse content
- Virtual documents created from a template, based on a specific business context

The integration of enterprise contract management with enterprise contract assembly enables companies to integrate with all core business processes; assemble all types of documents based on templates, text blocks, and rules; and simultaneously store all documents in a central online repository.

14.5 Financial Planning and Analysis

Modern enterprises are forward-looking. Financial planning and analysis provides finance departments with the right toolset to set budgets by means of detailed planning, monitor budgets including both actual costs and commitments, manage contribution margins including both actual revenues and costs as well as predictions, understand and allocate overhead costs, and monitor product costs.

14.5.1 Budgetary Accounting

For financial planning and analysis, SAP S/4HANA Finance includes a new budget availability control that allows finance departments to monitor and control the consumed budget. Budget availability control supports business processes, from budgeting to cost postings, and it offers specific budget reporting for the budget- and cost-carrying objects. The active budget availability control issues warnings and error messages when actual or planned costs (commitments) are incurred that exceed a preset threshold. If certain conditions are met, it automatically sends notification emails to the respective budget holders. In contrast, passive budget availability control is a reporting solution, providing an overview of the budget allocation. The new budget availability control is available in SAP S/4HANA on-premise for cost centers only. In SAP S/4HANA Cloud, it can be used for cost centers and investment projects. To use the new budget availability control in SAP S/4HANA, you must configure the relevant cost items for checking, specify tolerance limits for budget consumption, and specify the application’s response when the budget consumption has reached a defined threshold. This configuration is bundled into a budget availability control profile, which must be assigned to the corresponding master data object, such as a project or cost center, for which you want to control the budget. After a budget availability control profile is assigned to an object and actual costs are posted to it, the budget availability control calculates the available budget and reacts with a response according to the thresholds for budget consumption that you’ve specified. You can maintain your budget using a predefined planning/budgeting template in SAP Analytics Cloud. Alternatively, you can use a spreadsheet application to upload your data into table ACDOCP with the SAP Fiori app Import Financial Plan Data. You can view current commitments, actual costs, and the available budget for a specific object in the Project Budget Report SAP Fiori app (cloud only) and the Cost Center Budget Report SAP Fiori app (cloud and on-premise). In general, we distinguish three categories of objects involved in budget availability control:

1. The account assignment is the object incurring costs or commitments.

2. The control object is the object via which the budget and the assigned value are compared.

3. The budget-carrying object is the object that carries the budget.

When the project event of the accounting interface is called, the budget availability control identifies the active availability control classes (see Figure 14.14). Availability control classes combine characteristics like work breakdown structure (WBS) elements, cost centers, or combinations of public sector entities with their specific availability control logic. The budget availability control then collects the relevant line items; one business document may contain several line items related to the account assignment object. Next, the budget availability control derives the control objects. It reads the budget amount from table ACDOCP. For the control object, it retrieves the assigned value (costs plus commitments, considering exceptions) from the Universal Journal (table ACDOCA). The budget availability control compares both values and issues an error or warning according to the customized tolerance limits.
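A hedged sketch of the comparison just described, with illustrative field names, object keys, and thresholds: the budget is summed from table ACDOCP, the assigned value (actual costs plus commitments) from table ACDOCA, and the consumption is checked against a tolerance limit.

CONSTANTS lc_warning_pct TYPE p LENGTH 5 DECIMALS 2 VALUE '90.00'.   " warn at 90 % (example)
DATA lv_consumption_pct TYPE p LENGTH 7 DECIMALS 2.

SELECT FROM acdocp
  FIELDS SUM( hsl ) AS budget
  WHERE  rbukrs = '1010' AND rcntr = '0000001000' AND gjahr = '2020'
  INTO @DATA(lv_budget).                  " budget of the cost center

SELECT FROM acdoca
  FIELDS SUM( hsl ) AS assigned
  WHERE  rbukrs = '1010' AND rcntr = '0000001000' AND gjahr = '2020'
  INTO @DATA(lv_assigned).                " actual costs plus commitments

IF lv_budget IS NOT INITIAL.
  lv_consumption_pct = lv_assigned * 100 / lv_budget.
  IF lv_consumption_pct >= 100.
    " budget exceeded: raise an error message
  ELSEIF lv_consumption_pct >= lc_warning_pct.
    " tolerance limit reached: issue a warning and notify the budget holder
  ENDIF.
ENDIF.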

Figure 14.14

Budget Availability Control

14.5.2 Predictive Accounting

Predictive accounting is a new functionality in SAP S/4HANA. With predictive accounting, the scope of traditional accounting is extended by leveraging the non-GAAP-relevant business transactions available in SAP S/4HANA (for example, sales orders) to predict future journal entries. This is only possible due to two major SAP S/4HANA Finance innovations. With the Universal Journal concept (Section 14.1), it’s ensured that GAAP-relevant postings and prediction data (the so-called predictive journal entries) are stored in table ACDOCA in the same structure; with the concept of ledger layering, the separation of predictive data from actual data is achieved. Statutory data is stored in the standard ledger, whereas the predictive data is stored in the corresponding extension ledger (see also Section 14.2.1). With ledger layering, each financial report and each KPI supporting ledger selection can be used, for example, for statutory reporting by selecting the standard ledger, or, without further adjustments, for predictive reporting by simply switching to the extension ledger, which displays the predictive journal entries from the extension ledger together with the journal entries of the referenced standard ledger.

What is predictive data? Traditional accounting creates the journal entries for statutory reporting according to GAAP. For this reporting, the original business transaction, like the sales order, is not relevant; only the business result is. In the case of a sales order, only the goods issue and billing lead to journal entries. However, with the information about the transactional line item, the follow-on processes can be simulated, and these simulated documents can be used as input for the creation of the predictive journal entry in the prediction ledger. To identify all the predictive journal entries for the same source document, the source document reference is stored in the corresponding predictive journal entry. This is also called a process anchor for the simulation of the follow-on process steps. An overview of the architecture for predictive accounting is shown in Figure 14.15. To get accurate predictions, the predictive data should be as close to the actual data as possible. To achieve this, key information such as quantities, amounts, and expected dates of the source document (the process anchor) is used for the simulation of the follow-on process steps and the resulting postings. As mentioned, predictive accounting uses information already available as business transactions in SAP S/4HANA to derive the corresponding predictive journal entries. To enable this, the following process was implemented: Whenever a new document in one of the defined anchor processes is created, a new predictive accounting notification is created. The predictive accounting notification contains information such as the sales order number and the timestamp and is stored in table FINS_PR_ACCNOTIF. When the transaction is completed, the on transaction finished event is raised, which ensures that there are no further adjustments to the transaction.

Figure 14.15

Architecture Overview of Predictive Accounting

The on transaction finished event is handled by the predictive accounting event handler, which calls the so-called orchestrator. The orchestrator determines the relevant follow-on process steps and decides on further processing. In a basic sell-from-stock scenario, this would be the goods issue and the customer invoicing; for a transaction without a goods issue, such as a credit memo, the orchestrator would determine that a goods issue is not required, only a customer invoice. Based on this determination, the orchestrator calls the relevant standard functionality from logistics and sales in simulation mode. The simulation mode ensures that only temporary documents are created, which are not persisted and for which certain validations are suppressed, such as checks for an open period or number validations. Those temporary documents are the input for the accounting interface, which triggers the financial document creation. The resulting journal entries are just predictions of how the actual postings would look based on the information from the source document item, so these documents are predictive journal entries, which are stored in the prediction ledger. When cost components are maintained for a material, this results in a split of the cost of goods sold (CoGS) in the accounting interface when the goods issue is posted. Because predictive journal entries use the same posting logic as the actuals, the same cost split is posted when the predictive journal entry for the simulated goods issue is posted. The process of creating predictive journal entries is called simulation and is executed asynchronously in a separate logical unit of work. This approach ensures that the standard source document processing is not blocked when the simulation of the predictive journal entries stops due to issues in the simulation of follow-on process steps. The successful creation of predictive journal entries updates the status of the corresponding predictive accounting notification to Processed Successfully. Errors during the simulation lead to the Processed with Errors status. Those erroneous simulations can be restarted manually with the Monitor Predictive Accounting SAP Fiori app. In addition, the Process Pending Source Documents for Predictive Accounting report (FINS_PRED_AIF_REPROCESS) is executed by a technical job and periodically restarts the simulation of erroneous predictive accounting notifications. Changes in the source document also trigger the on transaction finished event and lead to the resimulation of processes, including the call of the accounting interface. When the existing predictive journal entry deviates from the new predictive journal entry, the reversal process is triggered: the existing predictive journal entry is set as obsolete with the reason Outdated Prediction, a new reversal posting with obsoleteness reason Cancellation of Outdated Prediction is created, and the new predictive journal entry is posted with an empty obsoleteness reason, which classifies this entry as the current prediction. The newly created predictive journal entry and the reversal journal entry both have the actual date as the document date. For the posting date, the information from the source document, such as the goods issue date, is used. With this approach, the previous prediction is reversed and only the new predictive journal entry remains. The concept of document dates and simulations is the precondition to support the two analytical apps in predictive accounting, Incoming Sales Orders and Gross Margin Presumed/Actuals, which are built on the predictive accounting analytical CDS view (for the analytics architecture, see Chapter 4, Section 4.1). The Incoming Sales Orders—Predictive Accounting SAP Fiori app provides an overview of the order entry based on the predictive journal entries. The relevant date used for the period assignment is the document date. The posting date is the relevant date for the period assignment in the Gross Margin Presumed/Actuals app. By using the same journal entries for both reports, the data is reconciled and a drilldown to the single line items is supported (see Figure 14.16).

Figure 14.16

Date Usage of Predictive Journal Entries

When follow-on steps for the source document are executed, this leads to actual journal entries. If the corresponding predictive journal entries in the prediction ledger were not reduced, this would lead to wrong numbers, with the same transaction being counted twice. With the reduction logic, the existing predictions are reduced. Therefore, during the creation of an actual journal entry in the accounting interface, the reduction logic checks whether predictive journal entries exist for the source document item and transaction type. The system determines whether the actual posting leads to a complete or to a partial reduction (for example, for a partial delivery or partial invoicing) of the predictive journal entry in the extension ledger (see Figure 14.17). The (partial) reduction is a reversal posting for the relevant predictive journal entry. To identify the reduction postings, these documents have the obsoleteness reason Reduction Posting.
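A minimal sketch of the reduction amount determination, with purely illustrative names and values: the reduction can never exceed the open prediction, so a partial invoice leads to a partial reduction, and the remainder stays in the prediction ledger until the next actual posting arrives.

TYPES ty_amount TYPE p LENGTH 15 DECIMALS 2.

DATA lv_open_prediction TYPE ty_amount VALUE '1000.00'.   " predicted revenue still open
DATA lv_actual_posting  TYPE ty_amount VALUE '400.00'.    " first (partial) invoice

" Reduction posting (reversal with obsoleteness reason Reduction Posting)
DATA(lv_reduction) = nmin( val1 = lv_open_prediction
                           val2 = lv_actual_posting ).     " 400.00

DATA(lv_remaining_prediction) = CONV ty_amount( lv_open_prediction - lv_reduction ).   " 600.00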

Figure 14.17

Simulation and Reduction of Predictive Journal Entries

When the predictive accounting process is completed for a sales order, which means that the actuals for the simulated follow-on processes are posted completely, the amounts of the related predictive journal entries should be zero. However, in certain cases, a small amount may still remain. This can occur, for example, due to exchange rate fluctuations. These amounts are cleared by the Post Final Reduction for Completed Source Docs in Predictive Accounting report (FINS_PRED_FIN_REDUCTION) after the last delivery or invoice is posted. In the current version, predictive journal entries are created for sales orders. The vision is to add additional processes in the future, such as predictive journal entries for purchase orders (as visualized in Figure 14.15) or the supplier invoice, and to include revenue recognition for service contracts.

14.5.3 Financial Planning

Planning is used to set organizational goals. Comparing actual operating results against the plan can identify variances that serve as signals to take corrective measures in business operations. The basic goals in planning are to plan the structure of a company’s future operations for particular periods, create benchmarks for monitoring the business transactions within a fiscal year, and monitor efficiency at the end of each period using plan/actual and target/actual comparisons. Financial planning in SAP S/4HANA enables integrated financial planning and budgeting processes based on detailed and up-to-date information to support business decisions even under volatile and constantly changing business conditions. Financial planning is based on SAP Analytics Cloud and its intelligence and simulation capabilities. SAP delivers predefined business content for SAP Analytics Cloud—so-called stories, which come with demo data. The business content for integrated financial planning covers the main financial planning and budgeting processes. It can be integrated with predecessor and successor processes within SAP S/4HANA and is built according to embedded analytics architecture (see Chapter 4, Section 4.1). In the following sections, we explain the various planning processes supported by the integrated financial planning functionality and how they relate to each other.

Cost Center Planning

Cost center planning involves entering plan figures for costs, activities, prices, or statistical key figures for a particular cost center and planning period. You can then determine the variances against plan when you compare the plan values with the actual costs. Variances serve as a signal to make the necessary changes to your business processes. Cost center planning forms part of the overall business planning process and is a prerequisite for standard costing. The main characteristic of standard costing is that values and quantities are planned for specified time frames, independently of the actual values from previous periods. You can use plan costs and plan activity quantities to determine the (activity) prices. These prices can be used to value internal activities during the ongoing period—that is, before the actual costs for the cost center are known.

In this context, expense planning captures the external costs that impact the cost center budget, whereas statistical key figure planning captures the key figures that affect how these costs will be allocated (such as headcount). Allocations can either involve spreading amounts to the relevant receivers or use activity types to reflect the output of the cost center (such as machine hours). In this case, activity quantity planning captures the activity used by the cost center (such as energy usage) or supplied by the cost center (such as maintenance), and this provides the basis for the activity price calculation to determine the value of energy hours, machine hours, maintenance hours, and so on.

Product Cost Planning

Product cost planning involves calculating the non-order-related cost of goods manufactured and cost of goods sold for each product unit. You can establish how the costs are broken down for each product and calculate the value added for each step of the production process. This enables you to optimize the cost of goods manufactured and support make-or-buy decisions. In addition, product cost planning provides information for sales and profitability planning as part of an integrated planning process. In this context, the BOMs, routings, and work centers in SAP S/4HANA are used to determine the quantity structure that acts as the basis for product cost planning. This quantity structure impacts activity quantity planning, determining the total machine hours needed to manufacture each product. Raw material price planning extends this to capture the prices for the materials included in the BOMs for each product. Product cost simulation pulls together the information in the quantity structure and the activity price planning to determine the planned costs for each manufactured material (see Figure 14.18).
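
To make the roll-up idea behind product cost simulation concrete, the following ABAP sketch values a single-level quantity structure with planned prices. It is illustrative only: the product, component, and rate data are invented, and the real simulation resolves multilevel BOMs, routings, and work centers rather than a flat table.

" Minimal sketch of a single-level product cost roll-up. All names and
" values are invented; the real product cost simulation resolves BOMs,
" routings, and work centers.
TYPES: BEGIN OF ty_quantity,
         product   TYPE string,
         component TYPE string,                 " raw material or activity type
         quantity  TYPE p LENGTH 10 DECIMALS 3, " per unit of the product
       END OF ty_quantity,
       tt_quantity TYPE STANDARD TABLE OF ty_quantity WITH EMPTY KEY,
       BEGIN OF ty_price,
         component TYPE string,
         price     TYPE p LENGTH 10 DECIMALS 2, " planned price per unit
       END OF ty_price,
       tt_price TYPE STANDARD TABLE OF ty_price WITH EMPTY KEY.

" Quantity structure derived from BOM and routing
DATA(lt_quantities) = VALUE tt_quantity(
  ( product = 'PUMP-100' component = 'STEEL'        quantity = '12.500' )
  ( product = 'PUMP-100' component = 'MACHINE_HOUR' quantity = '0.750'  ) ).

" Results of raw material price planning and activity price calculation
DATA(lt_prices) = VALUE tt_price(
  ( component = 'STEEL'        price = '4.20'  )
  ( component = 'MACHINE_HOUR' price = '85.00' ) ).

" Roll-up: value every planned quantity with its planned price
DATA lv_planned_cost TYPE p LENGTH 13 DECIMALS 2.
LOOP AT lt_quantities INTO DATA(ls_quantity).
  " Assumption: exactly one planned price exists per component
  DATA(ls_price) = lt_prices[ component = ls_quantity-component ].
  lv_planned_cost = lv_planned_cost + ls_quantity-quantity * ls_price-price.
ENDLOOP.

WRITE: / |Planned cost of goods manufactured per unit: { lv_planned_cost }|.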

Figure 14.18   Integrated Financial Planning

Sales and Profitability Planning

Sales and profitability planning allows you to plan sales, revenue, and profitability data for any selected profitability segments. It is generally perceived as an integrated process involving different roles within profitability and sales accounting, such as the sales manager, regional manager, and sales employee. Distinctions are often made between the different approaches used, such as central top-down planning and local bottom-up planning. Many companies have implemented an iterative process consisting of a number of individual planning steps, in which existing planning data is copied, projected into the future, revalued, adjusted manually, and distributed from the top down until the company obtains a sales and profit plan that fulfills its requirements. In this context, sales quantity planning (see Figure 14.18) determines the total quantities to be manufactured according to the quantity structure, and profitability planning uses the results of product cost simulation to capture the planned cost of goods sold.

Project Planning

Project planning is used to plan the costs associated with a project or program. A work breakdown structure (WBS) is used to capture the project activities. The WBS consists of WBS elements that describe the tasks to be completed as part of the project. Project planning in SAP Analytics Cloud enables planners to capture project expense budgets and plans based on WBS elements. This project planning is integrated with availability control functions and budget consistency checks in SAP S/4HANA (Section 14.5.1). In this context, project planning provides an alternative to expense planning by cost center.

Internal Order Planning

During internal order planning, you enter the costs and business processes that you expect to incur during the lifecycle of an order. Using internal order planning, you can plan and compare your costs on a general ledger account basis.

Financial Statement Planning

Financial statement planning facilitates P&L planning, including the allocation to trading partners and balance sheet planning, with the possibility to drill down to the profit center and functional area (see Figure 14.18). It also allows planning administrators to prepopulate planning screens based on past actual data. In this context, P&L planning can also be seen as an aggregation of the detailed information in profitability analysis and as the basis from which to derive balance sheet planning and cash flow planning.

Architecture of Financial Planning

As described earlier, financial planning leverages the interconnected nature of the different integrated financial planning parts to provide a driver-based simulation that considers the logic and data of the included scenarios. For example, sales quantities are transferred into product cost simulation, broken down into the required resources, and used for the cost center activity price calculation. Financial planning connects the main planning table ACDOCP to SAP Analytics Cloud. Table ACDOCP serves as the single source of truth for planning data in SAP S/4HANA, helping to connect predecessor and successor processes. To exchange master and transaction data with SAP Analytics Cloud, SAP S/4HANA Finance provides corresponding OData services (see Figure 14.19). When plan data is exported from SAP Analytics Cloud to financial planning, it's possible to use embedded analytics in SAP S/4HANA based on table ACDOCP. In addition, you can report on plan data during the planning process in SAP Analytics Cloud.
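
Because table ACDOCP mirrors the structure of table ACDOCA, a plan/actual comparison boils down to two aggregations over the same dimensions. The following ABAP SQL sketch illustrates the idea for one ledger and fiscal year; the field names, the plan category value 'PLN', and the ledger '0L' are assumptions based on the Universal Journal conventions and would need to be verified against a concrete system (productive reporting uses the delivered CDS views rather than direct table access).

" Minimal sketch: aggregate plan data (ACDOCP) and actuals (ACDOCA) by
" company code, cost center, and G/L account. Field names, the plan
" category 'PLN', and ledger '0L' are assumptions.
SELECT rbukrs, rcntr, racct,
       SUM( hsl ) AS plan_amount
  FROM acdocp
  WHERE category = 'PLN'
    AND rldnr    = '0L'
    AND gjahr    = '2024'
  GROUP BY rbukrs, rcntr, racct
  INTO TABLE @DATA(lt_plan).

SELECT rbukrs, rcntr, racct,
       SUM( hsl ) AS actual_amount
  FROM acdoca
  WHERE rldnr = '0L'
    AND gjahr = '2024'
  GROUP BY rbukrs, rcntr, racct
  INTO TABLE @DATA(lt_actuals).

" Variance = actual minus plan per dimension combination
LOOP AT lt_actuals INTO DATA(ls_actual).
  READ TABLE lt_plan INTO DATA(ls_plan)
       WITH KEY rbukrs = ls_actual-rbukrs
                rcntr  = ls_actual-rcntr
                racct  = ls_actual-racct.
  IF sy-subrc <> 0.
    CLEAR ls_plan.
  ENDIF.
  DATA(lv_variance) = ls_actual-actual_amount - ls_plan-plan_amount.
  " ... hand the result over to an analytical consumer or display it
ENDLOOP.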

Figure 14.19   Architecture of Financial Planning

14.5.4   Margin Analysis

Margin analysis is SAP S/4HANA's profitability analysis solution based on the Universal Journal. With margin analysis, the sales accountant (business role SAP_BR_SALES_ACCOUNTANT; for more information on roles, see Chapter 17, Section 17.2) has a tool at hand for evaluating the company's profit or contribution margin by market segment. Drill-down functionality for a variety of hierarchies and characteristics, such as product, customer, and project, in addition to freely definable custom characteristics, allows for detailed analysis of the key figures. Sales, marketing, product management, and corporate planning departments use margin analysis for accounting and decision-making. In addition to margin analysis, SAP S/4HANA on-premise offers costing-based profitability analysis, known from SAP ERP, as an alternative solution. Costing-based profitability analysis does not leverage the Universal Journal but uses separate value fields instead. (Note: Customers migrating from SAP ERP to SAP S/4HANA should check SAP Note 2349278 for further information on migrating to margin analysis.) The overall tasks of margin analysis are split into data preparation and reporting. Before you can report margins based on individual financial documents, data preparation must ensure that all costs and revenues are assigned to the correct market segments (market segment management); costs are refined and split into fixed and variable costs (cost-of-goods-sold split, production variances split, top-down distribution, and allocation); and additional statistical costs, relevant for management reporting only, are captured (statistical sales conditions).

The market segment (also known as the profitability segment) is the central entity used in margin analysis. The market segment consists of a set of attributes commonly referred to as characteristics. More than 30 characteristics, such as business area, functional area, plant, and distribution channel, are predefined standard fields. In addition, margin analysis applies the key user extensibility architecture (see Chapter 5, Section 5.1). Thus, key users can create up to 60 (in SAP S/4HANA on-premise) or 120 (in SAP S/4HANA Cloud) custom characteristics with the Accounting: Market Segment business context in the SAP Fiori app Custom Fields and Logic. The custom fields are supported in all processes and reports of margin analysis. Each cost or revenue line item is linked to a market segment. The architecture of margin analysis is built from the building blocks discussed in the following sections (see Figure 14.20).

Figure 14.20   Margin Analysis Architecture

Data Preparation Building Blocks

The market segment management component ensures the availability of characteristic values for all relevant financial documents. This is done in several steps, as the market segment information is enriched throughout the overall business process. When you create a sales order, all characteristic values known at that point in time are captured and linked to the sales order. Because this happens before posting into the Universal Journal, the market segment is put into local storage (table CE4*) for later usage. Several posting processes call attribution and derivation modules via the accounting interface, such as goods issue, billing, manual financial postings, manual repostings, cost center assessment, and direct or indirect activity allocation (controlling).

In attribution, market segment management tries to automatically find and allocate as much relevant information from the assigned cost objects as possible. In derivation, custom rules are executed to fill custom characteristics. The accounting interface then takes care of storing the market segment data in the Universal Journal as part of the financial document. Some processes require the realignment of market segment information after an initial posting—for example, when restructuring the sales organization. Note that realignment is the only margin analysis process that directly modifies table ACDOCA data.

The accounting interface calls the cost-of-goods-sold (COGS) split, the production variance split, and statistical sales conditions. New line items are created or refined and inherit the previously defined market segments before being passed back to the accounting interface for posting into the Universal Journal. This additional information is used to enable a more detailed cost analysis:

- On goods issue, the COGS split posts the cost of goods sold into different general ledger accounts based on the cost components in the underlying cost component split from product cost controlling.
- Production variances are calculated in product cost controlling and split into different variance categories. The production variance split allows you to post the split variances to different general ledger accounts when the production order is settled.
- Statistical sales conditions are transferred to profitability analysis for sales order billing.

Cost center assessment and top-down distribution are periodic functions that distribute aggregated data to more detailed levels of characteristic values on the basis of reference information such as the data from the previous year. The assessment run transfers primary costs, such as electricity costs, from a cost center and splits and allocates them as secondary costs to different market segments. The top-down distribution is used to split and distribute any type of cost or revenue from a higher-level characteristic to more detailed values, such as freight costs from a bundle shipment being distributed to the individual products sold.

The margin reporting component (see Figure 14.20) is based on CDS views reading directly from table ACDOCA. It uses semantic tags to calculate the required key figures, such as sales deduction, revenue adjustment, and contribution margins I, II, and III, from individual line items.
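
Conceptually, a contribution margin query is just an aggregation of Universal Journal line items by market segment characteristics. The following ABAP SQL sketch shows this for two standard characteristics; the account intervals used to separate revenue from cost of goods sold are placeholders, as the delivered margin analysis CDS views resolve such groupings via semantic tags rather than hard-coded ranges, and the field selection is an assumption to verify against the concrete system.

" Minimal sketch: aggregate ACDOCA line items by product and customer and
" derive a contribution margin. Account intervals are placeholders; the
" delivered CDS views use semantic tags instead.
SELECT matnr AS product,
       kunnr AS customer,
       racct,
       SUM( hsl ) AS amount
  FROM acdoca
  WHERE rldnr = '0L'
    AND gjahr = '2024'
  GROUP BY matnr, kunnr, racct
  INTO TABLE @DATA(lt_lines).

TYPES: BEGIN OF ty_margin,
         product  TYPE matnr,
         customer TYPE kunnr,
         revenue  TYPE p LENGTH 15 DECIMALS 2,
         cogs     TYPE p LENGTH 15 DECIMALS 2,
       END OF ty_margin.
DATA lt_margin TYPE SORTED TABLE OF ty_margin WITH UNIQUE KEY product customer.

LOOP AT lt_lines INTO DATA(ls_line).
  DATA(ls_margin) = VALUE ty_margin( product  = ls_line-product
                                     customer = ls_line-customer ).
  IF ls_line-racct BETWEEN '0041000000' AND '0041999999'.      " revenue accounts
    ls_margin-revenue = ls_line-amount.
  ELSEIF ls_line-racct BETWEEN '0050000000' AND '0050999999'.  " COGS accounts
    ls_margin-cogs = ls_line-amount.
  ELSE.
    CONTINUE.
  ENDIF.
  COLLECT ls_margin INTO lt_margin.
ENDLOOP.

" Contribution margin I per market segment (sign handling simplified)
LOOP AT lt_margin INTO DATA(ls_result).
  DATA(lv_cm1) = ls_result-revenue - ls_result-cogs.
ENDLOOP.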

14.5.5   Overhead Cost

Because the Universal Journal is the common line-item persistency for all finance data within SAP S/4HANA, the traditional split between legal accounting and management accounting no longer applies. All financial information is centrally stored in one single table. As a consequence, after postings are merged together in one table, the functional gaps between financial accounting and management accounting (also known as controlling) must also be closed to properly support the seamless integration of those two components.

The ledger is a key element in the Universal Journal, used to separate documents that support different accounting principles, such as IFRS and Brazilian GAAP, or documents that support different valuations, such as legal or group valuation. The management accounting (controlling) components of traditional SAP R/3 and SAP ERP use versions and currency types to store parallel actual values, but not the ledger. Now with SAP S/4HANA, SAP has begun enabling key transactions in overhead management (CO-OM) to support parallel ledgers (see Figure 14.21). These transactions already use the Universal Journal (table ACDOCA) to store their line items, but until now, overhead management transactions have processed the leading ledger only, for example by calculating which part of the total balance has to be settled to which receiver. The values from the leading ledger are then posted to all parallel ledgers. If financial departments choose to use parallel ledgers in overhead management, secondary processes such as settlements, overhead allocations (universal allocation), and overhead calculations will process the values of each parallel ledger separately. For settlement and overhead allocation, this means that the sender objects will balance to zero in all ledgers, even if the object balances differ in parallel ledgers. If parallel ledgers are used, period-end closing steps have to be started per ledger, which can be automated using advanced financial closing (Section 14.2.7).

A new table (ACCOSTRATE) supports parallel ledgers for activity allocations (see Figure 14.21). This table enables the storage of activity cost rates per ledger in SAP S/4HANA Cloud; the functionality will also be enabled for SAP S/4HANA on-premise. Automatic cost rate calculations (both for plan and actual cost rates) will be enabled to calculate ledger-specific cost rates by reading cost information from any ledger, not just from the leading ledger. The calculated ledger-specific cost rates are then stored in table ACCOSTRATE. Whenever activity allocations are posted and ledger-specific cost rates are available, overhead management reads the cost rates from the dedicated ledger and posts the resulting document to this ledger only. Documents are not copied from the leading ledger to all other ledgers.

Currency is also a key element, and any posting line may include several currencies. In management accounting, typically a transactional, a local legal, and a group currency are used. Multinational companies often struggle with currency challenges that cannot be solved with just the two or three currencies that are available in a traditional SAP R/3 or SAP ERP environment. Imagine a US-based organization whose group currency is USD, but whose regional controller would like to use euros for its European activities. In many implementations, the solution for such requirements was to create additional controlling areas. In our example, this would result in a US controlling area with USD as the group currency and an additional Europe controlling area with the euro as the group currency. The downside of having more than one controlling area is that your controlling transactions are segregated by controlling area, and you won't be able to perform allocations between company codes in different controlling areas.
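
The ledger-specific valuation of an activity allocation can be pictured with a small sketch: the confirmed activity quantity is valued once per ledger with that ledger's cost rate, and each resulting document is posted only to its ledger. The structure below merely stands in for table ACCOSTRATE; its field names and values are illustrative assumptions, not the actual table layout.

" Minimal sketch of ledger-specific activity valuation. The local table
" stands in for the cost rates kept per ledger (table ACCOSTRATE); all
" field names and values are illustrative assumptions.
TYPES: BEGIN OF ty_cost_rate,
         ledger        TYPE c LENGTH 2,
         cost_center   TYPE c LENGTH 10,
         activity_type TYPE c LENGTH 6,
         rate          TYPE p LENGTH 11 DECIMALS 2, " price per activity unit
       END OF ty_cost_rate,
       tt_cost_rate TYPE SORTED TABLE OF ty_cost_rate
                    WITH UNIQUE KEY ledger cost_center activity_type.

DATA(lt_rates) = VALUE tt_cost_rate(
  ( ledger = '0L' cost_center = 'CC1000' activity_type = 'MACH' rate = '85.00' )
  ( ledger = '2L' cost_center = 'CC1000' activity_type = 'MACH' rate = '92.50' ) ).

DATA lv_confirmed_hours TYPE p LENGTH 8 DECIMALS 2 VALUE '3.50'.

" One allocation document per ledger: each ledger is valued with its own
" rate, and the resulting document is posted to that ledger only.
LOOP AT lt_rates INTO DATA(ls_rate) WHERE cost_center   = 'CC1000'
                                      AND activity_type = 'MACH'.
  DATA(lv_amount) = lv_confirmed_hours * ls_rate-rate.
  " ... pass lv_amount to the accounting interface for ledger ls_rate-ledger
ENDLOOP.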

Figure 14.21   Ledger-Specific Processing

The up to 10 parallel currencies that are available in the Universal Journal were originally used in financial accounting only. Management accounting supported just two parallel currencies, and the other currencies were calculated and enhanced during document creation. This led, for example, to currency differences in the additional currencies after an internal order was fully settled; only the two management accounting currencies balanced to zero. With the new parallel accounting solution available for overhead management processes, secondary processes like settlements or allocations calculate all Universal Journal currencies in parallel and therefore ensure zero balances in all currencies.

14.5.6   Production Cost

A central part of production costs is cost object controlling. With the new architecture of the Universal Journal, it's possible to provide a completely new approach to event-based cost object controlling, which is triggered each time a production order is posted to the Universal Journal (see Figure 14.22).

Figure 14.22   Architecture of Cost Object Controlling in SAP S/4HANA

Event-based means that, from the point of view of cost object controlling, a full closing step is performed each time a confirmation or a relevant status change occurs for a production order, resulting in follow-on documents that reflect the future close. Typical period-end closing activities in cost object controlling are as follows:

- Overhead cost calculation, which is used to allocate surcharges like material overhead or activity overhead to the cost object.
- WIP determination, which calculates the value of the unfinished goods located on the production line (the work in process).
- Variance calculation, which determines the production variances in different categories, like quantity variances, price variances, resource usage variances, and lot size variances, in order to provide this information for cost analysis and profitability analysis.
- Order settlement, which settles the WIP value and the price differences to financial accounting and the variances by categories to profitability analysis.

In SAP ERP, the user has to wait until period-end closing activities are completed and the results are available in cost object controlling to be analyzed. But in the new architecture of SAP S/4HANA, it's possible to perform all these functions for production orders and process orders in an event-based way. This means that they are done directly with any posting to a production or process order, such as goods issue, goods receipt, or any order confirmation with activity allocation.

As long as the production or process order isn't closed, for any posting that allocates costs, the corresponding overhead amount is calculated via the related costing sheet and is simultaneously posted to the order. In addition, the costs including overheads are posted to a WIP account in the same step. With the goods receipt posting, which credits the corresponding production or process order, the WIP account is credited accordingly. Thus, the current WIP amounts are always available without any period-end closing activity. When the production or process order is closed—that is, finally delivered or technically completed—the remaining WIP amount is balanced to zero, and this amount is posted as the price difference. The variance categories are also calculated for the order and posted to different subledger account line item types in the Universal Journal document so that they are immediately available for profitability analysis. Any follow-up costs posted to the production or process order after final delivery or technical completion are directly posted as variances. Thus, with the new architecture, the settlement step is no longer required. In contrast to the old period-end closing approach, the event-based process has the following advantages:

- Earlier transparency and a better view of where cash is tied up as working capital. WIP is an element of inventory and thus a key element of working capital for manufacturing companies. With a lack of transparency into WIP and the value of the manufactured goods, it is difficult to predict cash requirements. Event-based WIP and variances provide a more accurate view of working capital. Without the event-based WIP solution, consumer product companies (especially in food) completed their production in one shift or day but had to wait until the period-end close to analyze the WIP and the variances. This delay meant that the information delivered by finance was no longer actionable.
- Improvement of the reliability of the financial statements for a manufacturing company. Engineering companies with long-running production processes were including variances in the WIP until the order was completed, which distorted the financial statements.
- Contribution to real-time P&L analysis.
- Relief of the production accountant from complicated period-end closing tasks.

If any of the event-based functions encounters an error situation, such as a missing customizing setting or the locking of a cost center, the original document is posted anyway in order not to hinder the time-critical logistical processes of production confirmations and goods movements. In this case, the event-based engine administrates an error log and puts an appropriate error message into this log. A monitor app is provided with which these errors can be monitored; after elimination of the error's cause, the missing postings can be triggered from this monitor app.

Another aspect of the new architecture is the harmonization of actual and plan costs for production and process orders. In SAP S/4HANA, plan costs for production orders are written to table ACDOCP. With the structural similarity of tables ACDOCA and ACDOCP, it's now easy to provide plan and actual comparisons. It's also possible to perform a target cost calculation on the fly; this ability is provided by the new event-based reporting.
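
A small sketch can illustrate the event-based value flow for a single order: every cost posting immediately triggers an overhead surcharge and a WIP posting, the goods receipt credits the WIP, and the final delivery clears the remaining WIP as a price difference. The overhead rate, the amounts, and the simplified running totals are assumptions for illustration; the real engine derives overhead from the costing sheet and posts through the accounting interface.

" Minimal sketch of the event-based value flow for one production order.
" Overhead rate and amounts are invented; real postings go through the
" costing sheet and the accounting interface.
CONSTANTS lc_overhead_rate TYPE p LENGTH 5 DECIMALS 2 VALUE '0.15'.

DATA: lv_order_debits  TYPE p LENGTH 13 DECIMALS 2, " costs incl. overhead
      lv_order_credits TYPE p LENGTH 13 DECIMALS 2, " goods receipt credits
      lv_wip           TYPE p LENGTH 13 DECIMALS 2.

" Event 1: goods issue of components for 1,000.00
DATA lv_goods_issue TYPE p LENGTH 13 DECIMALS 2 VALUE '1000.00'.
DATA(lv_overhead) = lv_goods_issue * lc_overhead_rate. " calculated in the same step
lv_order_debits = lv_order_debits + lv_goods_issue + lv_overhead.
lv_wip = lv_order_debits - lv_order_credits.           " WIP posted immediately

" Event 2: goods receipt of the finished goods for 900.00 credits the order
DATA lv_goods_receipt TYPE p LENGTH 13 DECIMALS 2 VALUE '900.00'.
lv_order_credits = lv_order_credits + lv_goods_receipt.
lv_wip = lv_order_debits - lv_order_credits.           " WIP reduced accordingly

" Event 3: final delivery / technical completion
" The remaining WIP is balanced to zero and posted as a price difference
DATA(lv_price_difference) = lv_wip.
CLEAR lv_wip.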

To improve the analytics capability, SAP introduced more logistical information to the Universal Journal by adding fields like operation (action within routing) and work center to both tables. This provides a very detailed view of critical key figures like variances and WIP to the production accountant.

14.6   Payables Management

Payables management is about the highly efficient processing of supplier invoices. The architecture of payables management focuses on the following goals:

- Reducing processing costs per invoice by maximizing automation
- Enabling the systematic use of cash discounts
- Ensuring timely payments for key suppliers or high-priority items

14.6.1   Supplier Invoicing

SAP S/4HANA integrates with invoicing applications such as Ariba Network and the application SAP Invoice Management by OpenText. Payables management in SAP S/4HANA is the central hub that matches outgoing payments with received supplier invoices. In addition to many other automation features for invoice processing, it also supports collaboration with suppliers over the timing of payment.

14.6.2   Open Payables Management

Open payables management records all supplier business transactions, such as invoices, credit memos, payments, and down payments, based on the accounts payables subledger (table BSEG). Typically, SAP S/4HANA Finance receives the financial documents of the supplier invoice type from the invoice verification component in procurement, but they can be created manually, too. The open received invoices and credit memos are recorded in accounts payable and are processed in the payment process. The company can document its payments by creating the payment documents manually, creating the payment documents via an automatic payment program, or enabling automatic debit transactions.

14.6.3   Automatic Payment Processing

Automatic payment processing (also called a payment run) is used to make periodic payments to suppliers, customers, and employees. It's a batch job that can be scheduled to process, for example, a set of invoices or credit memos due for payment. During automatic payment, clearing of the open items also takes place. As shown in Figure 14.23, the user can enter the necessary selection parameters to select financial document lines for customers and suppliers. This selection is passed on to payment proposal processing, which generates the payment proposal. The payment proposal has the following characteristics:

- It contains a list of items selected for payment.
- It can be edited via the user interface—for example, to change discounts, payment methods, and bank instructions.
- It contains a list of items that must be excluded from the selection for some reason (for example, no payment method found or a payment block set on the item or on the master data of the supplier/customer).

This payment proposal data is passed on to payment processing when the scheduled payment processing starts. Payment enqueue processing ensures that the payment of selected items is not processed simultaneously by different parallel payment processes but only once. The selected items necessary for posting the financial documents and clearing are processed. The payment data, containing the list of payment documents created and the paid document item list, is created. The payment data that is generated can be used directly to create a printout of the result, or the user can run payment medium programs to create payment mediums (such as checks, e-files, and so on). Payment data can also be displayed by the user. The creation of a payment medium can be chosen during selection when scheduling the payment program. Payment medium creation can also be achieved by the payment medium workbench (PMW), which summarizes the functionalities of the former payment medium programs.
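
The split performed by payment proposal processing can be sketched as follows: items with a usable payment method and no block go into the proposal, and everything else goes into the exception list for manual editing. The structures, fields, and checks are simplified assumptions, not the actual payment program logic.

" Minimal sketch: split selected open items into proposal entries and
" exceptions. Structures and checks are simplified assumptions.
TYPES: BEGIN OF ty_open_item,
         supplier       TYPE c LENGTH 10,
         document       TYPE c LENGTH 10,
         amount         TYPE p LENGTH 13 DECIMALS 2,
         due_date       TYPE d,
         payment_method TYPE c LENGTH 1,
         payment_block  TYPE abap_bool,
       END OF ty_open_item,
       tt_open_item TYPE STANDARD TABLE OF ty_open_item WITH EMPTY KEY.

DATA: lt_proposal   TYPE tt_open_item,
      lt_exceptions TYPE tt_open_item.

DATA(lt_open_items) = VALUE tt_open_item(
  ( supplier = 'SUP01' document = '5100000001' amount = '250.00'
    due_date = '20240715' payment_method = 'T' payment_block = abap_false )
  ( supplier = 'SUP02' document = '5100000002' amount = '980.00'
    due_date = '20240710' payment_method = ' ' payment_block = abap_false )
  ( supplier = 'SUP03' document = '5100000003' amount = '120.00'
    due_date = '20240720' payment_method = 'T' payment_block = abap_true  ) ).

LOOP AT lt_open_items INTO DATA(ls_item).
  IF ls_item-payment_block = abap_true OR ls_item-payment_method IS INITIAL.
    " No payment method found or item blocked: ends up in the exception list
    APPEND ls_item TO lt_exceptions.
  ELSEIF ls_item-due_date <= sy-datum.
    " Due item with a valid payment method: part of the payment proposal
    APPEND ls_item TO lt_proposal.
  ENDIF.
ENDLOOP.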

Figure 14.23   Automatic Payment

Once the payment run has finished successfully, the PMW is triggered either by the user or by the payment program. The items that are to be sent in the medium are sorted and grouped based on the level of detail defined by customizing settings. A note to the payee is prepared, depending on the customizing, and this is passed on to the payment medium creation process. The payment medium creation process takes the necessary payment information and creates a payment medium, depending on the selected type of the payment medium (for example, data medium exchange). In addition, payment advice notes are created as letters, IDocs, or faxes. The payment program’s customizing settings define details of payment methods, banks to use, and so on. This info is used by the payment program during the payment run.

14.7   Receivables Management

Receivables management keeps track of payments to be received by the enterprise, including customer invoices, credit memos, and down payments. Its main goals are to keep overdue receivables low, keep days of sales outstanding (DSO) close to the agreed-upon payment terms (sometimes called net DSO), and minimize write-offs for bad debts, without sacrificing possible business opportunities. To achieve high operational efficiency, receivables management automates the corresponding processes—for example, the processing of incoming payments or closing activities. Based on the existing accounts receivables backbone of SAP ERP Finance, SAP S/4HANA offers a suite of add-on applications that are fully integrated with accounts receivables processes, as well as sales. These receivables management applications include the following:

- The SAP Credit Management application helps companies implement a single company-wide credit policy and use credit analysis results to calculate credit limits, including embedding credit checks into key operational processes.
- SAP S/4HANA Cloud for credit integration is a solution that extracts information from credit reports provided by various credit agencies and automatically updates crucial organizational and credit data in your SAP Credit Management system.
- SAP S/4HANA Cloud for customer payments is an electronic bill presentment and payment solution that lets business partners download or pay invoices using bank transfer, credit card, and so on. It also allows bill recipients to see and change their account master data, as well as initiate a clarification process.
- The SAP Collections and Dispute Management application provides workflow and process support for managing disputes more effectively and establishing a proactive cash collections process by providing a prioritized worklist for cash collectors.
- SAP Cash Application is a service to match open invoice receivables to incoming payments. With machine learning algorithms, financial applications are trained to learn from historical matchings in order to achieve a higher automated matching rate for clearing incoming payments.

These offerings can pull data together from multiple SAP S/4HANA or SAP ERP Finance systems, providing a unified view of open receivables, even in a distributed systems landscape.

14.7.1   Open Receivables Management

Open receivables management is based on the accounts receivables subledger (table BSEG). Like open payables management (Section 14.6.2), open receivables management records all customer business transactions, such as invoices, credit memos, payments, and down payments. Typically, SAP S/4HANA Finance receives the financial documents of the customer invoice type from the billing component in sales, but they can be created manually, too.

Dunning

If a customer does not pay an invoice by its due date, a company can start the dunning process, which sends a dunning letter to the customer as a reminder for the payment(s) to be made. The accounts receivable subledger integrates dunning, which consists of two functions:

1. A background process used for selecting the open invoices and triggering the dunning process
2. A user interface for accessing the dunning history of an individual customer

Figure 14.24 shows the architecture of dunning. The dunning run can be started periodically. It checks customizing and business transaction data to determine whether dunning should be started.

Figure 14.24   Architecture of Dunning

When scheduling the dunning run, the range of customers and the date of dunning can be defined. The dunning run selects the customer and vendor documents (invoices and credit memos) for the given selection. Then it decides whether the item can be dunned based on due date, total amount of pending payments, open credit memos for the same business partner, and so on. The dunning proposal is created, which can be edited by the dunning processor—for example, to exclude some invoices from dunning. Dunning calculates the interest on the due amount and creates the appropriate dunning letter. The dunning letter is printed and sent to the customer by email, fax, or mail. The dunning processor can access the dunning history of a customer and also display the individual dunning notes.
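
The core decision of the dunning run (is the item past its grace period, and which dunning level applies?) can be sketched as follows. The grace days, the level determination, and all names are simplified assumptions; the real run also considers amounts, open credit memos, and the dunning procedure defined in customizing.

" Minimal sketch: decide per open item whether it is dunned and at which
" level. Grace days and the level logic are simplified assumptions.
CONSTANTS: lc_grace_days     TYPE i VALUE 3,
           lc_days_per_level TYPE i VALUE 14,
           lc_max_dunn_level TYPE i VALUE 3.

TYPES: BEGIN OF ty_item,
         customer TYPE c LENGTH 10,
         document TYPE c LENGTH 10,
         due_date TYPE d,
         amount   TYPE p LENGTH 13 DECIMALS 2,
       END OF ty_item,
       tt_item TYPE STANDARD TABLE OF ty_item WITH EMPTY KEY.

DATA(lt_open_items) = VALUE tt_item(
  ( customer = 'CUST01' document = '1800000001' due_date = '20240601' amount = '500.00' )
  ( customer = 'CUST02' document = '1800000002' due_date = sy-datum   amount = '200.00' ) ).

LOOP AT lt_open_items INTO DATA(ls_item).
  DATA(lv_days_overdue) = sy-datum - ls_item-due_date.
  IF lv_days_overdue <= lc_grace_days.
    CONTINUE. " not (yet) included in the dunning proposal
  ENDIF.
  " Simplified dunning level: one level per two weeks overdue, capped
  DATA(lv_level) = nmin( val1 = lv_days_overdue DIV lc_days_per_level + 1
                         val2 = lc_max_dunn_level ).
  " ... add the item to the dunning proposal with level lv_level
ENDLOOP.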

Bank Statement Processing and Cash Application

Bank statement processing provides the functionality to automatically import or manually enter bank statements, as well as other bank information, such as check deposit transactions, lockbox data, and account balances. Figure 14.25 shows the bank statement processing architecture overview. Bank statements can be created in SAP S/4HANA Finance via different channels, such as automatic file transfer by SAP Multi-Bank Connectivity, file upload in the Manage Incoming Payment Files SAP Fiori app, or manual entry in the Manage Bank Statements SAP Fiori app. After creation, automatic processing is started, which analyzes the bank statement data and performs automatic posting and clearing. This automation can be configured either by traditional rules (interpretation algorithms, posting rules) or by the new bank statement reprocessing rules engine, explained in the next section. If the automatic processing for a bank statement item fails, such as when the invoices to be cleared are not found automatically, there are two options: either the item is manually reprocessed in the Reprocess Bank Statement Items SAP Fiori app, or a machine learning-based automatic reprocessing is initiated by SAP Cash Application, discussed ahead.

SAP Bank Statement Reprocessing Rules

The bank statement reprocessing rules engine allows for the fully system-guided creation of new rules to automate future occurrences of bank statement items that fulfill the same condition. After the manual reconciliation of a bank statement item, the accounting clerk is asked to create a rule that is derived from the posting that has just been made. The bank statement reprocessing rules are integrated with the Reprocess Bank Statement Items SAP Fiori app. It's also possible to propose bank statement reprocessing rules based on intelligent technology: the system analyzes historic clearing data and creates a reprocessing rules template that reflects the recent treatment of bank statement items in disjoint logical groups. After the creation of a reprocessing rules template, these rules can be validated on live data, shared among groups, and applied automatically to new bank statement items imported into SAP S/4HANA.

Figure 14.25   Bank Statement Processing Architecture

SAP Cash Application

SAP Cash Application enables finance departments to use their historic clearing data to automate future bank statement clearing processes. Changes in the current process can easily be included by retraining the model. This lowers TCO and allows full automation of the bank statement process. SAP Cash Application consists of the machine learning application on SAP Cloud Platform and the integrated components in SAP S/4HANA (see Figure 14.26). Depending on the confidence level of a particular assignment proposal, SAP Cash Application either performs automatic clearing or files a proposal to be confirmed by the accounting clerk. A corresponding confidence level for assignment proposals can be defined for each company code. SAP Cash Application learns from each bank statement item that the accounting clerks reconcile manually and uses this knowledge to provide clearing proposals in the future. A user interface is available that shows key users the result of a new training and how it performs compared to the currently trained model in productive use. SAP Cash Application also provides features to extract payment advices received as PDF or Microsoft Excel files. It can turn the received information into structured data so that it can be used fully automatically in the clearing process of aggregated payments.
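
At its simplest, the decision between automatic clearing and a clerk-facing proposal is a threshold check per company code. The following sketch illustrates that routing; the structures, names, and threshold values are assumptions for illustration, not the delivered implementation.

" Minimal sketch: route a matching proposal either to automatic clearing
" or to manual confirmation, based on a company-code-specific confidence
" threshold. Names, structures, and thresholds are illustrative assumptions.
TYPES: BEGIN OF ty_threshold,
         company_code TYPE c LENGTH 4,
         auto_clear   TYPE p LENGTH 3 DECIMALS 2, " e.g. 0.95
       END OF ty_threshold,
       tt_threshold TYPE SORTED TABLE OF ty_threshold WITH UNIQUE KEY company_code,
       BEGIN OF ty_proposal,
         company_code TYPE c LENGTH 4,
         bank_item    TYPE c LENGTH 20,
         invoice      TYPE c LENGTH 10,
         confidence   TYPE p LENGTH 3 DECIMALS 2, " returned by the matching service
       END OF ty_proposal.

DATA(lt_thresholds) = VALUE tt_threshold(
  ( company_code = '1010' auto_clear = '0.95' )
  ( company_code = '2020' auto_clear = '0.90' ) ).

DATA(ls_proposal) = VALUE ty_proposal( company_code = '1010'
                                       bank_item    = 'BS2024-000042-005'
                                       invoice      = '1800000099'
                                       confidence   = '0.97' ).

IF ls_proposal-confidence >=
   lt_thresholds[ company_code = ls_proposal-company_code ]-auto_clear.
  " High confidence: clear the open item automatically
ELSE.
  " Lower confidence: file the proposal for confirmation by the accounting clerk
ENDIF.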

Figure 14.26   SAP Cash Application

14.7.2   Credit Evaluation and Management

Often, goods or services sold to customers are given on credit. To limit the risk of losing money, enterprises assess a customer's credit rating before selling on credit. This is managed by the credit management application, which is part of receivables management. In SAP S/4HANA, the former credit management functionality of sales and distribution is no longer available. The credit management functionality enables companies to operate centralized credit management in a distributed system landscape with multiple SAP S/4HANA Finance systems. It takes into account both internal and external credit information. Using credit management, you can collect, manage, and analyze credit information from application components inside SAP S/4HANA, as well as from integrated finance systems. Its architecture enables companies to distribute and use the credit information effectively in the connected systems and components—for example, sales, logistics, and financial accounting. The data reported is consolidated in credit management and is available for many types of credit checks at the business partner or business partner group level. Examples of the cross-system and cross-component information flow include the following:

- Credit exposure updates from accounts receivable accounting and sales to credit management
- Credit checks in credit management on creation of a sales order in sales

In a multisystem landscape, the data exchange can be processed via SOAP web service interfaces, enabling a connection with SAP S/4HANA, SAP S/4HANA Cloud, SAP ERP, and non-SAP systems. The exchange of messages can be made using SAP Process Integration (SAP PI) or—for all synchronous and for certain asynchronous service interfaces—directly using the Web Services Reliable Messaging (WS-RM) protocol as a point-to-point connection. The central part of credit management is credit limit management and control, which is integrated with the following components (see Figure 14.27):

- The credit rules engine calculates and assigns a business partner-specific score and limit based on a company-defined credit score card. External ratings of business partners can be incorporated using SAP S/4HANA Cloud for credit integration for credit agency integration. The rules engine supports the automatic determination of the creditworthiness of each business partner, impacting payment conditions and terms of sale.
- Credit events automate the monitoring process by setting up a process chain in which follow-on activities are triggered by predefined credit events. For example, if the score has changed, the application can trigger the redetermination of the risk class and, as a result, the recalculation of the credit limits of the credit account.
- Credit decision support collects the necessary information about the credit data of business partners and credit use information and provides the necessary data for fact sheets, credit history, and worklists. The main objects are the credit case for the structured processing of credit limit applications and the documented credit decision (DCD). The DCD allows for an efficient decision-making approach to releasing a blocked sales document and a comprehensive analysis of the credit status before making a decision.

When a customer-related business transaction such as a sales order is to be created, the corresponding application requests a credit check in the credit management system (see Figure 14.27). Credit limit management and control processes this request and reports back to the requesting application whether the credit can be granted, based on the existing credit information of the business partner. If the credit is not granted, sales blocks that particular business transaction and a DCD is created.

Figure 14.27   Architecture of Credit Evaluation and Management

The credit exposure information of a customer is updated when one of the following occurs:

- A delivery is posted
- An invoice is created
- The invoice is sent to financial accounting
- The payment for the invoice is made by the customer

When the credit exposure information changes, the credit limit of the business partner is adjusted in real time. For creditworthiness assessment and reporting, various analysis reports are available as SAP GUI transactions, as well as SAP Fiori apps for credit management with instant insight into business transactions to display and analyze credit-relevant business partner information. In addition, credit management provides stories for SAP Analytics Cloud.
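
At its core, a credit check compares the business partner's current credit exposure plus the requested document value against the credit limit. The sketch below reduces this to a single rule; the structure, values, and decision logic are simplified assumptions (the real check is driven by the credit rules engine and the configured check rules).

" Minimal sketch of a credit check as requested when a sales order is
" created. Structure, values, and the single rule are simplified.
TYPES: BEGIN OF ty_credit_account,
         business_partner TYPE c LENGTH 10,
         credit_limit     TYPE p LENGTH 15 DECIMALS 2,
         exposure         TYPE p LENGTH 15 DECIMALS 2, " open orders, deliveries, receivables
       END OF ty_credit_account.

DATA(ls_account) = VALUE ty_credit_account( business_partner = 'BP100'
                                            credit_limit     = '50000.00'
                                            exposure         = '47200.00' ).

DATA lv_order_value TYPE p LENGTH 15 DECIMALS 2 VALUE '4500.00'.

IF ls_account-exposure + lv_order_value <= ls_account-credit_limit.
  " Credit granted: the sales order is processed and the exposure is updated
ELSE.
  " Credit not granted: sales blocks the order and a documented credit
  " decision (DCD) is created for the credit analyst
ENDIF.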

14.7.3   Customer Invoicing

SAP S/4HANA Cloud for customer payments provides a self-service offering for electronic bill presentment and payment (EBPP). Companies can use the cloud-based solution to display bills, credits, payments, and account balances to their customers online. It further allows bill recipients to maintain their customer data, initiate payments, and create disputes. SAP S/4HANA Cloud for customer payments is a SaaS solution built on SAP Cloud Platform that is integrated with SAP S/4HANA Financials via an OData service (see Figure 14.28). The web user interface includes various apps of the customer payments solution and can be launched in SAP Fiori launchpad or it can be integrated through inline frames (iframes) into third-party portal applications.

Figure 14.28   Architecture of SAP S/4HANA Cloud for Customer Payments

The solution supports the following services:

- Read and update customer data, such as bank details and address. Based on the updated information provided by the portal user, the corresponding customer master data in accounts receivables is updated instantly.
- Process invoices: read open customer invoices, credit memos, and paid bills from accounts receivables. The bills can originate from accounts receivables or sales. In detail, users can view, download, and fully or partially clear invoices with payments; download attachments for invoices; view notes to payee; create and view disputes on invoices and upload and download attachments for disputed invoices; leave comments on invoices; and create and manage payment advices.
- Make full or partial payments using credits, bank transfers, direct debits, credit cards, PayPal, or custom payment methods by calling the accounting interface.
- In the simple call center mode, accounting clerks have the option to log on to the portal and directly make searches after they receive a phone call with account information.

Customer payments apps can be opened via iframe in third-party portal applications. The solution allows the user to simply pay the total, pay on the invoice level, or make partial payments with a payment method of their choice. On the application side, this information is then sent to the relevant backend interface, which sets the payment method on the paid invoice in accounts receivables. For credit card payments and other real-time payments, an integration with the SAP digital payments add-on is delivered (for details, see Section 14.8.1). Automatic payment then clears this paid invoice in accounts receivables. A document for which the payment method was set by the portal payment cannot be cleared by manual clearing transactions. If the user chooses to pay partially, the backend calls the accounting interface to create a partial payment document in accounts receivable with a payment method set on it. The user can also opt to create a dispute during this process for this partially paid invoice, which will call the BAPI interface in SAP S/4HANA Finance to create a dispute case.

14.7.4   Dispute Resolution

Dispute management capabilities are part of receivables management. They provide functions to process discrepancies between the selling organization and customers regarding the customer's financial obligation—in other words, the dispute case. The solution stores all relevant information in a dispute case: reason code, priority, processing deadlines, free-text notes, and attachments from the various departments involved in the clarification process. To do so, it creates a dispute case using the records and case management reuse module, which acts as a repository to create, store, and maintain these electronic files. Dispute cases can be created in three different ways (see Figure 14.29):

- Directly from various accounts receivables processes (that is, out of a manual cash application, lockbox postprocessing, or account clearing transaction)
- From the SAP S/4HANA Cloud for customer payments portal via the dispute case create BAPI
- In multiple-system scenarios, dispute cases can be created and updated using IDocs

A dispute case can be routed to various processors using SAP Business Workflow. When a processor of a dispute case views the dispute, he or she can navigate to linked objects. The linked object holds references to the corresponding financial accounting data, such as customers, invoices, and cleared items. The dispute processor can create notes for the dispute case and add attachments. The processor can access and change various dispute case attributes, such as reason, escalation reason, status, and priority. Dispute management is tightly integrated into financials processes. Incoming payments, credits given, and write-offs created in accounts receivable transactions automatically update the dispute case—more precisely, the remaining disputed amount. Thus, the dispute case always shows the latest payment status. It also supports manual or automatic correspondence to the customer by letter, email, or fax. Using the corresponding SAP Fiori app, the information about a dispute case is available on one screen, including critical case attributes, notes, attachments, and linked objects. For root-cause analysis and progress review, various reports are available as SAP GUI transactions, as well as SAP Fiori apps with instant insight into business transactions to display and analyze dispute-relevant customer information, for example, why disputes are raised, by which customers, and how long they take to process. In addition, dispute management provides stories for SAP Analytics Cloud.

Figure 14.29   Architecture of Dispute Management

14.7.5   Collection Management

Collections management is part of receivables management, too. Companies use collections management to implement a proactive cash collection process. According to specified rules, collections management selects and evaluates business partner accounts for collecting outstanding debts. Collections management automatically generates a worklist of past-due customer accounts ranked in order of severity, following a flexibly defined collection strategy. Collections management acts as a type of central hub, with multiple receivables management applications connected to it. In this scenario, when receivables management processes invoices and payments, it sends an IDoc message with this data to the central collections management. Based on this data, collections management creates collection items. The work list generator batch job runs periodically. It selects collection items per customer and then prioritizes the list of customers according to the collection strategy. The collection strategy is defined by rules such as a credit rating limit, an invoice overdue for one month or more, or an amount to be collected greater than 1,000 EUR. Companies can implement additional collection strategies as needed. The final list is stored as a collections work list. The work list user interface displays the prioritized list of business partners to the specialist who is responsible for collecting the amount. The Process Receivables screen allows collectors to review customer accounts, promises to pay, dispute cases, customer contact information, and so on. The collection specialist works through their individual work list, calls the customer with the pending payment details, and updates the collection items based on the customer response. Based on the response, the collection specialist creates one of the following business documents:

- A resubmission of the work-list item by the next notification date
- A promise to pay the due amount on a particular date
- A dispute case requesting more clarification
- Single or mass correspondence according to receivables management correspondence types to share with the customer (such as an open item list or a dunning letter)
- A note to summarize the outcome of the collection call

Using the corresponding SAP Fiori app Process Receivables, a collection clerk gets all necessary information combined on one single screen, including invoices, payments, and information about the collection history. Collections management is tightly integrated into the accounts receivables and treasury and risk management processes. For progress review, various reports are available as SAP GUI transactions, SAP Fiori apps for collections management, and stories for SAP Analytics Cloud.
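
The prioritization performed by the work list generator can be sketched as a simple scoring over the strategy rules mentioned above; the weights, rule values, and structures are assumptions for illustration, not the delivered strategy logic.

" Minimal sketch of collection work list prioritization: each customer is
" scored against simplified strategy rules and sorted by the result.
" Weights and rule values are illustrative assumptions.
TYPES: BEGIN OF ty_customer,
         customer         TYPE c LENGTH 10,
         credit_rating    TYPE i,                       " lower means riskier
         max_days_overdue TYPE i,
         amount_overdue   TYPE p LENGTH 15 DECIMALS 2,
         score            TYPE i,
       END OF ty_customer,
       tt_customer TYPE STANDARD TABLE OF ty_customer WITH EMPTY KEY.

DATA(lt_customers) = VALUE tt_customer(
  ( customer = 'CUST01' credit_rating = 20 max_days_overdue = 45 amount_overdue = '12000.00' )
  ( customer = 'CUST02' credit_rating = 80 max_days_overdue = 10 amount_overdue = '800.00'   ) ).

LOOP AT lt_customers ASSIGNING FIELD-SYMBOL(<ls_customer>).
  IF <ls_customer>-credit_rating < 50.          " poor credit rating
    <ls_customer>-score = <ls_customer>-score + 30.
  ENDIF.
  IF <ls_customer>-max_days_overdue >= 30.      " invoice overdue for a month or more
    <ls_customer>-score = <ls_customer>-score + 40.
  ENDIF.
  IF <ls_customer>-amount_overdue > 1000.       " amount to be collected > 1,000 EUR
    <ls_customer>-score = <ls_customer>-score + 30.
  ENDIF.
ENDLOOP.

" Highest score first: this order becomes the collection specialist's work list
SORT lt_customers BY score DESCENDING.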

14.7.6   Convergent Invoicing

Convergent invoicing (CI) implements the invoicing process from the consumption of services (including phone calls, software downloads, toll fees, and parking tickets), through pricing and billing, right up to the dispatch to the customer of an invoice containing all consumed services from the various billing systems. Convergent invoicing is designed to enable scale-out for large volumes—for example, several million consumption records per day. It supports various contract models, such as prepaid or postpaid subscriptions, including recurring fees and usage in B2C; revenue sharing and partner settlement to onboard channel partners (resellers, app developers, artists, record labels); B2B master agreements and billing for enterprise customers with volume-based discounts over customer hierarchies; and outcome-based billing models. The data is processed in a fully automated way. The clarification framework controller and enhanced message management detect errors and exceptions automatically and create clarification cases for manual follow-up activities. Convergent invoicing can scale to very high volumes of consumption records. Based on rules defined in the system configuration, the consumption records are aggregated during data processing, while detailed consumption information can be retrieved at any time.

Convergent invoicing is integrated with contract accounting, SAP's high-volume subledger for accounts receivable and payable. Convergent invoicing creates an invoice for the business partner and posts this total amount to the contract account of the business partner as a receivable, using automated account determination for financial accounting and controlling. You manage the receivable or payable using the standard processes of contract accounting to collect the receivable from the customer or refund a credit amount to a partner, such as automated processing of incoming and outgoing payments, dunning, and automatic account maintenance. The functional value of convergent invoicing tightly connected to contract accounting is based on a multitude of embedded accounting and accounts receivable/accounts payable processes that support streamlined handling of the postings generated in convergent invoicing and provide an optimized takeover of accounts receivable processes already during invoicing to reduce overall processing time.

Architecture Overview

Convergent invoicing provides the capability to merge billing content from various metering and billing sources into any structure holding the information from these sources that might have an impact on the billing and invoicing logic. There are three main process steps; the use of convergent invoicing can start in any one of these (see Figure 14.30):

1. Rating
2. Billing
3. Invoicing

Figure 14.30   Detailed Architecture of Convergent Invoicing

Rating processes unrated consumption items, representing, for example, a call detail record (CDR) or a software download notification, and determines the consumption price a business partner is to be billed. Unrated consumption items contain a technical identifier of the consumed service; a technical resource ID such as a phone number, car license plate, or email address; a consumption date; and the measured consumption itself. The unrated consumption items contain neither a price nor information on how the business partner has to pay for the consumption. The rating process determines the business partner (payer) and the price of the consumption. The rating process has a default implementation to call the SAP Convergent Charging application through a bulk web service for pricing and determination of the payer. Alternatively, the consumption items can be rated using the pricing technique of sales. To reduce the load, consumption items can be aggregated prior to the final rating, according to the configuration. The result of the rating process is a set of billable items.
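
Rating can be pictured as enriching each unrated consumption item with a payer and a price, turning it into a billable item. The sketch below uses a flat rate per service and a fixed payer; in the standard implementation this determination is delegated to SAP Convergent Charging, and all names and values here are assumptions.

" Minimal sketch of rating: enrich unrated consumption items with payer and
" price, producing billable items. Names, rates, and the fixed payer are
" illustrative; the default implementation calls SAP Convergent Charging.
TYPES: BEGIN OF ty_consumption_item,
         service     TYPE c LENGTH 10,  " technical id of the consumed service
         resource_id TYPE c LENGTH 20,  " e.g. phone number or license plate
         consumed_on TYPE d,
         quantity    TYPE p LENGTH 10 DECIMALS 3,
       END OF ty_consumption_item,
       tt_consumption_item TYPE STANDARD TABLE OF ty_consumption_item WITH EMPTY KEY,
       BEGIN OF ty_billable_item,
         payer    TYPE c LENGTH 10,
         service  TYPE c LENGTH 10,
         quantity TYPE p LENGTH 10 DECIMALS 3,
         amount   TYPE p LENGTH 13 DECIMALS 2,
       END OF ty_billable_item,
       tt_billable_item TYPE STANDARD TABLE OF ty_billable_item WITH EMPTY KEY.

DATA(lt_unrated) = VALUE tt_consumption_item(
  ( service = 'VOICE' resource_id = '+49160123456' consumed_on = '20240710' quantity = '12.400' )
  ( service = 'DATA'  resource_id = '+49160123456' consumed_on = '20240710' quantity = '2.050'  ) ).

DATA lt_billable TYPE tt_billable_item.
DATA lv_rate     TYPE p LENGTH 7 DECIMALS 4.

LOOP AT lt_unrated INTO DATA(ls_unrated).
  " Price and payer determination, here reduced to fixed values
  IF ls_unrated-service = 'VOICE'.
    lv_rate = '0.09'.
  ELSE.
    lv_rate = '3.50'.
  ENDIF.
  APPEND VALUE ty_billable_item( payer    = 'BP4711'
                                 service  = ls_unrated-service
                                 quantity = ls_unrated-quantity
                                 amount   = ls_unrated-quantity * lv_rate )
         TO lt_billable.
ENDLOOP.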

Billing manages billable items and groups them into billing documents. Billing documents are further processed by invoicing, resulting in an invoice to the business partner. To create structured bill content for the subsequent invoicing process, based on system configuration, the billing process aggregates billable items into billing document items—for example, all usage records for a month. During the billing process, the system can create additional billable items for volume discount calculations that span multiple billing cycles, such as when a monthly billing cycle is used but a discount is calculated on an annual basis. Invoicing creates convergent invoices for business partners and posts the invoice amounts in contract accounting. Apart from the billing documents created by the billing process of convergent invoicing, invoicing can merge additional invoice information into a customer invoice, such as billing documents from sales, net calculation documents or tax calculation documents from the public sector, billing-relevant documents from banking, and contract accounting posting documents, to name a few. Convergent invoicing also offers APIs (RFCs/BAPIs, web services) and a file upload to generate consumption items and billable items from external systems. If the consumption data is already linked to an SAP master data entity and if the price is known up front, then billable items can be sent as there is no need for the rating of consumption items. On the other hand, if the price is dependent on the total consumption of a month, then uploading and aggregating consumption items prior to rating is necessary. Rating systems such as SAP Convergent Charging provide high availability and lean APIs for communication with the technical infrastructure that measures the service consumption (for example, the telecommunication infrastructure to manage mobile services). The rating systems can either rate consumption items directly and send rated consumptions/billable items to convergent invoicing (for example, a prepaid usage rating) or send unrated consumption items for batch rating (for example, a postpaid usage rating at month end). Billing documents can be transferred to convergent invoicing by using BAPIs. In a complex integration scenario, a combination is possible; that is, some business segments might upload consumption items for final rating at the end of the billing cycle and others might upload billable items for billing. Rating, billing, and invoicing can generate new consumption items and billable items automatically. When billable items are uploaded to convergent invoicing, some of the items might be relevant for revenue sharing and partner settlement. In this case, the convergent invoicing splits off dependent items, either as unrated consumption items if rating is required or unbilled billable items if the revenue share is known. Similar dependent items are created for the purpose of intercompany settlement. In convergent invoicing, billing plans cover one-time fees, recurring fees, and down payment requests. They can be created solely as standalone objects or be integrated with the provider contract. When a billing plan is processed, the system creates billable items in an unbilled status, potentially with dependent items for partner settlement and/or intercompany settlement. For posting unbilled revenues, convergent invoicing can create and process revenue accounting items. Convergent invoicing triggers deferred revenue postings (time- and event-based) in contract accounting. 
In the common business scenario of offering prepaid or postpaid buckets of consumption items (for example, allowances), convergent invoicing is the central source from which to derive the appropriate accrual and revenue recognition postings for the refill and usage of transactions in a bucket. To cope with a requirement of digital business models, invoices can contain an itemized list of every consumed transaction in the billing cycle to substantiate the basis for legal invoices and their billing content. Invoice corrections and requests for credit memos in the event of a dispute can be captured down to the transactional level of a billable item. If an invoice is incorrect, it can be corrected using billing tasks for invoice correction. In the processing of these billing tasks, the application ensures that the new invoice is linked to the corrected invoice and labeled as a correction.

Integration Architecture

Convergent invoicing is a central component of the SAP Billing and Revenue Innovation Management solution, which consists of multiple SAP S/4HANA and other components covering the end-to-end processes to monetize digital business models for subscriptions, pay-per-use, and outcome-based billing. The components of the end-to-end solution are integrated smoothly into SAP S/4HANA (see Figure 14.31).

Figure 14.31   Components of SAP Billing and Revenue Innovation Management

SAP Billing and Revenue Innovation Management, subscription order management serves as the subscription management component for product modeling; subscription order capturing; and change processes of subscriptions such as renewals, contract termination, and adding or eliminating options. Contract accounting-specific master data entities such as the contract account or the provider contract are first created in subscription order management and then distributed to contract accounting. If SAP Convergent Charging is connected, the contract accounting inbound master data adapter forwards the master data creation requests to SAP Convergent Charging. SAP Convergent Charging sends unrated and rated consumption items (CITs), as well as unbilled billable items (BITs). Unrated consumption items are sent either event-based or in bulk, if a total consumption for the billing period must be computed to derive the correct price. The rating process comes with a standard implementation for calling the SAP Convergent Charging web service to determine the price and the payer of unrated consumption items. SAP Convergent Mediation by DigitalRoute is a solution that can be connected to convergent invoicing as middleware; it stages consumption and billable items before they are sent to convergent invoicing and orchestrates the API calls that send this data to convergent invoicing.

Convergent invoicing integrates with SAP S/4HANA Sales for billing bundles of services and hardware deliveries to the customer, whereby billing documents created in sales are transferred to convergent invoicing for posting and printing in one invoice. Using the optional functionality in the solution SAP S/4HANA for billing and revenue innovation management, as well as the additional sales and distribution option, convergent invoicing can transfer its prebilling content to sales billing and posting in accounts receivables. The rating process comes with a second standard implementation to call the sales pricing module. The billing plan can also be integrated with pricing to determine the amount of the one-time and recurring fees. To handle the requirements of revenue recognition and to comply with IFRS 15/ASC 606 legal accounting, convergent invoicing is integrated with revenue accounting and reporting. Convergent invoicing creates the necessary fulfillment and invoice revenue accounting items (RAIs). The creation of a performance obligation in revenue accounting is triggered either via the creation of a provider contract or via the creation of a billable item with the necessary information.

14.7.7   Contract Accounting

Contract accounting (FI-CA) is a subledger tailored toward the requirements of industries and lines of business with high volumes of customers and transactions in B2C business models, such as utilities, insurance, retail, high tech, telecommunications, and the public sector. In a B2C scenario, contract accounting implements the typical functions of accounts receivable accounting, such as posting and receivables processing, with the aim of processing large document volumes. Receivables processing includes payment handling and collections management. Contract accounting also implements functions of accounts payable accounting, facilitating the disbursement of payments to large numbers of recipients. In addition, by using master agreements and partner agreements, contract accounting supports complex B2B scenarios with flexible contract models and conditions. Master agreements specify agreements and conditions by which individual transactions in the master agreement must abide. Partner agreements represent a company's agreements with service providers. Attributes of the partner agreement control, for example, when a partner receives their share or whether the partner bears the default risk.

Contract accounting processes data with a high degree of automation and a high throughput that is achieved by applying business rule automation, parallel processing of background jobs, and data scalability. You can use intelligent situation handling (see Chapter 4, Section 4.3) to inform users about issues that occurred during the processing of scheduled mass activities. Situations are triggered when the error rate or runtime exceeds preconfigured limits or when critical messages occur during mass processing. Flexible configuration options, provided through what are known as posting areas in customizing, enable the application to automatically determine accounts for automatic postings or default values for transactions. If exceptions occur during automatic processing, contract accounting creates clarification cases for manual follow-up activities.

A flexible master data model in contract accounting allows individual contract models and payment conditions for a business partner. Contract accounting uses a business partner in the role of contract partner (see Chapter 8, Section 8.3). The contract partner supports the use of multiple addresses for the same business partner for correspondence and the delivery of services.

As a contract partner receives services from a company and is billed for these services, the contract partner is connected to the company by contract accounts and by the conclusion of contracts. You define control and payment data on the contract partner. In addition to bank details, the contract partner stores general data, such as the business partner type and creditworthiness. Contract accounting determines and updates the creditworthiness automatically. Each business partner posting is assigned to one business partner and to one contract account. For each business partner, the contract account master record defines the procedures that apply when posting and processing the line items of a given contract account. These include, for example, payment and dunning. Each contract partner can be assigned several contract accounts with different payment conditions.

Example: One contract account contains a Single Euro Payments Area (SEPA) direct debit payment method, so all posted receivables of this account are automatically cleared by the payment run (company-initiated payments). Another contract account for the same business partner contains no payment method but a dunning procedure, because the receivables will be paid by the customer via bank transfer (customer-initiated payments).

To support an even more finely differentiated master data model, you can use contracts in addition to contract accounts. While contract accounting is integrated with various industry solutions through the use of industry-specific contracts, such as the contract object in public sector tax and revenue management, the utilities contract, or the insurance object, it also comes with a generic contract object, the provider contract, when you use contract accounting standalone. A provider contract is a generic contract that comprises all legally binding agreements regarding the provision and billing of services that are entered by a customer and a company for a specified period of time, including payment and dunning specifications. For each business partner, contract accounting provides an account balance and reporting on the contract account level and contract level.

Payment handling comprises the following business processes:

- Company-initiated payments, which cover automated processing of all outgoing and incoming payments for customers who have granted the company authorization
- Customer-initiated payments, which cover automated processing of payments made by customers through the bank; the incoming payments are processed by lots
- The cash desk and cash journal, in which customers make payments directly to the company

For secure handling of credit card payments and processing of payments via payment service providers (PSPs), contract accounting can be integrated with the SAP digital payments add-on. Collection activities of financial accounting rely on business logic defined in the business configuration. The system derives the relevant collection activities for each customer, such as sending dunning reminders, creating work items for the collection specialist,

and submitting receivables to external collection agencies or to third-party applications for legal dunning procedures. Collection specialists can gain a detailed overview of their customers and access to all the necessary tools and functions for the required measures—for example, creating installment plans. Figure 14.32 illustrates which SAP S/4HANA components integrate with contract accounting. As explained, contract accounting is closely integrated with convergent invoicing. Convergent invoicing consolidates billing and invoicing for all services, including usage, subscriptions, and channel partner shares, into a single invoice for the customer. It then posts this total amount to the contract account of the business partner as a receivable. In addition, you can transfer billing documents from sales to contract accounting or post documents from settlement management.

Figure 14.32   Integration of Contract Accounting within SAP S/4HANA

During posting in contract accounting, the transaction figures are not automatically updated in the general ledger. For performance reasons and to limit the document volume in the general ledger, postings are transferred periodically in summarized form to the general ledger. Margin analysis is updated automatically when you transfer the postings to the general ledger. Contract accounting can be operated as a hub solution. That is, if you run contract accounting in a separate system, you can update the general ledger of this system while replicating all contract accounting postings to a central SAP S/4HANA system in which you consolidate and run your general ledger operations. Contract accounting updates cash management every time you post a document. This means that the cash management liquidity forecast and cash position are always up to date.
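The following snippet is a minimal sketch of this summarization idea: contract accounting items are aggregated per reconciliation key and general ledger account so that only one totals line per account would be transferred. Structures and field names are illustrative placeholders, not the actual FI-CA transfer logic.

TYPES: BEGIN OF ty_fica_item,
         recon_key  TYPE c LENGTH 12,        " reconciliation key of the transfer run
         gl_account TYPE c LENGTH 10,
         amount     TYPE p LENGTH 15 DECIMALS 2,
       END OF ty_fica_item,
       tt_fica_item TYPE STANDARD TABLE OF ty_fica_item WITH EMPTY KEY.

DATA(lt_items) = VALUE tt_fica_item(
  ( recon_key = '231001A' gl_account = '0000140000' amount = '100.00' )
  ( recon_key = '231001A' gl_account = '0000140000' amount = '250.00' )
  ( recon_key = '231001A' gl_account = '0000800000' amount = '-350.00' ) ).

" One summarized general ledger transfer line per reconciliation key and account
DATA lt_summarized TYPE tt_fica_item.
LOOP AT lt_items INTO DATA(ls_item).
  READ TABLE lt_summarized ASSIGNING FIELD-SYMBOL(<ls_total>)
       WITH KEY recon_key = ls_item-recon_key gl_account = ls_item-gl_account.
  IF sy-subrc <> 0.
    APPEND VALUE #( recon_key  = ls_item-recon_key
                    gl_account = ls_item-gl_account ) TO lt_summarized
           ASSIGNING <ls_total>.
  ENDIF.
  <ls_total>-amount = <ls_total>-amount + ls_item-amount.
ENDLOOP.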

You can forward actual data from contract accounting to funds management on a totals basis and thereby manage budget planning. Contract accounting supports credit scoring. You can put your customers into segments with regard to their credit risk and payment behavior. Segmentation takes place in credit management (Section 14.7.2), in which external credit information and internal credit information, such as the length of the business relationship and the payment behavior, are considered. The external and internal credit information from the different systems is saved in the master data of the business partner. This risk-based segmentation has an influence on the collection process. By integrating contract accounting with dispute management (Section 14.7.4), you can create and manage complaints from contract partners with regard to incorrect invoices and credits or missing payments and credits as dispute cases. Contract accounting can use debt recovery management (FIN-FSCM-DR) to collect overdue receivables using internal collection departments. By integrating financial customer care (CRM-IC-FCA), you can provide your service center agents with access to your customers’ financial data from a web-based user interface. There are various message interfaces for exchanging data between contract accounting and external billing and order management systems. The SAP Global Trade Services application can be connected to contract accounting to comply with legal regulations by using sanctioned party list screening in handling your payment transactions. To be able to run only one subledger and process payments only for one subledger while benefiting from the functional scope of contract accounting, it’s possible to configure SAP S/4HANA Finance to transfer postings from contract accounting to accounts receivable accounting (FI-AR) or accounts payable accounting (FI-AP). These transfer postings can help during a subledger migration from accounts receivable accounting or accounts payable accounting to contract accounting, or when only a certain business is to be reflected in contract accounting using convergent invoicing integration.

14.8   Treasury Management

Running a treasury department is mostly a centralized task. Usually organized as a central team close to the CFO, the treasury team supports the CFO in core tasks. SAP S/4HANA Finance addresses these tasks with a set of applications to:

- Manage the relationships with the banks a company does business with, including management of the company's bank accounts and running a (bank) credit limit management
- Control the connection to banks and payment service providers in the payment process with outgoing payment files and incoming bank account statements, including as an intermediate and control instance for other payment-triggering systems (SAP and third-party systems)
- Run a (central) cash management, collecting cash position and cash flow information of bank accounts in the system instance itself or of remotely managed bank accounts
- Fund and manage the company's debts internally and externally from end to end
- Manage financial risks such as foreign exchange risk or interest rate risk centrally, including for subsidiaries, and provide access to the financial markets via the headquarters or a specialized funding company
- Manage the company's financial assets, from small to large asset managers

14.8.1   Advanced Payment Management

Payments are typically initiated from several applications and systems across an enterprise's system landscape, which has usually grown over many years. Typical system landscapes include modern SAP S/4HANA software, older SAP Business Suite applications, and third-party systems. All these payment-initiating systems need to connect to the organization's banks. Doing this directly out of each application and system is not efficient, doesn't provide full visibility into cash positions and payment statuses, and results in a complex bank integration setup. Using the capabilities in the solution SAP S/4HANA Finance for advanced payment management, payment processing can be centralized in a payment factory directly within SAP S/4HANA (see Figure 14.33). This provides full control over all payments, including payment routing, payment approvals, and status monitoring, and enables accurate cash management as well. On top of that, bank integration can be established out of one central system instead of multiple point-to-point connections. Advanced payment management supports four payment scenarios (a simplified routing sketch follows this list):

1. Payments in name of (forwarding): In this scenario, the invoice is paid from bank accounts that belong to the same company code as the one in which the invoice was registered. The sending group affiliate has already specified which bank account should be used for executing the payment.
2. Payments in name of (routing): In this scenario, the invoice is paid from bank accounts that belong to the same company code as the one in which the invoice was registered. The sending group affiliate has not predefined the bank account to be used. Advanced payment management therefore determines which account should be used for executing the payment, as well as which payment instrument. It thus routes the payment to the proper account.
3. Internal payments: In this scenario, a payment is executed between group affiliates without using an external bank transfer.
4. Payments on behalf of: In this scenario, the sending group affiliate pays using an account of the group. Advanced payment management therefore determines which account to use, as well as which payment instrument. It then creates the relevant postings. A transfer of assets/liabilities from one company code to another takes place. In other words, one company code pays on behalf of another company code.
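A minimal sketch of the resulting routing decision could look as follows. The structure and the determination order are illustrative assumptions, not the actual configuration model of advanced payment management.

TYPES: BEGIN OF ty_payment_order,
         ordering_company  TYPE string,      " company code that registered the invoice
         account_company   TYPE string,      " company code owning the paying bank account
         bank_account      TYPE string,      " filled if the sender already chose the account
         counterparty_type TYPE string,      " 'EXTERNAL' or 'AFFILIATE'
       END OF ty_payment_order.

DATA(ls_order) = VALUE ty_payment_order( ordering_company  = '1000'
                                         account_company   = '1000'
                                         counterparty_type = 'EXTERNAL' ).

DATA(lv_scenario) = COND string(
  WHEN ls_order-counterparty_type = 'AFFILIATE'
    THEN `Internal payment (in-house transfer, no external bank)`
  WHEN ls_order-ordering_company <> ls_order-account_company
    THEN `Payment on behalf of (intercompany postings required)`
  WHEN ls_order-bank_account IS NOT INITIAL
    THEN `Payment in name of - forwarding (account already chosen by the sender)`
  ELSE  `Payment in name of - routing (account and payment instrument determined here)` ).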

Figure 14.33   Payment Management Architecture

Integration into advanced payment management can be achieved either by using IDocs, by using the connector for SAP Multi-Bank Connectivity (called MBC Connector in the figure)—technically a SOAP service—or by importing physical files. If you are using the connector for SAP Multi-Bank Connectivity, the communication can flow directly between the connectors in the two systems or can even use the cloud instance of SAP Multi-Bank Connectivity as the integration layer. In addition, the connector allows you to route payments to advanced payment management locally in case the payments are initiated in the same system. In general, these systems transfer the payments in a structure that is either supported out of the box by advanced payment management, such as ISO 20022, MT101, or PEXR2003, or in a proprietary format that can be mapped into ISO 20022. For this mapping, as well as for outbound payment format creation, a graphical editor based on the extended data medium exchange engine (DMEEX) is provided. Advanced payment management maps this ISO 20022 format to its internal

data model by leveraging built-in mapping from this ISO 20022 transfer structure. Once a payment has been imported, confirmation messages can be sent back to the payment-initiating systems to update the status of the payment, similar to how a bank would confirm reception of a payment. These messages can be provided via files or via the connector for SAP Multi-Bank Connectivity and can also be provided later throughout the process. The architecture of advanced payment management is detailed in Figure 14.34.

Figure 14.34   Advanced Payment Management

Processing of these payments starts with a validation of the payment and all transactions of that payment. These validations can be simple content checks (for example, that the bank identification code is valid) or more sophisticated checks like duplicate checks. In addition, fraud or sanctioned-party list screening products from SAP or third parties can be integrated for synchronous or asynchronous checking. Another important step is the enrichment of the payment with the ID of the bank account in bank account management that is to be used. On the basis of the requested payment scenario, the solution routes payments to internal general ledger accounts, to in-house cash accounts, or to external banks accordingly. Before a payment is finally sent to the bank, cash management is updated with the amount to be transferred from the external bank account and its value date. In addition, an approval process can be triggered. In this approval process, payments can be fully approved, partially approved, or completely rejected. Once finally approved, the payment medium is generated, leveraging payment formats maintained centrally using DMEEX. Thus, payment formats required for bank communication no longer need to be generated in the connected systems but are managed by advanced payment management within the central SAP S/4HANA system.
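The snippet below sketches, under simplifying assumptions, the kind of content and duplicate checks described above. The structures, the BIC length rule, and the duplicate key are illustrative; the real solution offers configurable checks and hooks for external fraud or sanctioned-party screening.

TYPES: BEGIN OF ty_payment,
         reference TYPE string,
         bic       TYPE string,
         iban      TYPE string,
         amount    TYPE p LENGTH 15 DECIMALS 2,
       END OF ty_payment.

DATA lt_errors    TYPE STANDARD TABLE OF string WITH EMPTY KEY.
DATA lt_seen_keys TYPE SORTED TABLE OF string WITH UNIQUE KEY table_line.

DATA(ls_payment) = VALUE ty_payment( reference = 'PAY-0001'
                                     bic       = 'DEUTDEFF'
                                     iban      = 'DE02XXXXXXXXXXXXXXXXXX'
                                     amount    = '100.00' ).

" Simple content check: a BIC has 8 or 11 characters
IF strlen( ls_payment-bic ) <> 8 AND strlen( ls_payment-bic ) <> 11.
  APPEND |{ ls_payment-reference }: invalid BIC length| TO lt_errors.
ENDIF.

" Simple duplicate check on reference, IBAN, and amount
DATA(lv_key) = |{ ls_payment-reference }/{ ls_payment-iban }/{ ls_payment-amount }|.
IF line_exists( lt_seen_keys[ table_line = lv_key ] ).
  APPEND |{ ls_payment-reference }: possible duplicate payment| TO lt_errors.
ELSE.
  INSERT lv_key INTO TABLE lt_seen_keys.     " remember the payment for later checks
ENDIF.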

14.8.2   Bank Integration Using SAP Multi-Bank Connectivity

The area of bank integration has been a project-dominated area in the past. With SAP Multi-Bank Connectivity, the integration of SAP S/4HANA with banks is now much more standardized and automated. The solution itself is based on the SAP Cloud Platform Integration service (see Chapter 6, Section 6.6) and takes care of the technical integration with banks either directly via traditional host-to-host (H2H) connections, via the built-in Electronic Banking Internet Communication Standard (EBICS) adapter, or via the Alliance Lite2 for Business Applications (L2BA) partnership with SWIFT. In the SWIFT L2BA model, SAP runs the SWIFT infrastructure within SAP Cloud Platform, which relieves IT departments of operating the corresponding infrastructure in their own data centers. The communication between SAP S/4HANA and SAP Multi-Bank Connectivity is based on SOAP. Within SAP S/4HANA, the integration with the respective business processes, like a payment run or the import of an account statement, is achieved by the connector for SAP Multi-Bank Connectivity (see Figure 14.34). This component receives messages to be sent to the bank(s) or pulls messages from SAP Multi-Bank Connectivity that have been received from the bank(s). Based on the message type received from SAP Multi-Bank Connectivity, the respective business process is triggered (a dispatch sketch follows at the end of this section). Supported business processes for automatic processing of messages include the following:

- Payments and collections
- Payment status messages
- Payment advices
- Account statements
- Lockbox
- Bank fee reports
- Foreign exchange (FX) and deposit and loan confirmations

All other messages can still be received, but processing requires manual user involvement. The communication with SAP Multi-Bank Connectivity is secured by two layers:

1. Transport level through HTTPS
2. Message level via encryption and digital signatures of the message payload (for example, a payment message)

Within the cloud service of SAP Multi-Bank Connectivity, SAP configures the integration by leveraging one of the three supported channels (H2H, EBICS, or SWIFT) for both directions—sending and receiving messages. This includes mediation of security mechanisms to protocols like PGP or the various types of EBICS signatures. Once a customer is operational in SAP Multi-Bank Connectivity, SAP actively operates the bank integration as a service provider, which also covers regular updates to the integrations, including periodic certificate renewals.
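As a rough sketch of this dispatching step, the snippet below maps a few common ISO 20022 message types to the follow-up activity that would be triggered. The message-type codes are standard ISO 20022 names, but the dispatch logic itself is a simplified assumption rather than the connector's actual implementation.

DATA(lv_message_type) = `camt.053`.          " e.g. an end-of-day account statement
DATA lv_follow_up TYPE string.

CASE lv_message_type.
  WHEN `pain.002`.                           " payment status report
    lv_follow_up = `Update the status of the originating payment`.
  WHEN `camt.053`.                           " bank account statement
    lv_follow_up = `Import and process the account statement`.
  WHEN `camt.054`.                           " debit/credit notification
    lv_follow_up = `Process the payment advice`.
  WHEN OTHERS.
    lv_follow_up = `Store the message for manual processing`.
ENDCASE.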

14.8.3   Connectivity to Payment Service Providers and Payment Gateways

The rapid development in the area of payment management, right up to digital real-time payments (for example, credit card payments or PayPal), is driven by industries like retail, insurance, and utilities. These payments dramatically reduce the risk for the seller because the payment, or at least a binding reservation, is performed before goods or services are delivered to the customer. With real-time outgoing payments, process efficiency and customer satisfaction can be improved by completing the payment process at the point of interaction, such as the online shop checkout, the call center, or a customer portal. As shown in the payment management architecture (see Figure 14.35), the integration of SAP S/4HANA with external payment service providers is realized by a central SAP Cloud Platform application, the SAP digital payments add-on. Digital payment methods are deeply integrated in the complete value chain, including financials. The main processes for digital payments are as follows (see the sketch at the end of this section):

- Digital payments authorization, for initiation of a payment or payment reservation
- Digital payments capture, for execution of a payment
- Digital payments advice, for reconciliation of payments

The SAP digital payments add-on not only connects SAP S/4HANA applications like sales and financials to payment service providers, but also integrates the SAP Subscription Billing solution and SAP S/4HANA Cloud for customer payments, as well as SAP Commerce Cloud. The SAP digital payments add-on also offers SAP customers and partners the possibility to use the payment service provider integration.
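The following sketch models the three interactions as a minimal state transition. The class, its methods, and the status values are hypothetical and only illustrate the sequence, which in reality runs through the SAP digital payments add-on and the payment service provider.

CLASS lcl_digital_payment DEFINITION FINAL.
  PUBLIC SECTION.
    TYPES ty_amount TYPE p LENGTH 15 DECIMALS 2.
    DATA status TYPE string READ-ONLY.       " initial = not yet authorized
    METHODS: authorize IMPORTING iv_amount TYPE ty_amount,
             capture,
             record_advice.
  PRIVATE SECTION.
    DATA amount TYPE ty_amount.
ENDCLASS.

CLASS lcl_digital_payment IMPLEMENTATION.
  METHOD authorize.
    amount = iv_amount.
    status = `AUTHORIZED`.                   " PSP reserves the amount
  ENDMETHOD.
  METHOD capture.
    IF status = `AUTHORIZED`.
      status = `CAPTURED`.                   " PSP executes the payment
    ENDIF.
  ENDMETHOD.
  METHOD record_advice.
    IF status = `CAPTURED`.
      status = `RECONCILED`.                 " advice confirms the payment for reconciliation
    ENDIF.
  ENDMETHOD.
ENDCLASS.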

Figure 14.35   Integration of SAP S/4HANA and the SAP Digital Payments Add-on

14.8.4   Cash Management

The SAP Cash Management application enables an organization's cash or treasury department to manage multiple bank accounts centrally and to get an accurate and precise overview of cash operations and long-term liquidity trends. Cash managers can get a high-level overview of and detailed insights into bank accounts, cash positions, and cash flows, which enables them to make decisions and take actions directly. It consists of several components (see Figure 14.36), which we'll discuss next.

Bank relationship management allows cash specialists to manage bank account master data centrally, using a process to govern the opening, closing, changing, and reviewing of bank accounts. The streamlined workflow and dual control processes also help improve user efficiency in accomplishing compliance-related tasks:

- Maintain banks, house banks, house bank accounts, and bank accounts. A bank as a business partner and a corresponding bank hierarchy are created as business partner components (see Chapter 8, Section 8.3).
- Manage cash pools and their hierarchies for cash concentration.
- Define payment approver controls for payment approval processes.
- Support dual control or workflow processes for opening, modifying, closing, reopening, and reviewing bank accounts.
- Replicate house banks, house bank accounts, and bank accounts to distributed SAP S/4HANA or SAP ERP systems.
- Check foreign bank accounts and responsible payment approvers in your company.
- Import bank services billing XML files to load bank-fee data for analytical purposes. The format of bank services billing files is defined by ISO 20022.

Bank relationship management provides analytics for overall transaction information with banks.

Figure 14.36   Bank Relationship Management

One-exposure from operations provides the cash management interface for many source applications (see Figure 14.37). The single interface provides adapters for SAP S/4HANA application components to collect and store operational data that is relevant for managing cash and liquidity. The provision of the data in the one-exposure from operations hub facilitates funds planning and risk management across multiple companies. It collects from the following source applications: financial operations, treasury and risk management, consumer and mortgage loans, contract accounts receivable and payable, procurement, and sales, all within the same SAP S/4HANA

system. It also receives data from distributed cash management or third-party systems via ALE or SOAP web services. Cash management provides a number of SAP Fiori apps for cash users, which we'll now discuss. The Cash Operations app enables cash users to perform daily tasks such as managing bank transfers and outgoing payments with approval processes, managing memo records, and entering bank account balances. The Cash Concentration app enables cash users to define cash pools and then perform cash concentration for these cash pools. This results in the creation of payment requests for the transfer of the necessary funds between the header account and the subaccounts during payment processing. The SAP Fiori app Reconcile with Forecasted Cash Flows enables cash users to reconcile intraday memo records that were generated automatically from intraday bank statements with forecasted cash flows.

Figure 14.37   Cash and Liquidity Management

The Cash Flow Analyzer app provides cash users with views on aggregated amounts and line item details of cash positions, medium- and long-term liquidity forecasts, and actual cash flows. Cash users can analyze cash flows over days, weeks, months, quarters, or years for all bank accounts and liquidity items. The data presented in the app can be used to give management a high-level overview of and detailed insight into the cash flow status. It supports the cash position, liquidity forecast, and actual cash flow analysis.

Cash users can use the Check Cash Flow Items SAP Fiori app to analyze cash flow item details. They can track and trace all the cash flow items from different source applications that are integrated with cash management. Cash users can also see line item details of the original documents, such as journal entries, banks, and account assignments. The Cash Flow Comparison app allows cash users to compare actual cash flows with past forecasts, as well as to compare different forecast records that were made on different snapshot dates. It helps users understand the accuracy of past forecast records and assists in improving cash flow forecasts continuously. Using the Adjust Liquidity Item SAP Fiori app, cash users can adjust assigned liquidity items and ensure that they are accurate and compliant. It leverages SAP S/4HANA's embedded machine learning capability to identify extraordinary and abnormal cash flows (for the machine learning architecture, see Chapter 4, Section 4.2). The Cash Trade Request app generates a foreign exchange trade request and hands it over to the SAP Treasury and Risk Management application for further trade execution. The trade request impacts the forecast of cash positions. The SAP Fiori app Treasury Executive Dashboard combines data from different analytical queries based on embedded analytics using SAP Analytics Cloud (see Chapter 4, Section 4.1), showing information about group liquidity and bank relationships. The SAP Fiori app Liquidity Planning is used by cash users to develop and analyze liquidity plans. They can track the status and trace the liquidity planning cycle to obtain an early-warning indicator of liquidity shortages or a steering tool for medium- and long-term investment or borrowing. It runs on SAP Analytics Cloud and pulls master data and transaction data from cash management. Cash management receives transaction data from advanced payment management as well as from treasury and risk management and updates the cash flows. Treasury and risk management can also access the cash position and flows from cash management for further exposure management.

14.8.5   Treasury and Risk Management

The SAP Treasury and Risk Management application supports companies in managing their investments effectively. It allows for monitoring and controlling financial instruments, as well as financial risks like foreign exchange (FX) risk or interest rate risk. Figure 14.38 shows the architecture of SAP Treasury and Risk Management. (For better readability, both cash management and the accounting document appear twice in the diagram, but each indicates the same instance. A full-size version of this figure is also available for download at www.sap-press.com/5189.) SAP Treasury and Risk Management consists of the following components:

- Transaction manager and position management manage the operational flow of financial transactions for all financial instruments, covering everything from financial transaction origination to the conclusion of a transaction, transaction processing (including confirmation management for counterparties and payment processing), position management, and accounting.

Figure 14.38   Architecture of SAP Treasury and Risk Management

- Exposure management supports the process of collecting FX exposure information. It supports the decentralized collection and entry of existing and future FX exposures, as well as version management of exposures. Exposure positions can then be processed in FX risk management.

There are two different solutions for FX hedge management:

- Balance sheet FX risk selects exposure from exposure management, cash management (Section 14.8.4), and accounting documents and compares it with FX hedging instruments in a freely defined hierarchy. It offers fast implementation to cover simple hedging policies without hedge accounting.
- FX risk management based on hedging areas selects and aggregates exposures from different sources. The hedging area allows the control of the hedging policy with controls for selection filters, data sources, time, and the business structure of the hedging policy target quota. Hedge accounting is offered for this solution; designation control and hedge accounting rules are controlled here, too. The designation is usually set up in auto-designation mode so that a hedging instrument with an appropriate hedging classification is automatically assigned to the correct hedging area at deal creation and finds its designation settings and exposure items.

Both FX risk management applications let you create trade requests to be handed over to the trading platform integration for trade execution.

The trading platform integration from SAP is a cloud application based on SAP Cloud Platform that connects external trading platforms with treasury and risk management and cash management capabilities in SAP S/4HANA. It allows a trader to manage trade requests for foreign exchange or money market instruments coming from SAP S/4HANA and trade them using a marketplace or phone, which increases the level of automation and reduces error rates. Resulting trades are automatically transferred back into SAP S/4HANA. In addition, intercompany trading scenarios like back-to-back trading are also supported. This allows a new level of straight-through processing. A typical use case can look like this:

1. For the use case of FX risk management based on hedging areas, the hedge management cockpit offers a calculation of the open amounts based on the comparison of the target quota with current hedges and supports (semi)automated mass creation of trade requests to close the open exposures. Each trade request carries information about the requesting open exposure, including information about how a potential hedge accounting should be carried out.
2. The trade requests are pulled from the trader workplace in the trading platform integration tool. The trader can decide when and how to fulfill the trade request, including split functions and more. The trader then forwards the request to a selected trading platform and executes the request with a manually selected counterparty, or automatically creates a transaction between the requesting company code and the selected counterparty.
3. The trading platform routes the created transaction back to the workplace in the trading platform integration, and the trading platform integration then routes the transaction to the requesting SAP S/4HANA backend. In the backend, the deal is automatically created and assigned to the originating open exposure. If hedge accounting is to be applied, the system automatically performs designation into a hedging relationship with setting of the hedge accounting profile and rules, potential creation of hypothetical derivatives, and more.

Within the treasury application, you find both analytics and analyzers. Both share the same etymology, but there is a small but important difference: Analytics is used in SAP S/4HANA, as described in Chapter 4, Section 4.1. The SAP Fiori app Treasury Executive Dashboard combines data from different analytical queries based on embedded analytics using SAP Analytics Cloud, showing information like group liquidity, bank relationships, or counterparty risk. Analyzers provide key figure engines to calculate business key figures like fair value, durations, sensitivities, value at risk, or cash flow at risk. Many of these key figures have a nonadditive character, so the analyzers also provide the framework to aggregate, calculate, and store them along portfolio hierarchies or risk hierarchies. The analyzers we'll discuss now are the heart of risk management.

The credit risk analyzer enables the active control of default risks by the computation of amounts and the specification of limits. Only counterparty risk is considered—that is, the risk of loss of value of a receivable due to the degradation of the credit standing of the business partner. Note that the credit risk analyzer must be distinguished from the credit management of payments (Section 14.7.2): the first refers to the risk of the bank defaulting and the second to the risk of the customer defaulting. The credit risk analyzer offers a view for the treasurer or CFO to analyze the open positions with banks, counterparties, or bond issuers across all financial positions, covering everything from bank account balances to all kinds of financial assets and liabilities,

including derivatives, and offering different ways of calculating attributable amounts. The credit risk analyzer also offers a pre- or post-trade limit check. The market risk analyzer analyzes market risks in financial positions. Changes in market prices represent important influencing factors for company success. Changes to market prices can influence the value, transaction value, or timing of payment flows. Risks can be analyzed according to risk factors, such as exchange rates, interest rates, stock price risk, index price risk, or volatility. External market data is imported either via a market data interface and individual connections to the market data provider under individual contracts, or via the SAP Market Rates Management application built on SAP Cloud Platform. The Thomson Reuters data option includes both the automatic connection and the service contract to load market data into SAP S/4HANA. The service also offers an option to manage and distribute your own market data across systems. The SAP Integration Package for SWIFT provides the necessary communication to the banking systems for linking the transactions between SAP Treasury and Risk Management and external banks. SAP Treasury and Risk Management is integrated with the SAP Cash Management application: cash management receives the transaction data of SAP Treasury and Risk Management and updates the cash flows. SAP Treasury and Risk Management can also be connected to an existing in-house cash application to allow financial transactions with external banks to be carried out via a central headquarters.

14.9   Central Finance

Many big enterprises have grown system landscapes with different releases of various ERP systems from both SAP and other vendors. Typically, each of the systems has different master data (such as chart of accounts, customers, vendors, and materials), configurations, or accounting approaches. This makes it hard for an enterprise to get real-time insight into what's going on in the entire corporate group. In addition, the existing ERP systems often do not reflect modern business models in terms of digitization and provide only limited support for manufacturing companies transitioning to become service providers. Another challenge that enterprises have been trying to address for years is to reduce administrative costs by streamlining and centralizing financial processes in shared service centers. This can be very hard to achieve if accountants in service centers have to log into different systems, which might be using different master data or different accounting principles. SAP S/4HANA addresses these business challenges with SAP S/4HANA for central finance, which supports enterprises in operating finance processes centrally. Such central processes increase efficiency because process steps need to be executed only once in the central instance. For example, the electronic bank account statements could be imported into SAP S/4HANA for central finance only. However, there are multiple architectural challenges that the implementation of a central finance solution needs to address:

- Replication of financial postings from the source ERP systems to the central finance system
- Aligned configuration settings between the source finance applications and the central finance application
- Synchronization of relevant master data like business partners, cost centers, or cost objects
- Mapping between IDs and code values for the participating systems
- Centralized error handling for replicated financial postings
- Centralized processes embodying process steps in multiple finance applications
- Synchronization of conflicting process steps executed in parallel on the original and the replicated financial postings

In the typical central finance scenario with centralized financial processes, business processes are initiated in procurement, logistics, or sales in the source SAP ERP or SAP S/4HANA system. All the business-specific process steps, such as the creation of purchase and sales orders, goods deliveries, goods and invoice receipts, billing, and invoice verification, are executed in the source system. However, to benefit from centralized financial processes, the accounting-relevant process steps, such as payment and open item management, can be moved to the central finance system, in our case SAP S/4HANA for central finance. Thus, when SAP S/4HANA for central finance is enabled, the posting data for the selected financial documents of a source system is transferred to the central finance

system, where this posting data is posted again centrally. As you can see from Figure 14.2 in Section 14.1, the central finance solution is integrated into SAP S/4HANA. Figure 14.39, however, shows the components of SAP S/4HANA for central finance, as well as the components in the source systems, in much greater detail. To tackle the architectural challenges mentioned before, the architecture of SAP S/4HANA for central finance takes advantage of the following technical components:

- SAP Landscape Transformation Replication Server (SLT)
- SAP Application Interface Framework (AIF; see Chapter 6, Section 6.3)
- SAP Master Data Governance

These components are essential for the replication, mapping, and monitoring of the transferred financial postings. SAP S/4HANA provides the central finance functionality. Source financial applications can be SAP S/4HANA, SAP S/4HANA Cloud, SAP ECC 6.0, or third-party applications.

Figure 14.39   Integration Architecture of SAP S/4HANA for Central Finance

14.9.1   Replication

In the following sections, we explain how central finance addresses these business challenges and outline how enterprises can centralize financial processes using it.

Source System Setup

Posting transactions are recorded in the source finance system and sent to the central finance system. Each source system needs to be set up for the central finance scenario. The settings control, for example, which company codes are relevant for replication and the start date for the initial load. In the source system, during the processing of the posting, the internal posting data is copied and stored in staging tables.

Ongoing Replication

To replicate newly created financial postings from a source financial system to the central finance system, the systems are connected using an SLT server. If a third-party system is to be connected, the data coming from this source system needs to be loaded into separate staging tables in the SLT server. Depending on the source system, this can be done either in a custom implementation project or by using SAP Central Finance Transaction Replication by Magnitude, which is a solution extension for the integration of third-party systems. The SLT server connects to the source system and extracts the data to be replicated from the staging tables through an RFC connection. Inside the SLT server, the data is mapped and transformed from the source to the central finance data structure because the structures can vary due to different releases of the two finance applications. The transformed data is then transported via an RFC connection to the central finance system. There, the central finance interface passes the data to SAP Application Interface Framework for further processing. The whole replication process is controlled by a scheduler in the SLT server that automatically triggers the communication to the involved systems (see Figure 14.40).
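A minimal sketch of these three steps, with hypothetical structures and without any real SLT or RFC calls, could look like this:

CLASS lcl_replication_sketch DEFINITION FINAL.
  PUBLIC SECTION.
    TYPES: BEGIN OF ty_source_doc,           " simplified source structure
             belnr  TYPE string,
             bukrs  TYPE string,
             amount TYPE p LENGTH 15 DECIMALS 2,
           END OF ty_source_doc,
           BEGIN OF ty_target_doc,           " simplified central finance structure
             source_system TYPE string,
             document      TYPE string,
             company_code  TYPE string,
             amount        TYPE p LENGTH 15 DECIMALS 2,
           END OF ty_target_doc,
           tt_source TYPE STANDARD TABLE OF ty_source_doc WITH EMPTY KEY,
           tt_target TYPE STANDARD TABLE OF ty_target_doc WITH EMPTY KEY.
    METHODS replicate IMPORTING it_staged        TYPE tt_source
                                iv_source_system TYPE string
                      RETURNING VALUE(rt_target) TYPE tt_target.
ENDCLASS.

CLASS lcl_replication_sketch IMPLEMENTATION.
  METHOD replicate.
    " 1) Extract: it_staged stands for entries read from the source staging tables
    " 2) Transform: adjust the structure to the format expected centrally
    rt_target = VALUE #( FOR ls_doc IN it_staged
                         ( source_system = iv_source_system
                           document      = ls_doc-belnr
                           company_code  = ls_doc-bukrs
                           amount        = ls_doc-amount ) ).
    " 3) Load: the result would be sent via RFC to the central finance interface,
    "    which stores each document as an AIF message for mapping and posting
  ENDMETHOD.
ENDCLASS.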

Figure 14.40   Integration with SAP SLT Server

The SLT add-on in the source system provides a mechanism to automatically detect changes to application data by means of database triggers (see Figure 14.40). These triggers are available in SAP ERP applications for all database platforms supported by SAP and are executed in response to inserts, updates, or deletions of entries in database tables. The database trigger creates an entry in an SLT logging table, where a reference to the created (or changed) entry of an application table is stored. During the setup of a new replication in the SLT server, a predelivered configuration of the relevant application tables is used as a template for the actual configuration. The system then automatically reads the necessary metadata (for example, structure definitions) from the involved systems and generates the artifacts (for example, database triggers, logging tables, and function modules) required in the involved

systems for the data transfer. This approach has the advantage that no changes to the application coding are necessary to include any table in the replication.

Error Handling

The posting data in the central finance system needs to be saved, monitored, and processed in a uniform way. Error conditions that arise during the processing of the posting data in the central finance system need to be monitored and corrected. Central finance uses SAP Application Interface Framework to tackle these challenges. When the central finance interface is called by SLT and provided with the posting data, the interface saves this data in SAP Application Interface Framework (AIF) as a message. Every AIF message contains two parts: the source part contains the posting data as it came from the source system, whereas the target part contains the posting data after it was modified by central finance. In particular, the target part contains the posting data after the mapping (that is, the mapping of codes and identifiers) has been performed. SAP Application Interface Framework controls the actual processing of the data, especially mapping and posting. Background jobs process the AIF messages through application callbacks from SAP Application Interface Framework into dedicated, central finance-specific routines. SAP Application Interface Framework collects any errors that occur during the processing and presents them to the user in the error monitor.

Initial Load

The ongoing replication transfers posting data after its activation in the source system but cannot transfer documents that were posted previously. For such historical postings, an initial load is needed. There are different initial loads for different types of documents—for example, for the transfer of historical financial postings or for historical controlling postings—which use different technologies. The initial load of controlling postings uses the SLT capabilities and is performed by an SLT document load. In the following paragraphs, we'll discuss the initial load of financial postings in greater detail. The finance initial load reuses the financial migration framework, the Mass Data Framework (MDF). This framework makes it possible to parallelize the processing of large amounts of data by creating packages, the processing of which can be dynamically distributed across multiple ABAP application servers. The finance initial load consists of two steps. The extract step extracts data from the finance persistency in the source system and transfers it to the staging tables in the central finance system. The post step reads this extracted data and posts it in the central finance system by using the accounting interface. Both steps are triggered in the central finance system, but the extract step runs mostly in the source system, and the post step runs entirely in the central finance system. Reading every finance document from the source system and posting it into the central finance system, while possible, would be very resource-intensive. The initial load

provides the possibility to specify a starting period as of which documents will be extracted. Documents posted before this period are not extracted; only the respective account balances are extracted instead. In addition, all items that are open as of the key date corresponding to the starting period are extracted. To make it easier to extract large amounts of data from multiple company codes, it's possible to configure initial load groups. These initial load groups specify a combination of source systems and company codes that are to be extracted or posted together.
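The selection rule can be sketched as follows; the structure, the period format, and the rule itself are simplified assumptions that only illustrate the three cases described above.

TYPES: BEGIN OF ty_source_doc,
         doc_id TYPE string,
         period TYPE n LENGTH 6,             " posting period as YYYYMM
         open   TYPE abap_bool,              " still open at the key date?
       END OF ty_source_doc.

CONSTANTS gc_start_period TYPE n LENGTH 6 VALUE '202301'.

DATA(ls_doc) = VALUE ty_source_doc( doc_id = '0100000001'
                                    period = '202212'
                                    open   = abap_true ).

DATA(lv_transfer_mode) = COND string(
  WHEN ls_doc-period >= gc_start_period THEN `Extract and post the full document`
  WHEN ls_doc-open = abap_true          THEN `Extract as an open item at the key date`
  ELSE                                       `Include in the account balance only` ).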

14.9.2   Mapping

The master data in the source systems usually has different codes and identifiers than the master data in the central finance system. For example, in the source system, a supplier may have ID 4711, whereas in the central finance system, the same supplier has ID 4712. Thus, during the posting in central finance, ID 4711 from the source posting data needs to be mapped to ID 4712 in the central finance posting data. The central finance business mapping uses the foundation of SAP Master Data Governance (SAP MDG) for the mapping of IDs (such as the vendor ID) and code values (such as the payment method). With this, central finance can be fully integrated with a hub installation of SAP Master Data Governance for centralized master data management, even though the full implementation of SAP Master Data Governance is not mandatory; it's enough to maintain the relevant mapping data in central finance. However, the use of SAP Master Data Governance provides further advantages because the distribution and governance of master data is commonly a very important topic for SAP customers. If SAP Master Data Governance is used, the mapping maintained there is automatically available and reused for central finance replications. Mapping of identifiers (such as the vendor ID or material ID) is created by using the key mapping functions of SAP Master Data Governance, whereas mapping of codes (such as the company code or business area) uses the value mapping functions. In central finance, mapping entities are defined to facilitate the use of SAP Master Data Governance value and key mappings. A mapping entity is either an identifier (such as a vendor ID) of a business object or a code (such as payment terms). All fields belonging to a mapping entity are mapped in the same way throughout the posting data. For every mapping entity, a mapping action can be defined to control how the system handles field values belonging to a mapping entity during the mapping step of the posting process. The user can choose whether values from the source system are to be kept, mapped, or cleared. For rather short-lived cost objects (such as production orders or internal orders), an automatic replication mechanism is provided. When a new order is created in the source system, its attributes are replicated to the central finance system, where a new object is created. The new object in the central finance system is based on configuration settings and the source system data. The configuration also enables a change of the order category (for example, from production order to product cost collector). The relation between the object in the source system and the newly created counterpart in the central finance system is stored and used during the mapping of the posting data.
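The following sketch shows the effect of a mapping action on a single field value during the mapping step. The mapping table stands in for the key and value mappings maintained via SAP Master Data Governance; the action codes and names are illustrative only.

TYPES: BEGIN OF ty_mapping,
         entity       TYPE string,           " e.g. vendor ID or payment terms
         source_value TYPE string,
         target_value TYPE string,
       END OF ty_mapping,
       tt_mapping TYPE SORTED TABLE OF ty_mapping
                  WITH UNIQUE KEY entity source_value.

DATA(lt_mapping) = VALUE tt_mapping(
  ( entity = 'VENDOR' source_value = '4711' target_value = '4712' ) ).

DATA(lv_action) = `MAP`.                     " mapping action: KEEP, MAP, or CLEAR
DATA(lv_source) = `4711`.
DATA lv_target TYPE string.

CASE lv_action.
  WHEN `KEEP`.
    lv_target = lv_source.                   " take over the source value unchanged
  WHEN `MAP`.
    lv_target = VALUE #( lt_mapping[ entity       = 'VENDOR'
                                     source_value = lv_source ]-target_value OPTIONAL ).
  WHEN `CLEAR`.
    CLEAR lv_target.                         " suppress the source value centrally
ENDCASE.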

14.9.3   Accounting Views of Logistics Information

Some financial processes still require data from procurement and sales documents, but because these documents are not available in the central finance system, this information is missing. Replication of the complete procurement or sales document is not feasible, as this would require an extended configuration of the corresponding application component. Thus, to get access to the required data in the central finance system, the accounting views of logistics information (AVL) are used. The AVL contain a subset of data from the corresponding procurement or sales document that is relevant for financial processes. With the standard SLT replication process used by central finance (see Figure 14.41), the AVL data is replicated from the source systems to the central finance system. The following objects from sales and procurement are supported:

- Sales document
- Customer invoice
- Purchase order
- Supplier invoice
- Pricing element

Figure 14.41   Replication of Accounting Views

As an example, Figure 14.42 shows the relationships between the sales order tables and the corresponding AVL tables.

Figure 14.42   Entity Relationship Diagram for Sales Order and Accounting View of Sales Order

Use Case: Indirect Tax Reporting

As part of the SAP S/4HANA solution for advanced compliance reporting (see Chapter 15, Section 15.1), accounting views are used to create statutory reports for indirect taxes as requested by several governments. The additional data from logistics documents needed for these reports varies by country and comprises attributes from the sales order, customer invoice, and supplier invoice.

Use Case: Purchase Order Accruals Engine

If there are liabilities to third parties—for example, to suppliers—but the associated costs are not yet posted, accruals must be calculated and posted based on the information taken from procurement. With centralized processes like central payment, the accrual postings must be created in central finance, and the AVL for purchase orders provides the needed information (such as prices) in central finance.

14.9.4   Central Payment

Central finance allows companies to decide whether invoices are paid in the source system or in SAP S/4HANA for central finance. When invoices are paid and cleared in the source system, the clearing is transferred to the central finance system and reposted there. Without central payment, clearing replicated items directly in the central finance system is prevented. When central payment is enabled in the central finance system, open items are cleared there only. The source systems are then concerned with the logistical part of the scenario—that is, the sales orders or the purchase orders and the resulting accounting documents. These accounting documents can then be paid centrally in the central finance system. In such a case, the clearing of open items in the respective source systems needs to be prevented. This is done by marking them as technically cleared in the source system. A technical clearing essentially means that the clearing information is not available in the current system, but rather in another system.
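As a small sketch of this behavior (the structure and field names are illustrative, not the actual data model):

TYPES: BEGIN OF ty_source_open_item,
         doc_id              TYPE string,
         local_clearing_ok   TYPE abap_bool, " may the item be cleared in this system?
         technically_cleared TYPE abap_bool, " clearing information lives in another system
       END OF ty_source_open_item.

DATA(lv_central_payment_active) = abap_true.

DATA(ls_item) = VALUE ty_source_open_item( doc_id            = '1800000001'
                                           local_clearing_ok = abap_true ).

IF lv_central_payment_active = abap_true.
  " Payment and clearing happen in the central finance system only
  ls_item-local_clearing_ok   = abap_false.
  ls_item-technically_cleared = abap_true.
ENDIF.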

14.9.5   Cross-System Process Control

There are additional challenges for system integration that arise from the centralization of processes. Business processes, which were traditionally designed to be executed within one integrated SAP ERP application, are enabled to run across a landscape of two systems. The distribution of business processes across systems also requires rules to control the data exchange to prevent data inconsistencies. For example, with central payment, as shown in Figure 14.43, invoices are posted in the source system, but are paid in the central finance system.

Figure 14.43   Business Process Flow Involving Concurrent Process Steps in Different Systems

After an invoice is transferred to central finance, users cannot reverse the invoice in the source system and pay the invoice in the central finance system in parallel. In an integrated system, this is prevented by implementing locking concepts and business-related checks. Such mechanisms are not readily available in distributed scenarios. There is a need to control business processes going across systems, and without a unified framework the implementation would likely end up with a multitude of different interfaces. The cross-system process control (CSPC) framework provides such reusable functionality. It provides an infrastructure that supports implementations controlling business processes distributed across two systems (see Figure 14.44).

Figure 14.44   Major Components Involved in CSPC Implementation of Business Process

In general, the CSPC framework is characterized by the following key features:

- Token concept
- Generic locking concept
- Semantic checks

The token concept assures that only one system can execute a business process step at a given point in time. The token can be associated with a business object node. Tokens can be passed between the systems either asynchronously together with the AIF message or synchronously via an action permission request (such as an RFC call). Tokens are saved in each system in a dedicated token persistency. An implementation can define business process-specific locking strategies on a process type level (for example, central payment). The framework uses generic lock handling to maintain the relevant locks provided to it by consumer callbacks. Central payment uses the CSPC framework to exert control over the business process and prevent inconsistencies. With central payment, a single invoice can be paid, and a payment run can be executed in the central finance system to clear a multitude of open items. A payment run should not lock all open items to be paid, because locking all open items could easily overload the ABAP enqueue server. Thus, the payment run simply locks the respective vendor or customer using an exclusive lock instead. Another transaction, which aims to adjust an individual open item, locks the respective vendor or customer with a shared lock and the individual open item with an exclusive lock. This locking strategy is specific to this process. Other business processes will use different locking strategies. Using CSPC, the implementing business processes can continue to use their process-specific locking strategies while handing over the handling of the locks to the CSPC framework. The advantage in this case is that a fine-grained token-specific lock is not always necessary. The implementation can define a coarse-grained locking concept when needed by the business process. An implementation can define semantic checks on a process type and system role level (that is, a sender or a receiver system). The semantic checks provided to the framework by consumer callbacks are executed upon a remote token request and determine if the

state of the business object allows the execution of a specific process step. The token request can be accepted only if the semantic checks are passed successfully. All required checks are accessible via the generic API of the CSPC framework. Implementations of business processes, like central payment, only need to access methods of the API to determine if a business process step can be performed (that is, to request action permission). All communication between the involved systems is handled solely by the framework. In the central payment example, if a reversal is to be performed in the source system and the token is not locally available (see Figure 14.45), the token is requested from the central finance system. The central finance system locks the object and performs semantic checks. Depending on the success of these operations, the request is either accepted or denied, and the result is passed back to the source system, which can then either perform the process step or present an error message to the user. Centralized processes usually start in the source systems, but the following process steps are typically performed in the central finance system. For example, in the central payment scenario, the invoice is created in the source system, but the payment is processed in central finance. The CSPC implementation for the centralized business process therefore takes advantage of this natural business process flow and minimizes synchronous communication between the source systems and the central finance system by transferring CSPC tokens asynchronously together with the AIF message.
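The following sketch condenses the token request into a single method; the class, its methods, and the placeholder check are hypothetical and merely illustrate the decision flow, not the generic CSPC API.

CLASS lcl_cspc_sketch DEFINITION FINAL.
  PUBLIC SECTION.
    METHODS request_action_permission
      IMPORTING iv_object_key     TYPE string
                iv_token_is_local TYPE abap_bool
      RETURNING VALUE(rv_allowed) TYPE abap_bool.
  PRIVATE SECTION.
    METHODS remote_token_request
      IMPORTING iv_object_key     TYPE string
      RETURNING VALUE(rv_granted) TYPE abap_bool.
ENDCLASS.

CLASS lcl_cspc_sketch IMPLEMENTATION.
  METHOD request_action_permission.
    IF iv_token_is_local = abap_true.
      rv_allowed = abap_true.                " token already held by this system
    ELSE.
      " Synchronous request: the other system locks the object, runs its
      " semantic checks, and either grants or denies the token
      rv_allowed = remote_token_request( iv_object_key ).
    ENDIF.
  ENDMETHOD.
  METHOD remote_token_request.
    " Placeholder for the remote lock and the semantic checks
    rv_granted = xsdbool( iv_object_key IS NOT INITIAL ).
  ENDMETHOD.
ENDCLASS.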

Figure 14.45   CSPC Implementation of Process Flow Involving Concurrent Steps in Different Systems

14.10   Finance Extensibility

SAP S/4HANA Finance has implemented several in-app and key user extensibility options (for key user extensibility, see Chapter 5, Section 5.1). Custom fields can be added to the Universal Journal for actual and predictive data (table ACDOCA), planning data (ACDOCP), and group reporting data (ACDOCU), as well as to the contract accounting item (DFKKOP). Master data like the fixed asset, cost center, and profit center can also be extended by custom fields. Process extensibility is supported in several scenarios, such as from contract accounting to the general ledger or from billing to the general ledger. Creating a coding block custom field is a simplified way to add a custom account assignment field to almost all processes that lead to cost postings, such as purchase orders resulting in supplier invoices and goods receipts. Similarly, creating a custom field for a market segment adds a custom characteristic to all processes that lead to revenues. Another important option to add standard and custom fields to CDS-based analytics is data source extensibility. This offers a very simple way to add fields of associated master data, like product or customer, or even the sales order, to many reports in finance. Rather than persisting a field as a custom field, such as in table ACDOCA, the field value is looked up in the associated table when the report is executed. This allows for reporting of the data "as is" rather than "as posted." Data source extensibility is available for many analytical cubes and queries in SAP S/4HANA Finance. Standard processes can be enhanced by custom logic. The main use cases in finance are determination (derivation and substitution) of additional field values and validation of user input. Such custom logic can be implemented by a key user without programming skills in the Manage Substitution/Validation Rules SAP Fiori app. In addition, several cloud BAdIs can be implemented in ABAP by key users in a web-based editor in the Custom Logic app. On top of these SAP S/4HANA in-app extensibility options, more complex scenarios like custom business objects, custom CDS views, and APIs are possible, as well as side-by-side extensibility scenarios based on SAP Cloud Platform (see Chapter 5, Section 5.2).
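To give an idea of such a rule, the snippet below sketches a validation that a key user might express: postings to a specific cost center must carry a custom project reference. The structure, the custom field, and the rule are made up and do not show the actual BAdI or rule interfaces.

TYPES: BEGIN OF ty_journal_line,
         company_code TYPE string,
         cost_center  TYPE string,
         zz_project   TYPE string,           " custom field added via key user extensibility
       END OF ty_journal_line.

DATA lt_messages TYPE STANDARD TABLE OF string WITH EMPTY KEY.

DATA(ls_line) = VALUE ty_journal_line( company_code = '1000'
                                       cost_center  = 'CC_MARKETING' ).

" Validation: marketing postings must carry the custom project reference
IF ls_line-cost_center = 'CC_MARKETING' AND ls_line-zz_project IS INITIAL.
  APPEND `Enter a project reference for postings to cost center CC_MARKETING` TO lt_messages.
ENDIF.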

14.11 SAP Governance, Risk, and Compliance

Enterprises need to ensure their employees and processes comply with regulatory requirements and follow defined procedures for every market in which they are active and for every customer or partner they interact with. At the same time, executives need to foster innovation, develop new business models, and continuously expand the market reach of their organization. Their strategic decisions carry uncertainty, so the potential opportunities need to be carefully balanced with the associated risks. These regulatory requirements, which typically impose high penalties, on the one side and the need to succeed in the market on the other side naturally create friction. Since the late 1990s and the early 2000s, propelled by corporate scandals, governance, risk, and compliance (GRC) has emerged as a discipline that aims to manage this inherent conflict of goals. International standards, be they from the International Organization for Standardization (ISO) or the Open Compliance and Ethics Group (OCEG), provide the necessary baseline for GRC experts to develop and constantly refine their operating frameworks, methods, and tools. Depending on the size, industry, markets, and countries that companies are active or operate in, the average number of individual articles an international company needs to follow can easily exceed 2,000. Today, the GRC domain therefore covers a wide array of disciplines that are spread across and ingrained into an organization. From risk, control, and audit management on a tactical level to critical legal and security functions, each team works with and shares common data and tools. For their work, GRC experts require structured and unstructured data from a multitude of sources. Whereas structured data is available in and extracted from business-critical software such as SAP S/4HANA, SAP Customer Experience, SAP SuccessFactors, and SAP Ariba solutions, as well as third-party systems, unstructured data is mostly gathered through meetings, workshops, surveys, and expert interviews. To be effective, governance, risk, and compliance, as a domain and a software solution, is not tethered to a specific line of business or software solution but relies on and combines data that is spread across the intelligent enterprise.

14.11.1 Overview of SAP GRC Solutions

SAP governance, risk, and compliance (SAP GRC) solutions support GRC experts in their daily work in the most cost-conscious way, independent of the company size, industry, or jurisdiction they operate in. The current GRC solution portfolio helps to address the following business-critical aspects for an organization from a GRC perspective (see Figure 14.46):

- Enterprise risk and compliance allows GRC experts to effectively handle risk, control, and assurance tasks within the organization. This includes an ongoing risk assessment for the organization and definition of countermeasures, continuous monitoring of process controls, the planning and execution of audits, and the detection of irregularities that might affect legal compliance with tax, fraud, or corruption regulations. A series of products deals with these aspects, including SAP Risk Management, SAP Process Control, SAP Audit Management, SAP Business Integrity Screening, and SAP Tax Compliance.

- International trade management refers to a set of products and solutions that focus on foreign trade and customs compliance to help companies safeguard their supply chains across continents. Apart from import and export compliance, Intrastat reporting, and legal control management, the screening of business partners is crucial for companies to comply with national and international trade regulations. The products that support these aspects provide a tight integration with and baseline for the SAP S/4HANA for international trade solution and include the SAP Watch List Screening, SAP Global Trade Services, and SAP Electronic Invoicing for Brazil for SAP S/4HANA applications.

- Access governance covers the technical access to critical infrastructure and systems across the intelligent enterprise, including SAP S/4HANA and third-party applications. Tightly connected to the digital identity of system users, SAP Access Control and SAP Cloud Identity Access Governance provide continuous access risk analysis and robust access controls, among other capabilities.

- Privacy and security are key aspects of governance. In the wake of increasing awareness of the security and use of personal or even sensitive data from customers, partners, and employees, companies need to handle elevated requirements for data privacy and security, with the European Union's General Data Protection Regulation (GDPR) and California's Consumer Privacy Act as just two examples. The SAP Data Privacy Governance application provides data privacy officers and stakeholders with the necessary tools and workflows to fulfill these requirements.

These solutions provide a variety of consumption models to anticipate varying business requirements: from native public cloud services that require no up-front installation, like SAP Cloud Identity Access Governance, through on-premise installations (such as SAP Access Control and SAP Global Trade Services) that allow for maximum configuration options, to hybrid private cloud deployments, companies can give choices to their GRC departments and experts. Depending on the business scenario and use case, the solutions use different technical capabilities. Products that aim to optimize existing manual and automated GRC processes and workflows while balancing implementation costs are based on the SAP ABAP platform. Scenarios that extract and analyze high volumes of data from business systems use the in-memory capabilities and performance of SAP HANA. The public cloud services of SAP GRC solutions rely on SAP Cloud Platform as their technical foundation (see Figure 14.46).

Figure 14.46 Overview of SAP GRC Solutions

14.11.2 SAP GRC Solutions and SAP S/4HANA Integration

A common approach in system landscapes is to deploy SAP GRC solutions separate from and alongside other business applications. For these scenarios, the data that GRC experts require for their work either is extracted and transferred through plug-ins (mostly add-on installations) that are installed on the business application or is replicated for mass data analysis scenarios, such as through the data replication option for SAP HANA. These plug-ins also ensure ongoing connectivity to the business systems and enable SAP GRC solutions to intercept operational processes in case risks or adverse events are identified. Using that mechanism, for example, a previously granted user authorization could be deprovisioned to prevent a potential segregation of duties violation and mitigate the related access risk. Because SAP S/4HANA serves as the digital core in SAP's intelligent enterprise strategy, these integration capabilities are further enhanced. To lower the cost of operations for IT departments, the SAP GRC solutions offer co-deployment options on SAP S/4HANA. As a result, SAP GRC solutions and SAP S/4HANA work in the same database, which avoids the need for data replication in specific scenarios that rely on mass data analysis. In addition, crucial connectivity that was previously handled through plug-ins is now directly embedded in SAP S/4HANA processes for specific GRC scenarios. The combination of these changes enables a new generation of real-time preventative compliance management for enterprises using both SAP S/4HANA and SAP GRC solutions. The following sections describe some integration examples and the technical architecture of the SAP GRC solutions in the areas of enterprise risk and compliance, international trade management, access governance, and privacy and security.

14.11.3 SAP S/4HANA Integration with Enterprise Risk and Compliance

As mentioned in the previous section, applications for enterprise risk and compliance can be co-deployed with SAP S/4HANA or installed as an independent side car to act as a hub. In both deployment options, a large variety of use cases are supported, such as compliance with various regulations and corporate guidelines, detection of fraud, screening of business partners, and audits. Companies typically set up hundreds to thousands of automated and manual controls to detect violations and ensure compliance. With the power of SAP HANA, hundreds of millions of records across different applications can be screened based on predefined and custom-built screening methods that create alerts once definable thresholds are exceeded. The co-deployment on SAP S/4HANA is the option of choice for many companies because it not only helps to reduce IT costs but also allows for pre-integrated and embedded scenarios. With this option, data can be reused in real time and without replication. One example of such an embedded scenario is the integration of business integrity screening (BIS) with the SAP payment program. In this scenario, companies can set up sophisticated detection strategies, with many rules checking the payment proposal itself, the invoice, and the bank and vendor master data, including sanctioned list screening of business partners and payments to high-risk countries. Once a suspicious payment is detected, the payment is blocked automatically. After an investigation based on a 360-degree view of all relevant data, the proposal can then be finally blocked or released for payment in the case of a false alarm.

14.11.4 SAP S/4HANA Integration with International Trade Management

International trade management capabilities are either embedded (the SAP S/4HANA solution for international trade) or deployed as an independent side car that acts as a hub with SAP S/4HANA Cloud (SAP Global Trade Services). Using SAP Global Trade Services as a hub allows the connection of multiple systems (SAP S/4HANA Cloud, SAP ERP, and legacy solutions) and helps to optimize compliance and customs processes such as central product classification or central communication with authorities. Being fully embedded within SAP S/4HANA Cloud, SAP S/4HANA for international trade makes use of all necessary master data (products and business partners) and logistics document data to ensure compliant export and import processes. The embargo and legal control functionalities are implemented within SAP S/4HANA Cloud, whereas SAP Watchlist Screening uses a dedicated communication scenario for the remote integration (SAP_COM_0219; SAP Watchlist Screening Integration; see Chapter 6, Section 6.4). The check results of the specific compliance checks are part of the logistics documents, and subsequent functions for those documents can be prevented by configuration. SAP Global Trade Services is connected to SAP S/4HANA Cloud by a dedicated integration scenario as well (SAP_COM_00084; SAP Global Trade Services Integration). Master data is replicated to SAP Global Trade Services, as are the logistics documents. For documents, the end user is informed of potential issues within SAP Global Trade Services. As long as they are not resolved by a user in SAP Global Trade Services, subsequent processes (such as outbound delivery creation for a blocked sales order or goods issue for a blocked outbound delivery) can be prevented by configuration. These issues need to be resolved within SAP Global Trade Services, and potential subsequent processes are then triggered in SAP S/4HANA Cloud. This also applies to customs processes, including external communication (EDI or web services) with authorities. As long as a customs declaration has not been released by the customs authorities, subsequent actions can be prevented. The results of compliance checks or customs activities are not persisted within SAP S/4HANA Cloud. If this information is required to influence the logistics process, it is derived synchronously from SAP Global Trade Services.

14.11.5 SAP S/4HANA Integration with Access Governance

Digital technologies continue to advance at an unprecedented rate, transforming businesses. It's necessary to broaden the scope of identity and access governance solutions to ensure companies are not limited to applications in silos. New standards are emerging to foster compatibility and interoperability and to support diverse platforms. Access governance solutions provide essential services that make it easier for users (employees, customers, and partners) to get the access they need as seamlessly as possible. This can be very complex because of applications with varied authorization capabilities and security requirements. SAP Cloud Identity Access Governance provides tools to remediate access risk for business applications like SAP S/4HANA. It ensures a continuous state of compliance when it comes to managing access, making the process of access governance effective.

Figure 14.47 Technical Architecture of SAP Cloud Identity Access Governance

As shown in Figure 14.47, SAP Cloud Identity Access Governance provides key capabilities to address the access management and compliance challenges for SAP S/4HANA:

- Access analysis provides flexible and extendable access risk analysis capabilities, with recommendations during the remediation process. An event-driven access risk analysis produces real-time access risk results.

- Role design provides a comprehensive approach to designing and optimizing business roles to reduce the complexity of role administration and simplify the process of access assignment with machine learning-based business role adoption.

- Access request enables a unified process through which employees are granted application access to only what they require to fulfill their job responsibilities. It provides complete visibility into the critical or sensitive areas to which a user might have access to help prevent security breaches.

- Access certification is designed to prevent privilege creep. The periodic recertification of access privileges helps establish a governance process to identify changes in individual usage behaviors and prevent the accumulation of unnecessary access privileges.

- Privilege access management establishes a governance process to monitor and flag not only sensitive or administrative transactions, but also what is being executed with these elevated authorizations. It enables an effective review of all activities by immediately flagging suspicious activities, using machine learning capabilities to analyze the logs and identify anomalies and behavior changes.

14.11.6 SAP S/4HANA Integration with SAP Privacy Governance

In today's threat-intensive environment, earning digital trust—especially around sensitive data—is no small feat. It is paramount for any organization to establish trust with customers, partners, and employees by securing and protecting personal data. Not only is personally identifiable information a target for hackers, but complex regulations across the globe define the governance and management processes needed to protect and manage this data and to prove compliance. Therefore, you need systems and processes that can support, document, and enable compliance. SAP Privacy Governance is designed to provide a single point of entry for the data protection officer (DPO) into their day-to-day tasks, from data subject rights management to data documentation to analysis capabilities to pure compliance fulfillment aspects relevant for privacy audits and security certifications. In this context, the automated privacy procedures and controls that SAP Privacy Governance provides for SAP S/4HANA are paramount for proper documentation, risk evaluation, and mitigation. Automated privacy procedures include an SAP S/4HANA data analysis mechanism to identify concrete issues based on the bundling of controls into work packages and their execution scheduling. The results can be presented in a findings list. Procedures and controls come with standard content and represent a strong competitive advantage. From an architectural perspective, the privacy governance solution consists of separate microservices on SAP Cloud Platform, including the GRC business services of automated procedures and controls for SAP S/4HANA.

14.12 Summary

The application architecture of SAP S/4HANA Finance differs significantly from the corresponding implementation of finance capabilities in SAP ERP. We have seen how the accounting applications have adopted the Universal Journal, which forms the single source of truth for financial postings. We explained how financial planning and analysis uses the architecture concepts of embedded and side-by-side analytics to provide reporting in real time as well as planning and simulation capabilities. Then we described the architecture for tax calculation and how to create and manage legal contracts using enterprise contract management and assembly. Next, we described how the architecture of payables management enables highly efficient processing of supplier invoices. Receivables management provides a set of integrated applications to keep track of payments to be received by a company. We explained the architecture of these applications, especially of the new SAP Credit Management application and of SAP S/4HANA Cloud for customer payments, which is built on SAP Cloud Platform. Convergent invoicing enables invoicing for large volumes of service consumption records. Convergent invoicing is integrated with contract accounting, which provides the corresponding subledger. The master agreements of contract accounting enable flexible contract models, especially for service industries. The architecture of treasury management enables managing the connection and relationship with banks and payment service providers, running central cash management, and finally managing financial assets as well as risks. We explained how payment processing can be centralized in a payment factory using advanced payment management. Furthermore, we described the integration architecture based on SAP MultiBank Connectivity. With SAP S/4HANA for central finance, companies can operate core finance and payment processes centrally while still running further ERP and finance applications for business units or subsidiaries. The data replication and integration technologies of SAP HANA and SAP S/4HANA enable this. SAP governance, risk, and compliance solutions are built to flexibly extend the core SAP S/4HANA business applications to fulfill compliance and governance tasks. Several of them can be co-deployed on SAP S/4HANA to keep the TCO low, as they run fully integrated on the same system. There are country-specific regulations and requirements that impact ERP processes. In the next chapter, we explain how SAP S/4HANA addresses these through localization.

15 Localization in SAP S/4HANA

SAP S/4HANA has built-in internationalization standards and provides localizations for country-specific regulations in order to support enterprises and organizations across the world.

SAP has more than 40 years of experience with developing ERP software used across industries and countries. Addressing the global market, SAP S/4HANA supports different languages, currencies, calendars, and time zones as part of SAP's I18N internationalization standards. The evolution and frequent changes of laws and regulations across countries and different business practices worldwide make the product localization dimension very important if companies want to succeed in the international marketplace. To ensure legal compliance and to allow for country-specific business practices, SAP S/4HANA on-premise offers 64 local versions in 39 different languages with specific localization and legal features for countries and regions, and SAP S/4HANA Cloud currently offers 43 local versions, with the plan to reach parity of localization between the cloud and on-premise editions within the next few years. Local versions add localized business logic, such as tax calculations, and reporting capabilities for application areas like financial accounting, asset accounting, taxation, customer/supplier invoicing, procurement and sales, master data validations, and much more. Typically, localization features extend the standard functionality and are deeply integrated into the application logic. Two specific applications in SAP S/4HANA address highly country-specific topics: statutory reporting and electronic documents. These are explained in more detail in Section 15.1 and Section 15.2. Enabling SAP customers and partners to develop localization extensions on their own is also important so that SAP S/4HANA can be used in countries and regions beyond the 64 localized ones. For this, several tools and approaches are available that are summarized under the Localization Toolkit umbrella. This is covered in Section 15.3.

15.1 Advanced Compliance Reporting

This section introduces the architecture of advanced compliance reporting, a framework for country-specific compliance reporting. In SAP S/4HANA, compliance reporting (also known as statutory reporting) is the legally required disclosure of aggregated financial and nonfinancial information to a government agency according to a prescribed format (XML, XBRL, JSON, flat file, or PDF). The report is to be submitted periodically or on request, often electronically. Optionally, business partners can be notified of the data that was sent to the government agency. Typical use cases include tax declarations (VAT return, EC sales list for VAT on sales within the European Union, withholding tax) and audit files. Advanced compliance reporting is a framework for creating statutory reports and includes various country-specific reports provided by SAP. The advanced compliance reporting framework serves as a foundation for all new legally required reports in SAP S/4HANA. The reporting framework is not only used by SAP to define compliance reports, but it can also be used by SAP customers and partners. Besides creating their own reports, they can extend reports shipped by SAP. SAP-shipped legal reports that exist in SAP ERP will gradually be migrated to advanced compliance reporting.

The architecture of advanced compliance reporting consists of three main components: business configuration, report definition, and, for execution, the report run (see Figure 15.1). Application data is consumed via CDS views, ABAP class methods, or analytical queries. Generated documents are stored centrally using Knowledge Provider (KPro), which also allows integration of a customer-specific document management system. PDF documents are generated with the help of Adobe Forms.

Having advanced compliance reporting as a central framework for compliance reporting has several advantages. For the business user, it comes with the SAP Fiori app Run Advanced Compliance Reports as a one-stop shop for all compliance reporting based on the report run component. This way, the user gets a single homogeneous user interface with one common feature set. Using the report run, the report is prepared, generated, and electronically submitted to a government agency. To understand and explain the data shown in the reports—for example, to an auditor—embedded analytics is integrated into the advanced compliance reporting framework. It provides transparency based on line items as a single source of truth. It lets you drill down to the data that was read to generate the report, independently of whether it was a line-item report (such as an audit file) or a report containing only totals (such as a VAT return with aggregated values in so-called tax boxes). Advanced compliance reporting includes workflow functionality so that reports can optionally be sent for approval—for example, to the head of the tax department—before the report is submitted to a government agency. The SAP Localization Hub, advanced compliance reporting service is an optional, SAP-operated cloud application for electronically sending statutory reports to government agencies (see ACR service in Figure 15.1). For application developers, the report definition component of advanced compliance reporting provides the SAP Fiori app Define Advanced Compliance Reports for building reports. The framework handles most of the business-user-related aspects out of the box, such that the application developer can focus on the reporting needs.

Figure 15.1 Advanced Compliance Reporting Architecture

Application developers can use the reporting framework in two ways. The first option is the model-based approach for developing a report. Such a report consists of one or more document definitions. For each document definition, the developer uploads a schema that describes the internal structure of the file to be generated. Supported schema formats are XSD, JSON, and ABAP Dictionary structures. In a mapping screen, data sources are assigned to schema fields. These data sources typically are CDS views from the SAP S/4HANA VDM (see Chapter 2, Section 2.1), but ABAP code can also be used. By using CDS views and modeled data mapping, calculation is pushed down to the database. Embedded analytics is supported out of the box whenever CDS analytic queries are used. Key user extensibility is also supported for model-based report definitions (see Chapter 5, Section 5.1).

The second option is suitable if an ABAP report already exists. Instead of reimplementing the entire fetching of data with the help of CDS views and mapping the result set to the document schema, the developer creates a lean compliance report definition that acts as a wrapper of the existing ABAP report. At runtime, the ABAP report is called by advanced compliance reporting. It generates the required files and writes them into advanced compliance reporting. Thus, advanced compliance reporting still serves as a single entry point for the business user, but embedded analytics, which is based on CDS views, is not available with that option.

In business configuration, a key user specifies which reports to submit, when, and for which reporting entity (the legal entity that is submitting the reports). Based on this configuration, reporting tasks are generated at runtime that prompt the business user to generate a report. For each report, additional report-specific data classification and aggregation might need to be maintained (for example, the aggregation of tax line items with customer-specific tax codes into tax boxes for VAT returns). This report-specific configuration is compatible with SAP ERP whenever a statutory report was already supported in SAP ERP.
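As a simple illustration of the model-based option, the following CDS view sketch could serve as a data source that is mapped to the fields of a VAT return schema, aggregating tax amounts per company code and tax code. The view name and the source view with its fields are assumed for illustration only and do not correspond to shipped SAP content.

@AbapCatalog.sqlViewName: 'ZXVATTOTALS'
@EndUserText.label: 'VAT totals per tax code (illustrative)'
define view ZX_VATTotalsByTaxCode
  as select from ZI_TaxItem   // assumed tax line item view
{
  key CompanyCode,
  key TaxCode,
      @Semantics.amount.currencyCode: 'CompanyCodeCurrency'
      sum( TaxAmountInCoCodeCrcy ) as TaxAmount,
      CompanyCodeCurrency
}
group by
  CompanyCode,
  TaxCode,
  CompanyCodeCurrency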

15.2 Document Compliance

In this section, we give an overview of SAP Document Compliance, a solution that provides a framework for exchanging business documents with external parties in different country-specific formats. It's available in both on-premise SAP S/4HANA and SAP S/4HANA Cloud.

15.2.1 Motivation

There is an increasing number of legal mandates worldwide demanding that certain document exchanges take place exclusively in electronic form. Most notable is electronic invoicing—that is, electronically sending invoices to the government for clearance, which is already mandatory in several countries, including Brazil, Italy, and Mexico, or sending them directly from the seller to the buyer to make the exchange more efficient and reliable. As different countries define different processes and formats for this document exchange, it becomes a real challenge to comply with all these legal requirements in the same system. This motivated the creation of SAP Document Compliance, a solution that takes advantage of the similarities between the processes in the different countries and networks and facilitates the implementation of their differences, while maintaining the same configuration steps and user experience. This approach prevents every new scenario from requiring modifications to the standard business processes or the creation of entirely new applications to manage the creation and exchange of electronic documents. When active for a certain country scenario (such as incoming and outgoing electronic invoicing in Italy), SAP Document Compliance helps ensure that every relevant document posted in SAP S/4HANA generates a corresponding electronic document (which we call an eDocument), which can then be sent to the required party (for example, a tax agency for clearance or a business partner for B2B exchanges).

15.2.2 Architecture Overview

SAP Document Compliance consists of functions in the SAP S/4HANA back end and services on SAP Cloud Platform (see Figure 15.2). The functions in the back end read the requested business data and generate the eDocuments, and they process received eDocuments. For sending and receiving the electronic documents, SAP Document Compliance either uses integration flows in the SAP Cloud Platform Integration service (see Chapter 6, Section 6.6) or relies on services built specifically for one document type, such as SAP Document Compliance, cloud edition, a set of specifications for cross-border electronic procurement.

Figure 15.2 SAP Document Compliance Architecture Overview

In Figure 15.3, the architecture of the SAP Document Compliance backend functions is shown in more detail. All business logic is present in the SAP S/4HANA back end. The process manager orchestrates the related and interdependent steps necessary to create the eDocument. Some steps are communication-related, while others take care of internal processing such as creating and adding attachments. Business users send the eDocument using the eDocument Cockpit. Outbound messages are sent through the interface connector to the communication platform. Inbound content is first processed in the inbound message handler to be transformed into a new eDocument or a process step of an existing document. Emails can be sent through the partner connector. The architecture is open, and different processes can be built using these components (hence the name eDocument framework). This flexibility is achieved by having configurations for each of the main objects (actions, processes, interfaces, and so on). When the source documents are posted, the related eDocuments are created and displayed in the cockpit. Electronic messages (in XML) are created in separate process steps. The mapping of the source data to the message is done in the interface connector.

Figure 15.3 Architecture of Back-End Functions in SAP Document Compliance

The cockpit provides a country-organized overview of the processes that the user is authorized for. A user may be authorized for the Hungarian invoice registration and invoicing for Italy, but not for the transport registration for Hungary. The eDocuments for the process are shown in detail by selecting the process. Possible actions, such as sending, are process-specific and appear upon selection. Specific changes to the content of the eDocuments (usually XML structures) can be made using cloud BAdIs. Because these changes are usually scenario-specific, the available cloud BAdIs and their capabilities are different for each scenario. There are also APIs that enable access to eDocument-related data so that customers can, for example, extract the received XML files for external archiving or further processing in another application. The APIs can be found in the SAP API Business Hub.

15.2.3 Recent Developments and Future Outlook

In addition to the legal mandates for using electronic invoicing in some countries, there are also initiatives to encourage this type of exchange everywhere. A very prominent example is Peppol, a set of artifacts and specifications enabling cross-border electronic procurement. It was created to standardize such processes in the European Union, but it already has global reach. SAP Document Compliance supports exchanges through Peppol for several countries, including Germany, the Netherlands, and Norway. The proven benefits of electronic invoicing bring about mandates and requests for other types of processes as well, such as the registration of transports or e-ordering. The architecture of SAP Document Compliance anticipates such future requirements, and the solution scope is constantly growing.

15.3 Localization Toolkit for SAP S/4HANA Cloud

Advanced compliance reporting and SAP Document Compliance are frameworks in SAP S/4HANA meant to fulfill statutory reporting and eDocument requirements, be it on-premise or in the cloud. While these frameworks address the requirements for a specific local version, companies might have additional business or region-specific needs beyond what is offered in the standard. This calls for a comprehensive set of tools to facilitate extensions by SAP customers and partners and bring more flexibility to the cloud. SAP offers a wide palette of country- and region-specific functions. For details, visit SAP S/4HANA Cloud - Country/Region-Specific Functions at help.sap.com. Given the global nature of businesses today, there might be specific requirements beyond the country- and region-specific functionalities listed thus far. In this section, you'll learn how SAP addresses growing business needs by providing localization-relevant extensibility in SAP S/4HANA Cloud.

15.3.1 Components of the Toolkit

A comprehensive set of tools to facilitate customer and partner extensions is a prerequisite for enabling localization-relevant extensibility. The localization toolkit for SAP S/4HANA Cloud consists of specific localization frameworks (for example, the extensibility features provided by advanced compliance reporting and by the payment medium workbench, a tool provided by SAP to configure and create payment media sent by organizations to their house banks), along with generic extensibility features such as CDS view, OData, business logic, and UI extensibility. Across these components, the toolkit provides guidance on how to cover requirements on top of the available localization features. The guidance spans several areas in the localization spectrum, as depicted in Figure 15.4. You can also see the stack of underlying tools and technologies that are used in creating the guidance for localization-relevant extensibility scenarios.

Figure 15.4 Localization Areas Addressed by Localization Toolkit for SAP S/4HANA Cloud and Underlying Tools and Technologies
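As one example of the generic extensibility features mentioned above, a custom CDS view can be exposed as an OData service so that a country-specific report or an external tool can consume the data. The following sketch assumes a custom source view, and all view and field names are chosen purely for illustration:

@AbapCatalog.sqlViewName: 'ZXWHTITEMS'
@EndUserText.label: 'Withholding tax items (illustrative)'
@OData.publish: true
define view ZX_WithholdingTaxItems
  as select from ZI_SupplierInvoiceItem   // assumed source view
{
  key SupplierInvoice,
  key FiscalYear,
      Supplier,
      WithholdingTaxCode,
      @Semantics.amount.currencyCode: 'DocumentCurrency'
      WithholdingTaxAmount,
      DocumentCurrency
}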

15.3.2 Extensibility Scenario Guides and the Community

The guides that form the core of the toolkit cover end-to-end scenarios for varied localization areas. For instance, a scenario explains how to extend a tax-reporting solution, display a custom field in a translated language, or adapt a form to fulfill a localization need. The toolkit provides an aggregation of several useful links and code snippets in one place, resulting in an efficient implementation time for partners and SAP S/4HANA Cloud customers. SAP provides an interactive community space dedicated to the localization toolkit, in which partners and customers can access a wide range of guides to check relevant scenarios and also highlight their localization needs, which could be addressed via extensibility. It also serves as a knowledge-sharing platform for SAP localization development experts to contribute their best practices. You can find more information about the community and participate in the discussions at https://community.sap.com/topics/localization-toolkit-s4hana-cloud.

15.4 Summary

In this chapter, we looked at two applications that address highly country-specific topics. We first covered advanced compliance reporting, a framework for country-specific compliance reporting. It is the foundation for all new legally required reports in SAP S/4HANA, and it can also be used by SAP customers and partners to implement their own reports. Developers can define compliance reports in a model-based way, by mapping a report schema to CDS views from the VDM. Alternatively, they can write ABAP code that typically wraps an existing ABAP report. With an optional cloud service for advanced compliance reporting, you can send statutory reports to government agencies. Next, we looked at the SAP Document Compliance solution, a framework for exchanging business documents with external parties in different country-specific formats. SAP Document Compliance provides a common configuration and user experience for different document types in different countries and facilitates the implementation of their specific aspects. With the corresponding configuration, SAP Document Compliance generates electronic documents from business transactions posted in SAP S/4HANA. These electronic documents can be sent to the required external party. We explained the architecture of SAP Document Compliance, with functionality both in the SAP S/4HANA back end and on SAP Cloud Platform. Finally, we introduced the localization toolkit for SAP S/4HANA Cloud. Based on a set of localization frameworks and the generic extensibility features of SAP S/4HANA, the toolkit provides guidance on how to cover localization requirements using these options. The toolkit is complemented by an interactive community space for sharing knowledge. With this chapter, we have completed the part about the architecture of business applications in SAP S/4HANA. In the final part of the book, we will focus on topics related to SAP S/4HANA Cloud, starting with scoping and configuration.

Part III SAP S/4HANA Cloud-Specific Architecture and Operations

Consuming SAP S/4HANA as a Software-as-a-Service (SaaS) offering is quite different from running an SAP S/4HANA on-premise installation. First, the provider of the service is SAP, not the company's IT department. SAP is responsible for managing the complete software lifecycle for the SAP S/4HANA Cloud SaaS offering—from the installation of required software on appropriate hardware, provisioning of SAP S/4HANA Cloud tenants to subscribers, monitoring the health of the services, and updating and upgrading the software to newer and better versions, to the eventual deprecation of the SAP S/4HANA Cloud tenant. This responsibility also includes user support, backup and recovery, and auditing of the operations procedures. From an architecture perspective, there are quite a few differences between SAP S/4HANA on-premise and SAP S/4HANA Cloud, even though the business applications originate from one code line. Subscribers to SAP S/4HANA Cloud expect a short implementation time—which is why certain business functionality is offered in a more standardized way compared to SAP S/4HANA on-premise. Guided scoping and configuration together with SAP reference content enable fast implementation projects (see Chapter 16). The cloud-specific identity and access management provides tools and infrastructure to manage business users and their authorizations conveniently while ensuring segregation of duty compliance (see Chapter 17). With SAP S/4HANA output management, SAP has built a new infrastructure for printing, email, forms, and EDI output, specifically designed for the cloud (see Chapter 18). To give you a basic understanding of how SAP S/4HANA Cloud works as a SaaS, we give insight into cloud operations (see Chapter 19). We explain how the multitenancy architecture of SAP S/4HANA Cloud works, as well as how software maintenance and built-in support are performed. We conclude with a detailed look at performance (see Chapter 20) and security (see Chapter 21), two highly critical cloud qualities.

16 Scoping and Configuration

Scoping and configuration enable companies to adapt software to their specific needs. SAP S/4HANA Cloud provides a new infrastructure for scoping and business configuration that speeds up implementation projects. SAP S/4HANA Cloud is planned to be the first solution supported by SAP Central Business Configuration.

It has always been a core strength of SAP's products to offer a high degree of flexibility and thereby a vast number of customizing possibilities. This allows for adjustment and extension of SAP's standard definition of business software to meet the needs of each specific consumer. As of today, SAP S/4HANA Cloud provides thousands of individual settings to tune an installation to meet an individual company's needs. However, which configuration combinations are truly semantically correct? Which combinations result in a consistent business process? Which combinations strike the right balance between diversification and high efficiency? For more than a decade, SAP has provided reference content that enables SAP customers to equip their solution with a consistent and reliable preconfiguration of all relevant business processes and supporting functionalities. This preconfiguration serves three key criteria:

- Rapid deployment: A preconfiguration allows you to start with a basic, consistent set of configurations for implementing SAP S/4HANA Cloud. You may at first accept standard settings as the default in many business areas and define custom settings in focus areas. With this mix, you can go productive with a fully functional solution and then later individualize the application further. This reduces the initial total cost of implementation (TCI) and leads to a rapid deployment and thus rapid go-live.

- Best-practice approach: SAP leverages its decades of experience and presents a best-of-breed solution for all core business processes of an enterprise. The best-practice content represents a balance of high performance, solid flexibility, and country-specific flavors. SAP's reference content is not stiff; you can adjust and extend content at various points. However, the reference content is a de facto standard and allows for a reliable and rapid implementation.

- Lifecycle compatibility: The business world is in continuous change, and so is SAP's reference content. One key differentiator of cloud software is the innovation adoption speed. New innovations must be quickly available, easy to consume, and highly reliable in terms of quality and performance. Thus, SAP adopts these changes in the reference content and updates the affected installations regularly. These updates must at no point endanger the stability of the running productive landscapes of SAP customers, however. Therefore, SAP's reference content is enriched with lifecycle-relevant metadata to control how changes need to be handled during the upgrade in existing implementations. This allows for a secure, automated upgrade process – an essential cloud quality. Changes that are incompatible with the lifecycle of the software and its content are avoided.

The SAP reference content for SAP S/4HANA Cloud consists of different types of content:

- Configuration data in business configuration sets: Configuration data consists of configuration entries that are packaged together in the form of a business configuration set (BC set) to realize a business capability; these entries can be fully and automatically deployed into the corresponding customizing tables of SAP S/4HANA Cloud.

- Master data scripts: Master data scripts are small reports that generate master data entries (such as business partners, house banks, and so on) fully and automatically in master data tables of SAP S/4HANA Cloud.

- Self-service configuration UIs: Self-service configuration UIs (SSC UIs) enable you to alter the reference content provided as configuration data. These SAP Fiori apps are intuitive to use, provide consistency and integrity checks, and help ensure that key-user adjustments are recorded correctly and protected against future content updates. Fine-tuning UIs are built from SSC UIs.

- Instruction guides: Instruction guides are documents containing manual instructions to configure the application.

With these different types of content, you can configure all relevant aspects of the software in terms of scope, functionality, variance, and integration. SAP's goal is to automate this procedure as much as possible. However, depending on the nature of the content, user interaction may be unavoidable. Certain settings refer to company-specific code lists or master data or require credentials known only to the respective company's IT department (especially in the case of integration content). Reference content configures SAP's software; it enables integrated processes and a harmonized and performant setup of the customer's SAP S/4HANA Cloud tenant. Therefore, the content validity is influenced by two additional dimensions:

- The organizational structure of the enterprise: A large portion of the required business functionality for a company is determined by the function of its organizational unit. What is the nature of the unit? Is it a plant? Storage? A sales office? Is it a legal entity or only a company-internal subdivision? SAP S/4HANA Cloud can accommodate multiple organizational units in one tenant and divides them with the help of dedicated company codes. Hence, configuration and the corresponding content need to carry the correct company code so that the customizing settings differentiate between the units. Also, the scope differs depending on the organizational unit's purpose.

- The relevant country (and region): Usually, an organizational unit comes along with a physical installation and thus an assignment to a legal space. The legal space also influences the selection of correct configuration settings because country-specific settings that either support legal compliance or represent regional best practices need to be chosen instead of global or general ones.

16.1 Configure Your Solution: Scoping and Configuration Today

As of now, SAP S/4HANA Cloud uses SAP solution builder tool as its scoping and configuration engine. The plan is to replace SAP solution builder tool in time with SAP Central Business Configuration (see Section 16.2).

16.1.1 Content Model of SAP Solution Builder Tool

The content model for SAP solution builder tool is based on the following elements (see also Figure 16.1):

- The solution represents a complete package of scope bundles, scope items, and building blocks. The solution is used to differentiate between completely different content models. Today, it's mainly used to separate the SAP S/4HANA Cloud suite from the SAP Marketing Cloud solution.

- A scope bundle is a collection of scope items that represents a fully consistent scope selection. You can choose scope bundles for the initial setup of the SAP S/4HANA Cloud instance provisioning.

- A scope item is a collection of building blocks that in itself is consistent from a technical perspective and usually represents a core functionality of SAP S/4HANA Cloud that would work independently. However, scope items often depend on a base amount of general settings. Scope bundles therefore ensure semantic consistency.

- A building block groups together configuration activities. The building block's main purpose is to reuse certain combinations of configuration activities that are required for various scope items. Building blocks are given country relevance and lifecycle metadata attributes. Such attributes help to treat changes in the upgrade process correctly without causing a disruption in the customer's tenants.

- An activity describes what exactly needs to be done with the records packaged and assigned to it. It links with a corresponding activity in the SAP Implementation Guide (IMG). The IMG activities allow you to alter configuration tables within SAP S/4HANA Cloud in a controlled and validated way. Semantic and syntactic checks are performed alongside an IMG activity to ensure no foreign key clashes or record inconsistencies occur.

- A record is a customizing entry that was recorded by SAP in a content reference system and will be inserted as part of the scoping selection. Simple records are fully defined lines. However, it's also possible that variable elements need to be considered (for example, company code) or that only certain rows of a line are defined.

Figure 16.1 Content Model of SAP Solution Builder Tool
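The following ABAP type sketch is one way to picture the hierarchy described above; all type names and fields are illustrative and do not reflect the actual data model of SAP solution builder tool.

TYPES:
  " a single customizing entry recorded by SAP in the content reference system
  BEGIN OF ty_record,
    table_name   TYPE tabname,
    field_values TYPE string,   " simplified; real records are typed per customizing table
  END OF ty_record,
  ty_records TYPE STANDARD TABLE OF ty_record WITH EMPTY KEY,

  " links to an IMG activity and carries the records to be deployed
  BEGIN OF ty_activity,
    img_activity TYPE string,
    records      TYPE ty_records,
  END OF ty_activity,
  ty_activities TYPE STANDARD TABLE OF ty_activity WITH EMPTY KEY,

  " reusable group of activities with country and lifecycle attributes
  BEGIN OF ty_building_block,
    country_relevance  TYPE land1,
    lifecycle_metadata TYPE string,
    activities         TYPE ty_activities,
  END OF ty_building_block,
  ty_building_blocks TYPE STANDARD TABLE OF ty_building_block WITH EMPTY KEY,

  " technically consistent core functionality built from building blocks
  BEGIN OF ty_scope_item,
    building_blocks TYPE ty_building_blocks,
  END OF ty_scope_item,
  ty_scope_items TYPE STANDARD TABLE OF ty_scope_item WITH EMPTY KEY,

  " semantically consistent selection of scope items
  BEGIN OF ty_scope_bundle,
    scope_items TYPE ty_scope_items,
  END OF ty_scope_bundle.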

The overall model is stored in multiple installation files and is validated with automated tests to ensure its overall integrity. The integrity is relevant not only when provisioning a new SAP S/4HANA Cloud tenant, but even more for a smooth upgrade process for existing tenants.

16.1.2 Scoping and Deployment

The initial scoping of SAP S/4HANA Cloud is based on scope bundles that you can select prior to the provisioning of SAP S/4HANA Cloud tenants. Upon request, SAP provides a tenant as a quality and test tenant. This tenant is then the basis for further scoping refinements (for example, additional selection of scope items, additional organizational units, or additional countries). The initial scoping process already leads to a deployment of content in your tenant and an activation of all selected content. The activation process follows the necessary sequential order to avoid temporary foreign key clashes and executes scripts to generate required master data entries. The fine-tuning process allows you to adjust preconfigured values by altering them, hiding them, or adding additional company-specific values. Which changes are possible depends on the customizing object and the rules of the corresponding IMG activity. However, every fine-tuning change is stored as a customer adaptation and kept intact (if possible) during upgrade processes initiated by SAP operations. This is one of the lifecycle compatibility criteria. Upon successful testing and quality assurance, the customer releases the SAP-initiated configurations plus any fine-tuning adaptations in the test and configuration tenant to be deployed into the production tenant. This deployment happens with SAP's fast transport import procedure and ensures minimal downtime. This process is called quality-to-productive (Q2P) transport. Note that in order to avoid partial deployments leading to production tenant disruptions, there can be only one open customer-initiated Q2P transport at a time to record all activities in the scoping and testing phase.

16.2 Outlook: SAP Central Business Configuration

SAP is currently in the process of releasing a new configuration tool called SAP Central Business Configuration. This tool differs in many aspects from the existing model:

- Stand-alone software: SAP Central Business Configuration is a stand-alone service available for registered users to use before any provisioning of a specific product or software package. The configuration tool allows you to browse through available business processes and fine-grained functions even prior to a product selection.

- One configuration for all products and solutions: The business adaptation catalog as the central scoping instrument is structured purely according to business terminology and is adjusted every time any of the available products introduces new innovations or changes in existing offerings. Different products and solutions are available to realize the selected business capabilities by having content assigned to the catalog entries. Customers can not only scope one product, such as SAP S/4HANA Cloud, but also select fully integrated enterprise services using the wide-ranging product portfolio of SAP and its partners.

- Scoping of business capabilities and integrated business processes: The customer's desire is not to identify required business content, but to focus on selecting the business processes to be operated by their organization. Based on the customer's selections—boundary conditions, organizational structure, country footprint, and industry focus—the correct content is identified and deployed into the necessary software packages by SAP Central Business Configuration. This also includes required integration settings.

- Self-service scoping and project management: SAP Central Business Configuration enables a compelling user experience when you are performing self-service scoping. The scoping is much more intuitive, transparent, simple to perform, and persona-driven in design. It also considers potentially required authorization boundaries to ensure that not every user can alter everything at any time, but that areas of responsibility are reflected in user profiling as well.

16.2.1 The Business Adaptation Catalog

The business adaptation catalog is a hierarchical representation of all business-driven configuration decisions relevant for the customer. It's the central element for the customer to select a scope. The selection within the business adaptation catalog defines the business scope of the customer's solution:

- It hides technical complexity.
- It organizes decisions in a tree, regrouping related business topics.
- It packages functionality as bundles of documented, selectable units.
- It guides you through (only) necessary activities based on your decisions.

The business adaptation catalog centrally organizes the SAP-delivered content, which means that it enforces overall consistency of delivered content and lets the customer's workspace be the central storage for all configuration decisions.

The business adaptation catalog consists of a hierarchy of structuring elements ending with fine-grained business elements (see Figure 16.2). The user can navigate in the catalog tree and scope through selection of available business capabilities. Dependencies are identified, and the user can then make an informed and conscious decision for the entire list of necessary selections. The business adaptation catalog supports and secures, but it never forces choices on the user.

Figure 16.2 Structure of the Business Adaptation Catalog (BAC)

There are three levels of structuring elements (see Figure 16.3):

- A business area is an area within the enterprise, such as sales, purchasing, or HR. It's the highest level within the business adaptation catalog. A business area contains one or more business packages, and it can contain one or more business options.

- A business package represents a logically definable subarea within the enterprise. The business package consists of capabilities that are required in that subarea. For example, within the sales business area, there is a logical contract management subarea. The business package contains all capabilities for contract management. A business package can contain one or more business topics and one or more business options.

- A business topic represents a logically definable subtopic within a subarea of the enterprise. The business topic consists of the capabilities required in that subarea, which can be grouped semantically. For example, within the contract management subarea, there is a contract closure business topic that includes all capabilities for contract closure. A business topic contains one or more business options. A business topic can be mandatory for a business package, which means that the topic is automatically selected when the business package is selected.

Figure 16.3 Basic Structure of Business Adaptation Catalog

16.2.2 The Ten Business Adaptation Catalog Commandments

The mere definition of structural elements is not sufficient to ensure a homogeneous catalog carrying all capabilities in a unified and easy-to-consume way. The following 10 binding principles complete the business adaptation catalog set of guidelines and rules:

1. There is only one business adaptation catalog for all products and all providers. This means that there are not multiple catalogs representing a scope offering per product; instead, all offerings are embedded in a single catalog.

2. The business adaptation catalog is independent of any product release cycle. Changes in the business adaptation catalog can be triggered by any participating solution at any possible time. They can be caused by new innovation, alterations, or corrections. Hence, the business adaptation catalog changes instantly when a change from underlying products or solutions becomes available. It is not bound to any specific software release cycle.

3. The business adaptation catalog is an exposure of software-enabled business capabilities. (That is, the business adaptation catalog is not a bundling of business content.) As noted in earlier sections, the business adaptation catalog puts the customer's selection of business capabilities in the center. Business content follows the business adaptation catalog selection.

4. The business adaptation catalog is product agnostic. The business adaptation catalog is not modeled on or documented for one specific software solution but represents a business view on processes and functions. The scoping does not determine a product choice expressly but implicitly, by either a previous customer choice or by identifying the best solution implementation scenario.

5. The business adaptation catalog is industry agnostic. The business adaptation catalog is not modeled on or documented for one specific industry but represents a business view on processes and functions in a general sense. Industry-specific content may be defined and attached nonetheless and selected based on a company's profiling.

6. The business adaptation catalog is country agnostic. The business adaptation catalog is not modeled on or documented for one specific country or region but represents a business view on processes and functions in a general sense. Country relevance is defined by country-specific content.

7. The business adaptation catalog is the only connection to business content. There is no scoping possibility that would result in the deployment of SAP reference business content other than through a scoping selection in the business adaptation catalog.

8. The business adaptation catalog is the only connection for business processes to business content. SAP's fully integrated business processes also leverage the business adaptation catalog to assemble the necessary processes and functions to realize an end-to-end business process.

9. Business adaptation catalog authoring requires clear roles and authorizations. Within the authoring process to build and extend the business adaptation catalog, products and subdivisions of products must not interfere with each other uncontrollably.

10. Business adaptation catalog scoping requires persona-driven authorizations. Scoping in the business adaptation catalog is equipped with an authorization concept to allow company rules and authorization models to be reflected in the scoping and project-execution activities within SAP Central Business Configuration.

Business Processes

On top of the business adaptation catalog, SAP defines integrated business processes. These business processes are used in the scoping process as a preselection for consistent and process-oriented scoping. Finer-grained scope variation is then done directly in the business adaptation catalog. SAP Central Business Configuration validates not only the consistency of a business process itself, but also the consistency between the business processes and the overall integrity of the scope selection in the business adaptation catalog. Business processes consist of mandatory parts, variable aspects, and, in rare cases, also mutual exclusions with other scoping selections. Note that the term business process is not yet finalized; currently these elements are still referred to as scenarios in the user interface of SAP Central Business Configuration.

16.2.4

Constraints

Constraints are a powerful instrument within the definition of the business adaptation catalog because they allow the expression of dependencies and mutual exclusions of business adaptation catalog selection options. Users are guided and supported in an otherwise unrestricted scoping experience. Whenever the user's selection conflicts with constraints, she or he will be made aware of the entire list of needed elements or will be faced with a choice if there are exclusive options.
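To make the idea concrete, the following ABAP sketch shows how such dependency and exclusion rules could be evaluated against a scoping selection. The element names, the rule table, and the validation logic are purely illustrative assumptions and are not part of SAP Central Business Configuration.

REPORT zdemo_catalog_constraints.

TYPES: BEGIN OF ty_constraint,
         element TYPE string,
         kind    TYPE string,   " REQUIRES or EXCLUDES
         target  TYPE string,
       END OF ty_constraint,
       ty_constraints TYPE STANDARD TABLE OF ty_constraint WITH EMPTY KEY,
       ty_selection   TYPE STANDARD TABLE OF string WITH EMPTY KEY.

DATA lv_message TYPE string.

" Hypothetical constraints: one dependency and one mutual exclusion.
DATA(lt_constraints) = VALUE ty_constraints(
  ( element = 'ADVANCED_SHIPPING' kind = 'REQUIRES' target = 'BASIC_SHIPPING' )
  ( element = 'CLASSIC_PRICING'   kind = 'EXCLUDES' target = 'ADVANCED_PRICING' ) ).

" The user's current scoping selection.
DATA(lt_selection) = VALUE ty_selection( ( `ADVANCED_SHIPPING` ) ( `ADVANCED_PRICING` ) ).

" Report missing prerequisites and conflicting selections.
LOOP AT lt_constraints INTO DATA(ls_rule).
  IF ls_rule-kind = 'REQUIRES'
     AND line_exists( lt_selection[ table_line = ls_rule-element ] )
     AND NOT line_exists( lt_selection[ table_line = ls_rule-target ] ).
    lv_message = |Selecting { ls_rule-element } also requires { ls_rule-target }.|.
    WRITE / lv_message.
  ELSEIF ls_rule-kind = 'EXCLUDES'
     AND line_exists( lt_selection[ table_line = ls_rule-element ] )
     AND line_exists( lt_selection[ table_line = ls_rule-target ] ).
    lv_message = |{ ls_rule-element } and { ls_rule-target } cannot be selected together.|.
    WRITE / lv_message.
  ENDIF.
ENDLOOP.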

16.2.5

From Scoping to Deployment

In contrast to the current scoping process with the SAP solution builder tool, SAP Central Business Configuration aims toward a process that combines scoping and fine-tuning prior to the deployment into the target landscape. This allows for a better correction and adjustment process. However, this ambition also depends on the capabilities of the receiving software product. In the case of SAP S/4HANA Cloud, there are still fine-tuning steps to be performed in the target tenant after a scope deployment. All scoping and fine-tuning activities are recorded in a customer workspace, so SAP Central Business Configuration is always aware of the SAP reference content footprint and the customer content adaptations. This supports a fully consistent follow-up scoping and a secure upgrade process.

16.3

Summary

As outlined at the beginning of this chapter, a highly flexible and customizable standard software solution has always been an important offering of SAP. Despite the vastness of modern business and of SAP S/4HANA Cloud, the existing business configuration solution (SAP solution builder tool) has ensured a stable provisioning and go-live of the solution for companies with different scopes, different focuses, and different organizational or regional footprints. The future direction, SAP Central Business Configuration, introduces new possibilities to manage multiproduct, multiprocess, and self-service multiuser scenarios in one intuitive-to-use, business-oriented, and consistent solution.

17

Identity and Access Management

Managing users, authorizations, and roles is an important building block for running systems in a secure and compliant way. This chapter introduces Identity and Access Management, SAP's new approach for simplifying these tasks in SAP S/4HANA Cloud.

In this chapter, you’ll get to know Identity and Access Management (IAM), which SAP has specifically designed to simplify the management of authorizations in SAP S/4HANA Cloud. IAM is not available in SAP S/4HANA on-premise. First, in Section 17.1, we explain the key entities of IAM, such as business catalogs, business roles, and restriction types, and the tools that are used to maintain them. Then, in Section 17.2, we focus on business users and describe how they are assigned the roles and catalogs that define the business functions and objects they are allowed to access. You’ll learn about what SAP provides and how you can use this content to define your specifics, including adaptations required by SAP changes during upgrades. In addition, we outline the interactions of IAM with scoping and with SAP Fiori pages and spaces, and we explain how IAM supports segregation of duty auditing.

17.1

Architecture Concepts of Identity and Access Management

Identity and Access Management in SAP S/4HANA Cloud is a simplification layer on top of the well-established ABAP authorization concept. It's based on ready-to-use IAM content delivered by SAP. SAP S/4HANA Cloud customers do not maintain this predefined IAM content directly but adapt it via a set of IAM tools for authorization administration. When working with these IAM tools, the authorization administrator is not accessing the underlying ABAP authorization concepts such as roles or authorization objects. The roles within the ABAP authorization concept are maintained using role maintenance transaction PFCG and are therefore referred to as PFCG roles for short. The required PFCG roles are automatically generated based on IAM entities, which the authorization administrator defines and configures. For example, the authorization administrator grants authorizations to a business user by assigning a business role such as general ledger accountant to the corresponding business user. Then IAM creates all required PFCG roles and assigns them to this business user automatically and completely hidden from the authorization administrator. However, for understanding how IAM works internally, we briefly introduce the ABAP authorization concept.

17.1.1

ABAP Authorization Concept

The ABAP authorization concept protects applications and services from unauthorized access. The authorizations assigned to a business user determine which actions they can execute and which data they can access. Authorizations in the ABAP authorization concept represent instances of authorization objects, which are combined in an authorization profile that is associated with a role. The authorization profiles are assigned to business users via the associated roles maintained with the role maintenance transaction PFCG. An authorization object groups up to 10 authorization fields. An authorization field represents either data, as attributes of a business object, or activities such as reading or changing. An authorization check during runtime is successful if one or more authorizations for an authorization object assigned to the business user fit the required combination of authorization field values for this authorization object. For example, an authorization object has an authorization field for cost center and another authorization field for cost element. If a business user has one authorization for this authorization object for the combination of cost center 1000 and cost element 50000 and a second authorization for the combination of cost center 2000 and cost element 60000, this business user will have access during runtime to line items in cost center 1000 for cost element 50000, but not for any other cost element, like 60000. In cost center 2000, the business user will only have access to line items for cost element 60000 but not for any other cost element.
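As a minimal ABAP sketch, an application-side check for this example could look as follows. The authorization object name Z_CCTR_CE and its fields are hypothetical stand-ins; real applications use the authorization objects delivered with them, and activity 03 denotes display in the standard activity catalog.

REPORT zdemo_authority_check.

" Hypothetical authorization object Z_CCTR_CE with the authorization fields
" KOSTL (cost center), KSTAR (cost element), and ACTVT (activity).
AUTHORITY-CHECK OBJECT 'Z_CCTR_CE'
  ID 'KOSTL' FIELD '1000'
  ID 'KSTAR' FIELD '50000'
  ID 'ACTVT' FIELD '03'.

IF sy-subrc = 0.
  WRITE / 'User may display line items for cost center 1000 and cost element 50000.'.
ELSE.
  WRITE / 'Access denied: no matching authorization is assigned to the user.'.
ENDIF.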

17.1.2

Authentication

Business users do not authenticate to their SAP S/4HANA Cloud tenant directly. They always authenticate to the Identity Authentication service of SAP Cloud Platform. Business users are signed into the SAP S/4HANA Cloud system via single sign-on to the Identity Authentication service using a Security Assertion Markup Language (SAML) assertion. Authentication rules and the detailed configuration for an authentication method (for example, the password policy for password-based authentication) have to be configured in the Identity Authentication service. SAML is an open-standard, XML-based language for security assertions that is used to exchange authentication data between an identity provider and a service provider. Thus, in SAML terminology, SAP S/4HANA Cloud is a SAML service provider (an application) and uses the Identity Authentication service as a SAML identity provider. As an SAP S/4HANA Cloud customer, you get two tenants of the Identity Authentication service: one for productive use and a second one for testing. If these tenants are already present because you use other SAP cloud solutions, the existing tenants can also be used for SAP S/4HANA Cloud. You can use the Identity Authentication service tenant in proxy mode and forward all authentication requests to your existing corporate identity provider. To grant a user access to an SAP S/4HANA Cloud tenant, the business user must be created in SAP S/4HANA Cloud and in the Identity Authentication service user store. In addition, the attribute configured as the subject name identifier in the Identity Authentication service for the SAP S/4HANA Cloud service provider must match the username for the business user in SAP S/4HANA Cloud. For a quick setup of the SAP S/4HANA Cloud tenant with a few users, SAP provides tools to upload users to SAP S/4HANA Cloud and the Identity Authentication service via a file. For productive use, SAP delivers communication scenarios for automatic replication of business users into SAP S/4HANA Cloud from external identity management systems, from SAP SuccessFactors Employee Central, or from external human capital management systems. As explained in Chapter 6, Section 6.4, communication users for inbound communication authenticate via user ID and password or via SSL client certificate.

17.1.3

Identity and Access Entities and Their Relationships

In SAP S/4HANA Cloud, you use the SAP-delivered IAM entities to define your own business roles. In the next step, you assign these business roles to your business users to grant access to applications and to define access restrictions. Most of these entities are predefined by SAP and cannot be changed as of SAP S/4HANA Cloud 2011. The IAM concepts are based on a set of entities, which are illustrated in Figure 17.1. Gray boxes indicate entities that are predefined by SAP and cannot be changed by the user. Entities with no fill color are defined by you as an IAM administrator. An authorization object extension is an internal object, not visible to an IAM administrator, which adds additional information to an ABAP authorization object required during the automatic generation of the PFCG roles in SAP S/4HANA Cloud. This additional information is, for example, used during role generation to generate read-only roles. An activity mapping defines, for an authorization object extension, the mapping between the permitted activity values of an ABAP authorization object and the supported access categories (write, read, or value help). It is an internal object not visible to an IAM administrator. For example, the permitted activity value 03 of an ABAP authorization object typically maps to the read access category.

Figure 17.1

IAM Entities in SAP S/4HANA Cloud

A restriction field corresponds to an authorization field in the ABAP authorization concept. Restriction fields mask those fields in an authorization object for which restriction values can be maintained by customers. Typical examples for such fields are sales organization, company code, or cost center. Restriction fields are combined in one or more restriction types. Note that restriction fields are defined by SAP, and you cannot define your own restriction fields.

A restriction type defines a set of one or multiple restriction fields for which values can be maintained within a business role to define instance-based restrictions. Instance-based restrictions define which data records users with a certain role are authorized to access, create, or change. Examples are the company code restriction type with one company code restriction field or the sales area restriction type with the three restriction fields sales organization, sales division, and distribution channel. The purpose of a restriction type is to define a reduced view of the authorization fields of one or a group of ABAP authorization objects. Restriction types are assigned to business catalogs. Defining your own restriction types is not supported. A restriction is a customer-defined instance of a restriction type, which defines the access restrictions for an access category (read, write, or value help), together with the values for the restriction fields (such as company code 1000 for the company code restriction type, or sales organization 1000 combined with sales division 01 and distribution channel 01 for the sales area restriction type). A restriction can only be maintained within a business role. A business catalog groups the applications (accessible via a tile in the SAP Fiori launchpad or via application-to-application navigation; see Chapter 3, Section 3.1) and the supported restriction types to define the instance-based restrictions that belong together within a business process or a part of it. It is an SAP-delivered building block for maintaining business roles. The available business catalogs in an SAP S/4HANA Cloud tenant depend on the chosen edition and pricing category and the configured scoping. The business catalogs are regularly reviewed in International Standard on Assurance Engagements (ISAE) 3000 audits for segregation of duty conflicts. Business catalogs are predefined by SAP and cannot be changed (as of SAP S/4HANA Cloud 2011). A business catalog can have dependencies on other business catalogs, which means that the business catalog works only in combination with its dependent catalogs. The SAP-delivered business catalogs can be deprecated and replaced by one or more successors. A business role template is an SAP-delivered grouping of business catalogs representing typical personas, which IAM administrators can use to create a business role. In contrast to business catalogs, segregation of duty (SoD) compliance is not ensured for business role templates. You cannot create your own business role template, but you can create a new business role as a copy of an existing one. A business role is an aggregation of one or more business catalogs. Assigning a user to a business role grants access rights to the user (using restrictions) and defines which applications are visible on the overview page of a logged-on business user in the SAP Fiori launchpad. A business role can be created by the customer from an SAP-delivered business role template, by adding SAP-delivered business catalogs directly to a business role, or by deriving it from a customer-defined master business role. You can maintain restrictions for all restriction types included in the business catalogs aggregated in the business role. The included restriction types of the catalogs are merged, and maintained restrictions are therefore always applied to all included business catalogs.
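The following ABAP sketch models a business role with its catalogs and restrictions as plain data structures, purely to visualize how these entities relate. All type names, catalog IDs, and values are invented for illustration and do not correspond to the actual IAM implementation.

REPORT zdemo_iam_entities.

" Illustrative, heavily simplified model of the customer-facing IAM entities.
TYPES: BEGIN OF ty_restriction,
         restriction_type TYPE string,   " e.g. company code or sales area
         access_category  TYPE string,   " READ, WRITE, or VALUE_HELP
         field_values     TYPE string,   " values for the restriction fields
       END OF ty_restriction,
       ty_restrictions TYPE STANDARD TABLE OF ty_restriction WITH EMPTY KEY,
       ty_catalogs     TYPE STANDARD TABLE OF string WITH EMPTY KEY,
       BEGIN OF ty_business_role,
         role_id      TYPE string,
         catalogs     TYPE ty_catalogs,      " SAP-delivered business catalogs
         restrictions TYPE ty_restrictions,
       END OF ty_business_role.

" A business role aggregating catalogs; the maintained restrictions apply to
" all included catalogs that support the respective restriction type.
" At activation time, IAM would generate one PFCG role per included catalog.
DATA(ls_role) = VALUE ty_business_role(
  role_id      = 'ZBR_GL_ACCOUNTANT'
  catalogs     = VALUE #( ( `SAP_EXAMPLE_BC_GL_ACCOUNTING` ) )
  restrictions = VALUE #(
    ( restriction_type = 'Company Code' access_category = 'WRITE' field_values = '1000' )
    ( restriction_type = 'Company Code' access_category = 'READ'  field_values = '1000 2000' ) ) ).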
By assigning a business role to a business user, this user will get access to all applications of the included business catalogs and all access rights defined by the restrictions. When a business role is activated, PFCG roles are automatically generated for each added business catalog and assigned to the business users. If more than one business role is assigned to a business user, all access rights included in the business roles are merged on the level of the business user. In addition to granting access rights, the assignment of a business role to a business user will also define the tiles visible on the homepage of a logged-on business user in the SAP Fiori launchpad. A business role can be marked by the customer as a master business role, from which other business roles can be derived. All derived roles share the assigned business catalogs and the defined restrictions from the master business role. These values cannot be changed in a derived role, but additional restrictions can be defined.

17.1.4

Identity and Access Management Tools

SAP delivers a set of easy-to-use tools to support authorizations and user management in SAP S/4HANA Cloud. The available tools are distributed to separate catalogs to support segregation of duty. There are business catalogs for user management (business catalog SAP_CORE_BC_IAM_UM), business role management (business catalog SAP_CORE_BC_IAM_RM), and business role assignment (business catalog SAP_CORE_BC_IAM_RA). The following list gives an overview of the most important tools:

Maintain Business Users: This tool allows you to maintain the business user settings, like validity period or data format, and the assignment of business roles. In addition, it enables you to monitor the status of business users (for example, logged on since key date) and to display the change documents for business users. The creation of business users is not possible with this tool. This must be done by file upload or automatic replication from external identity management systems or HCM systems.

Maintain Deleted Business Users: This tool allows you to display deleted business users within their retention period and to maintain whether a recreation of a business user is allowed or forbidden.

Maintain Business Roles: This is the central tool for authorization administration. It allows you to create and edit business roles and maintain their restrictions. It also allows you to display the change documents of a business role and to download business roles as an XML file.

Business Role Templates: This tool provides an overview of the business role templates delivered by SAP. It lists the business roles created from the business role templates and allows you to show where existing roles differ from the role templates with respect to included business catalogs. If this is the case, it can also be used to synchronize the business roles with the business role templates with respect to the included catalogs if required.

Business Catalogs: This tool gives an overview of the SAP-delivered business catalogs. It shows the included applications, the assigned restriction types, their dependencies to other business catalogs, and their usage within business roles and business role templates. The tool also shows which business catalogs have been deprecated by SAP and which catalogs replace them as their successors. If this happens, the tool can adjust affected business roles so that they include the successor catalogs instead of the deprecated ones.

Display Restriction Type: With this tool, you can view restriction types, including their description, their assigned restriction fields, and the business catalogs in which the restriction type is assigned.

IAM Key Figures: This dashboard gives a quick overview of the business users by displaying information from Maintain Business Users as charts.

IAM Information System: This tool allows for complex searches across business users, business roles, business role templates, business catalogs, restrictions, and applications. For example, you can address the following questions: Which users can access a specific application? Which restrictions does a user have?

Manage Business Role Changes after Upgrade: This tool displays the relevant changes to business catalogs and restriction types together with the affected business roles after an upgrade. It can be used after an upgrade as a comprehensive starting point to identify which adaptations to the customer-defined business roles are required by the release changes.

Display Authorization Trace: With this tool, you can activate and show the results of an authorization trace for a business user. The trace result indicates whether an authorization check was successful or failed. For authorization checks of a certain restriction type, the detailed information lists all business roles that allow you to define a restriction for this restriction type.

17.1.5

SAP Fiori Pages and Spaces

As explained earlier, business roles define not only access restrictions, but also the homepage a user sees in the SAP Fiori launchpad when logging on to an SAP S/4HANA Cloud tenant. Each business catalog has an SAP Fiori launchpad business catalog assigned, which defines, with the included launchpad tiles, the applications a business user can launch from the SAP Fiori launchpad. It isn't possible to change the SAP-delivered SAP Fiori launchpad business catalogs. Figure 17.2 shows the entities defining the homepage of a business user in the SAP Fiori launchpad. Entities with a gray fill cannot be changed by SAP customers. Entities with no fill are defined by SAP customers. Which launchpad tiles are visible on a user's homepage is defined by the launchpad page, which is assigned to a business role via a launchpad space. A launchpad space for a business role must be created by the SAP customer in the Maintain Business Roles application, either by creating a new space or by using an already existing space. When a new launchpad space is created, a launchpad page is automatically assigned. Only one page can be assigned to a business role. SAP also delivers a space template for business role templates, which can be used to create a space when a business role is created from a business role template. The layout of a page can be defined with the Manage Launchpad Pages tool (included in business catalog SAP_CORE_BC_UI_FLD), which can also be launched from within the Maintain Business Roles application. A launchpad page can be structured into multiple sections in which you can assign any of the tiles included in the business catalogs assigned to the business role.

Figure 17.2

Entities Defining Business User’s Overview Page

17.2

Managing Users, Roles, and Catalogs

This section describes the management of IAM content. The lifecycle of IAM starts with SAP provisioning business catalogs and PFCG roles (including authorization values). In Section 17.1, we noted that PFCG roles are created internally when the administrator activates a business role. This is done based on a predelivered PFCG role that is used as a template and completed with the authorizations that correspond to the restrictions maintained by the IAM administrator. For authorization fields exposed via restriction fields, the administrator defines the desired settings, while all other authorization values are copied over from the SAP-defined defaults of the template. The combination of these restrictions defined by SAP and by the customer's administrator is the basis for the customer-specific setup of the user roles, which are assigned to the customer's business users. After the initial setup of the roles assigned to the users, further changes during the software lifecycle are implied by changes to the SAP-provided content (such as the introduction of new business catalogs, restructuring of business catalogs, changes to authorization values, or adding/removing authorization checks) or by changes of the SAP customer's business scope or user role scope. In addition, the SAP-delivered business catalog has an SAP Fiori catalog assigned that determines which applications the user can access in the SAP Fiori launchpad (see also Chapter 3, Section 3.1).
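The following ABAP sketch illustrates this generation idea: fixed SAP defaults are copied as-is, while fields exposed as restriction fields are filled from the administrator's restrictions. The authorization object Z_SD_ORDER, its fields, and the generation logic are invented for illustration and do not reflect the real role-generation implementation.

REPORT zdemo_role_generation.

TYPES: BEGIN OF ty_auth_value,
         auth_object TYPE string,
         auth_field  TYPE string,
         value       TYPE string,
         fixed       TYPE abap_bool,   " abap_true: SAP-defined default, not changeable
       END OF ty_auth_value,
       ty_auth_values TYPE STANDARD TABLE OF ty_auth_value WITH EMPTY KEY.

" Template PFCG role: a fixed default plus one field exposed as a restriction field.
DATA(lt_template) = VALUE ty_auth_values(
  ( auth_object = 'Z_SD_ORDER' auth_field = 'AUART' value = 'OR' fixed = abap_true )
  ( auth_object = 'Z_SD_ORDER' auth_field = 'VKORG' value = ''   fixed = abap_false ) ).

" Restriction value maintained by the IAM administrator in the business role.
DATA(lv_sales_org_restriction) = `1010`.

" Generation step: copy the fixed defaults, fill the exposed fields from the restrictions.
DATA(lt_generated) = lt_template.
LOOP AT lt_generated ASSIGNING FIELD-SYMBOL(<ls_auth>) WHERE fixed = abap_false.
  <ls_auth>-value = lv_sales_org_restriction.
ENDLOOP.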

17.2.1

Communication Arrangements

Apart from human users, several SAP or third-party applications integrate with SAP S/4HANA Cloud. As for human users, IAM manages authorizations for remote access from other applications. Each of these integrations relies on a communication arrangement in the SAP S/4HANA tenant. The main IAM parts of a communication arrangement are the communication user and the corresponding roles, which grant the necessary authorizations for communication with the SAP S/4HANA tenant. Communication arrangements do not support defining restrictions for communication users. More details are described in Chapter 6, Section 6.4.

17.2.2

User Types

In your SAP S/4HANA Cloud tenant, several types of users exist. The business user is managed by the SAP customer's user administrator. In addition, SAP creates a defined set of technical users, which are intended for SAP operations of the tenant (for example, specific support users, who have the authorizations needed for SAP to provide support). These users are fully managed by SAP, and there is no need and no option for customers to access them. Let's discuss the most important types of users you work with as an administrator. Business users are the core of the user management in the customer's SAP S/4HANA Cloud tenant. They relate to the "real" people working with the business applications. To these users, the administrator needs to assign business roles to define which tasks a user should be allowed to perform and which authorizations they should be granted.

Administration and business configuration users are the first customer-specific users in the tenant. They are created as part of the tenant-provisioning process. Administration users own the authorization to create business users. The business configuration user is needed in the quality tenant only (see Chapter 16, Section 16.1.2) and is used for the configuration of the business processes. Communication users are needed for inbound communication in customer-managed communication scenarios (see Chapter 6, Section 6.4). Customers create communication users and assign them to communication systems. Then they create a communication arrangement based on a communication scenario and a communication system. When saving the communication arrangement, the communication user is assigned to all communication scenario roles resulting from the communication arrangements it's used in. Finally, printing users are needed for the connection between the SAP Cloud Print Manager in your corporate network and the print queue web access service in the cloud. For more information about these components, see Chapter 18, Section 18.2.

17.2.3

SAP PFCG Roles and Business Catalogs

Simplified maintenance for customers is one of the main aspects of IAM in SAP S/4HANA Cloud. Under the hood, however, it uses the ABAP authorization mechanisms as the underlying basis. As explained earlier, SAP provides business catalog templates as the basis for creating a customer’s own business roles and catalogs, which are then assigned to the business users. The business catalog templates come with predefined PFCG roles, which are the basis for generating specific PFCG roles for each business role. Figure 17.3 shows the design-time artifacts underlying the SAP-delivered business role templates and business catalogs.

Figure 17.3

Overview of IAM Content Delivered by SAP

In addition to the information in Section 17.1.3, it also shows the underlying objects to which restriction types and restriction fields are mapped. The authorization object extension defines how a given activity of the authorization object is mapped to the write, read, or value help access category (explained ahead).

How Predelivered PFCG Roles Are Used

The SAP-provided PFCG roles cover all relevant authorizations. Parts of these authorizations are defined as fixed values, which means SAP defines the parameter values for the authorization object or field and they can't be changed. These parameters are therefore hidden from the IAM administrator who creates a business role from a business role template. A business role for an employee dealing with sell-from-stock orders would already carry the limitation of the order type to the corresponding sell-from-stock order type, for example. Beyond this, all authorizations for which the IAM administrator can define restrictions are accessible via restriction types. For SAP S/4HANA Cloud, SAP introduced simplification layers on top of the existing authorization concept. On the lowest level, authorization checks allow finely granular separations based on authorization activities. On the IAM level, these fine-grained activities are not visible. For simplification, the different activities are grouped into three categories, called access categories:

1. Read access groups all activities that do not change the object.

2. Write access groups all activities that perform change operations on an object. This includes, for example, creation, change, and deletion.

3. Value help groups activities that are checked during value help functions.

Structuring of SAP Business Role Templates and Business Catalogs

The business catalogs and related business role templates provided by SAP are structured in such a way that they can be assigned to users without providing unnecessary or unintended authorizations. It is especially important that work on one business object can be distributed between different persons, with each of them given access only to what is needed to perform their own job. This allows you to follow the segregation of duty principle: you can split the processing of a business object between different users by simply assigning the corresponding business roles and catalogs to each of them.

Lifecycle Changes of SAP-Delivered Roles and Catalogs

Like all software and development artifacts, roles and catalogs underlie a lifecycle. Assume that with some version of the software, SAP provided a new business role template or business catalog to enable access to a new function or feature. In subsequent releases, this role/catalog may undergo certain changes. These changes can include the following:

- Changes to restriction types. Such changes may include the following:

  - New restriction types are added because an already existing check that was predefined and fixed is now changeable by the customer and needs an explicit authorization assignment.
  - A completely new check is introduced to increase the security level, and a new authorization object is included in the new version of the PFCG role predefined by SAP. However, this authorization would not be in the existing PFCG roles for business roles that were generated from the old version of the predelivered PFCG role.

- Changes to authorization default values due to a refinement of the authorization concept.

- Adding new applications to a business catalog as part of the ongoing enhancement of the service. A user with a business role that includes this catalog now has access to that application, and it may be necessary to add additional authorizations to the role for the new application (using restrictions).

- Applications are removed from a business catalog because they became obsolete, are replaced by a successor, or are moved to another business catalog.

- Splitting/restructuring of a business catalog due to segregation of duties or business requirements. This is usually done by deprecation of the old one and the introduction of the new ones. For this case, SAP provides information about which catalogs are the successor(s) of the deprecated one. The deprecation phase allows you to adjust your own business roles in due time, thus avoiding an interruption for business users.

Finally, to avoid incompatible changes, SAP introduces checks for new authorizations in a switched way where applicable. In this case, a new authorization object is shipped in a given release. It isn't yet checked in the coding (it's "switched off"), but customers can already include it in their roles. In the next release, this check is switched on. As the customer roles are already prepared for this new check, no unwanted side effects occur. SAP provides information about the changes in the "What's New" documentation and in the system in the relevant IAM applications. The impact of these changes on customer roles and catalogs is described in Section 17.2.4.

17.2.4

Management of Users, Roles, and Catalogs by Customers

IAM in SAP S/4HANA Cloud is reduced to the essentials to provide a simplified experience and to make management of user roles more efficient. Using the templates provided by SAP (Section 17.2.3), your main focus is to provide your users the needed business catalogs and business roles to perform their daily work.

Scoping

As one of the first steps of the implementation of SAP S/4HANA Cloud, the SAP customer decides in the so-called scoping which business processes will be used in the company (see Chapter 16, Section 16.1.2).

From an IAM perspective, SAP S/4HANA Cloud contains business catalogs and roles covering the complete functional scope. To reflect the scoping decisions in the IAM context, SAP interrelates the IAM objects with the scoping entities. With this information in place, the scoping done in the implementation phase determines which business catalogs and roles are relevant. Leveraging this, SAP S/4HANA Cloud presents all IAM objects that are relevant for your business scope selection; you only have to maintain the authorization restrictions for these relevant objects.

User Creation and Role Assignment

The main responsibility of an IAM administrator in an SAP S/4HANA Cloud tenant is to create business users and to assign business roles to them. The IAM administrator is responsible for the following activities:

- Identification and definition of tasks to be assigned to a specific user. The IAM administrator uses this to check whether existing roles fulfill the need or whether a new one must be created.

- Segregation of duty. The IAM administrator defines which tasks need to be separated between users—for example, creation of a business object and approval of it. Based on this, the IAM administrator decides which roles are assigned to a user. This also requires a check of whether the defined split of the roles and catalogs follows the segregation of duty principles.

- Creation of roles and catalogs based on the SAP templates (taking segregation of duty into account).

As a first step, the IAM administrator creates users in the tenant. This can be done directly, or they can first be created in an HCM service like SAP SuccessFactors Employee Central and replicated from there into the SAP S/4HANA Cloud tenant. After the business roles and catalogs are created, they are assigned to the users.

Change Management of Customer-Managed Users, Roles, and Catalogs

After the initial setup of users, roles, and catalogs, the evolution of the SAP S/4HANA Cloud tenant requires action due to different software lifecycle events that bring changes to the SAP IAM content. One of the main drivers of IAM changes are changes made by SAP, as described in Section 17.2.3. Some changes on a technical/administration level that do not impact the authorizations of users are taken over automatically. Information on changes made by SAP to its delivered roles and catalogs is provided in written form in the "What's New" documentation. In addition, the IAM Upgrade tools show changes that impact the customer-managed roles and catalogs. For all other changes with an impact on user authorizations, you as an IAM administrator have the final control—for example:

- SAP decreases the authorization defaults in the predelivered PFCG role. In that case, a user with a role created from the old version would have more access than SAP foresees. You can decide whether the user should also get less authorization or whether the less restricted access is still appropriate.

- SAP allows additional access by adding authorizations to the predelivered PFCG role. In that case, a user with a business role created from the old version would have less access than SAP foresees. You can decide whether the user should also get more authorization or whether the more restricted access is still appropriate.

- You can add restrictions related to new authorization checks (delivered switched off by SAP in release N) to business roles assigned to your users. Because the corresponding check is switched off, these restrictions currently do not take effect for the users. In the next release (N + 1), there is no immediate need for adaptations by the IAM administrator. The check is now active (switched on), and the restrictions maintained in the previous release (N) apply for the users.

SAP provides corresponding tools that give an overview of IAM objects changed in the upgrade (like the introduction of a new restriction type). These tools allow you to navigate from the overview to the affected objects that need to be updated by the administrator. The goal is that these changes and adjustments can be done in the most efficient way while still giving the IAM administrator full control over all user-related authorization changes.

17.2.5

Auditors

SAP S/4HANA Cloud includes tools that help to answer the typical monitoring needs of a system auditor in the context of identity and access management. Typically, the same tools used for IAM administration can also be used by an auditor via a business role in which write access is fully excluded by maintaining a no-access restriction for the write access category. Let's discuss the most important monitoring needs and look briefly at how the required information can be accessed:

- The customer-configured authentication rules and password policies for business users can be monitored in the change log of the Identity Authentication service.

- For inbound communication with a communication user, the Communication Arrangements app (included in business catalog SAP_CORE_BC_COM) shows for a communication arrangement the currently assigned communication user. There you can see the configured inbound authentication method for that user and access the change log of the communication arrangement.

- The list of the existing users of an SAP S/4HANA Cloud tenant and their locking status can be displayed for technical users in the Display Technical Users app (included in business catalog SAP_CORE_BC_IAM_UM), such as for the initial user (SAP_CUST_INI), communication user, print user, SAP-managed communication user, SAP-managed technical user, and SAP-managed support user. For business users, the same information is listed in the Maintain Business Users app (included in business catalogs SAP_CORE_BC_IAM_UM and SAP_CORE_BC_IAM_RA). Deleted business users can be listed during their retention period in the Maintain Deleted Business Users app (included in business catalog SAP_CORE_BC_IAM_UMD).

- A required segregation of duty for authorization and user administration can be achieved by the assignment of the User Management (business catalog SAP_CORE_BC_IAM_UM), Role Management (business catalog SAP_CORE_BC_IAM_RM), and Role Assignment (business catalog SAP_CORE_BC_IAM_RA) business catalogs to a fully separate group of business users.

- The number of assigned business roles for a business user can be monitored in the Maintain Business Users app (included in business catalogs SAP_CORE_BC_IAM_UM and SAP_CORE_BC_IAM_RA). This tool also allows you to check the use of business user accounts by listing, for example, business users that have not logged on at all or have not logged on after a given key date. The IAM Key Figures application (included in the business catalogs SAP_CORE_BC_IAM_RA, SAP_CORE_BC_IAM_RM, and SAP_CORE_BC_IAM_UM) provides an overview of the usage of business user accounts in a graphical dashboard.

- The change log for business roles includes the adding and removing of business catalogs, the maintenance of restrictions, and the assignment of a business role to a business user. It can be accessed from within the Maintain Business Roles app (included in business catalogs SAP_CORE_BC_IAM_RA and SAP_CORE_BC_IAM_RM).

- For the change log for business users, the attributes changed in the Identity Authentication service can be monitored in the corresponding audit log, including, for example, a password change. The logged actions for a business user in the SAP S/4HANA Cloud tenant (user created, user deleted, lock status changed, validity period changed, alias changed, business role changes) are accessible via the Maintain Business Users app (included in business catalogs SAP_CORE_BC_IAM_UM and SAP_CORE_BC_IAM_RA). The Maintain Communication Users tool (included in business catalog SAP_CORE_BC_COM) allows you to access the change log for customer-managed communication users, including, for example, password changes.

- The Display Security Audit Log app (included in business catalog SAP_CORE_BC_SEC_SAL) shows security-related system information, such as unsuccessful logon attempts or changes of field values via the ABAP debugger by SAP support users. Which events are logged is preconfigured by SAP and cannot be changed by administrators.

17.3

Summary

In this chapter, you learned about IAM in SAP S/4HANA Cloud, SAP's simplified way of managing authorizations. We introduced the key entities of IAM, such as business catalogs, business roles, and restriction types, and the tools that are used to maintain them. We also explained how these concepts relate to SAP Fiori pages and spaces. We discussed how users are authenticated as well as the authorization aspects of communication arrangements for configuring integration with other systems. Further, we looked at business users and how they are assigned roles and catalogs, which define the business functions and objects they can access. Here you learned about the templates that are provided by SAP and how you can use them to define your specific content. We further discussed the different aspects of the lifecycle of IAM entities due to changes from SAP or new requirements from the customer's business. Finally, we explained how IAM supports auditors in getting the required access.

18

Output Management

This chapter explains the architecture of output management with focus on SAP S/4HANA Cloud. You'll learn how documents can be sent to printers installed in your on-premise landscape and how sending emails and EDI output works in a cloud environment.

Because output management in a cloud environment works via a different technical setup than in an on-premise installation, SAP has developed a new output management solution called SAP S/4HANA output management. This framework solves cloud-specific challenges such as printing documents on local printers that are not directly addressable from the cloud, and it brings new innovations to already existing technologies, including form templates and email. The newly implemented SAP S/4HANA output control reuse service unifies all output-related tasks for business applications with more complex output scenarios. This chapter describes the basic services of output management and the output control reuse service, which makes use of the basic services and provides even more features for consuming applications.

18.1

Architecture Overview

SAP S/4HANA output management is the combination of individual services for document creation, document output, and central output control on top of these basic services (see Figure 18.1). What functionality of output management a business application provides to the user in what output scenario depends mainly on the business application itself. For example, a purchaser creates a purchase order, which technically uses output control to determine the relevant output based on defined business rules (customizing). The output is sent to the supplier using a purchase order form template (document creation), which is printed (document output). In another scenario, an email is sent, which directly uses the document output APIs of Business Communication Services (BCS)—for example, in SAP Business Workflow. The following sections look deeper at each part and provide more details of the involved components.

Figure 18.1

Architecture of SAP S/4HANA Output Management

18.2

Printing

For printing services in a SaaS solution, an application running in the cloud must send documents to the physical printer in the SaaS user's network. Therefore, SAP has introduced print queues (replacing the spool system from SAP ERP in SAP S/4HANA Cloud), to which applications send their print requests. These requests are fetched by SAP Cloud Print Manager, which runs in the user's network and sends the requests to local printers. The overall printing process can be divided into two parts. The first one is an administrative part: the key user defines one or more custom print queues in the SAP S/4HANA Cloud tenant and configures the receiving side in the key user's network—for example, by installing SAP Cloud Print Manager on a local PC. The second part is the actual runtime, which starts when a business application sends a document to a print queue. This document is then routed to the assigned physical printer and printed. The overall architecture for printing is illustrated in Figure 18.2.

Figure 18.2

Printing Architecture

The administrative part starts with the Maintain Print Queues SAP Fiori app within the SAP S/4HANA Cloud tenant. It allows you to create, change, or delete custom print queues. Each print queue can be defined according to the type of document that should be printed via this queue. The most common document type is PDF, but other types are also available, supporting different specific printer languages, such as PostScript, printer command language (PCL), and different kinds of barcode printer languages used for special purposes.

Besides this preclassification of documents, no other printer-specific configuration needs to be set up in SAP S/4HANA Cloud. To print documents from a print queue, SAP offers two possibilities via APIs. The first one is an SAP-proprietary API for SAP Cloud Print Manager. SAP Cloud Print Manager is a Microsoft Windows-based print server from SAP that offers basic printing functionality. It's suitable for common office printing. SAP Cloud Print Manager can be downloaded from within the SAP S/4HANA Cloud tenant via the Install Additional Software app. The second API is called Cloud Print Service (see Figure 18.2). It's an open interface that serves any third-party output management system (OMS). An OMS usually provides broader and more specialized printing functionality than what is offered by SAP. By calling the Print Queue Web Access service in SAP S/4HANA Cloud, SAP Cloud Print Manager and an OMS can assign physical printers to a print queue, send documents to the printer, and report status feedback to the SAP S/4HANA Cloud tenant. The API for SAP Cloud Print Manager offers basic functionality with a static set of attributes. The Cloud Print Service API for OMS is more flexible regarding print parameters and additional metadata information that can be used by the OMS. After completing this administrative setup, applications can start sending documents to the custom print queues. First, the application needs to store the document to be printed so that it can be asynchronously pulled from SAP S/4HANA Cloud later. Then the application creates a print queue item, passing in the reference of the stored document. When the corresponding print queue is queried for new print queue items, the document is retrieved from the document store and sent to SAP Cloud Print Manager or the OMS system. From there, it's passed to the physical printer or stored in the file system.
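The runtime part of this flow can be pictured with the following ABAP sketch. The class zcl_demo_print, its methods, and the queue name are hypothetical and only mirror the two steps described above (store the document, then create a queue item referencing it); the real SAP S/4HANA Cloud print APIs have different names and signatures.

REPORT zdemo_print_queue.

CLASS zcl_demo_print DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS store_document
      IMPORTING iv_pdf           TYPE xstring
      RETURNING VALUE(rv_doc_id) TYPE string.
    CLASS-METHODS create_queue_item
      IMPORTING iv_queue_name TYPE string
                iv_doc_id     TYPE string.
ENDCLASS.

CLASS zcl_demo_print IMPLEMENTATION.
  METHOD store_document.
    " Persist the rendered document so that SAP Cloud Print Manager or an OMS
    " can fetch it asynchronously later; here we only return a dummy reference.
    rv_doc_id = |DOC-{ xstrlen( iv_pdf ) }|.
  ENDMETHOD.
  METHOD create_queue_item.
    " The queue item carries only the document reference and print parameters;
    " the physical printer is assigned on the receiving side (print manager or OMS).
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  " '255044462D312E37' is the hex representation of the bytes "%PDF-1.7".
  DATA(lv_pdf)    = CONV xstring( '255044462D312E37' ).
  DATA(lv_doc_id) = zcl_demo_print=>store_document( lv_pdf ).
  zcl_demo_print=>create_queue_item( iv_queue_name = 'ZQ_OFFICE_PDF'
                                     iv_doc_id     = lv_doc_id ).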

18.3

Email

When applications want to send emails from SAP S/4HANA Cloud, they use Business Communication Services (BCS), as they do in on-premise systems as well. BCS provides interfaces in which the contents of the different body parts of an email—the main body and attachments—as well as further information, like the sender and recipients, are passed in. BCS combines this information into the standardized format used for communication via the SMTP email protocol and hands over the email to an SAP-managed mail server for further distribution to the recipients (see Figure 18.3). BCS holds the information concerning the emails only temporarily to allow for monitoring of the emails; it doesn't act as document storage for the emails. The Display Email Transmissions SAP Fiori app allows for monitoring the sending status of the outgoing emails. With SAP S/4HANA Cloud, SAP has introduced email templates, which can be used to define the subject and body of an outgoing email, including variable parts that are replaced at runtime. This feature is supported in SAP S/4HANA on-premise too. As a key user, you can copy the predelivered email templates provided by SAP and modify them to your needs. For this, the Maintain Email Templates SAP Fiori app is provided.
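A condensed ABAP sketch of the typical BCS call sequence (create a send request, attach a document, add a recipient, send) is shown below. The recipient address, subject, and body lines are placeholders; productive code adds proper error handling and would typically fill subject and body from an email template.

REPORT zdemo_send_email.

DATA lt_body TYPE bcsy_text.
APPEND 'Dear supplier,' TO lt_body.
APPEND 'please find the details of your purchase order below.' TO lt_body.

TRY.
    " Create the send request and the document that forms the email body.
    DATA(lo_request)  = cl_bcs=>create_persistent( ).
    DATA(lo_document) = cl_document_bcs=>create_document(
                          i_type    = 'RAW'
                          i_subject = 'Your purchase order'
                          i_text    = lt_body ).
    lo_request->set_document( lo_document ).

    " Add the recipient (placeholder address) and hand the email over to BCS.
    lo_request->add_recipient(
      cl_cam_address_bcs=>create_internet_address( 'supplier@example.com' ) ).
    DATA(lv_sent_to_all) = lo_request->send( ).
    COMMIT WORK.
  CATCH cx_bcs INTO DATA(lx_bcs).
    " Sending failed, for example because of an invalid address.
    DATA(lv_error) = lx_bcs->get_text( ).
    MESSAGE lv_error TYPE 'I'.
ENDTRY.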

Figure 18.3

Email Processing

18.4

Electronic Data Interchange

SAP S/4HANA output management (via SAP S/4HANA output control) supports communication scenarios that need to send an electronic message to a business receiver. Business applications typically use this capability for scenarios in which the same business document can be sent via multiple channels. For example, a purchase order can be printed, emailed, or sent as an XML message. However, it's always the same purchase order. The benefit of using output control for such scenarios is that the business user can manage Electronic Data Interchange (EDI) output centrally, along with printouts and emails. Technically, all electronic data interchange in SAP S/4HANA Cloud requires a defined communication scenario with dedicated inbound and outbound service interfaces (see also Chapter 6, Section 6.4). These communication scenarios are defined and owned by the corresponding business application. SAP S/4HANA output control is then used to determine the receiver and to trigger the sending of the message through the defined outbound service interfaces. For example, to send the purchase order in XML, output control is used to determine the correct supplier and to read the destination defined in the communication arrangement for this supplier. The determined information is then passed to the purchase order application, which assembles the XML message from the purchase order data and sends it to the determined destination. The documentation of the communication scenario usually describes the required setup, including all steps in output control.

18.5

Form Templates

Forms processing is the central component for creating and rendering print forms like sales orders or purchase orders in SAP S/4HANA Cloud. The architecture for forms processing in SAP S/4HANA Cloud is shown in Figure 18.4. Form templates, as well as logos and texts to be used in the form templates, are stored within the SAP S/4HANA Cloud tenant. Key users can copy SAP forms, create their own forms, and modify them by using the Maintain Form Templates SAP Fiori app. The Manage Logos/Texts app allows key users to define pictures and texts that can be used to apply custom branding in the form templates. If an output is triggered, the application calls the forms processing component. Forms processing takes care of merging the runtime data provided by the application with the correct form template. The document is rendered by the SAP Cloud Platform Forms by Adobe service in the requested output format. The output format can be defined by the calling application. Possible output formats include PDF, PCL, and PostScript. The usage of SAP Cloud Platform Forms by Adobe is included in the SAP S/4HANA Cloud subscription and part of standard provisioning. The forms processing functionality is also available in SAP S/4HANA on-premise, but with more options based on Adobe Document Services.

Figure 18.4

Forms in SAP S/4HANA Cloud

18.6

Output Control

The output management services described thus far are separate services with different interfaces and ways to consume them. The SAP S/4HANA output control reuse service hides this variety from business applications and offers them a unified interface for consuming output management services. Output control uses the basic output management services and adds more features (such as parameter determination, output of attachments, and output history) for applications that require advanced output scenarios. The architecture is shown in Figure 18.5. SAP S/4HANA output control can perform communication with business receivers using print, email, or EDI. To find out which communication channel should be used in a particular business transaction, the output parameter determination based on the Business Rules Framework is invoked. After that, the actual output is done by print, email, or EDI.

Figure 18.5

Output Control

All applications that need to communicate with business receivers use SAP S/4HANA output control, which is mainly represented by the output request business object. Each instance of a business transaction, such as creation of a purchase order, creates one corresponding output request instance (one-to-one relationship). They are linked by a unique identifier defined by the business transaction, such as a purchase order number. This unique identifier is stored in the output item. One item of the output request represents a single output of the business transaction, such as one item for the printout of a purchase order or a second one for an email output. The item holds all relevant output settings, such as output type (for example, purchase order), receiver (for example, supplier), or channel (print, email, EDI) and is determined via the output parameter determination. Each item has a status that indicates the state of the output (In Preparation, To Be Output, Completed, or Error). The output request therefore represents the set of processed or planned outputs related to the referenced business transaction.

When the output parameter determination is triggered by the business application, the first step is to determine the output type that is valid for the current transaction. For example, the purchase order can have an output type that is totally different from the outbound delivery. Once the output type is determined, all dependent settings are retrieved by evaluating the corresponding business rules. The results of output parameter determination are output items that define the sender and receiver, communication channel, and dispatch time. The resulting items are stored as output request items and can then be fetched for output processing. For a given set of output items, the business application decides when they should be output. This is represented by the dispatch time on each output item, which can be set to either Immediately or Scheduled. All items with an immediate dispatch time are processed when the user saves the transaction. Scheduled items require running an application-specific batch job to trigger the output processing. In both cases, all output request items are processed according to their output settings. During processing, corresponding application data is retrieved for each output item when necessary, such as for form rendering. The rendered document is saved automatically in the document storage, and the reference to it is stored in the output item. Based on the communication channel (print, email, or EDI), the processing logic then calls the corresponding processing component, which finally dispatches the message to the recipient. Afterwards, the status is updated in the output request item table. If the status is Completed, the output item cannot be changed anymore and becomes a historic item. The set of all historic items for a business document is called the output history. Users can check in detail for each output item what was output when, by whom, and which documents were involved. For document-based channels like print and email, it's possible to include attachments in the output, either as individual files or merged with the main document (based on a form template). Once the output is completed, these documents become part of the output history as well. SAP S/4HANA output control is used in SAP S/4HANA on-premise and SAP S/4HANA Cloud.
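As a mental model, the determination result and the subsequent processing loop can be pictured with the following ABAP sketch. The structures, field values, and the dispatch logic are invented for illustration and do not reflect the actual implementation of the output request business object.

REPORT zdemo_output_control.

TYPES: BEGIN OF ty_output_item,
         business_doc TYPE string,   " e.g. purchase order number
         output_type  TYPE string,
         receiver     TYPE string,
         channel      TYPE string,   " PRINT, EMAIL, or EDI
         dispatch     TYPE string,   " IMMEDIATELY or SCHEDULED
         status       TYPE string,   " IN_PREPARATION, TO_BE_OUTPUT, COMPLETED, ERROR
       END OF ty_output_item,
       ty_output_items TYPE STANDARD TABLE OF ty_output_item WITH EMPTY KEY.

" Items as they might result from output parameter determination for one
" purchase order: a printout plus an EDI message to the supplier.
DATA(lt_items) = VALUE ty_output_items(
  ( business_doc = '4500000001' output_type = 'PURCHASE_ORDER' receiver = 'SUPPLIER_10'
    channel = 'PRINT' dispatch = 'IMMEDIATELY' status = 'TO_BE_OUTPUT' )
  ( business_doc = '4500000001' output_type = 'PURCHASE_ORDER' receiver = 'SUPPLIER_10'
    channel = 'EDI' dispatch = 'SCHEDULED' status = 'TO_BE_OUTPUT' ) ).

" Immediate items are processed when the transaction is saved; scheduled items
" wait for an application-specific job. Completed items become historic items.
LOOP AT lt_items ASSIGNING FIELD-SYMBOL(<ls_item>) WHERE dispatch = 'IMMEDIATELY'.
  " Render the document, call the print/email/EDI component, then update the status.
  <ls_item>-status = 'COMPLETED'.
ENDLOOP.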

18.7

Summary

In this chapter, you learned about the overall architecture of SAP S/4HANA output management. You’ve seen how print, email, and EDI output is performed and which dedicated steps are required to use them in SAP S/4HANA Cloud. Printing from the cloud requires a different architecture to send documents to local printers. At the end of the chapter, we took a deeper look at the SAP S/4HANA output control reuse service, which is used by many business applications to provide a unified solution for more complex output tasks.

19

Cloud Operations

The cloud is different. It's about services, resource sharing, and always being on. Accordingly, the architecture of SAP S/4HANA Cloud implements multitenancy, blue-green deployment, and built-in support.

Although SAP, not the subscriber's IT department, operates SAP S/4HANA Cloud, it may be useful to know the basics presented in this chapter about how SAP S/4HANA Cloud is set up and operated (Section 19.1) and in which data centers it's available (Section 19.2). We'll also explain its multitenancy architecture (Section 19.3) and describe how software maintenance works (Section 19.4) and how users benefit from built-in support (Section 19.5).

19.1

SAP S/4HANA Cloud Landscape

An SAP S/4HANA on-premise system consists of the SAP HANA database management system plus a set of ABAP application servers, which run the SAP S/4HANA software. Typically, organizations operate SAP S/4HANA in a landscape track of three connected systems:

1. A development system for defining business configuration and extending the application with additional custom developments

2. A test system to ensure the quality and correctness of newly maintained configuration settings and new custom developments

3. A production system to run the actual business processes for the organization

Business configuration settings and new developments (code changes) are transferred along this track using the adaptation transport organizer (ATO). SAP S/4HANA Cloud is different. First, we have tenants. SAP S/4HANA Cloud has implemented a multitenant architecture that allows the SaaS provider to share storage and computing resources across the tenants. Each subscriber to the SAP S/4HANA Cloud service gets their own SAP S/4HANA Cloud tenant. The tenant's data is isolated from and invisible to the other tenants (see Section 19.3). Second, the SAP S/4HANA Cloud offering consists of a set of services. The most prominent one is the service that provides most of the business functionality and shares the ABAP code line with SAP S/4HANA on-premise. However, this service is complemented by additional technical and business services, such as the following:

- Identity Authentication service of SAP Cloud Platform, which provides authentication mechanisms and secure single sign-on (see Chapter 17, Section 17.1.2)

- Identity Provisioning service of SAP Cloud Platform, which automates identity lifecycle processes and distributes identities among SAP Cloud tenants

- SAP Cloud Platform Forms by Adobe, to generate print and interactive forms (see Chapter 18, Section 18.5)

SAP Analytics Cloud, SAP’s business intelligence platform, which provides analytical reports and dashboards (see Chapter 4, Section 4.1.1) Enterprise contract assembly, a cloud service which provides authoring and configuration tools to create legal documents (see Chapter 14, Section 14.4) Trading platform integration, a cloud service that connects external trading platforms (see Chapter 14, Section 14.8.5) SAP Excise Tax Management, to calculate the specific excise duties states impose, for example, on petroleum products, tobacco, alcohol, and alcoholic beverages Thus, if you subscribe to SAP S/4HANA Cloud, SAP’s cloud operations unit provisions a set of connected tenants for you. In all cases, this set of tenants includes the SAP S/4HANA Cloud ABAP tenant, Identity Authentication service tenant, Identity Provisioning service tenant, SAP Analytics Cloud tenant, and SAP Cloud Platform by Adobe tenant. Tenant of other services can be added on request or require separate subscriptions. Like the three-system landscape track typically set up to operate SAP S/4HANA onpremise, a test and configuration and a production set of tenants, and in most cases also a development set of tenants, is provided for each subscriber to SAP S/4HANA Cloud. SAP is constantly developing further cloud services to enrich the SAP S/4HANA Cloud portfolio too. For operating a large number of SAP S/4HANA Cloud tenants in a stable, cost-effective, and scalable way, standardization is key. If all tenants have the same characteristics, you can automate and/or centralize processes. Here are some examples: All SAP S/4HANA Cloud subscribers get the same set of default tenants (tenant landscape). All SAP S/4HANA Cloud tenants share the same software lifecycle management tools and processes. All SAP S/4HANA Cloud ABAP tenants have the same SAP HANA database sizing (see Chapter 20, Section 20.2). All SAP S/4HANA Cloud servers measure the date and time according to coordinated universal time (UTC). However, not all SAP S/4HANA Cloud tenants run at the same location.

19.2

Data Centers

SAP operates SAP S/4HANA Cloud in several data centers around the world (see Figure 19.1). Basically, there are two selection criteria for choosing the right data center:

1. Legal requirements, such as the Chinese cybersecurity law or the European GDPR, which limit data access to certain territories
2. Latency. Although the architecture of SAP S/4HANA Cloud ensures good performance and scalability, SAP has limited control over the network between the customer’s web browser and the SAP S/4HANA Cloud tenants. Thus, SAP chooses a data center based on a smart algorithm to minimize latency effects—for example, by taking the distance to the majority of users into account.

Figure 19.1

Data Centers for Operating SAP S/4HANA Cloud (October 2nd, 2020)

To broaden coverage, SAP also enables SAP S/4HANA Cloud services on the platforms of hyperscale providers such as Alibaba Cloud, Google Cloud Platform, and Microsoft Azure. SAP operations backs up the data of the different cloud services to be able to recover it in case of an incident with data loss (disaster recovery). The SAP S/4HANA Cloud ABAP service can also be operated in a setup in which the data is replicated to a failover production tenant in a failover data center. In this case, the SAP HANA database continuously synchronizes its data with the failover SAP HANA instance, using its system replication functionality. All business data is always available in the failover SAP HANA instance and even preloaded into main memory. This setup is called active-active: one SAP HANA database instance is used productively with read and write actions, whereas the failover instance provides data as read-only. In case of a disaster, the administrator makes the failover SAP HANA instance the primary instance. IT departments can use SAP HANA system replication to create a failover setup for SAP S/4HANA on-premise too. For the current list of data centers where SAP S/4HANA Cloud is operated, see SAP Trust Center at www.sap.com/about/trust-center.html. Here you’ll find the availability status of SAP S/4HANA Cloud per data center, too (see Figure 19.2). SAP continuously monitors the availability, connectivity, and load of the SAP S/4HANA Cloud-related application servers and database systems. To do so, health checks run in each system and report key performance indicators (KPIs). In case of anomalous behavior or an unexpected hardware failure, SAP operations can react proactively. SAP operations has no access to the business data in the SAP S/4HANA Cloud tenants.
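
The replication status of such an active-active setup can be observed on the SAP HANA level. The following query is a minimal sketch of what an administrator might run against the standard monitoring view M_SERVICE_REPLICATION; it illustrates SAP HANA system replication monitoring in general and is not SAP's internal operations tooling, and the selected columns are an assumption of what is typically relevant.

```sql
-- Hedged example: check SAP HANA system replication status per service.
SELECT site_name,
       secondary_site_name,
       replication_mode,      -- e.g. SYNC or SYNCMEM for a failover setup
       replication_status     -- ACTIVE indicates the secondary is in sync
  FROM m_service_replication;
```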

Figure 19.2

SAP S/4HANA Cloud Availability Status in SAP Trust Center (August 13, 2020)

19.3

Multitenancy

Multitenancy is a software architecture in which multiple tenants of a cloud service share software resources with the goal of distributing the related costs and efforts over these tenants while still isolating each tenant from the others. All cloud services that are part of the SAP S/4HANA Cloud SaaS are built according to a multitenancy architecture. In this section, we’ll explain the multitenancy architecture of the central SAP S/4HANA Cloud ABAP service, which hosts the majority of the business applications.

The multitenancy architecture of the SAP S/4HANA Cloud ABAP service reduces tenant-specific infrastructure costs and operation efforts while offering tenant isolation on the same level as ensured by completely separated systems; it’s impossible for one tenant to access another tenant’s data. Each tenant has its own dedicated ABAP application servers and its own SAP HANA tenant database. Tenant databases are independent databases within one SAP HANA database system, and they contain all tenant-specific application data and configuration. In addition to these tenant-specific databases, each SAP HANA system has one more database, which contains those resources of the ABAP systems that are by definition identical in all tenants, such as the ABAP source code delivered by SAP. This means that such resources are not stored in each of the tenant databases but only once in the shared database. With that, the main parts of an ABAP application stack are shared, and by working with SAP HANA tenant databases, even the main elements of an SAP HANA system are shared between multiple tenants. Hence, software is shared on all layers—on the SAP S/4HANA Cloud application layer, the SAP ABAP application server, and the SAP HANA layer—and there are joint software update events for all tenants belonging to the same multitenant SAP HANA system.

Each tenant is completely isolated from the others by having its own ABAP application servers and tenant database. Each tenant is self-contained and fully isolated so that each tenant has, for example, its own database catalog, persistence, and backups. The technical configuration of SAP HANA blocks cross-tenant database accesses on the SQL level and write access from the ABAP runtime to the shared database. Only software updates and the corresponding non-ABAP tools have write access to the shared database. With a one code line approach for SAP S/4HANA Cloud and SAP S/4HANA on-premise (see Chapter 1, Section 1.2.2), the multitenancy architecture has to be designed and implemented in such a way that no existing installation is disrupted and the required changes on the application side are very limited in number. This is critical to keeping the risk of any quality regressions very low.

19.3.1

The System Architecture of SAP S/4HANA

In the SAP S/4HANA on-premise system architecture, each ABAP system consists of a set of ABAP application servers that are connected to one SAP HANA database system. The system architecture is the same for all system types, be it a development system, a quality system, or a production system. If multiple ABAP systems are operated, they may run on shared hardware (see the typical SAP S/4HANA on-premise landscape in Figure 19.3). Even several SAP HANA systems may run on shared hardware, depending on the required SAP HANA database size. However, apart from the shared hardware, the SAP S/4HANA systems do not share other resources: neither the software of the ABAP system nor the SAP HANA system and database.

Figure 19.3

System Setup of On-Premise Edition of SAP S/4HANA

Such a shared-nothing software architecture means that each new SAP S/4HANA system requires resources for a full ABAP system and a full SAP HANA database. Administrators have to install, manage, and upgrade each of these systems separately. Costs and efforts rise linearly with each new system. From a cost perspective, this does not leverage any economies of scale, and in terms of effort, it soon becomes unmanageable when the number of systems increases to thousands and more—which reflects the envisioned scope for SAP S/4HANA Cloud.

19.3.2

Sharing the SAP HANA Database System

SAP S/4HANA Cloud achieves multitenancy by sharing the SAP HANA system between multiple tenants. As explained above, SAP HANA supports multiple isolated databases in a single SAP HANA system. These are referred to as tenant databases. Because SAP S/4HANA pushes large parts of the application processing to SAP HANA, all tenant databases share not only the same installation of the database system software and the same system administration, but also to a large extent the same computing resources. All SAP HANA servers that do not persist data, such as the compile server and the preprocessor server, also run alongside the central system database and serve all tenant databases of an SAP HANA system. However, each database is self-contained and fully isolated. Each has its own database catalog, persistence, and backups. Although all database objects, such as schemas, tables, views, and procedures, are local to the tenant database, cross-database queries are possible if configured on the SAP HANA system level. As noted, SAP S/4HANA Cloud leverages SAP HANA tenant databases in such a way that each tenant gets its own tenant database and multiple tenants, depending on the size of their data, run in the same SAP HANA database system—well isolated, as described earlier, but still sharing the system database, name server, and other central SAP HANA system resources (see Figure 19.4). This frees up memory space so that more tenants can be served with the same hardware. It also reduces operational effort because far fewer SAP HANA systems are required and administration activities on the SAP HANA system level are correspondingly decreased.
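
As a rough illustration of what a tenant database is on the SAP HANA level, the following statement shows how an additional tenant database can be created from the system database of a multitenant SAP HANA system. This is a minimal sketch of standard SAP HANA administration with an invented database name and password; it is not the provisioning automation that SAP operations actually uses for SAP S/4HANA Cloud.

```sql
-- Hedged sketch: executed in the system database (SYSTEMDB) of a
-- multitenant SAP HANA system. TENANT_B and the password are illustrative.
CREATE DATABASE TENANT_B SYSTEM USER PASSWORD Initial1Pwd;
-- The new tenant database gets its own catalog, persistence, and backups,
-- isolated from all other tenant databases in the same system.
```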

Figure 19.4

SAP HANA System with Multiple Tenant Databases and Shared Database

19.3.3

Sharing of ABAP System Resources

The SAP HANA tenant database architecture addresses resource sharing on the SAP HANA level. In addition, resources of the tenant’s ABAP system are shared by using what we call the shared database. This is the second pillar of SAP S/4HANA Cloud’s multitenancy architecture. From a technical perspective, a shared database is just another SAP HANA tenant database, but one that isn’t connected to an ABAP system. And there is only one shared database per SAP HANA system (see Figure 19.4). This shared database makes it possible to move all the ABAP system resources that are by definition equal and immutable from the tenant databases into the shared database. As a result, the shared database contains all ABAP system resources that are identical in all ABAP systems and that cannot be changed by customers in their tenants—for example, the source code of ABAP programs, system documentation, code page information, or the standardized texts of error and success messages used in SAP programs to inform end users of a certain processing status. By keeping this data in the shared database, the memory consumption of the tenant databases is reduced correspondingly. An additional benefit is that during an upgrade to a new SAP S/4HANA Cloud release, all resources stored in the shared database need to be upgraded only once per SAP HANA system, not individually for each tenant. This saves tenant-specific upgrade effort. It’s important to mention that an individual tenant cannot change or update any tables and data in the shared database because these changes would impact all ABAP application servers running on tenant databases in the same SAP HANA system. All tenant-specific data, like transactional business data, configuration settings, or master data, as well as all types of tenant-specific extensions, is not stored in the shared database but in the individual tenant databases.

19.3.4

The Table Sharing Architecture in Detail

In the on-premise setup, each SAP S/4HANA ABAP system has its own SAP HANA database. According to the multitenancy architecture, all SAP S/4HANA ABAP application servers share important system tables, like the tables that store the source code or contain the code page catalog. But how does an ABAP application server access these tables persisted in the shared database? The SAP engineers had to solve this problem without completely reimplementing the database access layer of the ABAP application server and without adapting hundreds of thousands of places in the ABAP and SAP kernel source code where such tables are accessed. To do so, the ABAP Dictionary (DDIC) of the ABAP application server had to change the way it creates and updates tables. Basically, the ABAP application layer knows the tenant database only and has no information about the shared database. The DDIC no longer creates such tables directly in the tenant database but creates database views instead. If the ABAP application server wants to access a shared table to read the ABAP source code, it calls the corresponding database view, which automatically redirects the request to the corresponding table in the shared database. These central changes to the ABAP application server architecture ensure that the complete application layer continues to work as before and can stay fully oblivious to the changes to the DDIC and database layer. The SAP HANA system is configured in such a way that the tenant databases are allowed to read from the shared database but not to write to it, as shown in Figure 19.5.
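
The following SQL is a minimal sketch of this redirection for a completely shared table such as the code page catalog TCP00 shown in Figure 19.5. The database and schema names are assumptions for illustration; the actual objects created by the DDIC and the cross-database access configuration are SAP-internal.

```sql
-- Hedged sketch: in the tenant database, the DDIC creates a view instead of
-- the table. Reads are redirected to the table in the shared database, which
-- tenants may read but not write. SHARED_DB and SAPSHARED are invented names.
CREATE VIEW "SAPABAP"."TCP00" AS
  SELECT * FROM "SHARED_DB"."SAPSHARED"."TCP00";
```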

Figure 19.5

Completely Shared Table: ABAP Dictionary Table TCP00 (Code Page Catalog)

Tables that largely contain data delivered by SAP but that can also hold tenant-specific entries can be shared as well. In such tables, SAP data and tenant data are separated by namespaces. Putting these tables completely into the shared database does not make sense for two reasons:

1. Tenant-specific data must not be shared (tenant isolation).
2. The ABAP system must not write to the shared database.

Therefore, instead of only creating a database view that redirects to the shared database, the DDIC creates a database view and a local table in the tenant database in addition to the table that exists in the shared database. The view in the tenant database is not simply redirecting to the table in the shared database but is selecting from both the shared table and the tenant table via a union all statement. The view merges the entries from the table in the shared database with the tenant-specific entries from the tenant database—just as if they were stored in one table.

This element of the multitenancy architecture enables resource sharing not only on the table level but also on the table record level, whereby the table entries belonging to the SAP namespace are shared and the ones belonging to the customer namespace are not, as shown in Figure 19.6. At runtime, the SQL interface of the ABAP application server redirects SQL accesses according to their type—read or write access—to the right database artifact. It checks change requests to tenant data for compliance with the namespaces (key ranges) that are assigned to the tenant database and directs the request accordingly to the tenant table in the tenant database. Changes to shared data are completely blocked. Read accesses are directed to the database view that merges the entries from the shared table with the ones from the tenant table (see Figure 19.7).
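
A minimal sketch of such a record-level shared table, using the message table T100 named in Figure 19.6, could look as follows. All database, schema, and table names other than T100 are invented for illustration, and the real view definitions generated by the DDIC will differ in detail.

```sql
-- Hedged sketch: the tenant-local view merges shared SAP-namespace entries
-- with tenant-specific customer-namespace entries via UNION ALL.
CREATE VIEW "SAPABAP"."T100" AS
  SELECT * FROM "SHARED_DB"."SAPSHARED"."T100"   -- SAP namespace, read-only
  UNION ALL
  SELECT * FROM "SAPABAP"."T100_LOCAL";          -- customer namespace, writable

-- Reads go through the view; writes are checked against the tenant's key
-- ranges and directed to the local table only. Illustrative insert
-- (language, message class, message number, text):
INSERT INTO "SAPABAP"."T100_LOCAL"
  VALUES ('E', 'ZMSG', '001', 'Custom message text');
```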

Figure 19.6

Sharing on Table Record Level: ABAP Dictionary Table T100 (Messages)

Figure 19.7

Write and Read Accesses to Table Shared on Record Level

In this section, we explained how the multitenancy architecture of SAP S/4HANA Cloud enables resource sharing between tenants on the database, application server, and application levels. Multiple tenants share one SAP HANA database system, which includes a shared database that stores the application server and application resources that are identical and immutable for all tenants. To keep all tenant-specific data completely isolated from all other tenants, each tenant has its own tenant database within the SAP HANA database system.

19.4

Software Maintenance

SAP continuously updates SAP S/4HANA Cloud for multiple reasons. The software must comply with the newest legislation and regulations, provide new functional scope and innovative features, improve existing functionality, and—if required—fix errors. For consumers of SAP S/4HANA Cloud, such software maintenance—technically called upgrades and updates—is twofold. On the one hand, SAP customers welcome new features and legal compliance; on the other hand, they do not want disruptions in executing their regular business in SAP S/4HANA Cloud. Disruption can be system downtime, a request to test the customer’s specific configuration with the updated software, or a changed user interface that requires retraining users. Thus, SAP intends to manage the software lifecycle of SAP S/4HANA Cloud as smoothly as possible so as not to disturb the users of the SaaS. The only exception to this no user impact paradigm is a functional upgrade desired by the SaaS customer. This implies that customer configuration and data are not changed. Of course, there is also a customer lifecycle in which customers implement business processes. The following sections cover the basic concepts for managing software lifecycles from both the customer and SAP perspective.

19.4.1

Maintenance Events

During a maintenance event, software and/or configuration of SAP S/4HANA Cloud is updated. At the time of writing, the following maintenance events are scheduled for SAP S/4HANA Cloud (see the Service Level Agreement for SAP Cloud Services, accessible on SAP Trust Center at https://www.sap.com/germany/about/trustcenter/agreements/cloud.html):

- SAP applies corrections to all SAP S/4HANA Cloud tenants as a hotfix collection (HFC) every two weeks, if required.
- If an individual tenant completely breaks because of a software error, SAP can apply an emergency patch (EP) when needed.
- Every three months, SAP upgrades the SAP S/4HANA Cloud ABAP service to the next major software version.

As of October 2020, subscribers to SAP S/4HANA Cloud have to plan for maintenance windows during which the tenant is not available for use. However, to ensure minimum business interruption due to maintenance, SAP is working on an architecture enabling zero downtime maintenance. To do so, SAP will apply a standard cloud architecture best practice called blue-green deployment (see, for example, https://martinfowler.com/bliki/BlueGreenDeployment.html). Today, this architecture is already being piloted with SAP S/4HANA Cloud tenants.

19.4.2

Blue-Green Deployment

The basic idea is to temporarily set up an additional green tenant for updating the software while users continue to work in the blue tenant. Thus, during the maintenance phase, these two tenants coexist for a limited period of time. Once the maintenance phase is over, all subsequent user logins are guided to the now updated green tenant; the now outdated blue tenant is disassembled when no user is logged in anymore. The blue-green deployment brings several challenges:

- Setting up the additional green tenant with exactly the same data as the blue tenant while business users continue to create new data
- After the update of the green tenant is completed, importing all business transactions that happened in the blue tenant in the meantime

Thus, the key to this architecture is to preserve all data entered while running business processes in the blue tenant. To enable this, SAP engineers decided that both ABAP tenants in SAP S/4HANA Cloud—blue and green—will share the same persistency. In technical terms, the ABAP application servers of the blue and the green tenant use the same tenant database of the SAP HANA database system. According to the multitenancy architecture explained in Section 19.3, each tenant has its own tenant database that stores the business data of the tenant. All tenants of a multitenancy cluster have access to the shared database, which contains, for example, the software source code. As noted, the green tenant is the one in which software and configuration changes are applied during maintenance. If the blue and green tenant share the same persistency, how is it possible to update the ABAP Dictionary (DDIC) for the green tenant while shielding the blue tenant from exactly these changes? This is achieved by adding an additional database schema—the access schema. This access schema acts as a proxy between the coding in the application server and the persistency. It hides the changes made to database tables from the applications running in the blue tenant (see Figure 19.8). A full color download of this figure is available at www.sap-press.com/5189. For example, say that during the upgrade, the data type of a table field changes. This change is implemented in the data schema of the database. If an application of the blue tenant accesses the table, the access layer maps the data of this table field to the old, unchanged data type for both read and write access. If the same application of the green tenant accesses the table, the access layer provides the data of the same table field with the new data type.
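
A minimal sketch of this access-schema idea is shown below. All schema, table, and field names and the data types are invented for illustration; the real mechanism also has to map write accesses and handle more complex structure changes, which is not shown here.

```sql
-- Hedged sketch: the data schema holds the table after the upgrade has
-- widened the AMOUNT field; each tenant color sees it through its own
-- access schema.

-- Blue access schema: presents AMOUNT in the old format, so the not-yet-
-- upgraded software keeps working unchanged.
CREATE VIEW "ACCESS_BLUE"."SALES_DOC" AS
  SELECT doc_id,
         CAST(amount AS DECIMAL(13, 2)) AS amount
    FROM "DATA"."SALES_DOC";

-- Green access schema: exposes the new, widened data type that the
-- upgraded software expects.
CREATE VIEW "ACCESS_GREEN"."SALES_DOC" AS
  SELECT doc_id,
         amount            -- DECIMAL(23, 2) after the upgrade
    FROM "DATA"."SALES_DOC";
```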

Figure 19.8

Blue-Green Procedure for Near-Zero Downtime Maintenance of SAP S/4HANA Cloud

19.5

Built-in Support

Built-in support, also referred to as embedded support, provides key users of SAP S/4HANA Cloud with direct in-solution access to SAP support through an intelligent digital support assistant and represents a fundamental shift in how SAP customers interact with SAP support in the cloud. It simplifies the process of requesting help, or it can help directly in resolving issues through machine learning, sophisticated intent-matching features, and context-sensitive knowledge that provides personalized support and self-services. Before providing details on built-in support, it’s important to understand how SAP support is delivered and accessed in general and how it worked without built-in support in place. SAP Services and Support offers a variety of dedicated toolsets to provide access to SAP’s knowledge base and support functionalities. The access is provided through publicly available sites such as the SAP Support Portal or via the SAP ONE Support Launchpad, which requires a login with a support user ID (S-user ID). With these tools, you can search for solutions and documentation, report incidents, download software, or send service requests to operations teams at SAP. The support process for SAP S/4HANA Cloud differentiates between business users and key users. Business users perform role-specific business tasks, such as in procurement. Key users are experts with deep domain knowledge who configure business processes and support business users. They are the ones who can contact SAP support when necessary. To access the gated SAP support services, they need an S-user ID. An S-user ID gives you access not only to support tools, but also to product documentation, training courses, and many more service and information offerings. To access the support applications in the SAP ONE Support Launchpad, key users need an S-user ID. When a product is purchased, SAP creates the initial S-user ID with the highest level of authorization, referred to as the super administrator. For security reasons, SAP is not allowed to create or manage additional S-user IDs for SAP customers. As an SAP customer, only you can maintain additional S-user IDs, their user data, and their authorizations for the support applications.

19.5.1

Support Journey without Built-in Support

Figure 19.9 illustrates a support setup without built-in support. It shows the common support infrastructure and support channels, which usually span various support organizations and toolsets of your company and, if needed, traverse the support tools and organizations of SAP as well. Without embedded support, a typical support flow looks like this: When business users experience an issue or have a question concerning an application (such as data inconsistencies, error codes, or performance problems), they first try to find a possible solution using self-service offerings and documentation that may be available in their own company. If this is not successful, the business user usually contacts the support organizations of the company through various channels, such as creating an incident report, live chat, phone, messenger, email, walk-in, and so on. When the business user creates a support case, a key user from the company’s support team takes care of this support case and looks for a possible solution. When key users need additional knowledge or help from SAP, they can contact SAP support from the SAP ONE Support Launchpad. From there, they can conduct searches, start expert chats to discuss the situation with a support expert from SAP, or create a support incident that describes the request. SAP support will interact with the key user until a satisfactory answer has been found. During these interactions with SAP support, key users need to provide contextual information, which helps to analyze the situation. This includes, for example, the technical description of the related application, the identification of the SAP S/4HANA system itself, screenshots, dump information, and access information. The access information comprises a consent for SAP support to reproduce situations directly in the customer’s SAP S/4HANA systems, as well as the business user ID (see Chapter 17, Section 17.2.2). Note that this business user is sometimes referred to as a cloud business user, or CB user for short.

Figure 19.9

Traditional Support without Built-in Support

Each business user in the system is represented by a business partner (see Chapter 8, Section 8.3). SAP support uses the business partner ID of the business user who faced the problem to generate a temporary user ID with exactly the same role information as the business user. Besides controlling SAP support staff’s access to the provided access information, SAP ensures secure access through dedicated cloud access management tools.

19.5.2

Built-in Support Architecture

Section 19.5.1 gave a high-level overview of common support processes and tools. With built-in support, context-based access to support is built directly into SAP S/4HANA Cloud and allows key users to interact with support through an embedded digital support assistant user interface. Figure 19.10 depicts the built-in support architecture for SAP S/4HANA Cloud.

Figure 19.10

Built-in Support

Two components have been added for built-in support: the built-in support assistant, which is integrated into SAP S/4HANA Cloud, and the built-in support services, which connect to SAP support services.

Context-Awareness

The key advantage of built-in support is to provide context-based assistance to the user. Therefore, it’s essential to know the situation a user is in and to ensure that the contextual information is not lost throughout the interactions with support. Relevant contextual information is available automatically to create a better self-service experience. With automatically provided context information, the creation of support tickets can be prevented and support incidents can be resolved faster. No expert knowledge is required to collect and share relevant contextual information, such as application, user, and system contexts, which are essential for finding solutions or getting help from SAP support experts. This information is either collected automatically or the user is guided to provide relevant data, which keeps the effort required from the key user to an absolute minimum. Contextual information is collected automatically in a secure and legally compliant way (for example, compliant with GDPR). The context information consists of two parts, the frontend context and the backend context:

1. Frontend context information is provided through APIs of SAP Fiori launchpad. It includes application descriptor files, system IDs, application component names, user context from system logon (user ID, name, email), and the related business partner ID.
2. The ABAP application server provides APIs to read the backend context information, which provides references to system dumps, exceptions, and system statistics, for example. This backend information is important for further root-cause analysis conducted by support engineers and development.

The contextual information from the frontend is used to prefill mandatory data fields in the incident description, which helps to ensure high-quality data with the advantage of simplifying the work for the key user. For example, it is no longer required to specify an application component, system ID, or product information. The descriptor files and information on the business partner ID aim to reduce the number of interaction roundtrips between SAP support and the key user. This helps to provide a faster resolution time and less effort for key users to collect and share required information with SAP support. The user input (such as issue descriptions and error codes) and automatically collected contextual information can be used to search for possible solutions through the embedded search service of built-in support, which connects to various information providers at SAP.

Easy Access to Support Services and Content

As described in Section 19.5.1, visitors to the SAP ONE Support Launchpad need to be logged on via the S-user ID. With built-in support, this logon is needed exactly once per S-user ID to retrieve a valid access token from the authentication service of SAP ONE Support Launchpad. Built-in support provides features that allow a user to link a successfully retrieved token to the currently used SAP S/4HANA user ID. The authentication of any subsequent request through built-in support is then secured through this linked access token. For example, no extra logon is required to create, view, or update an incident. The users can just select a user from a list of linked S-users. These tokens can be revoked by the users at any time and work for active S-users only. Key users can access the following support services:

Chatbot services
Through a conversational UI, a user can converse with a chatbot or with a human being. The chatbot focuses on automation flows, guidance, and search of knowledge content. SAP is working on enhancing the chatbot, with the option to transfer a conversation to a human so that a chat session with an expert can be established from within the built-in support UI.

Search service
The search service helps the business user and key user find solutions, documentation, and tutorials, or provides support guidance. The search is based on contextual information and user input. This service uses natural language processing. You can type search terms as sentences in natural language—for example, “I need help with business roles.” You can also use keywords, such as create incident, or identifiers, such as an incident ID or an SAP Knowledge Base article ID. The services connect to SAP information resources to provide relevant content (such as SAP Knowledge Base articles or tutorials), references (such as to incidents), or suggested actions (such as to create an incident).

Integration services
The integration services allow built-in support to communicate with the relevant SAP applications—in our case, SAP S/4HANA Cloud—and help to analyze the current situation of a user’s context. The integration into SAP support services helps to seamlessly connect to support channels, content, and solution proposals.

Interaction services
The interaction services focus on the interactions with the key users, which spans providing UIs, secure access to support actions, and management of caches and objects for faster and smoother interactions.

19.6

Summary

SAP S/4HANA Cloud is a SaaS solution provided by a set of multitenant cloud services. Each SAP S/4HANA Cloud customer gets a set of tenants that are operated in one of the data centers spread across the globe. The cloud service that hosts most of the business applications is the SAP S/4HANA Cloud ABAP service, which shares the central parts of its application code line with SAP S/4HANA on-premise. In a multitenancy cluster of the SAP S/4HANA ABAP service, each tenant keeps its data in a separate tenant database while sharing runtime resources and the shared database with all other tenants. Thanks to the blue-green deployment procedure, SAP can apply software updates with close to zero downtime. The SAP S/4HANA architecture embeds support functionality, from logs and context information for root-cause analysis to various interaction channels with support engineers.

20

Sizing and Performance in the Cloud

The performance of SAP S/4HANA Cloud relies on performance-optimized programming, adequate hardware sizing, and fair resource sharing. The green cloud makes the SAP S/4HANA Cloud offering carbon-neutral.

With the cloud deployment model, SAP operates the customer’s SAP S/4HANA tenant in the cloud. In addition to the goal of running businesses of every size on SAP S/4HANA Cloud with optimal performance and low costs, reducing energy consumption and greenhouse gas emissions are also crucial factors for supporting SAP’s sustainability efforts. SAP defines the hardware sizing and, according to the multitenancy architecture (see Chapter 19, Section 19.3), some resources are shared with other SAP S/4HANA Cloud tenants. Customers typically ask the following questions:

- How does SAP achieve fair sharing of resources? How does SAP prevent other tenants from using up the shared resources and making my performance suffer?
- How is sizing done, and how can a company be sure that it meets its requirements?
- What happens if demand grows over time? How elastic is the capacity management of SAP operations?
- Users and local systems access SAP S/4HANA Cloud over the internet. How does SAP address the performance risk caused by network latency?

The goal of this chapter is to answer these questions. It explains the principles SAP development applies to enable good response times in the cloud (Section 20.1), describes sizing of SAP S/4HANA Cloud (Section 20.2), and covers scalability, elasticity, capacity management, and resource sharing (Section 20.3). Section 20.4 discusses how SAP enables environmentally friendly, sustainable computing with SAP S/4HANA Cloud.

20.1

Performance-Optimized Programming

This section explains the principles SAP development applies to enable good performance in the cloud. Performance-optimized programming usually also equates to energy-efficient programming. By adhering to these goals and guidelines, SAP is following a green IT strategy. Let’s take a typical data center environment for high-transaction B2B applications as an example. If a sustainable programming measure is successfully implemented in such an environment and one CPU second is saved as a result, this saving corresponds to an energy reduction of 10 watt-seconds per transaction. If such a transaction is performed by 1.5 million users 20 times a day on 230 working days per year, the total energy saved is 19 megawatt hours. If 1,000 developers optimize five transactions in this way, the energy saved amounts to the equivalent of the annual power consumption of over 30,000 two-person households (according to the power consumption index of the German government).
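
As a quick check of the arithmetic behind this example, the yearly saving for one optimized transaction follows directly from the stated assumptions:

\[
E = 1.5\times10^{6}\ \text{users}\times 20\ \tfrac{\text{executions}}{\text{user}\cdot\text{day}}\times 230\ \tfrac{\text{days}}{\text{year}}\times 10\ \tfrac{\text{Ws}}{\text{execution}} = 6.9\times10^{10}\ \text{Ws}\approx 19.2\ \text{MWh per year}
\]

Scaling this to 1,000 developers optimizing five transactions each gives roughly 5,000 times 19 MWh, or about 96 GWh, which matches the order of magnitude of 30,000 two-person households at an assumed annual consumption of roughly 3,000 kWh per household (our assumption for the plausibility check, not a figure from the book).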

20.1.1

Minimal Number of Network Round Trips and Transferred Data Volume

Both end-to-end response time and throughput depend on the network and wide area network (WAN) performance (see Figure 20.1). Distributed landscapes today result in an increasing number of users connected to applications and services from dispersed sites with varying degrees of available network bandwidth and latency. User access takes place through a local area network (LAN) and a WAN. As a basis for the successful planning and productive operation of a high-performing network infrastructure, sizing of network bandwidth plays an important role. Its objective is to avoid a performance bottleneck caused by insufficient network bandwidth and ensure reasonable network response times, but network bandwidth sizing alone cannot guarantee specific network response times.

Figure 20.1

Impact of Network on End-to-End Response Time

Network performance is defined by two network key performance indicators (KPIs):

- Bandwidth, which defines the ability to transfer a certain amount of data in a given time period
- Latency, which is the delay between the sending and the arrival of a data package of minimal size

Network bandwidth is a resource shared by many users. If the available bandwidth is exhausted, the network becomes a bottleneck for all network traffic. Interactive applications suffer the most from overloaded networks. Due to users’ unpredictable behavior, concurrent requests from multiple users can interfere with each other. This results in varying network response times, especially if you have many users on a relatively slow network connection. Other applications that compete for available bandwidth may also influence response time. Latency depends on the number of network hops (nodes) and the propagation time required for a signal to get from A to B, between and on these nodes. Today, network latency is the most significant factor in poor response times. The (public) internet is not under SAP’s control. Considered just on the basis of bandwidth and latency, there is a difference in network quality between a (corporate) intranet and the (public) internet. The difference is less technical than organizational in nature. An intranet is wholly owned and managed by a company or organization, whereas your internet service provider (ISP) has complete control over just your access area (within a radius of some miles) and limited influence over end-to-end quality. On-premise applications typically run in an intranet, while the internet is more relevant for mobile and SaaS. SAP tackles the network challenge from an architectural perspective. The following two KPIs are used for SAP applications, including SAP S/4HANA Cloud, instead of measuring the network time directly:

- Transferred data volume per user interaction step measures the data transferred between the UI frontend and application server.
- Number of sequential network round trips per user interaction step does not consider round trips at lower network layers, but at the application layer, and can be directly controlled by the application, such as HTTP messages. A sequential round trip needs to wait for the response of another round trip before it can start.

These two KPIs can be measured precisely, and results are reproducible. With the two KPI values, the required network time can be estimated for any given quality of network connections. They can be easily adapted for server-to-server communication. Therefore, the following design principles are set for SAP product engineering:

- An application triggers a minimal number of sequential round trips and transfers only necessary data to the frontend. The conclusion is obvious: the more round trips, the higher the impact of the network performance and the worse the application’s end-to-end response time.
- An application transfers no more than 10 KB to 20 KB of data per user interaction step.

Major strategies to optimize network performance include compression and frontend caching; both are part of standard SAP software.
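
A simple, hedged estimate of the network share of an interaction step's response time (our illustration, not an official SAP formula) can be derived from these two KPIs plus the connection's latency and bandwidth:

\[
t_{\text{network}} \approx n_{\text{round trips}}\times t_{\text{latency}} + \frac{V_{\text{data}}}{B_{\text{bandwidth}}}
\]

For example, 5 sequential round trips over a connection with 50 ms latency and 2 Mbit/s effective bandwidth, transferring 20 KB, give roughly 5 times 0.05 s plus 160 kbit divided by 2,000 kbit/s, or about 0.33 s of pure network time per interaction step; the numbers are purely illustrative.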

20.1.2

Content Delivery Networks

SAP applications cache static resources at the frontend. For the initial load of frontend caches and the update of resources after upgrades and hotfixes, a content delivery network (CDN) is used. In general, using a CDN together with device caches (browser, mobile apps, etc.) for metadata and code lists (static dropdown lists) minimizes the access times to static content. Because a CDN is used for all SAP S/4HANA Cloud users, we have to distinguish between the bandwidth to the next CDN node and the bandwidth to the SAP S/4HANA Cloud data center. SAP Fiori is based on SAPUI5 technology (see Chapter 3, Section 3.1). SAP S/4HANA Cloud uses a CDN for the SAPUI5 core libraries—which means that the SAPUI5 core libraries are delivered by the CDN, whereas other static resources are retrieved from the SAP S/4HANA Cloud data center. Both are then cached in the local browser cache. Dynamic business data is never cached at the frontend.

20.1.3

Buffers and Caches

SAP S/4HANA Cloud makes use of caches and buffers on all tiers where appropriate, like the following:

- Browser caches
- CDNs
- SAP Gateway metadata caches
- Table buffers
- SQL plan caches

This also has a positive impact on energy consumption, which is further explained in Section 20.4. SAP engineering leverages the caching and buffering frameworks offered on SAP HANA and on the ABAP application server. Such technically redundant data allows you to reduce the dependency of processing times on the amount of data (or the data constellation) that needs to be processed—which is a fundamental requirement for any scalable software solution.

20.1.4

Nonerratic Performance

From an architectural perspective, SAP product engineering ensures that the software shows consistent response times and resource consumption. The design principle is that the response time of an application has minimal dependency on data volumes or data constellations. For example, if you run a month-end closing, the processing time depends on the amount of processed data. If you run a month-end closing for October (which is a fixed amount of processed data), the processing time should remain constant and be independent of the amount of data persisted, in particular independent of how many months are already stored in the corresponding table.
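
A minimal sketch of this principle on the database level: the closing run should read only the rows of the period being processed, so that runtime scales with the October data volume rather than with the total table size. The table and field names below are invented for illustration.

```sql
-- Hedged illustration: a selective filter on year and period keeps the
-- amount of processed data constant, regardless of how many periods are
-- already stored in the (growing) table.
SELECT item_id,
       amount
  FROM journal_items
 WHERE fiscal_year   = '2020'
   AND fiscal_period = '010';   -- October only
```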

20.2

Sizing

You can only perform adequate hardware sizing when a software application is scalable. The purpose of hardware sizing is to determine the hardware resources necessary to process performance-critical business processes, thus preventing hardware resources from becoming bottlenecks. In general, sizing is the translation of business requirements into hardware requirements. In this context, sizing means determining the hardware requirements for SAP S/4HANA Cloud, such as network bandwidth, physical memory, CPU power measured in SAPS, and I/O capacity to run software solutions and applications. SAP Application Performance Standard (SAPS) is a hardware-independent unit of measurement that describes the performance of a system configuration in the SAP environment (see SAP standard application benchmarks at https://www.sap.com/about/benchmark/measuring.html). SAP has defined a mathematical model and procedure to predict resource consumption based on a reasonable number of input parameters and assumptions. An initial sizing is necessary to predict the resource consumption for long-term capacity planning and to enable a forecast of the expected costs. The following sizing KPIs are available:

- CPU time of business transactions or tasks is measured to determine the number of processors required.
- Disk size and disk I/O are measured to determine the rate of database table growth and the file system footprint. The number of insert operations for the database, write operations for the file system, and bytes that are written to the database or file system are measured.
- Memory is measured in different ways depending on the type of application. For some applications, it’s sufficient to measure the user session context, while for others it’s necessary to also measure application-specific buffers and shared spaces and temporary allocations by stateless requests, among other factors.
- Network load measurements refer to the number of round trips for each dialog step and the number of bytes sent and received for each round trip. The measurements are used to determine the network bandwidth requirements.

In general, SAP provides standard sizing guidelines, published at www.sap.com/sizing. SAP customers usually conduct their specific sizing because they know their business processes best. Finally, the hardware vendors or infrastructure-as-a-service (IaaS) providers are responsible for providing hardware that will meet the derived throughput and response time requirements. For SAP S/4HANA Cloud, SAP as the SaaS provider takes care of sizing and provisioning. However, sizing information is still important for customers of SAP S/4HANA Cloud. The SAP S/4HANA Cloud subscription includes a certain amount of SAP HANA memory—currently 100 GB of SAP HANA database memory—for the customer’s business data. In a pay-per-use scenario, some forecast calculations are necessary to decide on the most adequate subscription model. To predict the SAP HANA memory required by an application within the customer’s business context, SAP offers two sizing procedures:

1. Quick Sizer for SAP S/4HANA Cloud for greenfield sizing
2. Sizing report for brownfield sizing

We’ll now describe both. The Quick Sizer is a web-based self-service sizing tool offered by SAP that can be used for new sizing from scratch (greenfield sizing). It’s free of charge. Customers and prospects only need an S-user ID to create sizing projects. As of June 2018, SAP provides the Quick Sizer tool for greenfield sizing of SAP S/4HANA Cloud. You can use the tool to calculate the expected SAP HANA RAM for business data and the frontend network load for SAP S/4HANA Cloud according to your business context. The Quick Sizer can be accessed at www.sap.com/sizing.

20.2.1

Example for Greenfield Sizing

As an example, imagine that you create a new sizing project and enter the number of concurrent users and/or throughput numbers in the sizing questionnaire (see Figure 20.2). From a business perspective, the requirement is to process 15 million sales orders per year (220 working days, with each working day lasting from 9:00 a.m. to 6:00 p.m.). The assumption is that each sales order has 10 line items and the retention time in memory is 24 months. In addition, there is the requirement to process 500,000 sales orders with 10 line items per hour. The assumption is that the peak occurs from 12:00 p.m. to 1:00 p.m.

Figure 20.2

Quick Sizer for SAP S/4HANA Cloud: Input Business Requirements

With this business input, the Quick Sizer for SAP S/4HANA Cloud calculates the result, which is a demand of roughly 137 GB of SAP HANA memory for business data (see Figure 20.3).
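
The input figures translate into the following retained data volume, which is what drives the memory result; this back-of-envelope check is ours and is not the Quick Sizer's internal algorithm:

\[
15\times10^{6}\ \tfrac{\text{orders}}{\text{year}}\times\frac{24\ \text{months}}{12\ \text{months/year}} = 30\times10^{6}\ \text{orders},\qquad 30\times10^{6}\times 10 = 3\times10^{8}\ \text{line items}
\]

Relating the roughly 137 GB result to these 300 million retained line items implies an average footprint in the order of 0.5 KB of SAP HANA memory per line item, including document headers and overheads (an assumed rule of thumb for plausibility checks only).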

Figure 20.3

Quick Sizer for SAP S/4HANA Cloud: Results

20.2.2

Brownfield Sizing

In a migration scenario (brownfield sizing), you can use the sizing report /SDF/HDB_SIZING, described in SAP Note 1872170 (Suite on HANA Sizing Report). The sizing report includes the sizing projections, based on the actual table sizes in the existing SAP ERP system, as well as an estimation of how much the memory footprint can be reduced using functionalities that SAP HANA will enable. It runs on SAP_BASIS 620 and higher and is suitable for sizing of all SAP Business Suite products (SAP ERP, SAP CRM, SAP SCM, SAP SRM, and so on). The sizing report takes a snapshot in time. Any business data growth between that date and the go-live date should be considered. The results under For S/4HANA Cloud (see Figure 20.4) provide the Estimated Memory Size of Business Data, which is the input to decide on the most adequate subscription model.

Figure 20.4

SAP HANA Sizing Report: Results

20.3

Elasticity and Fair Resource Sharing

This section explains how resources are managed dynamically to provide SAP S/4HANA tenants with more resources when they need them. It also explains how SAP ensures that a tenant doesn’t consume too many resources at the cost of other tenants. In this context, scalability is about growing to meet service demands. Applications might scale horizontally by adding more instances of a service (scale out) or vertically by adding virtual CPUs or memory to existing instances (scale up). A scalable system is one in which the workload that the system can process goes up with the number of resources given to the system; that is, its capacity actually “scales” with available resources. Elasticity is about being able to adapt to actual user and service needs as they occur. Taking advantage of elasticity usually requires a scalable system; otherwise, the extra resources will have little effect. Elasticity is the ability to fit the resources needed to cope with loads dynamically, usually in relation to scale out, so that when load increases you scale by adding more resources, and when demand wanes you shrink back and remove unneeded resources. In general, there are three challenges that need to be tackled with regard to elastic scalability (see Figure 20.5):

- Wasted capacity: load increase. When the load increases, more resources need to be added. To keep the wasted capacity as small as possible, you must elastically scale by adding more resources.
- Wasted capacity: load decrease. When the demand wanes, resources need to shrink back. To keep the wasted capacity as small as possible, you must elastically shrink back and remove unneeded resources.
- Unexpected peaks. With fixed capacities, unexpected peaks cannot be covered with the existing resources. To prevent the impact of unexpected peaks, you must elastically scale as closely as possible to the actual demand, while at the same time keeping the wasted capacity as small as possible.

Figure 20.5

Challenge: Elastic Scalability

20.3.1

Dynamic Capacity Management

In an SAP S/4HANA Cloud multitenancy cluster, resources are shared between tenants. SAP’s strategy is to focus on a high tenant density on each server, which optimizes resource utilization on the one hand while ensuring optimal performance on the other (see Chapter 19, Section 19.3). SAP operations uses its internally developed Dynamic Capacity Management tool to elastically adjust the hardware capacity within an SAP S/4HANA multitenancy cluster. To optimally use memory and CPU resources, the operations model considers the following aspects:

- Tenant capacity planning: enlarge tenant size, reduce tenant size, move a tenant in (onboarding)
- Landscape capacity planning: move a tenant out to a different server (different multitenancy cluster). If the server runs out of free memory, a tenant that requires more memory needs to move to another server. Which tenant(s) should be moved? To which server should the tenant(s) be moved?

The fully automated Dynamic Capacity Management tool sets limits on the resource consumption of the tenant databases (quotas). It makes sure that a tenant always has enough resources to run and at the same time prevents a tenant from using too many resources and becoming a threat to other tenants in the same server. To manage the performance quality for all tenants and users, SAP suspends or limits use of the cloud service if continued use would result in material harm to the cloud service or its users. SAP sets resource usage quotas in order to, among other reasons, protect the entire system landscape from occasional, sometimes also accidental or unwanted, load peaks.

Figure 20.6 shows the memory use of different SAP S/4HANA Cloud tenants in one multitenancy cluster. Each tenant has its own size, its own distinct load, and its own working time. All tenants can grow and have more data and more load. In Figure 20.6, tenant 1 grows using previously free space. If tenant 1 grows further, the SAP HANA database would require memory from all other tenants regardless of their sizes, current usage, or current load. To prevent one tenant from growing too much and affecting other tenants in an unwanted way, memory quotas and CPU quotas are used. A fundamental SAP HANA resource is memory. SAP needs to protect all tenants in the multitenancy cluster against unintended load peaks (the “noisy neighbor” problem), so the maximum allowed resource consumption is limited for each tenant by a memory quota. On top of the memory quota, a quota of the available CPU resources is assigned to each tenant. This is realized by managing the size of the thread pools available to a tenant and to each statement. Besides quotas per tenant, SAP also maintains certain quotas within each tenant on the SQL request level. These quotas protect the tenant, and therefore other users, processes, and requests, against unintentional, very resource-intensive individual SQL requests.
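
On the SAP HANA level, such quotas correspond to configurable limits per tenant database. The following statements are a hedged sketch using documented SAP HANA parameters (an allocation limit per tenant database and a memory limit per SQL statement); the concrete values, the parameter choices, and the automation around them in SAP's Dynamic Capacity Management tool are assumptions made for illustration.

```sql
-- Hedged sketch, executed from the system database of the SAP HANA system:
-- cap the total memory a tenant database may allocate (value in MB) ...
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'DATABASE', 'TENANT_A')
  SET ('memorymanager', 'allocationlimit') = '262144' WITH RECONFIGURE;

-- ... and limit the memory a single SQL statement may consume (value in GB),
-- protecting the tenant against individual resource-intensive requests.
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'DATABASE', 'TENANT_A')
  SET ('memorymanager', 'statement_memory_limit') = '50' WITH RECONFIGURE;
```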

Figure 20.6

Memory Distribution in SAP HANA per Tenant

Statements resulting in repeated load peaks are detected by the Dynamic Capacity Management tool, which then increases the SQL request quotas. In addition to optimizing such statements during the development of the SAP S/4HANA Cloud solution, SAP has set a fixed quota in all SAP development and test systems to detect and optimize such statements from an architectural perspective during performance tests. In an SAP S/4HANA Cloud production tenant, all statements are monitored against certain quotas for response time, CPU time, and memory consumption. If these quotas are violated, the affected statements are passed to SAP development for further optimization.

20.4

Sustainability

Sustainability is an integral part of SAP’s vision to “help the world run better and improve people’s lives” (see https://www.sap.com/corporate/en/purpose.html). The three pillars of SAP’s sustainability strategy are creating economic, environmental, and social impact. Taking a closer look at the environmental pillar, reducing energy consumption and greenhouse gas emissions are crucial factors. On the topic of energy consumption, SAP data centers have become a primary focus as more and more businesses move to the cloud. The energy consumption in SAP data centers is closely related to technology innovation and the overall adoption of SAP cloud solutions. Following its environmental strategy, SAP created its green cloud program (see the SAP Integrated Report 2019 at https://www.sap.com/docs/download/investors/2019/sap-2019-integrated-report.pdf). The green cloud means that SAP’s cloud offerings are carbon-neutral thanks to the purchase of 100% renewable electricity certificates and compensation by offsets. For software developers, being sustainable and contributing to green IT means designing software programs that make efficient use of computing resources. For SAP developers and architects, this is essential when considering the huge number of business transactions worldwide that are handled by SAP S/4HANA in one way or another. The challenge is to find the right balance between end-to-end response time, throughput, and economical and sustainable resource consumption. Excellent performance has several aspects:

- Minimal response time
- Minimal resource consumption
- Maximal throughput
- Scalability and simple sizing algorithms
- Consistent response time and resource consumption

The four major energy guzzlers in software development are CPU, working memory, hard disk, and network. As the available resources are limited, performance optimization strategies have side effects that must be taken into account, and in many cases there are trade-offs. Reducing consumption at one point always affects the energy appetite of the other three factors. SAP’s optimization strategy is based on three cornerstones (see Figure 20.7):

- In-memory computing rather than disk I/O
- Caches rather than CPU cycles
- Content delivery networks and code pushdown rather than high data transfer and a large number of round trips

Although this strategy leads to increased energy consumption in memory, the total energy balance is positive thanks to savings in the other components—that is, CPU, hard disk, and network. At the same time, SAP improves the end-to-end response time and the user experience.

Figure 20.7

Optimization Strategies

20.5

Summary

When developing SAP S/4HANA applications, SAP developers apply the principles of performance-optimized programming. A user interaction step should require a minimal number of sequential round trips between the frontend and the application server and transfer at most 20 KB of data. Caches and buffers on all architecture layers, as well as the use of content delivery networks, bring the data closer to the user and improve end-to-end response time. Subscribers of SAP S/4HANA Cloud can use the Quick Sizer for SAP S/4HANA Cloud for greenfield sizing and the sizing report for brownfield sizing to decide on the most adequate subscription model. Both tools predict the SAP HANA memory required by an application within the customer’s business context. SAP operations uses its internally developed Dynamic Capacity Management tool to elastically adjust the hardware capacity within an SAP S/4HANA multitenancy cluster. The fully automated Dynamic Capacity Management tool sets limits on the resource consumption of the tenant databases. It makes sure that a tenant always has enough resources to run and at the same time prevents a tenant from using too many resources and becoming a threat to other tenants in the same server. Performance-optimized programming and designing software programs that make efficient use of computing resources are not only prerequisites for good end-to-end response times and throughput but also contribute to SAP’s sustainability goals. In addition, as part of the green cloud program, SAP’s data centers are carbon-neutral.

21

Cloud Security and Compliance

Consumers of cloud services expect high security standards. SAP engineering and operations work hand in hand to ensure security and compliance.

Security and compliance are more important than ever to protect against manifold security threats. The architecture and technical environment of SAP S/4HANA Cloud are sophisticated and require a comprehensive approach to protect customer data. Clearly, cloud users expect security in the sense of threat protection and compliance. However, to earn companies’ trust in the security of a solution, cloud providers have to keep their customers informed and provide transparency about security. A combination of several security measures is key to success: secure design and architecture, secure operations, audit and compliance, and transparency. In this chapter, we start with the general operational setup of the SAP S/4HANA Cloud software, which includes the secure network layout. After that, we describe how customer data in SAP software is protected against unauthorized access, manipulation, and loss (Section 21.1). Then, we elaborate on our measures to detect attacks early and react immediately (Section 21.2). Finally, Section 21.3 gives an overview of the security and operations standards with which the development and operations of SAP S/4HANA Cloud comply.

21.1

Network and Data Security Architecture

SAP S/4HANA Cloud is operated in a network that is separated into several zones, and each zone contains multiple segments. The separation into zones and segments allows trust boundaries to be implemented. For each zone, the security requirements and measures are determined by the exposure of the contained systems (such as internet and intranet) and the classification of the data handled by those systems. The network layout of SAP S/4HANA Cloud differs slightly depending on the platform used. Figure 21.1 shows an exemplary network layout on a hyperscaler such as Google Cloud Platform. A dedicated virtual private cloud (VPC) for a project or system is implemented to host the tenants of SAP S/4HANA Cloud; it is separated into availability zones. The administration area, from which SAP operations administrates and operates the SAP S/4HANA Cloud tenants, is separated into two layers:

1. The project/admin VPC serves several project or system VPCs within the same region; there is one such VPC each for the EMEA, APJ, and North America regions.

2. The central administration network segment hosts the central cloud lifecycle management tools and is attached to the SAP corporate network.

Figure 21.1  Network Layout

21.1.1

Access Levels

Because SAP S/4HANA Cloud is a public cloud software-as-a-service offering, the service provider and the service consumer share access to it, as depicted in Figure 21.2. Note that on some layers, the service provider and the service consumer work simultaneously but independently.

Figure 21.2  Different Span of Control

21.1.2

Resource and Data Separation

The virtualized ABAP application server instances are attributed solely to one customer tenant. Thus, each customer has its own virtual machines, which process the business logic. This way, a separation of customer resources is guaranteed, which supports security. On the persistence level, each SAP S/4HANA Cloud tenant has a separate tenant database (see Chapter 19, Section 19.3), which, even though it is part of an overall SAP HANA database, is isolated from other tenant databases in the same database system. Security groups prevent the communication of one customer’s resources with another customer’s resources. A security group is a set of network traffic rules that limit the communication on the hypervisor level so that only permitted sources, destinations, protocols, and ports can be used, such as for technical system-to-system communication. For example, communication is permitted between the different application server instances belonging to one tenant.
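
Conceptually, a security group can be pictured as a whitelist of permitted connections: any connection that no rule explicitly allows is rejected. The following sketch is purely illustrative; the rule structure, segment names, and port are invented for this example and do not reflect SAP’s actual network configuration.

TYPES: BEGIN OF ty_sg_rule,
         source_segment TYPE string,   " permitted source, e.g. a tenant's app servers
         dest_segment   TYPE string,   " permitted destination, e.g. the tenant database
         protocol       TYPE string,
         port           TYPE i,
       END OF ty_sg_rule,
       ty_sg_rules TYPE STANDARD TABLE OF ty_sg_rule WITH EMPTY KEY.

" One rule: the application servers of tenant A may reach tenant A's database via TCP
DATA(gt_rules) = VALUE ty_sg_rules(
  ( source_segment = 'TENANT_A_APP' dest_segment = 'TENANT_A_DB'
    protocol = 'TCP' port = 30015 ) ).

" A connection is allowed only if a rule explicitly permits it
DATA(lv_allowed) = xsdbool( line_exists( gt_rules[
  source_segment = 'TENANT_A_APP'
  dest_segment   = 'TENANT_A_DB'
  protocol       = 'TCP'
  port           = 30015 ] ) ).

Because no rule connects the segments of different tenants, traffic from one customer’s resources toward another customer’s resources finds no matching rule and is blocked.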

21.1.3

Resource Sharing

To achieve cloud qualities such as scalability, low TCO, mass upgrade, high availability, and disaster recovery, some parts of the infrastructure used by a customer are shared with other customers. This is the case for parts of the network; internet-facing components such as firewalls, load balancers, and web dispatchers; as well as central parts of the database.

21.1.4

Data Security and Data Residency

SAP S/4HANA Cloud encrypts customer data when in transit and at rest. In-transit data encryption is applied when data goes in and out of the cloud environment and network, as well as within the network. Data-at-rest encryption is available both at the database level by using SAP HANA database encryption and at the file-system level for encryption of local file systems and central file systems. Data is always kept in the same jurisdiction and local area, such as Europe, China, or North America. SAP S/4HANA Cloud uses SAP Cloud Platform and its services (see Chapter 19, Section 19.1). The ABAP services of SAP S/4HANA Cloud and the services of SAP Cloud Platform are hosted by a pair of data centers in the same region.

21.1.5

Business Continuity and Disaster Recovery

There are three security objectives: confidentiality, integrity, and availability. The latter is achieved by an appropriate architecture as well as adequate operations processes. Technical single points of failure are avoided, and components of particular importance are shadowed by one or more secondary instances. The secondary instances can take over the task of the primary instance in case of a failure. Every component classified as critical must adhere to certain recovery times. For high availability (HA) and disaster recovery (DR), data is constantly replicated to a secondary component. Disaster recovery for SAP S/4HANA Cloud is offered separately; that is, it isn’t available on all platforms and is subject to extra fees. As of now, the platforms used are the Converged Cloud infrastructure as a service (IaaS) from SAP, Google Cloud Platform, Alibaba Cloud, and Microsoft Azure. Backups of customer data are replicated to a remote site to have them available for recovery in the event of data loss in the primary location.

21.2

Security Processes

Almost all operations processes have security aspects. In this section, we discuss the processes that have a strong or sole focus on security.

21.2.1

Security Configuration Compliance Monitoring

This process scans cloud service infrastructure assets such as virtual machines, databases, web dispatchers, and hypervisors for compliance with SAP-internal security procedures and hardening guidelines. SAP operations uses tools such as TrueSight Automation for Servers for scanning these assets weekly. Results are reported in dashboards and followed up through reviews, meetings, and tickets. For deployments on hyperscalers such as Google Cloud Platform, the compliance and configuration of the environment are also checked with the Prisma Cloud solution from Palo Alto Networks. The compliance of the application layer is checked daily with an application-embedded health check framework, which reports deviations in the service provider cockpit, where SAP administrators monitor them and react if required.

21.2.2

Security Event Management and Incident Response

The infrastructure assets of the cloud service are configured to generate logs that are constantly forwarded to central security event management at SAP. There, pattern analyses are run to detect abnormal behavior, in which case security events are raised. A security event can turn into a security incident. In the case of a data breach, SAP notifies its customers as well as the required external authorities.

21.2.3

Infrastructure Vulnerability Scanning

Infrastructure vulnerability scanning happens both on internally accessible assets and externally reachable assets using third-party scan solutions from companies such as Rapid7 Inc. or Tenable Inc. The scans are performed as authenticated scans whenever possible to keep the number of false positives low: during an authenticated scan, the scanner has permission to log on to the scan target and can thus scan with deeper reach. In addition, third-party contractors scan the SAP S/4HANA Cloud software and the externally reachable infrastructure for vulnerabilities. They scan, for example, firewalls and web dispatchers regularly from the outside using public web access.

21.2.4

Malware Management

Virtual machines are scanned at least weekly for malware. Malware definitions are kept up to date, and the local scan engines are permanently connected to the central scan solution. In addition, files such as images and documents that a user attaches to a business object are scanned for malware when they are uploaded to the SAP S/4HANA Cloud tenant.

21.2.5

Security Patch Management

SAP communicates available infrastructure and operating system security patches internally using the vulnerability announcement service. SAP operations pushes the patches according to defined SLAs and criticalities to the corresponding environment. Patch monitoring is available in several tools, one of them being cloud reporting.

21.2.6

User Management

The cloud access manager is an SAP-internal tool that controls all access of SAP users, such as administrators, to the cloud environment. Different access levels and organizational restrictions apply when a user requests access. A hierarchy of approval steps is maintained for each access request. Each access is granted for a limited time. All administrative access requires two-factor authentication.

21.2.7

Hacking Simulation

At least once a year, the cloud infrastructure, application, operations processes, and tools are subject to a test in which third-party testers are equipped with substantial authorization to do security checks of the cloud infrastructure of SAP. A review of concepts and documents is also part of the assessment.

21.3

Certification and Compliance

Another important pillar of security and compliance is the constant verification of correct implementation and adherence to standards. Third-party certificates and reports are published on the SAP Trust Center site at https://www.sap.com/about/trustcenter/certification-compliance.html. Beyond these third-party reports, customers increasingly ask for visibility into their cloud tenant, its configuration, and its security settings, down to the infrastructure level. For this, several SAP Fiori apps are provided, such as Security Audit Log Viewer (see Chapter 17, Section 17.2.5).

21.3.1

SAP Operations

SAP S/4HANA Cloud and main parts of its infrastructure are certified for ISO 27001 (Information Security), ISO 27017 (Cloud Security), ISO 27018 (Cloud Data Privacy), and ISO 22301 (Business Continuity). This holds worldwide, except for China, where there is a lower demand for ISO-based certifications. SAP S/4HANA Cloud provides Service Organization Controls (SOC) 1 Type II reports, as well as SOC 2 Type II reports, for all regions. SAP S/4HANA Cloud has a BSI-C5 report, which is based on the Cloud Computing Compliance Criteria Catalog, and was the first of the cloud solutions from SAP to adhere to this standard. The BSI-C5 standard is mainly relevant in Germany but is also requested in other European countries. The SAP Cloud Security Framework contains various policies, standards, procedures, and guidelines that are implemented in SAP S/4HANA Cloud. Of note are the Secure Cloud Delivery Standard and the Service Continuity Management Standard, as well as architecture standards such as the Access to Critical Infrastructure Reference Architecture and the Hyperscaler Reference Architecture and Preapproved Scenarios. For each component, technology, and vendor, the SAP Cloud Security Framework contains hardening procedures that are implemented in SAP S/4HANA Cloud. Some of the applied hardening procedures are the Application Server ABAP Hardening Procedure, the Linux Server Hardening Procedure, and the SAP HANA 2.0 Hardening Procedure.

21.3.2

SAP Software Development

SAP S/4HANA and SAP S/4HANA Cloud are developed according to SAP’s secure software development lifecycle and the SAP innovation cycle, which both provide a stringent framework to ensure security- and privacy-aware software development. The development of SAP S/4HANA Cloud is ISO 27001 certified. Data protection is supported through application of the requirements of BS 10012. Thus, the solution can be used in compliance with GDPR (see Chapter 7).

21.4

Summary

The complexity of SAP S/4HANA Cloud, paired with its business value for running mission-critical business processes, requires a thorough approach to security and compliance. This is addressed by a secure-by-default design, including a secure architecture and secure network design, which is the first pillar of security and compliance. The second pillar consists of flawless operational security processes, such as user management. The third pillar involves auditing the solutions and their operations regularly, both internally and externally. The fourth pillar is transparency: sharing these audit results with SAP customers and providing them with insight into the security of their tenant and its environment. The execution of these four pillars is closely interconnected, which is key to the overall success of the solution.

22

Outlook

We are committed to supporting every customer to become an intelligent enterprise – breaking down silos and adopting intelligent, dynamic, cross-functional business processes that deliver optimal experiences.
— Christian Klein, CEO of SAP SE

For many enterprises, the next step on the journey toward adapting to the digital economy is powered by intelligence. Machine learning and image recognition will be the norm. Manual tasks will be automated to a large extent. IT systems will become conversational. Modern ERP solutions such as SAP S/4HANA are enabling and driving this change by using, for example, the embedded analytics, machine learning, and intelligent situations architecture and business application examples we have described in this book. The architecture strategy for SAP S/4HANA is geared to supporting companies on that journey.

SAP software engineers are constantly evolving the ERP capabilities of SAP S/4HANA, reimagining processes based on the most suitable and innovative technology at hand, such as the machine learning scenarios discussed earlier. Although SAP will continue to offer SAP S/4HANA as two distinct products – an on-premise installation and a software-as-a-service (SaaS) offering – an intelligent enterprise system is best built with a cloud architecture to meet not only the qualities of scalability and pay-per-use but also resilience, high availability, ease of use, and ease of implementation. In the years to come, enterprises will operate a network of interconnected on-premise solutions together with cloud-based services. This means that ERP providers like SAP must offer a suite experience across individual services and solutions and, for example, cover business configuration, user experience, master data distribution, and convenient setup of the integration.

Companies will constantly invest in automation of business processes and look for corresponding software support. However, businesses will be driven not only by shareholders’ interests but also by noneconomic parameters such as environmental health, sustainability, and employee satisfaction. Intelligent ERP solutions like SAP S/4HANA provide the required transparency to help businesses make the right decisions. SAP S/4HANA is a next-generation intelligent ERP system. Its underlying architecture, as described in this book, is instrumental in SAP’s vision to “help the world run better and improve people’s lives” as we continue to help ensure the best possible support for companies to cope with the challenges of the digital era.

The Authors

Editors

Thomas Saueressig has global responsibility for all business software applications including all functional areas from product strategy and management to product development and innovation as well as product delivery and support. In addition, he is also in charge of all development cross-functions at SAP, comprising the overall quality of SAP software products, its user experience and the underlying global cloud infrastructure ensuring a consistent level of scalability, reliability, performance, security, and data protection across all solutions for SAP’s customers. He has a degree in Business Information Technology from the University of Cooperative Education in Mannheim (Germany) and a joint executive MBA from ESSEC (France) and Mannheim Business School (Germany).

Tobias Stein (Development Senior Executive – Head of SAP S/4HANA Architecture) is a software architecture expert with more than 22 years of experience with SAP’s ERP, Business Suite and SAP S/4HANA application and solution architectures. As member of SAP’s global leadership team, he is heading the central architecture unit in the SAP S/4HANA development organization. He is a TOGAF-certified Enterprise Architect. He can be found at www.linkedin.com/in/tobiaswstein.

Jochen Boeder (Vice President – SAP S/4HANA Architecture) has more than 15 years of experience in the architecture of SAP business applications. At SAP, he leads a team of senior architects in the SAP S/4HANA engineering organization. He has co-authored several IT books, most prominently, The Architecture of SAP ERP, Tredition, 2014. He can be found at www.linkedin.com/in/jochen-boeder.

Dr. Wolfram Kleis (Software Architect – SAP S/4HANA Architecture) joined SAP more than 20 years ago and held several positions as developer and architect. Most recently he engineered cloud applications for the Internet of Things before joining the central SAP S/4HANA architecture unit. He has a strong passion for describing software architecture.

Authors

Dr. Erich Ackermann (Software Architect – SAP S/4HANA Architecture) joined SAP in 1993 and worked 20+ years as developer and architect in the context of customizing and lifecycle management tools and concepts. In this role he also supported customer projects. Recently he took over new responsibilities as architect for IAM topics.

Akhil Agarwal (Development Architect – Cloud-Native Applications) has 20 years of experience in the design and development of enterprise applications at SAP, both in the on-premise and cloud world. He currently works as an architect for many cloud applications, with a high focus on secure integration, and co-leads a global network of architects who identify and resolve key challenges faced by teams in cloud development.

Tom Anderson (Product Manager – Leasing and Real Estate) joined SAP in 2008 as an application consultant and is currently responsible for SAP’s Real Estate Management products used by commercial, corporate, and residential real estate organizations.

Martin Alt (Development Architect SAP S/4HANA Sales), with a master’s degree in computer science, started his career at SAP in 1991 as a developer and later held several positions as development manager, solution manager, project manager, and software architect in different areas of SAP ERP and CRM with a strong focus on Industry Solutions for Media and Telecommunications. Since 2014 he has been one of the lead architects responsible for the development of SAP S/4HANA Sales.

Mathias Beck (Chief Product Manager, Central Business Configuration) joined SAP in 2002 and worked as developer, architect, and chief product owner for banking solutions. Since 2017 he has been a member of business content management for SAP S/4HANA, and since 2019 he has been responsible for the central business content model for all SAP solutions. He holds a master’s degree in information systems and a German diploma in business computer science.

Markus Berg is the engineering lead of SAP S/4HANA Output Management. He joined SAP in 2005 as an application developer in Business ByDesign and has more than 10 years of experience with output management in a cloud environment.

Christoph Birkenhauer (Chief Development Architect – Globalization Services Financials) is the lead architect of SAP S/4HANA for advanced compliance reporting, which evolved from a side project that he started in 2014. He has more than 20 years of experience in the development of various finance and non-finance applications within SAP on-premises and cloud products. He holds a master’s degree in computer science.

Renzo Colle (Chief Software Architect – SAP S/4HANA Architecture and ABAP RESTful Application Programming Model) joined SAP in 1997 and held several positions as developer, software architect, manager, and project manager. He is responsible for the end-to-end programming model of SAP S/4HANA, lead architect for the ABAP RESTful Application Programming Model, and inventor of the Business Object Processing Framework (BOPF).

Dr. Andreas Daum (Chief Development Architect – Extended Warehouse Management) has worked in extended warehouse management (EWM) development since 2003 and was assigned as EWM lead architect in 2007. He joined SAP in 1998 and worked in several positions in the development of SAP R/3 Logistics Execution, SAP APO, and a prototype of an ESOA-based suite of applications. Before joining SAP he gained experience in the development of real-time computer systems and the design of distributed data acquisition systems. Find him at www.linkedin.com/in/andreas-daum-38a08b89.

Bastian Distler (Chief Product Owner – Central Finance) sees integration as a key strength of SAP and has always been involved in integration topics, especially between logistics and finance since he joined SAP in 2003. Today he is chief product owner for central finance which deals with integration and centralization of finance in multi-system landscapes.

Dr. Georg Dopf (Vice President, S/4HANA Finance Business Architecture) started at SAP in 1991 as developer and worked since then in nearly all functional and management roles of the finance development organization. He has a lot of experience in both on-premise and cloud development. His passion is to innovate and simplify core finance business architectures. Accordingly, the concepts around the Universal Journal in SAP S/4HANA Finance are a recent field of work.

Dr. Andrey Engelko (Software Architect – SAP S/4HANA Architecture) joined SAP in 2000 and worked as developer and architect in different areas. Currently responsible for analytics and search architecture in SAP S/4HANA. He can be found at www.linkedin.com/in/andrey-engelko-41a20372.

Dr. Harald Evers (Chief Development Architect – S/4HANA Architecture) has more than 17 years of experience in the development and architecture of SAP business applications. For SAP S/4HANA his focus is on user experience, SAP Fiori and UI technologies in application development and integration, as well as the mobile and conversational UX architecture.

Kolja Ewering (Chief Product Expert) is responsible for products like SAP Multi-Bank Connectivity and SAP S/4HANA Finance for advanced payment management. He joined SAP in 2003 and worked as developer, project manager, product manager. Most recently he is heading the product area for Financial Services and Treasury within SAP Innovative Business Solutions. He can be found at www.linkedin.com/in/kolja-ewering.

Holger Faber (Chief Expert/Product Owner Financial Planning) joined SAP in 2002 as IT consultant and since then worked in various positions as project and program manager, manager, and expert in the areas of IT, consulting, controlling and development. Working on financial planning topics since 2004, he has deep knowledge in planning tools and processes.

Chakram Govindarajan (Development Architect – S/4HANA Business Partner Master Data) has been working with SAP since 2007 with experience in consulting, support and product development. In his current role as development architect, he is responsible for managing the business partner master data including the customer and supplier master data and building the required tools for SAP ECC to SAP S/4HANA conversion.

Dr. Benjamin Heilbrunn (Development Architect – Intelligent Enterprise & Cross Architecture) is passionate about cloud-native applications and software architecture. As one of the founders of the SAP Cloud SDK, he mainly shaped the SAP Cloud SDK Continuous Delivery Toolkit. He holds a PhD in computer science.

Torsten Heise (Development Architect – Available-to-Promise) joined SAP in 2001 and worked as a developer and architect in various topics, like business context viewer, master data governance, central finance, and available-to-promise.

Sandra Herden (Product Owner Margin Analysis) joined SAP in 1997 as developer and has since worked in a variety of positions as project and program manager, partner project lead and product owner in the areas of HCM, banking and currently accounting. She can be found at www.linkedin.com/in/sandra-herden.

Michael Herrmann, CISSP (SAP S/4HANA Cloud Delivery Security Officer) Michael is responsible for technical and operations security as well as for compliance. He is a member of the German Information Systems Audit and Control Association (ISACA) working group cloud computing.

Gabi Hoffmann (Business Process Architect – Finance Accounting) started at SAP 22 years ago, initially with a focus on logistics and over time expanding into the finance area. A major theme over many years has been revenue recognition. Her current focus is on the development of end-to-end processes.

Thomas Hoffmann (Chief Development Architect – Finance) joined SAP in 1992 as developer in the area of SAP R/3 core development. Since 1997, he’s worked as developer and architect for SAP Business Suite, Business ByDesign, and SAP S/4HANA in the finance area. Currently responsible for posting and clearing applications in financials.

Rudolf Hois (Vice President SAP S/4HANA Product Management) In his more than 20 years at SAP, Rudolf helped develop the Business Suite and SAP S/4HANA in various engineering roles, continuously engaging with customers to build their enterprise architectures and adoption strategies.

Christian Hoppe (Product Owner Bank Statement Processing – SAP S/4HANA Finance) has worked for 22 years with SAP products and joined SAP 13 years ago as consultant for financial applications. After having joined the SAP S/4HANA finance engineering unit, he worked in different roles before becoming the product owner for bank statement processing and automation in S/4HANA Finance.

Jan Hrastnik (Chief Development Architect – SAP S/4HANA Architecture) has held several positions in the development organizations of SAP for more than 15 years. At present, he focuses on the SAP S/4HANA virtual data model and on core data services.

Dr. Dietmar Kaiser (Development Architect – Central Finance) joined SAP in 1998 and worked as a developer and architect in SAP Business Suite, Business ByDesign and SAP S/4HANA on various topics in the areas of real estate management, cash and liquidity management, identity and access management, and security. Currently, he works as a development architect in the area of Central Finance.

Marlene Katzschner (Product Owner – Predictive Accounting) joined SAP in 2003 and held several positions as developer, project manager and solution architect in the area of Finance for SAP Business ByDesign and SAP S/4HANA. Currently she has the role of a product owner for predictive accounting.

Andreas Kemmler (Chief Development Architect – SAP S/4HANA Architecture) has more than 22 years of experience in the development and architecture of SAP business applications. He was the SAP ERP architecture lead for the migration of the Business Suite to SAP HANA, responsible for the suite on SAP HANA development and architecture guidelines and most recently for the adaptation of the SAP S/4HANA cloud multi-tenancy and business continuity/zero downtime maintenance concepts.

Dr. Joachim Kenntner (Chief Development Expert – Finance) joined SAP in 1996 and worked as developer, manager and architect on different finance topics in SAP Business Suite, Business ByDesign, and SAP S/4HANA. Currently responsible for the Universal Journal and working on SAP S/4HANA conversion, performance, and extensibility topics.

Dr. Thomas Kern (Application Development Architect SAP S/4HANA Production Planning) joined SAP in 1993 as developer and has since worked for various SAP products (SAP R/3, APO, SAP Business ByDesign) in the area of production planning. He holds a master’s degree in mathematics and a PhD in natural science.

Markus Klein (Development Architect Inventory Accounting – SAP S/4HANA Finance) joined SAP in 1995 and worked for SAP Business Suite, Business ByDesign, and SAP S/4HANA as developer and architect on business topic inventory accounting and on integration aspects of logistic applications into financials.

Ralf Kühner (Development Architect LoB Finance) joined SAP in 1997 as developer. He has experience in development for SAP Business Suite, Business ByDesign, and applications for Internet of Things. Now he is working as development architect in SAP S/4HANA Finance in the area of cost object controlling.

Pradeep Kumar (Development Architect SAP S/4HANA Produce) holds a master's degree in computer applications from NIT Karnataka. He has more than 13 years of SAP experience in various development organizations of SAP. In his current role as development architect, he is responsible for the architecture of production planning and detailed scheduling in the area of produce.

Volker Lehnert (Senior Director Data Protection SAP S/4HANA). Having been involved in the implementation of security and data protection requirements for 20 years, Volker is certainly one of the most experienced minds in the technical data protection of ERP solutions. He is the corresponding author of Authorizations in SAP Software: Design and Configuration and GDPR and SAP: Data Privacy with SAP Business Suite and SAP S/4HANA, as well as a contributor to numerous other works from Rheinwerk. In this book he contributes the chapter Data Protection and Privacy.

Dr. Roland Lucius (Chief Development Architect – SAP S/4HANA Architecture) has more than 19 years of experience in the development and architecture of SAP business applications. He has worked for more than 8 years on the topics application security and identity and access management.

Hristo Matev (Senior Developer – Central Finance) joined SAP almost 15 years ago. Over the years, he worked on multiple on-premise and cloud applications in the areas of SRM, SCM and currently SAP S/4HANA Finance. He also focused on topics like identity and access management, security, and performance.

Dr. Knut Manske (Development Senior Manager – SAP S/4HANA Situation Services) is the engineering lead for intelligent situation handling and responsibility management. He has worked on various innovation topics to develop ideas and inventions that add value for customers. He joined SAP in 2004 to establish and lead SAP Research in Darmstadt with focus topic “workplace of the future”.

Dr. Petra Meyer (Chief Development Architect – Line of Business Produce) joined SAP in 1998 after studying mathematics. She worked in several roles in development with focus on production and production master data. In 2012, she moved into SAP ERP on SAP HANA and became the product owner for classification and variant configuration. Here she initiated the development of SAP S/4HANA for Advanced Variant Configuration and was the product owner until March 2020.

Divya Chandrika Mohan (Project Manager and Communications Lead) holds a master’s degree in bank management with experience in a palette of areas such as sales, customer service, and training, bringing in innovative practices in relationship management, customer retention initiatives, and business process standardization. She is currently the project manager and communications lead for the localization toolkit for SAP S/4HANA Cloud.

Dr. Klaus G. Müller (Chief Development Architect – Treasury) joined SAP in 1996 and worked as developer, architect and product expert for various topics in Treasury and Banking in SAP Business Suite, Business ByDesign, and SAP S/4HANA. He’s currently working as architect for cross topics and interoperability of all treasury applications.

Aalbert de Niet (Development Architect for SAP Document Compliance) has been working with SAP since 2001 with experience in consulting, support and product development. In his current role as development architect, he is responsible for legal compliance in processes involving the exchange of electronic documents.

Ingrid Nikkels (Global Solution Manager of SAP S/4HANA for Enterprise Contract Management and Assembly) has more than 25 years of experience in working on ERP implementation projects, of which 10 years for contract management solutions and processes. She holds a bachelor’s degree in economics linguistic and an MBA in international management.

Birgit Oettinger (Product Owner for Management Accounting) is closely involved in key innovation topics for SAP S/4HANA Finance. Birgit has more than 20 years of experience in the area of SAP Financials and got her start at SAP as an FI/CO consultant with focus on product costing and integration topics. She has been responsible for a wide range of products and areas in financials and gained deep insights into various cloud and on-premise solutions.

Till Oppert (Product Owner Tax Calculation – SAP S/4HANA) has more than 20 years of experience in the FI development as developer, development expert, and product owner. He worked as a developer on SAP ERP Business Suite, SAP Business by Design, and SAP S/4HANA, and is the author of the chapter Tax and Legal Management.

Dr. Bernd Rödel (Chief Development Architect – Line of Business Produce) is one lead architect within a central architecture team in the area of core logistics, where he has supported SAP S/4HANA development for the last three years. He joined SAP in 2000 and worked as a developer and architect in various teams. He has a master’s degree in biology, a master’s degree in computer science, and a PhD in natural science.

Dr. Siar Sarferaz (Chief Development Architect for SAP S/4HANA) began his career at SAP more than 20 years ago and has held various positions since. He is the lead architect for machine learning in SAP S/4HANA and is responsible for all concepts for infusing intelligence into business processes. He studied computer science and philosophy and holds a PhD in computer science.

Dr. Carsten Scheuch (Chief Development Architect – S/4HANA Service) has worked at SAP for more than 20 years and has been working in the areas of materials management (ERP), CRM, and SAP S/4HANA. He has been the lead architect in the integration of CRM functionality into SAP S/4HANA.

Wolfram Schick (Chief Product Owner – Available-to-Promise) has more than 20 years of experience at SAP in various development and management roles. Wolfram helped implement, design, and position the APO global ATP solution, which to date is still the most mature and complete ATP solution on the market. In the last 5 years, Wolfram has been leading the development of the new advanced ATP solution in SAP S/4HANA.

Dr. Erich Schulzke (Lead Architect for SAP S/4HANA Procurement) joined SAP 13 years ago with a PhD in Theoretical Neurophysics. In his current role as lead architect he is passionate about SAP Fiori, the application programming model, machine learning, and innovation topics in general.

Josef Schmidt (Chief Product Owner – Built-in Support) joined SAP more than 20 years ago, spent 9 years of his SAP career in China and held several positions as support engineer, developer, architect and team manager. Passionate about support, technology and customers, Josef is focusing on built-in support to change the way customers can get support from SAP.

Vitor Eduardo Seifert Bazzo (Product Owner for SAP Document Compliance) began his career at SAP in 2008 and has been working with Globalization Services since 2012 to facilitate legal compliance in processes involving the exchange of electronic documents.

Akshay Sinha (Development Architect – SAP S/4HANA Master Data) holds a Master's degree in technology from Birla Institute of Technology & Science, Pilani. He has worked at SAP for more than 14 years and is currently working as a development architect for master data in SAP S/4HANA. An active blogger, his other interests include deep learning and convolutional neural networks.

Dr. Uwe Sodan (Project Management – S/4HANA Architecture) joined SAP in 1996 and held several positions as developer, architect, manager, and project manager. Most recently, he’s worked as project manager for SAP S/4HANA enabling architecture execution and lifecycle management. You can find him at www.linkedin.com/in/uwesodan/.

Weijia Sun (Product Expert for Group Reporting – S/4HANA Finance) joined SAP in 2005 and worked as application developer, consultant, product manager, and manager, mainly focusing on financials, planning and consolidation, data warehouse, and analytics. She holds a bachelor’s degree in electrical engineering and a master’s degree in business administration.

Philipp Stotz (Associate Development Architect – SAP S/4HANA Procurement) holds a master's degree in business informatics and has been part of the SAP universe since 2011, where he started as a developer and is now a software architect in SAP Procurement, with a focus on service procurement. You can find him at www.linkedin.com/in/philipp-stotz89.

Jochen Thierer (Head of Development for Governance, Risk, and Compliance) and his team are responsible for the GRC products in SAP’s portfolio. In close collaboration with SAP GRC Product and Solution Management, his team manages all SAP GRC product facets, including product strategy and roadmap. Jochen has been in this role since 2010; after joining SAP in 1994, he assumed various responsibilities in consulting, training, software architecture, and software development across multiple SAP products, most notably as developer and development manager for SAP Global Trade Services (SAP GTS).

Detlef Thoms (Chief Product Expert – Performance and Scalability) has more than 22 years of SAP experience that he has gained in several areas like development support (available-to-promise), supply chain management consulting and solution management for master data governance. In January 2011 he joined the product management team for performance and scalability across the entire SAP portfolio with the focus to ensure that SAP software is performance-optimized, sustainable, and designed to capitalize on the latest hardware and technology trends. He can be found at www.linkedin.com/in/detlef-thoms-39782723.

Kumar Vikas (Lead Development Architect- Procurement) joined SAP in 2006 and held several positions as developer, architect, and project lead. Most recently, he’s engineered the central procurement solution. His passion is to innovate together with customers to simplify the procurement processes.

Dr. Martin von der Emde (Product Owner for Digital Payments) joined SAP in 1996. He’s worked as developer, consultant, and product expert for SAP Business Suite, SAP Business ByDesign, SAP S/4HANA, and SAP Cloud Platform applications with focus on payment processing.

Helena Vossen (User Assistance Manager – User Assistance Architect for Contract Accounting) has more than 20 years of experience in documenting finance applications in the industry environment. At SAP, she leads a team of authors and translators in the SAP S/4HANA organization. Documentation is a team effort, so special thanks go to Tom Brauer (product owner for contract accounting) and Heiko Mann (development architect for contract accounting).

Qiang Wang (Chief Development Architect – Treasury) joined SAP China in 1996 and worked as consultant, developer, manager, and architect on broad finance topics in SAP Business Suite and SAP S/4HANA. He is currently responsible for the architecture of SAP S/4HANA Cash Management.

Felix Wente (Chief Development Architect SAP S/4HANA Architecture) joined SAP 20 years ago and worked as a developer, software architect and development manager in SAP NetWeaver, SAP CRM, SAP Business ByDesign, and SAP S/4HANA. Currently, he is a member of the central architecture team of SAP S/4HANA focusing on the topic extensibility.

Dr. Klaus Weiss (Chief Development Architect Globalization Services Financials) joined SAP in 2000 and worked for more than 20 years as a developer and software architect in the area of Financials software development. Since 2011 he has been the lead architect of Globalization Services Financials.

Index

Application Interface Framework (AIF) [→ Section 9.12]

A⇑ ABAP authorization concept [→ Section 7.2] [→ Section 17.1] ABAP CDS reader operator [→ Section 6.8] ABAP Dictionary [→ Section 8.1] [→ Section 19.3] [→ Section 19.4] ABAP RESTful application programming model [→ Section 2.1] [→ Section 2.2] [→ Section 6.1] BOPF-managed implementation [→ Section 2.2] determination [→ Section 2.2] draft feature [→ Section 2.2] implementation [→ Section 2.2] implementation types [→ Section 2.2] managed implementation [→ Section 2.2] transactional buffer [→ Section 2.2] unmanaged implementation [→ Section 2.2] validation [→ Section 2.2] ABAP-managed database procedures (AMDPs) [→ Section 4.2] [→ Section 9.10] Access schema [→ Section 19.4] sequence [→ Section 9.11] Account assignment [→ Section 14.2] Accounting interface [→ Section 14.5] [→ Section 14.5] [→ Section 14.5] [→ Section 14.9] Accounting principle [→ Section 14.2] Accounting views of logistics information (AVL) [→ Section 14.9] Active-active [→ Section 19.2] Actual costing [→ Section 14.2] Adaptation Transport Organizer (ATO) [→ Section 5.1] Advanced adapter engine extended [→ Section 6.6] Advanced compliance reporting [→ Section 14.9] [→ Section 15.1]

Advanced planning and optimization [→ Section 8.1] Advanced variant configurator [→ Section 8.2] [→ Section 10.4] Alternative-Based Confirmation [→ Section 12.5] Analytic engine, runtime objects [→ Section 4.1] Analytical applications [→ Section 2.1] Analytical list pages [→ Section 4.1] Analytical model analytical query view [→ Section 4.1] cube view [→ Section 4.1] dimension view [→ Section 4.1] hierarchy view [→ Section 4.1] Analytics [→ Section 4.1] extensibility [→ Section 4.1] Anchor object [→ Section 4.3] API consumption [→ Section 5.2] Ariba Network [→ Section 9.12] Audit [→ Section 12.6] Authorization [→ Section 7.2] [→ Section 9.3] [→ Section 17.1] Authorization object extension [→ Section 17.2] Automatic Payment Processing [→ Section 14.6] Availability group [→ Section 13.4] Availability zones [→ Section 21.1] Available-to-promise (ATP) [→ Section 9.1]

B⇑ Backorder processing [→ Section 12.5] Backup [→ Section 21.1] Backward integration [→ Section 10.5] Balance sheet valuation [→ Section 14.2] Bandwidth [→ Section 20.1] Bank statement processing [→ Section 14.7] BAPI → see [Business Application Programming Interfaces (BAPIs)] Basic views [→ Section 2.1]

Batch [→ Section 12.6] Behavior definition [→ Section 2.2] model [→ Section 2.1] Bill of materials (BOM) [→ Section 8.1] [→ Section 8.2] [→ Section 12.3] [→ Section 14.5] Billable items [→ Section 14.7] Billing [→ Section 9.9] Billing document [→ Section 9.9] request [→ Section 9.9] [→ Section 10.5] Billing due list [→ Section 9.9] Blue-green deployment [→ Section 19.4] BOM explosion [→ Section 8.2] [→ Section 12.5] Brownfield [→ Section 20.2] Budget availability control [→ Section 14.5] Built-in support [→ Section 19.5] Business adaptation catalog [→ Section 16.2] Business Application Programming Interfaces (BAPIs) [→ Section 6.1] Business area [→ Section 16.2] Business catalog [→ Section 9.3] [→ Section 17.1] [→ Section 17.1] template [→ Section 17.2] Business Communication Services (BCS) [→ Section 18.1] [→ Section 18.3] Business event [→ Section 6.7] [→ Section 6.7] architecture [→ Section 6.7] Business object [→ Section 2.2] [→ Section 10.4] [→ Section 17.1] data model [→ Section 2.2] design [→ Section 2.2] Business object implementation [→ Section 2.2] Business object repository [→ Section 6.1] Business package [→ Section 16.2] Business partner [→ Section 8.1] [→ Section 8.3] [→ Section 9.1] [→ Section 13.3] [→ Section 14.2] [→ Section 14.7] [→ Section 14.11] employee role [→ Section 8.3] group [→ Section 14.7] master data [→ Section 8.3]

OData API [→ Section 5.2] time-dependency [→ Section 8.3] Business process model and notation [→ Section 6.6] Business processes [→ Section 16.2] Business role [→ Section 17.1] [→ Section 17.1] template [→ Section 9.3] [→ Section 17.1] Business server pages [→ Section 3.1] Business service implementation [→ Section 2.2] Business services [→ Section 2.2] Business topic [→ Section 16.2] Business transactions [→ Section 10.4] Business transactions framework [→ Section 10.4] [→ Section 10.4] header [→ Section 10.4] item extensions [→ Section 10.4] root components [→ Section 10.4] sets [→ Section 10.4] Business user [→ Section 17.1] [→ Section 17.2] [→ Section 19.5]

C⇑ Capacity management [→ Section 20.3] CDS views [→ Section 5.1] CDS-based data extraction [→ Section 6.8] [→ Section 11.6] CDS-based extraction [→ Section 6.8] Central procurement [→ Section 11.1] [→ Section 11.4] Change control [→ Section 7.2] Change data capture [→ Section 6.8] Characteristic [→ Section 14.5] [→ Section 14.10] group [→ Section 8.2] Classification system [→ Section 8.2] Cloud BAdI [→ Section 5.1] [→ Section 15.2] Cloud Connector [→ Section 6.5] principles [→ Section 6.5] Cloud data integration [→ Section 6.8] Cloud Foundry [→ Section 5.2]

Cloud-native applications [→ Section 5.2] Collections management [→ Section 14.7] Communication channel [→ Section 18.6] system [→ Section 6.4] user [→ Section 6.4] [→ Section 17.2] Company code [→ Section 17.1] Compliance [→ Section 21.3] Composite views [→ Section 2.1] Compositions [→ Section 2.2] Condition technique [→ Section 9.11] [→ Section 12.6] type [→ Section 9.11] Configuration database [→ Section 8.2] Consent [→ Section 7.2] Consumption views [→ Section 2.1] Content Delivery Network (CDN) [→ Section 3.1] [→ Section 20.1] Continuous delivery [→ Section 5.2] [→ Section 5.2] Contract accounting [→ Section 14.7] Control object [→ Section 14.5] Convergent billing [→ Section 9.9] Convergent invoicing (CI) [→ Section 14.7] Conversational AI [→ Section 4.2] Copy control [→ Section 9.5] Core Data Services (CDS) [→ Section 2.1] consumption scenarios [→ Section 2.1] data control language [→ Section 2.2] entities [→ Section 2.1] Core Data Services (CDS) models [→ Section 2.2] Cost analysis [→ Section 14.5] Costing-based profitability analysis [→ Section 14.5] Create, Read, Update, Delete (CRUD) [→ Section 2.1] [→ Section 5.1] Credit decision support [→ Section 14.7]

Credit events [→ Section 14.7] Credit management [→ Section 14.7] Cross-System Process Control Framework (CSPC) [→ Section 14.9] Cube view [→ Section 2.1] Custom Analytical Queries application [→ Section 4.1] Custom business logic [→ Section 5.1] objects [→ Section 5.1] Customer returns [→ Section 9.8] workspace [→ Section 16.2] cXML [→ Section 11.5]

D⇑ Dashboard [→ Section 4.1] Data center [→ Section 19.2] collection [→ Section 14.2] exchange manager [→ Section 10.5] integration [→ Section 6.8] migration [→ Section 8.1] model [→ Section 10.4] [→ Section 10.4] protection [→ Section 7.1] replication framework [→ Section 6.8] [→ Section 6.8] residency [→ Section 21.1] security [→ Section 21.1] subject [→ Section 7.2] [→ Section 7.2] subject rights [→ Section 7.2] Data Control Language [→ Section 2.1] Data protection and privacy (DPP) [→ Section 7.1] [→ Section 8.3] regulations [→ Section 4.1] Database view [→ Section 19.3] Decentralized EWM [→ Section 13.1] Delivery [→ Section 12.6] Delta extraction [→ Section 6.8]

Demand-Driven Material Requirements Planning [→ Section 12.5] Determine actions [→ Section 2.2] DevOps [→ Section 5.2] Direct procurement [→ Section 11.2] Direct requirement element [→ Section 12.1] Disaster recovery [→ Section 19.2] [→ Section 21.1] [→ Section 21.1] Disclosure control [→ Section 7.2] Dispute case [→ Section 14.7] Dispute management [→ Section 14.7] Distribution channel [→ Section 8.1] Division [→ Section 8.1] Dock appointment scheduling [→ Section 13.5] Dunning [→ Section 14.7] Dynamic Capacity Management [→ Section 20.3] [→ Section 20.5]

E⇑ Elasticity [→ Section 20.3] Electronic Banking Internet Communication Standard (EBICS) [→ Section 14.8] Electronic bill presentment and payment (EBPP) [→ Section 14.7] Electronic Data Exchange [→ Section 6.1] Electronic Data Interchange (EDI) [→ Section 9.7] [→ Section 12.5] [→ Section 12.6] [→ Section 18.4] Embedded analytical applications [→ Section 4.1] Embedded analytics [→ Section 4.1] [→ Section 9.10] Embedded EWM [→ Section 13.1] Emergency patch (EP) [→ Section 19.4] Encryption [→ Section 7.2] Enterprise analytics [→ Section 4.1] [→ Section 4.1] Enterprise Contract Management and Assembly (E-CMA) [→ Section 14.4] Enterprise resource planning [→ Appendix SAP] Enterprise search [→ Section 3.2] auxiliary views [→ Section 3.2] personlization [→ Section 3.2]

search scope [→ Section 3.2] Entitled to dispose [→ Section 13.4] Entity data model [→ Section 6.1] relationship model [→ Section 2.1] tag [→ Section 2.2] Entity Manipulation Language [→ Section 2.2] Event channels [→ Section 6.7] Event-based integration [→ Section 6.7] Exception management [→ Section 13.5] Expense planning [→ Section 14.5] Extended warehouse management (EWM) [→ Section 8.1] [→ Section 13.1] master data [→ Section 13.3] monitoring [→ Section 13.6] process automation [→ Section 13.7] reporting [→ Section 13.6] technical frameworks [→ Section 13.9] user interface [→ Section 13.8] Extensibility [→ Section 5.1] [→ Section 8.3] in-app [→ Section 5.1] key user [→ Section 5.1] side-by-side [→ Section 5.2] stability criteria [→ Section 5.1]

F⇑ Failover SAP HANA instance [→ Section 19.2] Feature control [→ Section 2.2] global [→ Section 2.2] instance [→ Section 2.2] static [→ Section 2.2] Field extensibility [→ Section 5.1] service [→ Section 10.2] [→ Section 10.2] Filter object [→ Section 8.1] Finance interface [→ Section 14.1]

Financial closing [→ Section 14.2] Fixed asset [→ Section 14.2] Foreign currency valuation [→ Section 14.2] Form template [→ Section 18.5]

G⇑ General Data Protection Regulation (GDPR) [→ Section 7.1] technical measures [→ Section 7.2] Generally Accepted Accounting Principles (GAAP) [→ Section 14.5] Goods issue [→ Section 8.1] [→ Section 14.2] [→ Section 14.5] [→ Section 14.5] receipt [→ Section 8.1] [→ Section 11.1] [→ Section 12.4] [→ Section 12.4] [→ Section 12.6] [→ Section 12.6] [→ Section 14.10] receipt/invoice receipt [→ Section 4.3] Greenfield [→ Section 20.2] Group reporting [→ Section 14.2]

H⇑ Handling unit management [→ Section 12.6] [→ Section 13.5] Handling units [→ Section 13.4] High availability (HA) [→ Section 21.1] Hotfix collection (HFC) [→ Section 19.4] Hybrid analytical applications [→ Section 4.1] Hyperscale Providers [→ Section 19.2]

I⇑ Identity and Access Management (IAM) [→ Section 17.1] architecture [→ Section 17.1] tools [→ Section 17.1] Identity Authentication service [→ Section 19.1] IDoc [→ Section 6.1] iFlows [→ Section 6.6] Indirect procurement [→ Section 11.2] Information Access (InA) [→ Section 4.1] Information retrieval framework [→ Section 7.2]

Infrastructure vulnerability scanning [→ Section 21.2] In-house repair [→ Section 10.2] [→ Section 10.2] Initial load [→ Section 20.1] Inspection lot [→ Section 12.6] Integration [→ Section 6.1] middleware [→ Section 6.6] Intelligent situation automation [→ Section 4.3] Intelligent situation handling [→ Section 4.3] [→ Section 4.3] [→ Section 9.4] International Article Number (EAN) [→ Section 8.1] International Bank Account Number (IBAN) [→ Section 8.3] International Financial Reporting Standards (IFRS) [→ Section 14.2] International Standard on Assurance Engagements (ISAE) 3000 [→ Section 17.1] Inventory management [→ Section 8.1] [→ Section 14.2] Inventory valuation methods [→ Section 14.2]

J⇑ Jupyter Notebook [→ Section 4.2] Just-in-time (JIT) [→ Section 12.5]

K⇑ Kanban [→ Section 12.5] Key user [→ Section 5.1] [→ Section 19.5] Kubernetes [→ Section 5.2] Kyma environment [→ Section 5.2]

L⇑ Latency [→ Section 20.1] Ledger [→ Section 14.2] [→ Section 14.5] Legal contract [→ Section 14.4] Legal transaction [→ Section 14.4] Lifecycle management [→ Section 5.1] Localization toolkit [→ Section 15.1] [→ Section 15.3] Low-level configuration [→ Section 12.5]

M⇑ Machine learning [→ Section 4.2] architecture [→ Section 4.2] categorization [→ Section 4.2] embedded [→ Section 4.2] matching [→ Section 4.2] prediction [→ Section 4.2] ranking [→ Section 4.2] recommendation [→ Section 4.2] side-by-side architecture [→ Section 4.2] Maintenance event [→ Section 19.4] Malware [→ Section 21.2] Manage KPIs and Reports application [→ Section 4.1] Manufacturing execution systems (MES) [→ Section 12.7] Mapping [→ Section 5.1] Margin analysis [→ Section 14.5] Market segment [→ Section 14.5] [→ Section 14.10] Master data [→ Section 8.1] [→ Section 12.1] data integration [→ Section 6.8] [→ Section 6.8] Material document [→ Section 12.1] flow system [→ Section 13.5] ledger [→ Section 14.2] master [→ Section 8.1] requirements planning [→ Section 12.1] [→ Section 12.5] reservation [→ Section 12.4] Microservice [→ Section 5.2] Migration object [→ Section 8.1] Moving average price [→ Section 8.1] [→ Section 14.2] MRP area [→ Section 12.2] MRP type [→ Section 12.5] Multibank Connectivity [→ Section 14.7] Multidimensional reporting [→ Section 4.1]

Multitenancy [→ Section 19.3] My Situations app [→ Section 9.4]

O⇑ OData APIs [→ Section 6.1] OData service [→ Section 2.2] [→ Section 3.1] OData service extensions [→ Section 5.1] Online transaction processing (OLTP) [→ Section 1.1] Open payables management [→ Section 14.6] Open receivables management [→ Section 14.7] Operational data provider [→ Section 6.8] procurement [→ Section 11.1] Outbound deliveries [→ Section 9.1] delivery [→ Section 12.4] [→ Section 12.4] [→ Section 13.5] Output management [→ Section 18.1] Output Management System (OMS) [→ Section 18.2]

P⇑ Packing instruction [→ Section 12.6] [→ Section 12.6] Payables management [→ Section 14.6] Payment enqueue processing [→ Section 14.6] Payment medium workbench (PMW) [→ Section 14.6] [→ Section 15.3] Peppol [→ Section 15.2] Personal data [→ Section 7.2] PFCG roles [→ Section 17.1] [→ Section 17.2] Phantom assemblies [→ Section 8.2] Physical inventory document [→ Section 12.4] Physical inventory management [→ Section 13.5] Picking [→ Section 9.5] Planned independent requirement [→ Section 12.1] Planned order [→ Section 12.1] [→ Section 12.4] [→ Section 12.5] [→ Section 12.7]

Plant [→ Section 12.2] Postprocessing Framework [→ Section 13.7] Predictive accounting [→ Section 14.5] Predictive analytics integrator [→ Section 9.10] Predictive MRP [→ Section 12.5] Price Control Indicator [→ Section 8.1] Pricing [→ Section 9.11] Pricing procedure [→ Section 9.11] Print queue [→ Section 18.2] Printing user [→ Section 17.2] Privacy [→ Section 7.1] Process orders [→ Section 12.1] [→ Section 14.5] Procurement [→ Section 11.1] processes [→ Section 11.2] Product hierarchies [→ Section 8.1] Product master [→ Section 8.1] [→ Section 8.1] [→ Section 9.1] [→ Section 12.6] [→ Section 12.6] [→ Section 13.3] Production cost [→ Section 14.5] order [→ Section 12.1] [→ Section 12.4] [→ Section 12.7] [→ Section 14.5] supply area [→ Section 12.2] version [→ Section 12.3] Profit center [→ Section 14.2] Profitability analysis [→ Section 14.5] [→ Section 14.5] [→ Section 14.5] Progressive disclosure [→ Section 4.3] Pseudonymization [→ Section 7.2] Purchase order [→ Section 11.1] [→ Section 12.4] [→ Section 14.10] requisition [→ Section 11.1] [→ Section 11.2] [→ Section 11.4] [→ Section 12.1] [→ Section 12.4] [→ Section 12.5] Purchasing contract [→ Section 11.1] group [→ Section 8.1] info record [→ Section 11.1] value key [→ Section 8.1]

Python [→ Section 4.2]

Q⇑ Quality certificate profile [→ Section 12.6] Info Record in Procurement [→ Section 12.6] Info Record in Sales [→ Section 12.6] inspection [→ Section 12.6] [→ Section 13.5] Quant [→ Section 13.4] Quick Sizer [→ Section 20.2] Quota arrangements [→ Section 11.1] Quotation [→ Section 9.1]

R⇑ Read access logging [→ Section 7.2] Receivables management [→ Section 14.7] Reference content [→ Section 16.1] Release for Delivery [→ Section 12.5] Release order [→ Section 9.6] Remote API views [→ Section 2.1] Remote functional call [→ Section 6.1] queued [→ Section 6.1] synchronous [→ Section 6.1] transactional [→ Section 6.1] Replenishment elements [→ Section 12.5] Requests for quotation [→ Section 11.1] Resource [→ Section 13.3] [→ Section 13.7] Responsibility management [→ Section 4.3] [→ Section 10.3] [→ Section 10.3] [→ Section 11.4] Restricted reuse view [→ Section 2.1] Restriction type [→ Section 17.2] Right of Use asset [→ Section 14.2] Routing [→ Section 12.3]

S⇑

Sales [→ Section 9.1] areas [→ Section 8.1] inquiry [→ Section 9.1] [→ Section 9.4] [→ Section 9.4] organization [→ Section 8.1] quotation [→ Section 9.4] scheduling agreement [→ Section 9.7] Sales contracts [→ Section 9.1] [→ Section 9.1] [→ Section 9.6] master contract [→ Section 9.6] quantity contract [→ Section 9.6] value contract [→ Section 9.6] Sales document [→ Section 9.2] structure [→ Section 9.2] types [→ Section 9.2] Sales document category [→ Section 9.2] Sales document type [→ Section 9.2] Sales order [→ Section 2.2] [→ Section 9.1] [→ Section 9.5] [→ Section 12.6] [→ Section 14.2] [→ Section 14.5] [→ Section 14.5] [→ Section 14.7] header [→ Section 2.2] processing [→ Section 9.5] SAP Analytics Cloud [→ Section 3.1] [→ Section 4.1] [→ Section 19.1] embedded [→ Section 4.1] story designer [→ Section 4.1] SAP API Business Hub [→ Section 1.2] [→ Section 5.2] [→ Section 6.2] [→ Section 6.2] [→ Section 10.4] [→ Section 15.2] SAP Application Interface Framework [→ Section 6.3] SAP Ariba [→ Section 11.5] SAP Ariba Buying [→ Section 11.5] SAP Ariba Cloud Integration Gateway [→ Section 9.12] SAP Ariba Network [→ Section 11.5] SAP Ariba Sourcing [→ Section 11.5] [→ Section 11.5] SAP Bank Statement Reprocessing Rules [→ Section 14.7] SAP Billing and Revenue Innovation Management (SAP BRIM) [→ Section 14.7] SAP Business Suite powered by SAP HANA [→ Section 1.1] SAP Cash Application [→ Section 4.2] [→ Section 14.7] SAP Central Business Configuration [→ Section 16.2]

SAP Cloud Platform [→ Section 5.2] [→ Section 5.2] [→ Section 15.2] connectivity [→ Section 5.2] enterprise messaging [→ Section 6.2] [→ Section 6.7] [→ Section 6.7] integration suite [→ Section 6.2] SAP Cloud Platform Extension Factory, Kyma runtime [→ Section 5.2] [→ Section 5.2] SAP Cloud Platform Forms by Adobe [→ Section 19.1] SAP Cloud Platform Integration [→ Section 11.5] [→ Section 15.2] SAP Cloud Platform Integration service [→ Section 6.6] SAP Cloud Platform Master Data Integration service [→ Section 6.8] [→ Section 8.3] SAP Cloud Platform, Cloud Foundry environment [→ Section 5.2] SAP Cloud Platform, Kubernetes environment [→ Section 4.2] SAP Cloud Print Manager [→ Section 18.2] [→ Section 18.2] SAP Cloud SDK [→ Section 5.2] [→ Section 5.2] SAP Cloud Security Framework [→ Section 21.3] SAP Conversational AI [→ Section 4.2] SAP Data Intelligence [→ Section 4.2] [→ Section 4.2] [→ Section 9.12] SAP Digital Payments [→ Section 14.8] SAP Document Compliance [→ Section 15.2] SAP Field Service Management [→ Section 10.5] SAP Fieldglass [→ Section 11.5] SAP Fiori [→ Section 1.1] [→ Section 3.1] [→ Section 17.1] [→ Section 20.1] app reference library [→ Section 3.1] apps [→ Section 3.1] launchpad [→ Section 3.1] [→ Section 17.1] [→ Section 19.5] libraries [→ Section 3.1] library [→ Section 3.1] mobile cards [→ Section 3.1] roles [→ Section 3.1] SAP Fiori elements [→ Section 3.1] [→ Section 3.1] analytical list page [→ Section 3.1] list report [→ Section 3.1] object page [→ Section 3.1] overview page [→ Section 3.1]

worklist [→ Section 3.1] SAP Fiori UI [→ Section 2.1] SAP for Retail [→ Section 8.1] SAP GUI [→ Section 3.1] SAP HANA [→ Section 1.1] search [→ Section 3.2] SAP HANA automated predictive library [→ Section 4.2] SAP HANA predictive analytics library [→ Section 4.2] SAP liveCache [→ Section 12.5] SAP Localization Hub [→ Section 14.3] SAP Master Data Governance [→ Section 11.4] SAP object types [→ Section 6.7] SAP One Support Launchpad [→ Section 19.5] SAP R/2 [→ Appendix SAP] SAP R/3 [→ Appendix SAP] SAP S/4HANA [→ Section 1.1] [→ Section 3.1] [→ Section 4.2] [→ Section 4.2] [→ Section 4.2] [→ Section 11.1] [→ Section 11.5] [→ Section 14.6] API [→ Section 6.1] architecture [→ Section 1.2] authentication [→ Section 7.2] business event [→ Section 6.7] GDPR [→ Section 7.2] integration [→ Section 6.1] machine learning application [→ Section 4.2] physical access control [→ Section 7.2] search [→ Section 3.2] search architecture [→ Section 3.2] search models [→ Section 3.2] system conversion [→ Section 8.3] SAP S/4HANA Cloud [→ Section 4.1] [→ Section 5.2] [→ Section 6.1] [→ Section 6.1] [→ Section 6.4] [→ Section 9.3] activity mapping [→ Section 17.1] authorization object extension [→ Section 17.1] communication arrangement [→ Section 6.4] communication management [→ Section 6.4] communication scenario [→ Section 6.4]

communication system [→ Section 6.4] communication user [→ Section 6.4] performance [→ Section 20.1] restriction field [→ Section 17.1] restriction type [→ Section 17.1] RFC communication [→ Section 6.5] sizing [→ Section 20.1] user types [→ Section 17.2] SAP S/4HANA Embedded Analytics [→ Section 4.1] SAP S/4HANA Finance [→ Section 14.7] SAP S/4HANA Sales [→ Section 9.1] [→ Section 9.2] SAP S/4HANA Sourcing and Procurement [→ Section 11.1] SAP Smart Business [→ Section 4.1] SAP Smart Business app [→ Section 3.1] SAP Solution Builder Tool [→ Section 16.1] SAP Supply Chain Management [→ Section 8.1] SAP Support Portal [→ Section 19.5] SAPUI5 [→ Section 3.1] freestyle application [→ Section 3.1] runtime [→ Section 3.1] Scalability [→ Section 20.3] Schedule line [→ Section 9.5] Scheduling agreement [→ Section 9.1] [→ Section 11.1] [→ Section 12.4] Scikit-Learn [→ Section 4.2] Security [→ Section 21.2] Security Assertion Markup Language (SAML) [→ Section 17.1] [→ Section 17.1] Security group [→ Section 21.1] Segment [→ Section 14.2] Segregation of duties [→ Section 14.11] [→ Section 17.1] [→ Section 17.2] Self-Service Configuration [→ Section 8.1] UIs (SSC UIs) [→ Section 16.1] Serial number [→ Section 12.6] Service business transactions integration [→ Section 10.5] Service confirmation [→ Section 10.1] [→ Section 10.2]

Service contract [→ Section 10.2] [→ Section 10.2] [→ Section 14.2] Service entry sheet [→ Section 11.1] Service operations [→ Section 10.1] architecture [→ Section 10.1] business objects [→ Section 10.2] business partner [→ Section 10.3] master data [→ Section 10.3] organization units [→ Section 10.3] processes [→ Section 10.2] service product [→ Section 10.3] service teams [→ Section 10.3] technical objects [→ Section 10.3] Service order [→ Section 10.1] [→ Section 10.2] Service quotation [→ Section 10.2] Service transactions [→ Section 10.4] advanced variant configuration [→ Section 10.4] partner functions [→ Section 10.4] pricing [→ Section 10.4] status management [→ Section 10.4] transaction history [→ Section 10.4] Service-specific data model [→ Section 2.2] Shipping and receiving [→ Section 13.5] Shipping point [→ Section 9.5] [→ Section 12.2] Side-by-side extensions [→ Section 5.2] custom backend application [→ Section 5.2] custom user interface [→ Section 5.2] Situation indication [→ Section 4.3] notification [→ Section 4.3] page [→ Section 4.3] templates [→ Section 4.3] Situation handling, message-based [→ Section 4.3] Sizing [→ Section 20.2] [→ Section 20.2] SOAP services [→ Section 6.1] Solution business [→ Section 10.2]

Source list [→ Section 11.1] Source of supply [→ Section 11.1] Spool system [→ Section 18.2] Standard costing [→ Section 14.5] price [→ Section 8.1] [→ Section 14.2] Stock management [→ Section 13.4] transfer [→ Section 9.1] transport orders [→ Section 12.6] type [→ Section 13.4] Stock-identifying fields [→ Section 12.5] Storage bin [→ Section 13.2] location [→ Section 12.2] type [→ Section 13.2] Strategic procurement [→ Section 11.1] Structured query language (SQL) [→ Section 2.1] Super administrator [→ Section 19.5] Super BOM [→ Section 8.2] Supervisory control and data acquisition (SCADA) [→ Section 12.7] Supplier invoicing [→ Section 14.6] Supply assignment [→ Section 12.5] protection [→ Section 12.5] S-user ID [→ Section 20.2] Sustainability [→ Section 20.4] System Landscape Transformation Replication Server (SLT) [→ Section 14.9]

T⇑ Tax [→ Section 14.3] Technical architecture foundation [→ Section 2.1] Telegram [→ Section 13.5] [→ Section 13.10] Tenant [→ Section 19.1]

Tenant database [→ Section 19.3] [→ Section 19.4] [→ Section 19.4] TensorFlow [→ Section 4.2] Topic filters [→ Section 6.7] Total cost of implementation (TCI) [→ Section 16.1] Trading platform integration (TPI) [→ Section 14.8] [→ Section 19.1] Transaction /SAPAPO/MAT1 [→ Section 8.1] /SCWM/MAT1 [→ Section 8.1] DRFIMG [→ Section 8.1] MDS_LOAD_COCKPIT [→ Section 8.3] MM01 [→ Section 8.1] MM02 [→ Section 8.1] MM03 [→ Section 8.1] MM41 [→ Section 8.1] MM42 [→ Section 8.1] MM43 [→ Section 8.1] Transactional processing-enabled applications [→ Section 2.1] Transactional view [→ Section 2.1] [→ Section 2.2] Transmission control [→ Section 7.2] Transportation units [→ Section 13.4] Trigger object [→ Section 4.3]

U⇑ Unified connectivity [→ Section 7.2] Units of Measure [→ Section 8.1] Universal Journal [→ Section 14.2] [→ Section 14.5] [→ Section 14.5] [→ Section 14.10] Universal key mapping service [→ Section 11.4] Universal unique identifier [→ Section 2.2] Update [→ Section 19.4] Upgrade [→ Section 19.4] US Generally Accepted Accounting Principles (US GAAP) [→ Section 14.2] User experience (UX) [→ Section 3.1] User propagation [→ Section 6.4]

V⇑ Validation [→ Section 5.1] Valuation area [→ Section 8.1] Value-added service [→ Section 13.5] Variant configuration [→ Section 8.2] [→ Section 8.2] object dependencies [→ Section 8.2] View Browser application [→ Section 4.1] Virtual Data Model (VDM) [→ Section 2.1] [→ Section 2.1] [→ Section 6.8] [→ Section 8.3] [→ Section 9.10] [→ Section 10.4] structure [→ Section 2.1] Virtual elements [→ Section 2.2] Virtual private cloud [→ Section 21.1] Virtual private network (VPN) [→ Section 1.2] Vulnerability announcement service (VAS) [→ Section 21.2]

W⇑ Warehouse automation [→ Section 13.10] control unit [→ Section 13.10] monitor [→ Section 13.8] [→ Section 13.9] number [→ Section 13.2] request processing [→ Section 13.5] Wave management [→ Section 13.5] Web Client UI [→ Section 10.5] Web Dynpro for ABAP [→ Section 3.1] [→ Section 5.1] Work breakdown structure (WBS) [→ Section 14.5] Work center [→ Section 12.3]

Service Pages The following sections contain notes on how you can contact us. In addition, you are provided with further recommendations on the customization of the screen layout for your e-book.

Praise and Criticism We hope that you enjoyed reading this book. If it met your expectations, please do recommend it. If you think there is room for improvement, please get in touch with the editor of the book: Will Jobst. We welcome every suggestion for improvement but, of course, also any praise! You can also share your reading experience via Twitter, Facebook, or email.

Supplements If there are supplements available (sample code, exercise materials, lists, and so on), they will be provided in your online library and on the web catalog page for this book. You can navigate directly to this page using the following link: https://www.sappress.com/5189. Should we learn about typos that alter the meaning, or about content errors, we will provide a list of corrections there, too.

Technical Issues If you experience technical issues with your e-book or e-book account at SAP PRESS, please feel free to contact our reader service: [email protected]. Please note, however, that issues regarding the screen presentation of the book content are usually not caused by errors in the e-book document. Because nearly every reading device (computer, tablet, smartphone, e-book reader) interprets the EPUB or Mobi file format differently, it is unfortunately impossible to set up the e-book document in a way that meets the requirements of all use cases. In addition, not all reading devices provide the same text presentation functions, and not all functions work properly. Finally, you as the user also define with your settings how the book content is displayed on the screen. The EPUB format, as currently provided and handled by the device manufacturers, is primarily suitable for the display of mere text documents, such as novels. Difficulties arise as soon as technical text contains figures, tables, footnotes, marginal notes, or programming code. For more information, please refer to the section Notes on the Screen Presentation and the following section. Should none of the recommended settings satisfy your layout requirements, we recommend that you use the PDF version of the book, which is available for download in your online library.

Recommendations for Screen Presentation and Navigation We recommend using a sans-serif font, such as Arial or Seravek, and a low font size of approx. 30–40% in portrait format and 20–30% in landscape format. The background shouldn't be too bright. Make use of the hyphenation option; if it doesn't work properly, align the text to the left margin. Otherwise, justify the text. When searching in the e-book, the index will reliably guide you to the relevant pages. If the index doesn't help, you can use the search function of your reading device. The table of contents we've included, which is available as a double-page spread in landscape format, probably gives a better overview of the content and structure of the book than the corresponding function of your reading device. To enable you to open the table of contents easily at any time, it has been included as a separate entry in the device-generated table of contents. If you want to zoom in on a figure, tap it once; tapping once again returns you to the previous screen. On the iPad, tapping twice displays the figure in its original size, which then has to be zoomed to the desired size, while tapping once zooms the figure directly and displays it at a higher resolution. For books that contain programming code, please note that the code lines may be wrapped incorrectly or displayed incompletely as of a certain font size. In case of doubt, please reduce the font size.

About Us and Our Program The website http://www.sap-press.com provides detailed and first-hand information on our current publishing program. Here, you can also easily order all of our books and ebooks. Information on Rheinwerk Publishing Inc. and additional contact options can also be found at http://www.sap-press.com.

Legal Notes This section contains the detailed and legally binding usage conditions for this e-book.

Copyright Note This publication is protected by copyright in its entirety. All usage and exploitation rights are reserved by the author and Rheinwerk Publishing; in particular the right of reproduction and the right of distribution, be it in printed or electronic form. © 2021 by Rheinwerk Publishing Inc., Boston (MA)

Your Rights as a User You are entitled to use this e-book for personal purposes only. In particular, you may print the e-book for personal use or copy it, as long as you store this copy on a device that is solely and personally used by yourself. You are not entitled to any other usage or exploitation. In particular, it is not permitted to forward electronic or printed copies to third parties. Furthermore, it is not permitted to distribute the e-book on the Internet, in intranets, or in any other way, or to make it available to third parties. Any public exhibition, other publication, or any reproduction of the e-book beyond personal use is expressly prohibited. The aforementioned applies not only to the e-book in its entirety but also to parts thereof (e.g., charts, pictures, tables, sections of text). Copyright notes, brands, and other legal reservations, as well as the digital watermark, may not be removed from the e-book.

Digital Watermark This e-book copy contains a digital watermark, a signature that indicates which person may use this copy. If you, dear reader, are not this person, you are violating the copyright. So please refrain from using this e-book and inform us about this violation. A brief email to [email protected] is sufficient. Thank you!

Trademarks The common names, trade names, descriptions of goods, and so on used in this publication may be trademarks without special identification and subject to legal regulations as such. All of the screenshots and graphics reproduced in this book are subject to copyright © SAP SE, Dietmar-Hopp-Allee 16, 69190 Walldorf, Germany. SAP, ABAP, ASAP, Concur Hipmunk, Duet, Duet Enterprise, ExpenseIt, SAP ActiveAttention, SAP Adaptive Server Enterprise, SAP Advantage Database Server, SAP ArchiveLink, SAP Ariba, SAP Business ByDesign, SAP Business Explorer (SAP BEx), SAP BusinessObjects, SAP BusinessObjects Explorer, SAP BusinessObjects Web Intelligence, SAP Business One, SAP Business Workflow, SAP BW/4HANA, SAP C/4HANA, SAP Concur, SAP Crystal Reports, SAP EarlyWatch, SAP Fieldglass, SAP Fiori, SAP Global Trade Services (SAP GTS), SAP GoingLive, SAP HANA, SAP Jam, SAP Leonardo, SAP Lumira, SAP MaxDB, SAP NetWeaver, SAP PartnerEdge, SAPPHIRE NOW, SAP PowerBuilder, SAP PowerDesigner, SAP R/2, SAP R/3, SAP Replication Server, SAP Roambi, SAP S/4HANA, SAP S/4HANA Cloud, SAP SQL Anywhere, SAP Strategic Enterprise Management (SAP SEM), SAP SuccessFactors, SAP Vora, TripIt, and Qualtrics are registered or unregistered trademarks of SAP SE, Walldorf, Germany.

Limitation of Liability Regardless of the care that has been taken in creating texts, figures, and programs, neither the publisher nor the author, editor, or translator assume any legal responsibility or any liability for possible errors and their consequences.

The Document Archive The Document Archive contains all figures, tables, and footnotes, if any, for your convenience.

Figure 2.1

Field Names in VDM

Figure 2.2

View Layering: Select-from Relationships

Figure 2.3

View Hierarchies

Figure 2.4

Business Object Definition and Implementation

Figure 2.5

Service Definition and Implementation

Figure 2.6 ABAP RESTful Application Programming Model Runtime Architecture Overview

Figure 2.7

Runtime Architecture with Managed Provider

Figure 2.8

BOPF-Managed Provider

Figure 2.9

Unmanaged Provider with Managed Draft

Figure 3.1

SAP Fiori Launchpad in SAP S/4HANA

Figure 3.2

SAP Fiori Apps and Libraries

Figure 3.3 SAP Fiori Elements Application Patterns and Their Artifacts

Figure 3.4

Search Architecture in SAP S/4HANA

Figure 4.1 Runtime Diagram of SAP S/4HANA Analytical Infrastructure

Figure 4.2

Organization of Analytical Models in VDM

Figure 4.3 Design Time of Analytics Infrastructure in SAP S/4HANA

Figure 4.4

SAP S/4HANA Machine Learning Architecture

Figure 4.5

Embedded Machine Learning Architecture

Figure 4.6

Side-by-Side Machine Learning Architecture

Figure 4.7

Architecture of Situation Handling in SAP S/4HANA

Figure 4.8

Intelligent Situation Handling: Overall Concept

Figure 4.9

Situation Indication

Figure 4.10

My Situations App

Figure 4.11 Progressive Information Disclosure (Conceptual Depiction)

Figure 4.12

Situation Page (Conceptual Depiction)

Figure 5.1

Extensibility in SAP S/4HANA On-Premise and Cloud

Figure 5.2

Continuous Delivery Pipeline of SAP Cloud SDK

Figure 5.3 Extensions for the Intelligent Enterprise: Role of SAP Cloud SDK

Figure 6.1

SAP Application Interface Framework

Figure 6.2

Communication Management in SAP S/4HANA Cloud

Figure 6.3

Cloud Connector

Figure 6.4

SAP Cloud Platform Integration Service

Figure 6.5

Publishing Business Events from SAP S/4HANA

Figure 6.6

CDS-Based Data Extraction

Figure 6.7

Data Replication Framework

Figure 7.1

Simplified Process (Lehnert et al., 2018)

Figure 7.2

Data Subject Rights

Figure 7.3 Right to Be Forgotten and Context (Lehnert et al., 2018)

Figure II.1

SAP S/4HANA Capabilities

Figure 8.1

Product Master Data Model in SAP S/4HANA

Figure 8.2

Manage Product Hierarchies App

Figure 8.3

Bill of Materials: Simplified Conceptual Model

Figure 8.4

Classification System Information Model

Figure 8.5

Elements of Variant Configuration in SAP S/4HANA

Figure 8.6

Representing Choice of Parts in BOM

Figure 8.7 High-Level Architecture of the Advanced Variant Configurator

Figure 8.8

Embedded Analytics for Configuration Data

Figure 8.9

Business Partner Data Model

Figure 8.10

Architecture of Business Partner Application

Figure 8.11

Conversion Scenarios

Figure 8.12

Business Partner and Customer/Supplier Data Model

Figure 9.1

Overview of Sales in SAP S/4HANA

Figure 9.2

Structure of Sales Documents

Figure 9.3 Use of Intelligent Situations Management in Sales Quotations

Figure 9.4

Processing of Sales Scheduling Agreements

Figure 9.5

Convergent Billing

Figure 9.6

Sales Analytics Using VDM

Figure 9.7 Overview Analytics, Including Predictive Analytics, in SAP S/4HANA Sales

Figure 9.8

Architecture of Pricing

Figure 9.9

Integration Architecture of SAP S/4HANA Sales

Figure 10.1

Service Operations Overview

Figure 10.2

Service Order Detail Page

Figure 10.3

Data Model of Service Business Transactions

Figure 11.1

Architecture of Procurement in SAP S/4HANA

Figure 11.2 Ordering Process for Production-Relevant Raw Materials, Spanning Multiple Personas/User Roles

Figure 11.3

Architecture of Business Objects in Procurement

Figure 11.4

Architecture of Central Procurement

Figure 11.5 SAP S/4HANA Procurement Integration with SAP Ariba and SAP Fieldglass Solutions

Figure 11.6

Ariba Network Integration

Figure 11.7

Innovations Map for Procurement

Figure 12.1

Logistics Architecture Overview

Figure 12.2

Organizational Units in Core Logistics

Figure 12.3

Master Data Business Objects with Relationships

Figure 12.4

Transactional Business Objects and Process Steps

Figure 12.5 Transactional Business Objects with Business Keys and Semantic Attributes

Figure 12.6 Inventory Data Model in SAP ERP and in SAP S/4HANA

Figure 12.7

Optimized Low-Level Configuration in SAP HANA

Figure 12.8

MRP Run

Figure 12.9

Kanban Architecture

Figure 12.10

Architecture of Predictive MRP

Figure 12.11

Batch Management

Figure 12.12

Quality Management in Sales

Figure 12.13

Quality Management in Procurement

Figure 13.1 Architecture Overview of Embedded and Decentralized SAP Extended Warehouse Management

Figure 13.2 Application Components of Extended Warehouse Management

Figure 14.1 Overview of Finance and Governance, Risk, and Compliance in SAP S/4HANA

Figure 14.2 Core Elements of the Finance Architecture in SAP S/4HANA (ABAP Platform)

Figure 14.3 Overview of Accounting and Financial Planning and Analysis

Figure 14.4 Harmonized Data Model in Finance (Example View for SAP S/4HANA Cloud)

Figure 14.5

Inventory Accounting

Figure 14.6

Architecture of Service and Sales Accounting

Figure 14.7

Revenue Recognition

Figure 14.8

SAP Group Reporting Data Collection

Figure 14.9

Group Reporting Core Function

Figure 14.10

Financial Planning Integration with Consolidation

Figure 14.11

Financial Closing Architecture

Figure 14.12

Tax Management

Figure 14.13 Architecture of Enterprise Contract Management and Assembly

Figure 14.14

Budget Availability Control

Figure 14.15

Architecture Overview of Predictive Accounting

Figure 14.16

Date Usage of Predictive Journal Entries

Figure 14.17 Simulation and Reduction of Predictive Journal Entries

Figure 14.18

Integrated Financial Planning

Figure 14.19

Architecture of Financial Planning

Figure 14.20

Margin Analysis Architecture

Figure 14.21

Ledger-Specific Processing

Figure 14.22 Architecture of Cost Object Controlling in SAP S/4HANA

Figure 14.23

Automatic Payment

Figure 14.24

Architecture of Dunning

Figure 14.25

Bank Statement Processing Architecture

Figure 14.26

SAP Cash Application

Figure 14.27

Architecture of Credit Evaluation and Management

Figure 14.28 Architecture of SAP S/4HANA Cloud for Customer Payments

Figure 14.29

Architecture of Dispute Management

Figure 14.30

Detailed Architecture of Convergent Invoicing

Figure 14.31 Components of SAP Billing and Revenue Innovation Management

Figure 14.32 Integration of Contract Accounting within SAP S/4HANA

Figure 14.33

Payment Management Architecture

Figure 14.34

Advanced Payment Management

Figure 14.35 Integration of SAP S/4HANA and the SAP Digital Payments Add-on

Figure 14.36

Bank Relationship Management

Figure 14.37

Cash and Liquidity Management

Figure 14.38

Architecture of SAP Treasury and Risk Management

Figure 14.39 Integration Architecture of SAP S/4HANA for Central Finance

Figure 14.40

Integration with SAP SLT Server

Figure 14.41

Replication of Accounting Views

Figure 14.42 Entity Relationship Diagram for Sales Order and Accounting View of Sales Order

Figure 14.43 Business Process Flow Involving Concurrent Process Steps in Different Systems

Figure 14.44 Major Components Involved in CSPC Implementation of Business Process

Figure 14.45 CSPC Implementation of Process Flow Involving Concurrent Steps in Different Systems

Figure 14.46

Overview of SAP GRC Solutions

Figure 14.47 Technical Architecture of SAP Cloud Identity Access Governance

Figure 15.1

Advanced Compliance Reporting Architecture

Figure 15.2

SAP Document Compliance Architecture Overview

Figure 15.3 Architecture of Back-End Functions in SAP Document Compliance

Figure 15.4 Localization Areas Addressed by Localization Toolkit for SAP S/4HANA Cloud and Underlying Tools and Technologies

Figure 16.1

Content Model of SAP Solution Builder Tool

Figure 16.2

Structure of the Business Adaptation Catalog (BAC)

Figure 16.3

Basic Structure of Business Adaptation Catalog

Figure 17.1

IAM Entities in SAP S/4HANA Cloud

Figure 17.2

Entities Defining Business User’s Overview Page

Figure 17.3

Overview of IAM Content Delivered by SAP

Figure 18.1

Architecture of SAP S/4HANA Output Management

Figure 18.2

Printing Architecture

Figure 18.3

Email Processing

Figure 18.4

Forms in SAP S/4HANA Cloud

Figure 18.5

Output Control

Figure 19.1 Data Centers for Operating SAP S/4HANA Cloud (October 2nd, 2020)

Figure 19.2 SAP S/4HANA Cloud Availability Status in SAP Trust Center (August 13, 2020)

Figure 19.3 System Setup of On-Premise Edition of SAP S/4HANA

Figure 19.4 SAP HANA System with Multiple Tenant Databases and Shared Database

Figure 19.5 Completely Shared Table: ABAP Dictionary Table TCP00 (Code Page Catalog)

Figure 19.6 Sharing on Table Record Level: ABAP Dictionary Table T100 (Messages)

Figure 19.7 Write and Read Accesses to Table Shared on Record Level

Figure 19.8 Blue-Green Procedure for Near-Zero Downtime Maintenance of SAP S/4HANA Cloud

Figure 19.9

Traditional Support without Built-in Support

Figure 19.10

Built-in Support

Figure 20.1

Impact of Network on End-to-End Response Time

Figure 20.2 Quick Sizer for SAP S/4HANA Cloud: Input Business Requirements

Figure 20.3

Quick Sizer for SAP S/4HANA Cloud: Results

Figure 20.4

SAP HANA Sizing Report: Results

Figure 20.5

Challenge: Elastic Scalability

Figure 20.6

Memory Distribution in SAP HANA per Tenant

Figure 20.7

Optimization Strategies

Figure 21.1

Network Layout

Figure 21.2

Different Span of Control