Supply Techniques: Reading Paper from the Supply Management Institute's series Purchasing and Supply Management™ [1 ed.] 9783896443366, 9783896733368

Provides a broad overview of "best practice" techniques and instruments as a toolkit for professionalizing …


Supply Techniques Reading Paper from the Supply Management Institute's series Purchasing and Supply Management

Supply Management Institute SMI™ Ed. Univ.-Prof. Dr. Christopher Jahns

A collaborative publication from SMG™ Publishing AG and Verlag Wissenschaft & Praxis

The German Library - CIP Cataloguing-in-Publication Data: The German Library lists this publication in the German National Bibliography;

detailed bibliographic data is available on the Internet at http://dnb.ddb.de.

© SMG™ Publishing AG, 2005 CH-9000 St. Gallen, Teufener Strasse 25

Tel.: +41 (0)71 226 10 60, Fax: +41 (0)71 226 10 69 homepage: www.smg-ag.com, e-mail: [email protected]

All rights reserved.

This publication as a whole and its constituent parts are protected by copyright laws. Any copying, reproduction or use without written permission from the publisher violates copyright laws and is liable to prosecution. This particularly applies to copying, translating or microfilming as well as storing and processing the contents in electronic systems.

SMG™ Publishing AG ISBN 3-907874-36-6

Verlag Wissenschaft & Praxis ISBN 3-89673-336-2

Printed in Germany


PREFACE

Professional supply managers not only have to know how to negotiate or how to manage suppliers, but are also increasingly challenged to work in cross-functional teams in order to create competitive advantages for their companies. As a result, supply managers repeatedly need to apply new, non-classical supply techniques. A small selection of these supply techniques is presented in this reading paper, which should provide the reader with a general overview of and application examples for each technique.

This reading paper, with all its conceptual details and its broad range of examples, would not have been possible without the support of the following students: Sinem Atakan, Ankid Kedia, Anne-Kathrin Greiner, Arora Shaurya, Marco Linz, Valentin Recker, Valerie Heymans, Ingmar Schaaf, Jens Burchardt, Jörg Gerbig and Jan Philipp Lüdtke.

We thank them for their contributions and wish the readers an interesting time.

The authors


TABLE OF CONTENTS

FAILURE MODE AND EFFECTS ANALYSIS ............................................. 3
LIFE CYCLE COSTING ................................................................................ 16
MATERIAL GROUP CLASSIFICATION ...................................................... 26
QUALITY FUNCTION DEPLOYMENT ........................................................ 38
RAPID PROTOTYPING ................................................................................. 47
REVERSE ENGINEERING ............................................................................ 62
ABC/XYZ ANALYSIS .................................................................................... 74
SIMULTANEOUS ENGINEERING ............................................................... 83
TOTAL COST OF OWNERSHIP ................................................................... 96
TOTAL QUALITY MANAGEMENT ........................................................... 106
VALUE ANALYSIS / VALUE ENGINEERING .......................................... 120
REFERENCES .............................................................................................. 130


FAILURE MODE & EFFECTS ANALYSIS (FMEA)

TABLE OF CONTENTS

1 BASICS ......................................................................................................... 4
2 FUNCTIONALITY OF THE FMEA ............................................................. 6
3 PRACTICAL APPLICATIONS ................................................................... 11
4 SUMMARY ................................................................................................. 14


1 Basics

History and definition of the Failure Mode & Effects Analysis

The failure mode & effects analysis (FMEA) was developed in the 1960s by the National Aeronautics and Space Administration (NASA) in the context of the Apollo missions. First used in the aeronautics and space industry, the FMEA was later adopted by manufacturing companies, mainly in the automotive industry.1

In Germany, the method was standardized in the industry standard DIN 25 448 under the name "Ausfalleffektanalyse" in 1980. It is also mentioned in the German DIN EN ISO 9004-1 industry standard as an appropriate quality assurance tool.3

The FMEA is a formalized analytical method with the purpose of identifying, analyzing and avoiding potential failures of products and processes.4 The potential failures are identified by interdisciplinary teams, who also analyze them and assess their individual risk. The goal is to formulate appropriate measures to prevent the occurrence of failures or to minimize their severity. Accordingly, the FMEA can be seen as a method for estimating and grading risks. It represents a guide to where attention and modification would be most effective in reducing the failure probability.5 As a result, critical components can be identified for which the selection of an appropriate supplier is essential.

Description of the Failure Mode & Effects Analysis

In general, the FMEA carries out a risk analysis. The risk of a failure is determined by, and increases with, its actual impact and its probability of occurrence. The probability of occurrence itself depends on the probability that the failure appears and the probability that the failure is discovered. The FMEA risk analysis is based on these three risk elements: for each identified potential failure, the severity of its impact as well as its probabilities of appearance and detection are determined. Based on this information, the risk of each potential failure can be assessed, and the potential failures can be rated according to their importance.

1 See Schmidt et al. 1991.
2 See Pfeifer. 2001.
3 See Rinne and Mittag. 1995.
4 See Gilchrist. 2000, p. 16.
5 See Reinhart. 1996.


In a next step, appropriate preventive actions are formulated in order to reduce the individual risk. The risk can be reduced either by preventing the failure's occurrence or by minimizing its impact; failure prevention represents the better alternative.1 In order to prevent failures, the FMEA has to be involved in every step of the product development and production process. At the end of each step, it has to assure that the next step can be started on a nearly failure- and risk-free basis.2 The early identification of failures is important because the costs of failure elimination increase over time (see figure 1).

Figure 1: Relationship between the point of failure identification and the costs of failure elimination (cost categories shown in the figure: guarantee costs, costs of reduced availability, assembly costs / logistics costs, costs of spare part inventory, supply costs, assembly costs, costs for changes in draft). Source: According to Timischl. 1995, p. 38.

As the FMEA is a rather demanding method, it should be focused on critical aspects of the product development and production process. An FMEA is in general useful within the context of safety regulations, safety-relevant parts, new materials, new developments, significant changes, risky processes etc.4 In this context, the FMEA can be used as a leadership tool by setting improvement goals (e.g. by demanding a reduction of the Risk Priority Number (RPN), which will be defined later on) and by controlling their achievement.5

1 See Reinhart. 1996, p. 87.
2 See Reinhart. 1996, p. 88.
3 See Timischl. 1995.
4 See Reinhart. 1996, p. 89.
5 See Kamiske and Brauer. 1999, p. 30.


diagrams and tree charts. At the end, a clear and specific description of the product or process must be articulated. Creating this description ensures that those responsible fully understand the form, fit, and function of the product or process, as well as the logical relationships between the components of the product or the steps and stages of the process. The major components or steps of the product or process then have to be listed in a logical manner in Column 1 of the template (see figure 2).1

Step 2: Risk analysis

For each product component or process step defined in the preparation phase, all potential failure modes have to be identified within the interdisciplinary team. A failure mode is defined as the manner in which a system, product, or process can fail. The failure modes have to be listed in Column 2 of the template (see figure 2). After that, for each of the failure modes a corresponding effect (or effects) must be identified and listed in Column 3 of the FMEA template (see figure 2). A failure effect is what an internal or external customer will experience or perceive once the failure occurs. Examples of effects include: inoperability or performance degradation of the product or process, injury to the user, damage to equipment, etc. Aside from its effect(s), the potential cause(s) of every listed failure mode must be enumerated in Column 5 of the template (see figure 2). A potential cause should be something that actually triggers the failure to occur. Examples of failure causes include: improper equipment set-up, operator error, use of worn-out tools, use of an incorrect software revision, contamination, etc. Finally, all current control systems that contribute to the prevention of the occurrence of each of these failure modes have to be identified and listed in Column 7 of the template (see figure 2).2

Step 3: Risk assessment through Risk Priority Numbers

To each identified effect, a severity rating (SEV) of its impact has to be assigned in Column 4 (see figure 2). Each company may develop its own severity rating system, depending on the nature of its business. A common industry standard is to use a 1-to-10 scale, with '1' corresponding to 'no effect' and '10' corresponding to maximum severity, such as the occurrence of personal injury or death with no warning, or a very costly breakdown of an enormous system.

1 See Reinhart. 1996, p. 90; Pfeifer. 2001, p. 402. 2 See Reinhart. 1996, p. 90.


The SEV numbers are shown in Column 4 of the FMEA template (see figure 2). Also the probability of occurrence for each of the potential failure causes must be quantified. Every failure cause is assigned a number (PF) indicating this probability of occurrence. A common standard is to assign a '1' to a cause that is very unlikely to occur and a '10' to a cause that is frequently encountered. The PF numbers have to be listed in Column 6 of the template (see figure 2). Finally, the effectiveness of each of the listed control methods must be assessed in terms of the probability of detecting the occurrence of the failure mode or its failure cause. As usual, a number must be assigned to indicate the detection probability (DET) of each control. A common standard is to assign a '1' for a high and a '10' for a low detection probability. DET numbers are shown in Column 8 of the template (see figure 2). The Risk Priority Number (RPN) is simply the product of the failure mode severity (SEV), failure cause probability (PF), and detection probability (DET) ratings.1

Figure 3: RPN formula

RPN = SEV x PF x DET

The RPN, which is listed in Column 9 of the FMEA template (see figure 2), is used in prioritizing which items require additional quality planning or action. The RPN can take values between 1 and 1000 and is an indicator of the risk perceived by the customer with the occurrence of each individual failure. The FMEA moderator's role consists in supervising the homogeneity of the risk assessment and therewith the comparability of the assigned risks.2
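To make the mechanics of Steps 3 and 4 concrete, the following is a minimal illustrative sketch (not part of the original paper): it computes the RPN for a few invented worksheet entries and ranks them by risk, flagging items above an example hurdle of 100 or with a severity rating above 7.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One row of a simplified FMEA worksheet (illustrative fields only)."""
    component: str   # Column 1: product component or process step
    mode: str        # Column 2: how the item can fail
    sev: int         # Column 4: severity of the effect (1 = no effect, 10 = worst case)
    pf: int          # Column 6: probability of the failure cause (1 = very unlikely, 10 = frequent)
    det: int         # Column 8: detection probability (1 = high, 10 = low)

    @property
    def rpn(self) -> int:
        # Risk Priority Number as in Figure 3: RPN = SEV x PF x DET
        return self.sev * self.pf * self.det

# Hypothetical worksheet entries, invented for illustration
worksheet = [
    FailureMode("Pump seal", "Leakage", sev=7, pf=4, det=5),
    FailureMode("Controller", "Sensor drift", sev=4, pf=3, det=2),
    FailureMode("Valve", "Sticks closed", sev=8, pf=2, det=6),
]

CRITICAL_RPN = 100  # example hurdle; the text cites 80-120 as the range found in the literature

# Rank failure modes by RPN, highest risk first (Pareto-style prioritization)
for fm in sorted(worksheet, key=lambda f: f.rpn, reverse=True):
    action_needed = fm.rpn > CRITICAL_RPN or fm.sev > 7
    print(f"{fm.component:10s} {fm.mode:13s} RPN={fm.rpn:3d} "
          f"{'-> define preventive action' if action_needed else '-> monitor'}")
```

After preventive actions have been taken (Step 5), the same calculation would simply be repeated with updated SEV, PF and DET values to check whether the new RPN is acceptable.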

Step 4: Interpretation of the Risk Priority Numbers and development of preventive actions

A high RPN needs immediate attention since it indicates that the failure mode can result in an enormous negative effect, its failure cause has a high probability of occurring, and there are insufficient controls to catch it. Thus, action items must be defined to address failure modes that have high RPNs. It is not possible to define a universal RPN hurdle rate above which action has to be taken. The literature recommends a value between 80 and 120 as a critical RPN.

1 See Reinhart. 1996, p. 90; Pfeifer. 2001, p. 402; Heinrich. 1996, p. 8. 2 See Reinhart. 1996.


An individual hurdle rate has to be defined for the particular company, product or process. As the risk assessment is rather subjective, the RPN is not to be interpreted as an exact figure. This subjectivity has to be taken into account when deciding about preventive actions. The composition of the risk also has to be taken into account: values above 7 for the severity of the impact perceived by the customer are not acceptable, and a high probability of failure occurrence could be compensated by a high probability of detection, but this would lead to significant costs, for example for rework. The preventive actions could include inspection, testing, monitoring, redesign, preventive maintenance, redundancy, process evaluation/optimization etc. Column 10 (see figure 2) of the FMEA template is used to list applicable preventive actions.1 In general, all failure modes should be reviewed and analysed according to the RPN, which determines the order and intensity of the failure treatment.2 In practice, the RPN can be used according to the Pareto Principle to identify the 20% of the potential failure sources that cause 80% of the potential failures and which need to be eliminated first.3

Step 5: Determination, execution and control of preventive actions

After appropriate preventive actions have been defined, a responsible action owner and a target date of completion for each of the actions have to be assigned. This makes ownership of the actions clear-cut and facilitates tracking of the actions' progress. The responsible owner and target completion dates must be indicated in Column 11 of the FMEA template (see figure 2). The status or outcome of each action item must also be indicated in Column 12 of the template (see figure 2). After the defined actions have been completed, their overall effect on the failure mode must be reassessed and indicated in Columns 13-16 of the FMEA template (see figure 2). Therefore the SEV, PF, and DET numbers have to be updated accordingly, and the new RPN must be recalculated once the new SEV, PF, and DET numbers have been established. The new RPN helps to decide whether more actions are needed or whether the actions taken have been sufficient.1

1 See Reinhart. 1996. 2 See Pfeifer. 2001, p. 402. 3 See Jones. 2004, p. 24; Kamiske and Brauer. 1999, p. 31.


A failure potential can in general be regarded as eliminated when its RPN is significantly less than 100.

3 Practical applications of the Failure Mode & Effects Analysis

In general, most of the examples concerning the actual usage of FMEA given in journal articles or provided by companies or FMEA consultants do not differ from the "traditional usage" of FMEA as a proactive quality assurance method described above (e.g. at Volvo or Mercedes-Benz). In these cases, FMEA is mainly used in the context of product or process development, as part of a TQM initiative or to fulfill the DIN ISO 9000 standard. Unfortunately, most FMEA information is confidential and therefore largely not available to the public. Only blank FMEA templates are published, which in general do not differ much from the one shown in this paper. The companies also do not give deeper insight into their FMEA processes, but these do not seem to differ from the processes described earlier. Therefore, this part of the paper focuses on FMEA uses that are not directly production-oriented but rather supply risk management oriented.

FMEA for risk management and supplier integration at NASA

When President Kennedy announced the US manned lunar landing exploration program, also known as the Apollo program, the founders of NASA decided that they had to have quantitative numerical risk goals for the Apollo missions. After discussion, these goals were set at a probability of 0.990 for mission completion (i.e. accepting a risk of 1 in 100) and 0.999 for returning the crew safely (a risk of 1 in 1000). NASA officials understood that setting a risk-of-failure goal was not enough, but rather that the identification of potential failures and their risks is essential to a successful design and thus to a successful mission. A tool for risk management of potential technical failures was needed, and the FMEA was developed. Only missions whose assessed risk stayed within the critical limits were launched, so that the FMEA was used not only as a tool for proactive failure prevention but also for active risk management. During the analyses, each constituent part of a system was reviewed to determine its potential

1 See Reinhart. 1996, p. 92; Pfeifer. 2001, p. 403. 2 See Heinrich. 1996, p. 8. 3 See Koster. 1994.


modes of failure and the subsequent effects which that mode of failure would have upon the component itself, as well as upon the component of which it was a part (its impact on the subsystem, system, vehicle, mission, and crew). The goal of this bottom-up analysis was to identify individual components whose failure might put the mission at risk. The analysis also indicated potential improvements of the existing design or possible design changes to eliminate the failure mode, decrease its frequency to an acceptably low level, or moderate its consequences. Single failures which could not be eliminated or mitigated were collected across the design on a Critical Items List (CIL). This list allowed critical items to be given special attention in development, manufacture, installation, and test. Since the FMEAs and their associated CILs were critical determinants of whether the mission was launched or not, the process as a whole is often referred to as the "FMEA/CIL" process. The FMEA/CIL process was a static, qualitative risk management tool oriented toward assessing and reducing risk and preventing the loss of crew, vehicle, or mission. The FMEA/CIL approach proved to be successful in producing reliable spacecraft and launch vehicles (based upon the success of the Apollo program). Today, NASA is trying to establish more quantitative risk management methods in order to replace the FMEA/CIL.1 But it is still used to a large extent to identify technical risks. NASA also integrates its suppliers into the FMEA/CIL process in order to double-check the FMEA findings: the supplier has to conduct an FMEA of the particular component itself. These results are then compared to those of NASA, and a final Critical Items List is formulated together with the supplier. The result of such a double-check for a body flap, carried out in cooperation with McDonnell Douglas, can be seen in figure 4. The IOA figures are the Independent Orbiter Assessment RPNs, which represent the results of the McDonnell Douglas FMEA. The Issue figures represent the deviations from NASA's RPNs, which have to be discussed in order to assign the appropriate risk to each of the body flap components. The final resolution then represents the final risk assessment after discussion. This assures that both sides know about the importance of, and the risks inherent in, the parts, and it leads to a better estimation of the potential failure modes because it avoids that an aspect is forgotten when the analysis is conducted from only one side.

1 See Pelaccio and Fragola. 1998.
2 See McDonnell Douglas Astronautics.

Figure 4: RPN double-check with McDonnell Douglas

BODY FLAP ACTUATOR ASSESSMENT SUMMARY

        ORIGINAL ASSESSMENT*         FINAL RESOLUTION**
        IOA   NASA   ISSUES          IOA   NASA   ISSUES
FMEA     43    36      7              34    34      0
CIL      19    17      3              15    15      0

(The figure further breaks these figures down for the individual components: drive shaft, rotary actuator and power drive unit.)

Source: McDonnell Douglas Astronautics.1

1 Link: ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19900001652_1990001652.pdf.
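The following minimal sketch illustrates the idea behind such a supplier double-check; the component names, RPN values and the tolerance threshold are hypothetical and are not taken from the NASA/McDonnell Douglas assessment.

```python
# Hypothetical RPNs assigned independently by the buyer and by the supplier
# for the same components (values invented for illustration).
buyer_rpn    = {"drive shaft": 240, "rotary actuator": 36, "power drive unit": 18}
supplier_rpn = {"drive shaft": 150, "rotary actuator": 40, "power drive unit": 18}

TOLERANCE = 30  # deviation above which an item is treated as an open "issue"

# Collect items whose assessments deviate too much; these have to be discussed
# until a joint final resolution (e.g. a common Critical Items List) is reached.
issues = {
    item: (buyer_rpn[item], supplier_rpn[item])
    for item in buyer_rpn
    if abs(buyer_rpn[item] - supplier_rpn[item]) > TOLERANCE
}

for item, (own, theirs) in issues.items():
    print(f"Discuss '{item}': buyer RPN {own} vs. supplier RPN {theirs}")
```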

Failure Mode & Effects Analysis and supply management

Besides the mentioned possibility of integrating a supplier into a double-check process in order to validate the company's own risk assessment, a trend becomes obvious: manufacturing companies include the usage of FMEA as a quality criterion and quality indicator in their supplier evaluation. Examples are Scania, Metabo and Volvo. Many companies even formulate so-called "supplier requirements manuals" determining minimum requirements that the supplier's quality control system has to fulfill. Most automotive parts are required to be evaluated by the use of FMEA in the design process, and an FMEA report accompanying the component or part design is common practice in the automotive industry. The FMEA can result in high product reliability, better quality planning, fewer design modifications, continuous improvement in product and process design, and lower manufacturing costs for a manufacturing company. If a company's main purpose in preparing the FMEA report is to fulfil the customers' demand for it, the benefits of performing FMEA will be reduced, and the costs of the FMEA process are not compensated by the benefits, except for the customers' satisfaction of having the report.1

4 Summary

The FMEA is a formalized analytical method with the purpose of identifying, analyzing and avoiding potential failures of products and processes.2 Three stages in the FMEA process are critical for the success of the analysis: the first is the determination of the potential failure modes; the second is finding the data for the occurrence, detection, and severity rankings; and the third is the modification of the current product or process design and the development of the control process based on the FMEA report.3 The FMEA can be applied independently of a company's industry for reviews of systems, products and processes.4 An early identification and elimination of failures is less costly than in later phases; therefore the FMEA has to be implemented in the early product and process design phases.5 Because of the high division of labor and knowledge, the FMEA has to be carried out by interdisciplinary teams. The FMEA allows a company's already existing knowledge to be collected and prevents the company from making the same error twice.6 A high RPN indicates which failure modes are most critical and where preventive actions are needed, but the subjectivity inherent in the RPN has to be taken into account when deciding about preventive actions.7 In practice, the RPN can be used according to the Pareto Principle to identify the 20% of the potential failure sources that cause 80% of the potential failures and which need to be eliminated first.8 All in all, the FMEA leads to a better understanding of the company's own products and processes and can result in higher product reliability, better quality and continuous improvement in product and process design, as well as lower manufacturing costs. But the focus should not lie on preparing the FMEA report merely to fulfil the customers' demand for it, because then the benefits of performing FMEA will be reduced or eliminated.1

1 See Teng and Ho. 1996, p. 3. 2 See Gilchrist. 2000, p. 16. 3 See Teng and Ho. 1996, p. 14. 4 See Bruhn and Masing. 1994, p. 489. 5 See Reinhart. 1996, p. 88. 6 See Pfeifer. 2001, p.403. 7 See Reinhart. 1996. 8 See Jones 2004, p. 24; Kamiske and Brauer. 1999, p. 31.


The FMEA has become an important part of proactive quality control and is highly standardized. But FMEA is more than just a template: it can be used in rather creative ways, e.g. in order to integrate a supplier into the quality control process and to improve the outcome together with that company, or as a quality indicator within a company's supplier evaluation process, as described in Chapter 3. Increasing governmental regulations, together with growing customer expectations and increasing competition, demand preventive quality controls like the FMEA method in order to assure competitive, safe and reliable products.

1 See Teng and Ho. 1996, p. 3.


LIFE CYCLE COSTING (LCC)

TABLE OF CONTENTS

1 INTRODUCTION ....................................................................................... 17
2 BASIC LIFE CYCLE CONCEPTS ............................................................. 17
3 USING LIFE CYCLE COSTING ................................................................ 19
4 BENEFITS OF LCC .................................................................................... 21
5 LIMITATIONS OF LCC ............................................................................. 23
6 CONCLUSION ............................................................................................ 25


1 Introduction

Globalization, falling trade barriers and technological improvements have caused organizations to reform their cost structures in order to be more competitive. The concepts of supply management and cost management are getting more and more attention every day, as they affect profitability directly. Superior outcomes in purchasing can result from a better understanding of the factors driving or inhibiting costs in an organization. Many purchasing decisions may have a significant impact on the profitability of the company, although such impact may not be readily recognized. It has been shown that the delivered price of a piece of productive equipment seldom exceeds 50 percent of the eventual total cost of ownership of the asset and is generally substantially less.1 In this environment, life cycle costing may help companies to combine cost and technology considerations with a long-term strategic cost perspective. This paper explores the concept of life cycle costing in the supply management context. Furthermore, it takes a strategic perspective from a user's point of view to investigate the usefulness and limitations of life cycle costing.

First, in order to limit the framework of analysis and prevent any misinterpretation, a few key concepts of LCC will be explained. After the descriptive framework is built, approaches for analyzing life cycle costs as well as the measurement of certain elements in LCC will be discussed. Next, the relevance and benefits of life cycle costing in the acquisition decision will be investigated. Then, shortcomings of life cycle costing in the context of supply management will be pointed out to prevent any misguided deductions. Finally, a summary of key points will be given.

2 Basic life cycle concepts

Life cycle and life cycle costs

The life cycle of an asset is defined as the time interval between the recognition of a need or an opportunity, through the creation of the asset, to its final disposal. In this article, the term life cycle refers to the physical life cycle and not to the economic life cycle that consists of the market introduction, growth, maturity and decline phases.1

1 Dobler and Burt. 1996.


Life cycle cost is the total cost of an item over the period of its use. It includes all dollar costs associated with acquisition, use, maintenance and disposal of a given product.

Life cycle costing

Life cycle costing is the process of identifying and documenting all the costs involved over the life of an asset. Its objective is to determine the total cost of performing a given function during the useful life of the equipment performing that function. Life cycle costing is one of several accounting methods that can be used to provide a more comprehensive view of costs. Figure 1 shows how LCC relates to total cost and full cost accounting; the difference is the number and type of costs incorporated in the analysis.

Figure 1: Alternative cost accounting methods

Source: Cole and Sterner. 2000.

Major elements of life cycle costs

To estimate the total life cycle costs of an asset, it is necessary to identify the key cost elements:

1 See ANAO Guide. 2001.


- Capital / Acquisition Cost: All initial costs associated with the purchase of the product. It includes the purchase price, sales tax, transportation and installation costs, as well as any research, development, testing and evaluation costs.

- Operating Cost: Energy cost, labor, insurance, overhead charges.

- Repair and Maintenance Cost: Routine maintenance while the product is operating properly and repair when it fails. It does not include the costs of work done under warranty, since these are included in the purchase price. The maintenance costs should include only preventive maintenance, scheduled maintenance, and breakdown maintenance.

- Disposal Cost: Costs that are entailed in getting rid of the product at the end of its useful life. It must be assessed directly to the specific product being discarded; therefore, public costs of waste collection and treatment are not included. If the product is sold and the proceeds from the sale exceed the other costs of disposal, the product will have a disposal value that reduces the life cycle cost.

- Contractual Cost: Lund uses this term to account for situations where either a rental arrangement or a service contract may be involved.1

LCC does not make any allowance for depreciation charges because the full capital cost is already included in the total cost. Depreciation cost is part of an organization's profit and loss statement; if depreciation were also included, this would double-count the capital costs.

3 Using life cycle costing

There are two approaches to analyzing life cycle costs: 1) the total dollar outlay approach and 2) the present value approach.

The total dollar outlay approach uses total dollar outlays regardless of when they occur. However, it does not result in an accurate picture of reality, since it ignores the time value of money as well as inflation.

1 See Lund. 1978.
2 See Simon. 1975.


When an organization has a choice of incurring a cost now or in the future, it will consider the benefits and costs of alternative uses of funds. The value of future spending is less, since it has the potential to be funded by effective use of existing funds. In order to quantify the time impact on future cash flows, the present value approach converts the cash flows to an equivalent present value. The procurement agency should choose the asset with the lowest aggregate present value of acquisition cost plus life cycle costs. Simon suggests the following four-step procedure for the present value approach:1

1. Estimate annual dollar outlays for each year of the life of the item.
2. Select an appropriate discount rate to represent the consumer's time value of money.
3. Restate all dollar amounts in present dollar terms.
4. Sum all present dollar amounts to obtain the present value cost of the item.

The formula to convert future receipts and costs to present dollar terms is:

Present Value = FV / (1 + r)^n

where FV is the amount to be spent or received at a point in the future, n is the number of intervals between the present and the future transaction (e.g. years), and r is the discount rate applicable to the chosen interval.
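As an illustration of Simon's four-step procedure, here is a minimal sketch; the acquisition cost, annual outlays, lifetime and discount rate are invented for illustration.

```python
def present_value(amount: float, rate: float, periods: int) -> float:
    """Discount a future amount to present dollars: PV = FV / (1 + r)^n."""
    return amount / (1 + rate) ** periods

# Step 1: estimate the annual dollar outlays over the item's life (hypothetical figures).
acquisition_cost = 100_000                                 # year 0: purchase, transport, installation
annual_outlays = {year: 12_000 for year in range(1, 9)}    # operating + maintenance, years 1-8
annual_outlays[8] += 5_000                                 # disposal cost at the end of the useful life

# Step 2: select a discount rate reflecting the opportunity cost of capital.
discount_rate = 0.08

# Steps 3 and 4: restate every outlay in present dollars and sum them.
life_cycle_cost = acquisition_cost + sum(
    present_value(outlay, discount_rate, year) for year, outlay in annual_outlays.items()
)
print(f"Present value of the life cycle cost: {life_cycle_cost:,.0f}")
```

Comparing this figure for several candidate assets would then support the choice of the alternative with the lowest aggregate present value.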

The present value approach is particularly relevant when the cost drivers are subject to widely different cost escalation over time and when the acquisition decision is influenced by the expected cash requirements.

Several considerations, such as the lifetime estimation of the asset and the discount factor, shape the LCC calculations. Operating and service costs are distributed over the useful life of a product; therefore, the expected lifetime is an important element of life cycle cost calculations. The choice of discount factor may also determine the outcome; the discount factor should reflect the opportunity cost of capital for the company. In LCC, costs that are likely to be of significance must be identified and measured, since the need for quality data is important.

1 See Simon. 1975.


Simon indicates that there are three main sources of data for life cycle cost estimation: historical records, manufacturers' specifications and, thirdly, testing. Historical cost data are most valuable when the new item is similar in its specifications to previous models. Simon also claims that this is most often the case, since most design changes are evolutionary rather than revolutionary. Systems should be improved to improve historical records. Although the historical approach is a beneficial source of data, it may also inhibit the exploration and adoption of new materials and technologies, since the associated testing costs may be considerably high. Manufacturers' performance specifications can be a good source of data if handled with caution. Testing may be the most costly way of acquiring the needed data. Estimates are also used quite frequently in practice; the accuracy of estimated data used in calculating life cycle costs depends on three factors: 1) the experience and sophistication of the manufacturer in making its performance estimates, 2) the buying organization's experience with similar types of equipment, and 3) the sophistication of the buyer's economic forecasting group.1

When using LCC, the procurement agency should be aware of the limitations and constraints that might restrict the range of acceptable options, such as performance or availability requirements and maximum capital cost limitations. Furthermore, in LCC calculations, costs that will not vary among the alternatives can be eliminated from consideration.

4 Benefits of LCC

LCC is a management approach that focuses on the product itself rather than on divisions of a company such as purchasing, manufacturing or logistics. This systems thinking makes it possible to find the overall optimum with regard to cost. Specifically, the information provided by LCC assists in:

- Investment Appraisal: LCC provides an improved basis for evaluating alternative brands and models.

1 Dobler and Burt. 1996.


- Budgeting: Life cycle costing tries to project the costs that will arise down the line. Early decisions concerning the product concept, materials and tools effectively define what machinery will be used and give the opportunity to estimate the costs of future investments. Future costs associated with the use and ownership of an asset may be greater than the initial acquisition cost and may vary considerably among alternatives. These forecasts guide the budgeting decisions.

- Value-Cost choices: LCC provides an efficient mechanism by which consumers can trade off higher initial prices against long-run energy and service efficiencies. The LCC presents an accurate information framework for cost-benefit judgments. Bainbridge indicates that "at the United States International University campus, the utility costs are almost 3/4 of a million dollars a year because life cycle costs were not considered when the campus was built in the 1970s (when utilities were promising energy so cheap it wouldn't have to be metered). Had better choices been made then, the utility cost could have been reduced by 75 percent or more. As a result, student costs would be lower and money could be more productively spent on scholarships, staff salaries, books, computers and education."1

1 See Bainbridge. 1997.

- Source selection: Different choices may need different sources to operate. For example, one asset may operate with solar power while another needs electricity to function. Through LCC, the company has a framework to anticipate the sources it will need in the future if it chooses a particular asset.

- Cash flow management: Corporate cash flows must be considered when comparing options. There will always be competing demands for the available cash resources. Cash flow is better managed if the long-term needs are known. The life cycle analysis provides a basis for projecting cash requirements and securing funding.

- Product improvement: LCC shifts consumers' attention away from the initial purchase price as the exclusive economic consideration. Life cycle cost criteria place much greater emphasis on product performance, which is measured by reliability, efficiency and life. This may cause a shift in the products or models purchased and, consequently, product manufacturers and merchandisers may have to modify their product design and marketing strategies to adjust to the new criterion for product choice. Manufacturers of high-quality, energy-efficient products may have a competitive edge in the market. Also, Simon states that "if purchase decisions are based on acquisition costs alone, there can be a strong incentive for manufacturers to reduce cost at the expense of reliability and maintainability. Use of LCC, rather than acquisition cost alone, will provide manufacturers with a positive incentive to improve the reliability and maintainability of their products."1

- Replacement or disposal decision: A comprehensive database of life cycle costs lets companies track costs and make decisions on whether to continue as is, whether to make a revision, or whether to retire and dispose of the asset. Components and features that generate high costs in the current system can be identified, and improvements to reduce costs can be made.

5 Limitations of LCC

The notion of LCC is a valuable approach for comparing alternatives; however, a spectrum of practical difficulties limits its widespread usage.

A major concern with the life cycle cost concept is the need to estimate the life cycle costs. LCC is built on a range of key parameters (cost drivers and the discount rate) and involves various assumptions about current as well as future cash flows. Each element has limited accuracy and a different impact on the overall outcome. There is often uncertainty over key long-term parameters such as the level of use and the impact of new technology, and even future functional requirements may be uncertain. The longer the period involved, the more difficult it is to estimate future costs; a detailed analysis is often inappropriate or impossible. Life cycle costing by definition makes predictions about the future and is, therefore, only as good as the assumptions upon which these predictions are based. Hence, it has been criticized as unreliable.

1 See Simon. 1975.


For example, one of the reasons for the relatively slow introduction of life cycle costing methods in the building industry has been a feeling that life cycle cost estimates are in some sense inaccurate or based merely on guesswork.1

In order to deal with uncertainty, several risk management techniques can be incorporated into the LCC. Two particular techniques, sensitivity analysis and probability analysis, can be used to enhance a life cycle costing system. They are regarded as simulations as opposed to definitive analyses. Sensitivity analysis identifies the impact of a change in a single value under a ceteris paribus assumption, holding all other parameters constant. In general, it does not require that a probability distribution be associated with each risk element. In probability analysis, all variable factors have probability distributions and they vary simultaneously. A computer model may allow the creation of scenarios based on different assumptions; IT systems easily permit the agency to ask "what if" questions and receive alternative responses. With such procedures, the uncertainty in comparing alternatives can be minimized with minimal additional effort for data collection.
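A minimal sketch of the sensitivity-analysis idea, using the same present value logic as above with invented figures: only the discount rate is varied while all other parameters are held constant, and the effect on the life cycle cost is observed.

```python
def life_cycle_cost(acquisition: float, annual_outlay: float, years: int, rate: float) -> float:
    """Life cycle cost in present dollars for a constant annual outlay."""
    return acquisition + sum(annual_outlay / (1 + rate) ** y for y in range(1, years + 1))

base = dict(acquisition=100_000, annual_outlay=12_000, years=8)

# Ceteris paribus: only the discount rate is varied, all other parameters stay fixed.
for rate in (0.04, 0.06, 0.08, 0.10, 0.12):
    print(f"discount rate {rate:.0%}: LCC = {life_cycle_cost(rate=rate, **base):,.0f}")
```

A probability analysis would instead draw all parameters from assumed distributions and vary them simultaneously, for example in a Monte Carlo simulation.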

Costs are not the only factor in the purchase decision. Other considerations include parts inventory and personnel training, which may arise from the composition of an existing item. Additionally, future costs may be less visible, since different departments are usually responsible for each stage of the process. The overall costs involved in owning, operating and maintaining an asset are not always obvious within a company. The purchasing department usually makes the decision to acquire or lease an item, the users use the asset to support the operations, and the operations department is responsible for the operating and maintenance costs. Those buying the equipment may have a fixed budget; they will probably, therefore, be less concerned with the long-term operating cost of the equipment than with completing the purchase without going over budget. Consequently, there is little incentive for the purchasing personnel to pay higher prices for more efficient equipment. This separation of roles and responsibilities limits the maximum gains from acquiring a certain asset. To prevent sub-optimal decisions from being made in functional units and to promote overall profitability, purchasing and operations

1 See Norman and Robinson. 1987.

MATERIAL GROUP CLASSIFICATION

1 Introduction

E-procurement has revolutionized the way companies plan and operate their purchasing processes and demands new solutions to shape the interaction between buyers and suppliers.1 Traditional paper-based activities, such as filling orders and asking for price quotes, have become obsolete.2 Today, companies can simply post their supply needs on the Internet and get quotes from suppliers located all over the world. Orders can be filled instantly and in real time. Large cost-saving potentials are the result of well-implemented electronic supply processes. At the same time, the facilitation of information exchange through e-procurement has brought along a set of difficulties.3 Considering that the relationship and communication pathways between suppliers and buyers are being profoundly challenged, what are the prerequisites for reaping the full potential of internet-based procurement? In particular, it is obvious that there is an increased need to standardize and coordinate activities, processes and information. Only if e-procurement is able to deal with and reduce complexity, instead of aggravating it, can it be efficient.

One way to reduce this complexity and facilitate e-procurement processes is through material group classification systems, such as e-cl@ss or the United Nations Standard Products and Services Code (UNSPSC). The aim of this article is to define the concept of material group classification in the context of e-procurement and to describe its functionality. In particular, the two standards named above will be taken into consideration because of their wide practical relevance. Also, this article gives application examples of successful material group classification implementations.

2 Material group classification: definition and relevant context

Material group classification cannot be understood as a stand-alone instrument, but needs to be inserted into a wider e-procurement framework.

1 See Rüdrich et al. 2004, p. 46. 2 See Zsidisin. 2002, p. 58. 3 See Institut der Deutschen Wirtschaft Köln Consult GmbH. 2000, p. 5.


Definition of material group classification

As stated in the introduction, the need for standardization has surged with e-procurement as well as with a general increase in complexity (due, e.g., to a rise in the variety of products offered and materials needed). A powerful lever to reduce complexity is material group classification. It is widely agreed that this tool is a central prerequisite for successful e-procurement.1 To fully understand the scope of material group classification, it is important to know that e-procurement works with electronic product or material catalogues posted on the web. This means that at some point, materials need to be uniformly identified and coded.

Generally speaking, material identification systems can be divided into internal systems (used for identifying and describing materials within the various departments of an organization) and external systems that facilitate sales transactions between members of the supply chain.3

Material group classification is about defining uniform product classes to facilitate communication between the members of the supply chain (internal purchasing organizations, manufacturers, distributors and suppliers). Material groups are simply collections of materials or items with common characteristics. They are used to make collective statements about goods and services in procurement.4 Once material groups have been defined, a key is assigned to every single good within the group. These keys are then made available on a central platform to which every interested partner has access. The defined keys allow buyers to more easily identify the various vendors who may be able to provide these items, and also to express their concrete purchasing needs. The usefulness of such a categorization grows with the number of items purchased. A material group classification system has the following characteristics:5

- The classification allows for hierarchical sub-categorization

1 See Puschmann and Alt. 2001, p. 6.
2 See http://www.onlinemarketer.de/know-how/hintergrund/beschaffung.htm.
3 See Dobler. 1996, p. 550.
4 See Anonymous, b. 2001, p. 7.
5 See Granada Research. 2001, p. 3.


- Identification keys are assigned in an unambiguous and clear way, so that they are understood by anyone using the system
- It is constantly maintained and responsive to changes in the industry, to adapt to changing products, structures and processes
- The database is searchable and the search is keyword-based
- It contains descriptions of each product class for easy orientation

Advantages of material group classification

There are numerous advantages of material group classification in the e-procurement context. First of all, it provides uniform and standardized codes for each and every product or service. Thereby, it contributes to setting up a common language between the purchasing organization of a company1, its engineers and its suppliers, speeding up the procurement process. It enables spend visibility through the assignment of costs to clear categories and the examination of historical purchasing patterns. Furthermore, companies can reduce their transaction costs; in particular, search and information costs are reduced through increased transparency about suppliers and the prices that they offer.

E-cl@ss and UNSPSC: Two standards for material group classification

Since material group classification has become a topical issue, two systems have established themselves on the market and are considered standard applications. The first one is the e-cl@ss system, which was first set up in 1997 by the VCI working group "Materialwirtschaft Technische Güter" (Material Management of Technical Goods).4 The first version, presented in 1999, was based on a classification system that a diversified and multinational group (the VEBA Group) had developed for internal use. The current version was developed in a joint effort by big companies such as Audi, BASF, Bayer, chemfidence, Degussa, E.ON, Henkel, SAP Markets Europe, Siemens, Solvay Alkali, Volkswagen and Wacker Chemie.5 Since then, the aim has been to take more and more industries and countries into account in order to broaden the scope of application.

1 See Institut der Deutschen Wirtschaft Köln Consult GmbH. 2000, p. 7.
2 See http://www.aasis.state.ar.us/Training/Tutorials/material_groups_tutorial.htm.
3 See http://www.eclass.de/.
4 See Anonymous, b. 2001, p. 5.
5 See Anonymous, c. 2002, p. 8.


To date, the application is rather limited to European countries. The product keys established by e-cl@ss can be used for engineering (through integration in CAE systems), warehouse management, plant maintenance, procurement, sales and on virtual marketplaces.

The second standard is the UNSPSC (United Nations Standard Products and Services Code). It was developed in conjunction with Dun & Bradstreet, the leader in establishing standards for products and services. The reach of the UNSPSC is more global, and it is embraced by more companies than e-cl@ss. Both systems can be considered direct competitors in the material group classification market.1 E-cl@ss is free of charge, while UNSPSC membership costs $250 a year.

3 Description of the functionality

After the concept of material group classification has been introduced, this section now elaborates on the functionality of this tool. Although the use of UNSPSC is more widespread, this paper mostly focuses on e-cl@ss, because it contains a larger number of functions.

Figure 1: UNSPSC classification hierarchy (possible example)

Office equipment and supplies
   10 Office machines and their supplies and accessories
   11 Office supplies
      15 Mailing supplies
      16 Writing instruments
      17 Ink and lead refills
         01 India ink
         02 Pen refills

1 See Hantusch. 2001, p. 76-77. Other (mainly German) standards include Proficl@ss or Etim; see Sauer. 2003, p. 40. 2 See Einsporn. 2001, p. 18.


Functional characteristics

The underlying logic of material group classification is to assign unambiguous keys to product and service groups. Both e-cl@ss and UNSPSC are organized along product codes over four levels of hierarchy. An example of this hierarchy is shown in figure 1: it shows the classification of the commodity "pen refills", from the broadest level, the "segment", down to the commodity itself.

The allocation of a certain item to a certain group is based on one of the three following rules:1

1) The product or service supports a common function, purpose or task
2) The product or service is made available by a similar process (i.e. manufactured by the same company)
3) If none of the first two rules applies, the product is classified according to the materials it is made of.

The hierarchy in the e-cl@ss system is similar, being composed of the four levels segment, group, subgroup and commodity.2 It contains items categorized into 21 segments, each segment consisting of up to 4 groups with up to 2 subgroups. This means that the database contains not only products, but also individual materials.3 In addition to assigning keys to every product and service, the e-cl@ss system allows searching the product and services database according to more than 12,000 keywords. Also, a description along certain criteria is provided for each product (at the lowest hierarchical level). This enables an easy search for adequate items and facilitates communication between business partners. Figure 2 shows the design of the search engine on www.eclass-online.com.

1 See Granada Research. 2001, p. 13. 2 See www.eclass.de. 3 See Anonymous, b. 2001, p. 8.


Figure 2: Design of the search engine at e-cl@ss (screenshot of the keyword search at www.eclass-online.com; eCl@ss Release 5.1, Standardised Material and Service Classification, offering a keyword search and a description for each eCl@ss key)

The criteria for each product are listed according to strict rules, abiding by international norms relative to classification schemes.1 In addition, e-cl@ss supports a data exchange format that allows all members to communicate securely. The structure and functionality of the e-cl@ss system is shown in figure 3.

Figure 3: E-cl@ss classification hierarchy (possible example)
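Since the figure itself is not reproduced here, the following minimal sketch shows how such a four-level key (segment, group, subgroup, commodity) could be represented and resolved in software; the codes and labels are invented and do not correspond to actual e-cl@ss or UNSPSC entries.

```python
# Hypothetical four-level classification (two digits per level:
# segment -> group -> subgroup -> commodity); codes and labels are invented.
CLASSIFICATION = {
    "24": ("Office equipment and supplies", {
        "11": ("Office supplies", {
            "17": ("Ink and lead refills", {
                "02": ("Pen refills", {}),
            }),
        }),
    }),
}

def resolve(key: str) -> list[str]:
    """Split an eight-digit key into two-digit levels and return the labels."""
    labels, level = [], CLASSIFICATION
    for i in range(0, len(key), 2):
        name, level = level[key[i:i + 2]]
        labels.append(name)
    return labels

print(resolve("24111702"))
# ['Office equipment and supplies', 'Office supplies', 'Ink and lead refills', 'Pen refills']
```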

Applications of material group classification

The broad functionality of e-cl@ss facilitates a number of applications. Material group classification can be used for a multitude of purposes along the supply chain, as shown in figure 4.

1 See Anonymous, b. 2001, p. 9. In particular, these norms are ISO 13584, DIN EN 61360, and IEC 65B/349/CD.


The applications range from the development and production of items to use on online marketplaces and in warehouse management. This means that material group classification can be used both internally and externally: for information and management purposes within the firm, and to satisfy transaction and communication needs outside the firm with relevant market partners.

Internally, such a system is extremely valuable in the sense that it creates awareness about what is needed (tapping the full bundling potential) and from whom it is purchased. This could mean saving up to 20% of total spending through strategic supplier relationships or purchasing agreements.2 In groups (conglomerates), material groups can be used to identify bundling potentials across group companies and to compare prices and conditions.3 Classifying products can support the control of, and compliance with, spending limits and authorized commodities imposed on certain departments.4 The purchasing organization can keep track of the materials ordered and is able to control the costs over time.5 Additionally, statistical analyses are simplified, especially in larger companies with different reporting levels. A differentiated level of aggregation in the statistics is made possible through the different levels of material groups. In that sense, calculations of the purchasing volumes on different material group levels can be carried out.6
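As a minimal sketch of this idea (hypothetical keys and amounts, not a real e-cl@ss data set), purchasing volumes can be aggregated on different material group levels once every order carries a classification key:

```python
from collections import defaultdict

# Hypothetical purchase orders: (classification key, spend); keys and amounts are invented.
orders = [
    ("24111702", 1_200.0),   # e.g. pen refills
    ("24111601", 3_400.0),   # e.g. another office supplies commodity
    ("24151001",   800.0),   # e.g. mailing supplies
    ("31201505", 9_900.0),   # e.g. a technical material from another segment
]

def spend_by_level(orders, level: int) -> dict:
    """Aggregate spend on the first `level` two-digit blocks of the classification key."""
    totals = defaultdict(float)
    for key, amount in orders:
        totals[key[:2 * level]] += amount
    return dict(totals)

print(spend_by_level(orders, level=1))   # volume per segment
print(spend_by_level(orders, level=2))   # volume per group within each segment
```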

Externally, material group classification supports efficient supply management. Indeed, using pre-defined and standard keys as a basis for communication reduces costs and increases supplier transparency. This means actively managing spend visibility, since direct comparisons with other suppliers are made possible.1

1 Similar to www.eclass.de.
2 See Granada Research. 2001, p. 6.
3 See Anonymous, b. 2001, p. 11.
4 See Granada Research. 2001, p. 5.
5 See www.eclass.de.
6 See Anonymous, b. 2001, p. 11.


Uniform classification also enables easy contract management, since the specifications of the products to be delivered are standardized and transparent.2 For example, e-cl@ss can be used to build up electronic catalogues, online shops or specialized portals. Also, material group classification can help build better relationships with suppliers through the allocation of a contact person for each material group. This brings more transparency and trust into the organization. Figure 5 summarizes the applications and advantages in the purchasing process, from both a supplier and a purchaser point of view.

Figure 5: Using material group classification: Advantages from a supplier and purchaser point of view

Supplier perspective:
- The classification number ensures that a supplier's offer is clearly understood by potential buyers
- New clients can be found over the Internet, through the use of platforms
- Suppliers can take up the classification system for free without having to set one up themselves
- Material group classification provides a platform to negotiate contracts and establish long-term relationships

Client perspective:
- The standardized number replaces a verbal description and establishes clarity in the purchasing process
- The system enables an easy overview of relevant suppliers and their offer
- The offers from various suppliers are made comparable
- New suppliers can be found through the use of classification numbers
- Material group classification provides a basis for e-commerce

In addition to the purchasing process, material group classification can help improve processes in marketing and distribution. The advantage of a standard code is that it can be "understood" and "read" by other applications, such as web search engines or ERP systems. Assigning a key to an item thus makes it recognizable in a variety of ways. Companies can benefit from this to propagate commercial information; for instance, products can be registered at web search engines such as google.com. Also, tagging products increases customer satisfaction: the re-ordering process is simplified, and errors and returns are minimized through uniform codes. Additionally, communication with

1 See Institut der Deutschen Wirtschaft Köln Consult GmbH. 2000, p. 12. 2 See www.eclass.de.


4 Application examples

In the following, two short examples of how material group classification can be

used as an integral part of purchasing processes are shown.1

Use of e-cl@ss at the Volkswagen Group

The Volkswagen Group (including, among others, Audi AG and Volkswagen AG) started using e-cl@ss in 2000 to support its B2B marketplace activities. The introduction was part of a group-wide effort to restructure purchasing processes. The group now has a uniform classification system at its disposal, implemented in Germany, the Czech Republic, Spain, Mexico and Brazil, and plans to roll e-cl@ss out to even more countries. Ever since the introduction, Volkswagen has benefited from the cost reductions made possible by the standardization.

Use of e-cl@ss at BASF

BASF is a member of the e-cl@ss founding committee and has been using the system since 2000. It has been implemented on a company-wide basis for the classification of all materials, has been connected to the company's current SAP software and is used for the procurement of technical goods and services through (online) catalogues.

5 Conclusion and outlook

Material group classification has become a very important tool in the age of e-procurement. A uniform classification system is crucial in order to enable efficient communication between all members along the supply chain, to realize cost savings and to gain transparency about suppliers and purchasing spend. By assigning keys to materials, products and services, the flow of products and information along the supply chain is facilitated. Overall, material group classification systems increase the efficiency of supply chains by reducing unnecessary search and information processes and enable companies to take full advantage of e-procurement. The uses of such a system range from internal purposes, such as budget compliance, to external ones, such as the search for new suppliers. Two competing systems

1 See http://www.eclass.de. 2 See Institut der Deutschen Wirtschaft Köln Consult GmbH. 2000, p. 12.


have established themselves on the material group classification market: e-cl@ss and UNSPSC. Although e-cl@ss has to date not established itself globally, it offers more functions (especially a keyword search function) than the UNSPSC.

The prospects for material group classification are bright, since e-procurement is inevitably going to continue gaining ground in corporate purchasing processes. Without the transparency offered by classification systems, online marketplaces are bound to fail. However, the increased trend towards e-procurement also involves problems. To date, most online marketplaces are inefficient due to a lack of integration.1 A major challenge is therefore going to be the integration of such material group standards into existing company software solutions.2

Some experts attribute a brighter development potential to the e-cl@ss standard, since it was created by large companies that know about purchasing and cataloguing needs. The system enables all items in the classification to be characterized and offers a search function, neither of which is available in the UNSPSC system. But for material group classification to serve its purpose as a transaction and information facilitator for purchasing processes, the additional transparency created by e-cl@ss is a necessary prerequisite.3

1 See Anonymous, a. 2000, p. 48. 2 See Renner. 2003, p. 17. 3 See Anonymous, c. 2002, p. 8.


QUALITY FUNCTION DEPLOYMENT (QFD)

TABLE OF CONTENT

1 INTRODUCTION........................................................................................ 39
2 IMPLEMENTING QFD............................................................................... 41
3 QFD IN REAL LIFE..................................................................................... 45
4 CONCLUSION............................................................................................. 46


1 Introduction

Innovation, we are taught, is a complex and difficult phenomenon. Complex because, of every 100 ideas generated, only a minute fraction will ever see the light of day. This is true in all spheres, whether it be product, market or process innovation. However, once in a while an idea emerges which, by its sheer utility, holds tremendous potential for widespread application. Generally these ideas are so simple and elegant in form that one wonders why they did not exist in the first place. Quality Function Deployment (QFD) is one of those innovations in the process realm.

In the late 1960s, Professors Shigeru Mizuno and Yoji Akao observed that almost all the techniques for quality management focused on fixing problems after the product was manufactured. Prevalent techniques at the time were mainly statistical methods, which involved sampling process data and output at various intervals and performing statistical operations on the data obtained in order to arrive at an accept/reject decision for the batch. The two main types of statistical quality control methods are statistical process control (SPC) and acceptance sampling. An obvious limitation of these tools is that they are applied post facto, and hence the cost of a defect is tremendous. In case of a defective sample, for instance, the entire batch has to be rejected, resulting in monetary loss in terms of the cost of production and in opportunity loss in terms of unfilled orders and missed schedules. Further, for mission-critical components the sample size has to be 100 %, thus restricting the use of destructive sampling, and even if individual components pass the acceptance test there is no guarantee that the final product will pass overall acceptance. In this context, Professors Mizuno and Akao set about developing a quality control method that would incorporate the customer's view and requirements into the product before manufacture, thereby ensuring customer satisfaction all through the production cycle. What emerged is

known as Quality Function Deployment (QFD). We are now in a position to present the formal definition of QFD: “Quality

Function Deployment (QFD) is the systematic translation of the voice of the customer to actions of the supplier required to meet the customers’ desires, based on a matrix comparing what the customer wants to how the supplier plans to


provide it. This basic matrix can be expanded to provide additional insight to the

supplier, and cascaded to identify process parameters that must be controlled to

meet the customer requirements.”1

In simpler terms, QFD involves formalizing the requirements of the customer into a comprehensive entity (called the matrix), thus ensuring that both the spoken and the unspoken needs of the customer are clearly communicated to both the supplier and the customer (for often the customer himself is not aware of his wants), and ensuring that the entire process flow is tightly dictated by the matrix. The matrix itself varies from implementation to implementation and can range from a simple prioritized list of wants to elaborate multi-dimensional structures incorporating customer wants (the what), supplier processes (the how), time schedules (the when) etc. The degree of complexity will be determined by the complexity and criticality of the product and of the process. Ultimately, the aim of QFD is to provide what the customer really wants, not what the engineering or process team assumes the customer requires! This is the fundamental (and strikingly simple) idea behind QFD. Another ‘extra’ included in QFD is the strategy to stay ahead of the competition (competitive benchmarking, i.e. the comparison of existing designs and the identification of business opportunities in the bargain) and in the process add value to the organization. Some other definitions relevant to QFD are laid out in the Kano Diagram, which attempts to explain what customers like and do not like:

Figure 1: Kano Diagram2


“Expected Quality: The things that customers expect anyone in your business to

deliver; Exciting Quality: The things that customers don’t even think to expect from you AND would be thrilled if they got it from you”1

1 See Coppola. 1997. 2 See http://www.proactdev.com/pages/ODPDProc.htm


A brief chronology of quality control is provided here:

1700 - 1800: European guilds
1800 - 1900: Industrial revolution
1907 - 1908: AT&T begins product inspection
1924: Control charts (Shewhart)
1928: Acceptance sampling refined (Dodge, Romig)
1942: Sequential sampling (Wald)
1948: Design of experiments (DOE) in industry (Taguchi)
1954: CUSUM chart (E. S. Page)
1959: EWMA chart (S. Roberts)
1960s: Zero-defects programs (North America) and QFD (Japan)
1970s: TQM (North America)
1980: Taguchi methods introduced to North America
1989: Six-sigma approach (Motorola)

We note that quality control has been clustered in terms of a particular type or method gaining prominence for a long period of time, until giving way to another ‘superior’ method. What is important to note, however, is that all of these models are in use in industry and no single one can lay claim to being the ‘best’. For instance, both the six-sigma approach (which is a sort of super-set of methods) and control charts are in widespread use today.3 Research in this field is in a continuous state of evolution.

2 Implementing QFD

We are now in a position to delve into the implementation and functionality of

QFD. In its most basic form, QFD involves the following steps.

a. Identification of customers: Although this may seem trivial at the outset, it is not always the case. Many times, a company will have more than one customer and/or will have difficulty in identifying the final customer. For example, a manufacturer of automobiles does not just have to consider the final driver of the car, but also government agencies such as the road

1 From Kano Analysis, as described by http://www.pacepilot.com/kanol.shtml 2 From www.stats.uwo.ca/faculty/braun/ss316b/notes/intro/intro.pdf. 3 See their six-sigma roadmap at http://www.ge.com/sixsigma/SixSigma.pdf.


safety organizations, environmental groups and the car dealer as interested parties and hence, in some form or other, as customers. By missing out on the expectations of any one of these parties, the manufacturer runs the risk of producing a defective product. To further complicate matters, the needs of different customers could be conflicting, and if all the interested parties are not identified the company runs the risk of dissatisfying a customer later. A definitive plan of how to gather customer requirements is therefore needed.

b. Identify expected quality: The requirements of the customer are identified in this phase. This is done through extensive questionnaires and interviews with the customer. Various techniques have been discussed in the literature, especially with respect to what questions to ask, what to avoid and how to design questionnaires. Other techniques used are visual methods and, in more advanced projects, prototyping.1

c. Classifying into the QFD matrix: Also described in the literature as prioritizing and quantifying, this task involves the classification of customer needs into a predetermined matrix. Defining the matrix is a critical aspect, since the quality of the matrix will determine the versatility of the design process. We discuss a couple of popular matrices here.

a. Priority list: In its simplest form, the matrix can be a simple list of requirements arranged in a set priority. The manager must understand and realize that not all demands can be fully met; some have to be sacrificed in favor of others, if only because economic sense dictates it. A possible hierarchy could be immediate, important, desirable and optional. This simplistic matrix will only be relevant for simple products and processes where more elaborate designs are overkill.

b. House of Quality (HOQ)2: The house of quality is a very popular visual tool to quantify quality attributes. It was originally designed by John Hauser and Don Clausing of MIT in 1988. In its entirety, HOQ encompasses the entire process from data collection to collation and

1 See Ulrich and Eppinger. 2000, Chapter 4. 2 http://www.vr.clemson.edu/credo/classes/qfdhoq.pdf


interpretation. At the heart lies the matrix, which attempts to align the WHATs and the HOWs with the TARGETS.1

Figure 2: Basic model of HOQ

1: Customer Requirements (the WHATs or Demanded Quality Hierarchy)
2: Technical Descriptors (the HOWs or Quality Characteristics Hierarchy)
3: Relationship between the WHATs and the HOWs
4: Interrelationship between the technical descriptors
5: Planning Matrix (a.k.a. Quality Planning Table)
6: TARGETS (a.k.a. Design Planning Table)

The parts that need further explanation are the planning matrix and the TARGETS. The planning matrix, which is actually an extension to the basic HOQ, is a

prioritized listing of the customer demands, while the TARGETS are the summary of the relationship matrix weighted by the WHATs and provide a priority list of how to proceed (a small numerical sketch of this calculation is given at the end of this section). Many variations of the HOQ have been adopted by various organizations in their QFD implementations, but the basic idea remains the same. In fact, HOQ is such a popular concept that it has become synonymous with QFD, although the two concepts were developed differently. Of course, HOQ is not without limitations, the primary and obvious one being that, as it is refined,

1 HOQ diagram collated from various sources on HOQ


it is also becoming increasingly complex to handle and implement in real-life situations. As if populating the matrix were not hard enough, it also requires considerable subjective judgment on the part of the managers, judgment which may not necessarily be correct. The diagram below gives an indication of just how complicated an HOQ implementation can get.

Figure 3: A HOQ matrix

Source: Lowe. 2000.

d. Competitive benchmarking: Once the QFD has been implemented it is important to understand whether the design process has added value to the product, and whether there exists a competitive advantage, due to the new process, that the company can take advantage of. This is done by benchmarking the competitor's product against the requirements developed through the QFD process and determining how it fares. This will also ensure that one can quantify the benefits of the QFD implementation.

QFD is not just restricted to being a commercial management tool. Dean1 argues

that QFD in its basic form is meant for small systems, and extends it to large

1 See Dean. 1992.


systems. However, his complex arguments, applied to space engineering processes, are probably beyond the scope of this article.
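To make the matrix mechanics from the House of Quality concrete, the following minimal sketch (Python) computes the TARGET priorities as the importance-weighted column sums of the relationship matrix. The customer requirements, importance ratings and the common 0/1/3/9 relationship scores used here are invented for illustration and are not taken from any of the cases discussed below.

# Minimal House of Quality calculation: customer importance weights (the WHATs)
# are combined with a relationship matrix (WHATs x HOWs) to rank the technical
# descriptors (the HOWs). All numbers below are illustrative assumptions.

customer_reqs = ["easy to grip", "does not leak", "low price"]
importance = [5, 9, 3]                      # customer importance ratings

technical = ["handle diameter", "seal tolerance", "material cost"]
relationship = [                            # rows: WHATs, columns: HOWs
    [9, 0, 1],
    [0, 9, 3],
    [1, 3, 9],
]

def target_priorities(weights, matrix):
    """Weighted column sums: a higher value marks a more critical technical descriptor."""
    cols = len(matrix[0])
    return [sum(weights[r] * matrix[r][c] for r in range(len(weights)))
            for c in range(cols)]

if __name__ == "__main__":
    scores = target_priorities(importance, relationship)
    for name, score in sorted(zip(technical, scores), key=lambda x: -x[1]):
        print(f"{name}: {score}")

The resulting ranking tells the cross-functional team which technical characteristics deserve the tightest control, which is exactly the role the TARGETS play in a full HOQ.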

3 QFD in real life

QFD has been a very popular concept since its introduction. Toyota Auto Body reported a reduction in start-up losses of 61 %, while Mazda reduced late design changes by half.1

It has generally been assumed that QFD has been more successful for products than for services. However, QFD is by no means exclusive to either; its principles are valid in both fields. Following are two cases where QFD has been applied to achieve significant results and value for the implementer, one a service case and the other a product case. In both cases the gains have been significant for the company, whether as increased sales or as savings in the form of avoided investment losses. Host Marriott, whose main business is food sales at transportation locations, implemented a comprehensive QFD starting in March 1995 and achieved a jump of more than 100 % in bagel sales. The steps followed were executive buy-in, customer deployment, voice of customer deployment, quality deployment, function deployment, reliability deployment, new concept deployment, task deployment and standardization. This resulted in changes to the products, the supply chain and marketing, including the way bagels were displayed and the delivery mechanism.2

Fusion UV Products, a producer of ultraviolet curing products, implemented QFD and realized that their pet project was not going to be acceptable to the customer. They thus managed to stop wasting tremendous amounts of money on a project which they thought held value, but which was in reality not required by the customer. Fusion was developing a patented technology to improve the properties of UV adhesives, but being in the high-technology sector, customer involvement was very limited. They were predicting a sales increase of 200-300 % with their new product, until QFD training for many of their managers resulted in increased customer involvement, which led them to realize that the value they were

1 http://www.qfdi.org/what_is_qfd/testimonials.htm. 2 See Lampa and Mazur, 1996.


offering was not required by the main market segment! This resulted in them scrapping the project and directing resources to another product which, based on their QFD research, the market would value.1

QFD in its basic form can be applied in any service or manufacturing firm, for example in healthcare and healthcare product manufacturing, banking, telecommunication services and appliance manufacturing. It covers both the process and the product.

4 Conclusion

In management there are many buzzwords that dominate for periods of time. Some survive and some die. QFD has been around for quite some time and has shown its mettle in various companies, where it has helped increase value. It was, however, its successful implementation by Toyota and its import into North America that resulted in its widespread use. Today, it has gained widespread acceptance and implementation, along with other concepts such as six-sigma (which in a sense uses the concepts of QFD and is a superset of many concepts). The brilliance of QFD probably lies in the fact that it is at its heart a very simple concept, and the idea of keeping customers at the forefront has always been popular with top management. Of course, current research continues to make QFD as we know it obsolete by replacing it with enhancements and modifications that reduce its limitations.

1 Delgado, Okamitsu and Mazur. 2001.


RAPID PROTOTYPING (RP)

TABLE OF CONTENT

1 INTRODUCTION.......................................................................................... 48
2 DEFINITION AND ADVANTAGES OF RAPID PROTOTYPING........... 49
3 RAPID PROTOTYPING TECHNOLOGIES................................................ 52
4 RAPID PROTOTYPING IN PRACTICE...................................................... 58
5 CONCLUSION AND CRITICAL REMARKS............................................. 60


1 Introduction

Over the last decades, the globalization of markets and production know-how has increased national and international competition. Due to this competition, the pace of innovation has dramatically increased. Nowadays, a primary difference between successful and unsuccessful organizations is the ability to respond to the pace of change.1 It has become harder to sustain competitive advantages through superior technology, since product lifecycles shorten and techniques like reverse engineering help companies to benefit from their competitors' innovations. Hence, it is arguable that sustained competitive advantage cannot be reached any more through superior technology, since it can be copied.3

This statement may be true at first glance. The development of a single product

reached any more through superior technology since it can be copied.3 This statement may be true on a first glance. The development of a single product

cannot be a sustainable competitive advantage any more. Nevertheless, having a

very fast product development process within a company can still be seen as a

competitive advantage. It enables a company reap the benefits of a first mover

advantage by launching new products which utilize superior technology or suit the changed needs of the market before competitors launch similar products. One

important factor in launching new products is the ability to develop prototypes. To stay competitive, research and development costs and time to market have to be

decreased. Rapid prototyping (RP) can contribute in reaching this aim. Hence, this paper will show how RP works and in which way it can help a company to

improve its product development process in regards to costs and time needed. After this short introduction, a definition of rapid prototyping will be given and it will be shown in which areas RP gets used. By explaining the role of RP in a product development process and its typical traits, advantages and disadvantages

will be derived. This article will focus on the explanation of the six most

important rapid prototyping techniques. Afterwards, a comparison out of an

economical perspective will be made. In the next section, some typical

applications of RP will be discussed. The article will be concluded by critically reviewing the main insights.

1 Ulrich. 1997, p. 151. 2 See Barney. 1997, pp. 110-111. 3 See Ulrich. 1997, p. 153.


2 Definition and advantages of Rapid Prototyping

Rapid prototyping is a collective term for new manufacturing techniques which can be used to build a prototype with relatively low effort and thereby speed up the development process. Originally, it comes out of the engineering area and describes all technologies that can construct physical models directly from computer-aided design (CAD) data.1 All these technologies build on the basic idea that the software slices the model into thin sections that the machine can then build layer by layer. However, the term is also used to describe a changed prototyping process; this is especially true for the IT area, but also for the traditional engineering area. Traditionally, prototypes are constructed and, if they meet the objectives, are used for the production phase, as can be seen in figure 1.

Figure 1: Typical steps in an evolutionary prototyping process


Source: Based on Burt et al. 2003, p. 223.

As soon as all requirements are set, a prototype is designed and built. The design is reviewed and, if necessary, improved. Then it is tested whether the

1 See Müller et al. 2002, p. 1.


prototype meets the objectives. If it does not, either the prototype is redesigned or the requirements are changed. Rapid prototyping works a little differently, as can be seen in figure 2.

Figure 2: Typical steps in a rapid prototyping process

Source: Based on Linkweiler. 2002, p. 15; Dolenc.1993, p. 23.

Even though the two processes may look very similar at first sight, they are quite different; in particular, the idea behind them clearly distinguishes them. Building the prototype does not necessarily have the aim of obtaining a finished prototype in the end or of proving that the prototype is mechanically viable. Very often the aim is to test only a certain function or aspect which demonstrates an important feature of the finished product. The prototype itself is produced earlier in the process. The idea is not to find all mistakes and then build the prototype, but to find mistakes by building the prototype.1 The process is an incremental and iterative

1 See Eisenberg. 2004, p. 28.


process in which the prototype is expected to fail. The rationale behind this is that changes can be made rapidly and before production starts. Making changes early in the product development process is approximately ten times cheaper than making them shortly before production starts.1 This is mainly due to the fact that changes in the early development stage do not affect other elements. Even though more mistakes are made and more prototypes are produced, time to market decreases drastically. Depending on the product, the pure production time of a prototype is typically cut by 50 %.2 Besides, the iterative process itself is often faster, so RP has the potential to cut the design-to-market time by 75 % or more in some cases.3 In addition to the time savings, the iterative, small-step orientation of rapid prototyping also increases the likelihood that the finished product works well in all aspects.4 Since more tests are run and more mistakes are discovered, it is more likely that all mistakes are found. Still, it has to be noted that the process of iterative or rapid prototyping is closely linked to the RP technology; without it, it would be neither cost-efficient nor time-efficient to produce that many prototypes.

Besides, as rapid prototyping leads to earlier prototypes, it also helps in visualizing the product. This is very useful for presentation purposes, since it increases the explanatory power and the comprehension of the object in comparison to an abstract data model on a screen.5 People can experience the model, which helps them articulate what they like or dislike about the product. This also increases creativity, since the human brain can connect its ideas to something it experiences much more strongly than to a two-dimensional model. Furthermore, when different teams on different continents work together on the same project, they can visualize exactly what the other team has done so far, since the RP technologies can be used like a conventional printer, with the data being received via the internet.6 Hence, RP can also foster the exchange of ideas within a team and especially between locally separated teams.

1 See Stovicek. 1992, p. 20. 2 See Evans and Campbell. 2003, p. 351. 3 See Balsmeier and Voisin. 1997, p. 23. 4 See Trapp. 1991, p. 74. 5 See Seok-Choun et al. 2001, pp. 1-2; Eisenberg. 2003, p. 28; Stovicek. 1992, p. 23. 6 See Seok-Choun et al. 2001, p. 2; Tay et al. 2001, p. 415.


3 Rapid Prototyping technologies

Nowadays there are about 20 different rapid prototyping technologies.1

Figure 3: Different Rapid Prototyping Technologies

Powder materials: sintering process - Selective Laser Sintering; printing or joining process - 3D Printing
Solid materials: extrusion process - Fused Deposition Modelling; lamination process - Laminated Object Manufacturing
Liquid materials: laser process - Stereolithography; photo-masking process - Solid Ground Curing

Source: Based on Campbell and Martorelli cited in Bartolo and Mitchell. 2003, pp. 150-156.

After naming and distinguishing the most important and common ones in figure 3, some of them are explained in detail below. The different technologies can be distinguished according to the feedstock they use. The first distinction is made depending on whether the feedstock is solid, liquid or consists of powder materials. On the second level, the distinction is based on the physical or chemical principle by which the feedstock is processed and changed into a solid prototype.

1 See Müller et al. 2002, p. 1.
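Common to all of these technologies is the preprocessing step mentioned in the previous section: the CAD model, typically exported as a triangle mesh (e.g. in STL format), is sliced into thin planar layers which the machine then builds one after another. The following minimal sketch (Python) illustrates that slicing step; the tetrahedron stand-in model, the layer height and the helper names are illustrative assumptions and not part of any particular RP system.

def slice_triangle(tri, z):
    """Return the segment where the horizontal plane at height z cuts a triangle,
    or None if the plane misses it. tri is a tuple of three (x, y, z) vertices."""
    pts = []
    for (x1, y1, z1), (x2, y2, z2) in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        if (z1 - z) * (z2 - z) < 0:          # edge crosses the slicing plane
            t = (z - z1) / (z2 - z1)         # linear interpolation along the edge
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

def slice_mesh(triangles, layer_height, z_min, z_max):
    """Slice a triangle mesh into layers; each layer is a list of 2-D contour segments."""
    layers = []
    z = z_min + layer_height / 2             # sample in the middle of each layer
    while z < z_max:
        segments = [s for s in (slice_triangle(t, z) for t in triangles) if s]
        layers.append((z, segments))
        z += layer_height
    return layers

if __name__ == "__main__":
    # A single tetrahedron as a stand-in for a real CAD/STL model.
    tetra = [
        ((0, 0, 0), (10, 0, 0), (0, 10, 0)),
        ((0, 0, 0), (10, 0, 0), (0, 0, 10)),
        ((0, 0, 0), (0, 10, 0), (0, 0, 10)),
        ((10, 0, 0), (0, 10, 0), (0, 0, 10)),
    ]
    for z, segs in slice_mesh(tetra, layer_height=2.5, z_min=0.0, z_max=10.0):
        print(f"z = {z:4.2f}: {len(segs)} contour segments")

A production slicer would additionally order the segments into closed contours and generate machine-specific tool paths, but the layer decomposition above is the step all technologies in figure 3 share.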


Stereolithography

The first RP machine was produced in 1987 by 3D Systems and used stereolithography (SL); it started the era of RP, and SL is still the most popular process.1 SL belongs to the techniques which work with liquid materials. The process is illustrated in figure 4.

Figure 4: Stereolithography

Source: Castle Island Co.2

A vat (B) with a moveable platform (A) is filled with a photopolymer (C). Photopolymers have the chemical property of turning solid when light of the correct color strikes them. Usually, low-powered ultraviolet laser beams are used for this method, but, depending on the polymer, some resins also react to visible light. The energy supplied by the laser induces a chemical reaction: large numbers of small molecules are bonded and form a highly cross-linked polymer. The laser is traced by a scanner system (D) across the vat according to the cross-section of the object

1 See Bartolo and Mitchell. 2003, p. 150. 2 See http://home.att.net/~castleisland/sl.htm, 2004.


and hardens the polymer, up to a certain penetration depth, in the areas where it strikes. A single layer of solidified resin is produced. The moveable platform, which was placed just under the layer to hold it, is then lowered by the defined penetration depth of the laser so that new liquid resin can flow onto the working surface. The laser starts anew and thereby produces another layer. If parts of a new layer do not touch the former layer, the machine needs to generate supporting stilts. After the single layers of the object have been created, the object undergoes a series of post-processes that make it safe to handle: the object is cleaned with a solvent, the stilts are removed and the model is once more cured with ultraviolet light. In the last step, the surface is smoothed by varnishing or coating it.1

Laminated Object Manufacturing

Laminated Object Manufacturing2 (LOM) produces objects out of foils. In

most cases, paper is used. However, LOM can also be used with any other foil-like material which can be cut with a laser, for instance plastics or steel plates. Figure 5 shows how LOM works.

Figure 5: Laminated object manufacturing

Source: Castle Island Co.

1 See Bartolo and Mitchel. 2003, pp. 150-154; See http://home.att.net/~castleisland/sl.htm, 2004. 2 See Castle Island Co.: Castle Island's Worldwide Guide to Rapid Prototyping, http://home.att.net/ ~castleisland/lmo.htm, 2004.


Two rolls, one for feed (A) and one for waste (D), transport the paper or any other suitable foil over the prototype. The lower side of the paper is coated with glue, which makes it possible to bond two successive layers. The glue, however, first has to be activated by heat and pressure applied through the heated roller (B). The moveable mirror (C) directs the laser beam so that it can cut the paper and bring it into the desired shape. This method is self-supporting for overhangs, since stilts are produced automatically by cutting an additional border. As the cutting generates considerable smoke, the process has to be done within a sealed chamber or needs a charcoal filtration system (E). LOM has certain advantages over other RP technologies. Firstly, it produces an aesthetically pleasing prototype, depending on the foil material; paper, for instance, results in a wood-like prototype. Secondly, the models have better strength than plastic models. And, lastly, they do not require post-curing and hence do not shrink or warp. However, LOM is not as accurate as other technologies and has certain problems with complex geometrical prototypes which require hollow spaces.

Fused Deposition Modeling

The fused deposition modeling1 (FDM) machine is a computer-numerically-controlled gantry machine, as shown in figure 6.

Figure 6: Fused deposition modeling

Source: Castle Island Co.2

1 See Bellehumeur et al. 2004, pp. 170-171. 2 See http://home.att.net/~castleisland/fdm.htm, 2004.


process. As the powder is kept at a temperature slightly below the melting point, the laser only has to raise the temperature a little, so the process is rather fast. As soon as a layer is finished, the fabrication piston (D) moves down so that the roller (B) can spread new powder from the powder delivery system (E) into the building cylinder (C). Since the build cylinder is filled with powder, no stilts are needed, although it often proves difficult to clean out hollow spaces. Because no stilts are needed, finishing time is saved compared to stereolithography. However, it may take up to two days of cooling time before the parts can be removed from the machine. A big advantage of SLS is that any thermoplastic powder can be used. Depending on the material used, the prototypes can withstand mechanical or thermal strain.

Comparison of the different technologies

As seen, all RP technologies are based on the idea of building up a prototype layer by layer. Due to this, the parts can, in contrast to traditional shaping by stock removal, have almost arbitrary geometric complexity. However, they do not yet reach the accuracy that stock-removal techniques achieve. Furthermore, the models are also limited in size and show step effects on the surface due to the layer technique. These limitations differ from method to method. To give advice on when to use which technology, the four described technologies were rated with regard to speed, accuracy, size and stability. Depending on a prototype's requirements, the best-suited method should be chosen; the comparison of the criteria shown in table 1 can help with that. However, these criteria are just a small example, since other criteria, such as cost per prototype or investment costs, are very important too.

Table 1: Comparison of selected rapid prototyping technologies

                  SL            LOM           FDM           SLS
Speed             very good     good          poor          good
Accuracy          very good     fair          fair          good
Maximum size      100 * 80 *    118 * 75 *    60 * 50 *     81 * 56 *
Stability         fair          fair          very good     very good

Source: Own meta analysis based on Seok-Choun et al. 2001, p. 3; Müller et al. 2002; Castle Island Co. 2004; Chuk and Thomson. 1998, pp. 185-193.
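One simple way to operationalize such a comparison is a weighted scoring of the candidate technologies against the requirements of a concrete prototype. The sketch below (Python) encodes the qualitative ratings of table 1 on a 1-4 scale and ranks the technologies; the numeric scale and the requirement weights are illustrative assumptions, and criteria such as cost per prototype or maximum build size would have to be added in practice.

# Qualitative ratings from table 1 translated to a simple 1-4 scale
# (poor=1, fair=2, good=3, very good=4); size is omitted here because the
# table values are incomplete. Weights are illustrative assumptions.
ratings = {
    "SL":  {"speed": 4, "accuracy": 4, "stability": 2},
    "LOM": {"speed": 3, "accuracy": 2, "stability": 2},
    "FDM": {"speed": 1, "accuracy": 2, "stability": 4},
    "SLS": {"speed": 3, "accuracy": 3, "stability": 4},
}

def rank_technologies(weights, ratings):
    """Return technologies sorted by weighted score, best first."""
    scored = {
        tech: sum(weights[c] * score for c, score in criteria.items())
        for tech, criteria in ratings.items()
    }
    return sorted(scored.items(), key=lambda item: -item[1])

if __name__ == "__main__":
    # Example: a functional prototype where stability matters most.
    weights = {"speed": 0.2, "accuracy": 0.3, "stability": 0.5}
    for tech, score in rank_technologies(weights, ratings):
        print(f"{tech}: {score:.2f}")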


4 Rapid Prototyping in practice

According to "Wohlers Report", rapid prototyping is nearly a billion-dollar-a-year industry which is still growing.1 In 2002,22,4% of all models were used for

functional models, 19,2% as patterns for prototype tooling, 15,3% for visual aids for engineering, and another 15% for fit and assembly. The remaining 29% were

utilized for tooling components, direct manufacturing, and other purposes. Nearly

50% of the prototypes were produced for the automotive sector and for consumer goods with the automotive sector being the biggest customer with above 30%. Even though the medical industry with under 10% is not quite as big as other

customers it is also in the top five and one of the industries which heavily rely on

rapid prototyping. The big advantage of RP for his industry is that it can provide relatively cheap models of the unique bone structure of an individual. As every

human being's bones are different, a mass production is impossible. However, RP enables doctors to help their patients in various ways. First of all, bone substitutes respectively their forms with a 100% fit in regards to the original can be made with relatively low costs. Whole bones or also parts of it can be individually matched to fit, which is especially important for sensitive operations like

cranioplastic implants. Secondly, they can also be used for modeling the status quo before an operation. Especially for unique and very difficult and dangerous operations like separating conjoined twins, this method is very useful. The team of

doctors can train on an identical model to get routine in an operation they will

probably do only once in their lifetime. Interesting prospects also arise by combining RP with other research areas like smart polymers and cell adhesion.4

Even thought, this is still in research, it could be used for duplicating organs, the so called organ printing. Embryonic tissue fragments and cell aggregates can fuse

as a fluid. By combining this self-assembly capability with rapid prototyping technology it would, at least in theory, be possible to build a kind of modified

computer printer which would use cell aggregates instead of inks and thermoreversible gel instead of paper. It could then print an organ layer by layer.

1 See Wohlers. 2002, pp. 1; 13-15. 2 See Yaxiong et al. 2003, pp. 167-174. 3 See Hieu et al. 2003, pp. 175-186. 4 See Mironov. 2003, pp. 34-36.


Nevertheless, one does not have to go that far to find successful cases of RP. TI, for instance, claims to have shortened the cycle time, which they define as the time from database receipt to part fabrication completion, from several months to an average of two to ten days, depending on the complexity of the part. A motor housing of prototype metal, for instance, required 104 touch-labor hours to fabricate, with four months of cycle time; with the help of RP the same operation is done with 8 hours of touch-labor and a cycle time of three days.1

Ford shows another advantage of RP with regard to its presentation capabilities. After bid requests for a newly designed rocker arm went unanswered because the suppliers had problems interpreting the blueprint, Ford used RP to produce a model of the parts of the rocker arm. Instead of 90 days of production time with conventional means, Ford managed to produce them with the help of RP technology within one day. The model parts were then sent to the subcontractors. This time, the bid request showed a high return rate, and among these returns was a very low bid which resulted in annual savings of $3 million.2

The change from an evolutionary to an iterative process which RP technology enables also brings advantages. The Renault F1 team, for example, uses different RP machinery to build parts of their cars which are then tested in the wind tunnel; slight variations are tested and the best design is kept. During the first month of 2003, more than 2000 parts were tested. Without RP technology this would not have been possible. So far, however, the usage of RP is mostly limited to tests, since the parts used in the race cars themselves need to be able to tolerate extreme conditions. Nevertheless, some parts, like water elbows, are nowadays already produced using selective laser sintering technology. In addition to the cost savings, the lead time has been decreased from eight weeks to eight hours.3

Being able to produce masses of prototypes can also help in satisfying consumer needs, as a recent case study on handheld video games shows.4 A focus group

compared four existing video games. From the insights of this comparison, three systems of the company's own were developed with CAD and then produced.

1 See Stovicek. 1992, p. 24. 2 See Stovicek. 1992, p. 24. 3 See Kochan. 2003, pp. 336-339. 4 See Lopez and Wright. 2002, pp. 116-125.


The focus group tested these systems and gave comments. This feedback was then used to design a final version, which combined the advantages of each preliminary model. The tolerance, look and feel of the final model were the same it would have when mass-produced. In the second round, RP allowed three different systems to be built and tested. Even though none of those was taken as the final design, it had already been anticipated that they would only serve as an inspiration. RP needed just three hours to produce systems which allowed physical interaction between the sub-components; without it, this step would have taken much longer. Without these preliminary prototypes, the users would not have been able to see, evaluate and refine their feedback on the ergonomic aspects of the design, which was the most important factor. Even though CAD systems enable the user to make a certain guess, the experience of testing a three-dimensional prototype is a lot richer and allows more accurate feedback. Without RP technology, the process itself would have taken months instead of weeks.

5 Conclusion and critical remarks

As shown in the previous chapters, rapid prototyping opens up a lot of possibilities. It can decrease costs and shorten time to market. Not only can companies benefit from the pure fact that they can now produce prototypes cheaper and faster, they can also reap additional benefits by reorganizing their whole product development process. With the use of RP technology it is possible to abandon the conventional evolutionary product development approach and introduce iterative processes in which prototypes are tested even though it is clear that they will not serve as a final model. RP has made it possible to produce similar prototypes which differ in just one small aspect and can then be compared. Furthermore, RP also helps in the context of globalization by providing prototypes which people can experience, thereby reducing complexity and bridging distances as well as language barriers. Especially in the light of outsourcing, the possibility of developing parts of products on different continents but producing them all on the same machine for assembly seems inviting. Besides, specialized low-quantity orders like special light bulbs or even bone implants can be produced


more cost-efficiently. And perhaps, as the technology improves, these machines will even be able to build new organs.

However, this is not possible yet. Despite all the advantages rapid prototyping has, it should not be forgotten that it still carries a lot of limitations. First of all, the size of the buildable parts is heavily restricted; big prototypes cannot be built yet. In addition, the accuracy is not good enough for all prototypes. This also holds true for the surface, which, due to the different layers, is often not smooth enough. And, lastly, the materials which can be used for RP are still limited; solid material that can resist high pressure cannot be used yet. Nevertheless, over the past 17 years rapid prototyping has improved a lot, and it can be assumed that most of these limitations are only of a temporary nature. Even with its limitations, RP has proven its worth in a lot of different situations and has helped companies to stay competitive and to reach sustainable competitive advantage through improved product development processes.


REVERSE ENGINEERING (RE)

TABLE OF CONTENT

1 WHAT IS REVERSE ENGINEERING? HOW IS IT DEFINED?................ 63
2 HOW DOES REVERSE ENGINEERING WORK?...................................... 64
3 WHERE IS IT APPLIED?.............................................................................. 66
4 SUMMARY.....................................................................................................72


1 What is reverse engineering? How is it defined?

Reverse engineering, the “scientific method of dissecting and measuring a product in order to duplicate or enhance it”1, has developed rapidly in the past. Originally, reverse engineering was used by Japanese companies to improve on competitors' products and, consequently, to avoid original design efforts and expenses. This process of redesign started with observing and testing a product. Afterwards, it was disassembled and the individual components were analyzed in terms of their form, function, assembly tolerances and manufacturing process. Based on the understanding gained of the execution of a product, it was improved either at the subsystem (adaptive) or the component (variant) level.2

In manufacturing, reverse engineering is a commonly accepted and often used

practice. Car companies, for example, frequently use reverse engineering to obtain information on another company's product in order to create a competing product. While reverse engineering often leads to improvement and innovation, it is frequently used to provide consumers with competing products at lower prices. Japan's success in product development has led to reverse engineering being considered by other nations as a design process as well. The Americans have replaced original design courses with courses in reverse engineering as a problem-solving approach. The automotive industry, for example, uses direct engineering to replace its more general original design method.4 Reverse engineering has nowadays moved from a practice resorted to by those who lack original concepts to an engineering science. It is no longer just used to side-step design and R&D processes and to get to a market by copying5 a competitor's product. It is often connected to the process of rapid prototyping.6 OEMs, manufacturers, fabricators and service shops have recognized that reverse engineering speeds up internal processes, particularly rapid prototyping. Rapid prototyping is a relatively new manufacturing process which refers to a class of additive, layer-based manufacturing technologies, as opposed to traditional material removal

1 Capa. 2002, p. 4. 2 See Brown and Sharpe. 2001, p. 75. 3 See Capa. 2002, p. 4. 4 See Brown and Sharpe. 2001, p. 75. 5 See Jacobs and Mercer. 2004, p. 1-10. 6 See Myudes. 2004, p. 24


processes.1 In rapid prototyping, an object is manufactured from a computer-aided design model by direct numerical control in order to create a solid of some predefined shape. An advantage of rapid prototyping is that the complexity of a part has significantly less impact on the fabrication process than in a conventional manufacturing process. Additionally, rapid manufacturing requires very little human intervention and setup time. Consequently, even complex production can be very cost-effective: the product cost can be cut by 70 % and the time to market by 90 %.2

2 How does reverse engineering work?

Reverse engineering is seen as the fastest way of translating the dimensions of a physical model or shape into the digital realm. On the basis of this digital representation, a manufacturing, machining or repair plan can be written. In practice, reverse engineering can be divided into data capture / part digitization, processing, segmentation and fitting, and CAD model creation.

Figure 1: Basic Phases of Reverse Engineering

Source: Varady et al. 1997.

1 See Wu. 2001. 2 See Pham and Gault. 1998, pp. 1257-1287.


The first step in creating a CAD model is part digitization. In the digitization process, point coordinates are acquired from part surfaces. The result of part digitization is a data point cloud which is stored as an image. In the context of digitization, non-contact and tactile methods can be differentiated. Non-contact methods, such as optical, acoustical and magnetic methods, do not physically touch the part; tactile methods touch the part using a mechanical arm. Among the non-contact methods, optical methods are probably the broadest and most popular, with relatively fast acquisition rates. Optical methods include four different means: structured lighting, spot ranging, range from focus and stereo scanning.1 Structured lighting involves projecting patterns of light onto a surface of interest and capturing an image of the resulting pattern as reflected by the surface.2 Spot ranging is used to find the range of the object: a light or ultrasound signal is projected onto the surface, reflected by the part surface and captured by a detector. The coordinates can be calculated by using the travel time, frequencies and vision systems.3 Range from focus uses the focus distance and a vision system to determine the coordinates of an object.4 Stereo scanning uses two cameras to view the part from different perspectives; with the help of triangulation, the coordinates are determined.5
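As a small illustration of the spot-ranging principle, the following sketch (Python) converts the measured travel time of a reflected laser pulse into a range and, together with the known beam direction, into a 3-D point coordinate; the sample travel time and beam angles are invented for illustration.

import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_travel_time(travel_time_s, speed=SPEED_OF_LIGHT):
    """Time-of-flight spot ranging: the signal travels to the surface and back,
    so the distance to the part is half of speed * time."""
    return speed * travel_time_s / 2.0

def point_from_range(distance, azimuth_rad, elevation_rad):
    """Convert a range plus the known beam direction (spherical angles)
    into Cartesian coordinates of the surface point."""
    x = distance * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = distance * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = distance * math.sin(elevation_rad)
    return (x, y, z)

if __name__ == "__main__":
    # Illustrative measurement: a pulse returning after about 3.34 nanoseconds
    # corresponds to a surface roughly half a metre away.
    d = range_from_travel_time(3.34e-9)
    print(f"range: {d:.3f} m")
    print("point:", point_from_range(d, math.radians(10), math.radians(5)))

Sweeping the beam across the part and repeating this calculation for every direction yields the data point cloud described above.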

coordinates, the tactile method relies on co-ordinated measuring machines or

scanners.6 Tactile methods are among the most robust but also slowest methods. In this context, CMM is the most commonly used device for extracting the 3-D

coordinates from part surfaces.7 Processing describes the transfer of the digitized

data to a CAD system, where surfaces are developed and drawings are finalized.

Besides reducing the risk of measurement errors, processing data electronically significantly reduces the time required for the overall reverse engineering effort.

o

1 See Wu. 2001, p. 3. 2 See Wu. 2001, p. 3. 3 See Wu. 2001, p. 3. 4 See Wu. 2001, p. 3. 5 See Wu. 2001, p. 3. 6 See Wraige. 2002, p. 40. 7 See Wu. 2001, p. 4; Wraige. 2002, p. 40. 8 See Chaneski. 1998, pp. 50-51.


The objective of the segmentation of scanned data points is to divide the data points into a finite number of subsets of range data. Two different approaches are used for this. The edge-based approach is a two-stage approach which consists of edge detection and edge linking; it tries to find boundaries in the point data representing edges between the surfaces.1 The face-based segmentation approach tries to infer connected regions of points with similar properties as belonging to the same surface; the intersections of the surfaces then form the edges.2 The surface fitting techniques are classified into interpolation and approximation. Objects with non-discontinuous surfaces can be modeled by algorithms such as the least squares method.3

In the model creation, the findings of the preliminary steps are used to create a model. There are three different approaches to derive that model: surface modeling, constructive solid geometry and boundary representation.4 A surface model is built by linking together various kinds of surfaces to form a larger composite surface. A constructive solid geometry model is constructed from a few primitives with Boolean operators. Boundary representation characterizes a solid indirectly by representing its boundary surfaces.
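To illustrate the approximation-type surface fitting mentioned above, the following sketch (Python with NumPy) fits a plane z = ax + by + c to a segmented patch of digitized points with the least squares method; the synthetic point cloud is invented for illustration, and real reverse engineering systems fit more general free-form surfaces in the same spirit.

import numpy as np

def fit_plane_least_squares(points):
    """Fit z = a*x + b*y + c to a segmented patch of digitized points
    by minimizing the squared fitting error."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, _, _, _ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "scanned" patch: a plane z = 0.5x - 0.2y + 3 plus measurement noise.
    x = rng.uniform(0, 10, 200)
    y = rng.uniform(0, 10, 200)
    z = 0.5 * x - 0.2 * y + 3 + rng.normal(0, 0.05, 200)
    a, b, c = fit_plane_least_squares(np.column_stack([x, y, z]))
    print(f"fitted surface: z = {a:.3f} x + {b:.3f} y + {c:.3f}")

The fitted patches from all segments are then stitched together, which is exactly the input the subsequent CAD model creation step requires.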

3 Where is it applied?

Six typical applications of reverse engineering can be categorized: design, development, tool making, repair, fabrication and manufacturing. The design is improved by adapting a structure to a mating surface to compress the time-to-market cycle. The development of products is faster because of rapid prototyping and prototype testing for ergonomic, flow or other evaluations. In the process of tool making, reverse engineering reduces the time required to develop tooling and improves tool accuracy. Repairing parts is facilitated by simply creating new parts from old, fractured or worn originals. In fabrication it is possible to create elements of material-handling systems and to improve other processes.

Manufacturing is improved by developing one-off pieces of equipment or

1 See Varady, Martin and Cox. 1997, pp. 255-268. 2 See Wu. 2001, p. 4. 3 See Chivate and Jablokow. 1995, pp. 193-204. 4 See Wu. 2001, pp. 5-7.


structures.1 Each of these enhanced applications contributes to the competitiveness of the companies. Companies which do not apply reverse engineering in the above-mentioned areas will suffer a lack of innovation and efficiency, which is crucial in fast-paced markets. Reverse engineering thereby meets the demands of the markets, where three important developments can be perceived: shorter life cycles, product obsolescence, and incentives for rapid introduction of the product.2

First, shorter life cycles have been perceived not only for high-tech products but also for products not typically regarded as high-tech products.3 Second, product obsolescence is occurring more quickly than in the past because new technologies are reproducing at a very high rate. This results in truncated life cycles with limited maturity stages.4 Third, the market provides numerous incentives for a rapid introduction of a new product. An innovative first entry into a market gains a monopoly that yields premiums until other competitors enter the market, too.5 Managers have realized the need to enhance production capabilities to support fast-paced entry strategies. In this context, reverse engineering and the implementation of computer-aided design / computer-aided manufacturing (CAD/CAM) have significantly improved processes.6 On the other hand, delays in bringing a product to the market can have devastating results. In the computer industry, for example, a delay of 6-8 months in time-to-market will result in a loss of 50-75 %.7

A growing factor in meeting the demands of the markets, such as cutting concept-to-consumer time, improving quality and reducing the costs of new products, also lies in the early involvement of material suppliers in the new product development process.8 Suppliers should be integrated into the value creation of the manufacturer by playing an active role in the value chain. With the use of supplier networks, companies can acquire a very high level of technical skills in a specialized area,

1 See Mymudes. 2004, p. 24. 2 See Carrello and Franza. 2004, p. 1. 3 See Stalk and Hout. 1990; Leonardo-Barton et al. 1994. 4 See Leonardo-Barton et al. 1994. 5 See Stalk and Hout. 1990; Urban et al. 1986, pp. 645-659; Ittner and Larcker. 1997, pp. 13-23. 6 See McDermott and Marucheck. 1995, pp. 410-418. 7 See Carrello and Franza. 2004, p. 3. 8 See Ragatz et al. 1997, pp. 190-202.


which allows them to fulfill unusual or sudden requests quickly and effectively.1 It provides the companies with a competitive advantage which is expressed in a

high degree of flexibility and innovativeness.2

This approach has already proven successful in Japanese organizations.3 Research has shown that early and extensive supplier involvement leads to a more efficient development process. Outsourcing parts of the value creation has a strong leverage effect on the performance of a company: a reduction of material costs by one percent has roughly the same effect on profit as a revenue increase of 20 percent (for companies with a three percent return on revenue and material costs of 60 percent of total cost).4 Consequently, the real net output ratio has decreased to under 50 percent in most industrial sectors.5
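The arithmetic behind this leverage effect follows directly from the stated assumptions. The short calculation below is only an illustrative sketch; the revenue figure of 100 is an arbitrary value, and the percentages are the ones quoted above (three percent return on revenue, material costs at 60 percent of total cost).

revenue = 100.0
margin = 0.03                                  # three percent return on revenue
total_cost = revenue * (1 - margin)            # 97.0
material_cost = 0.60 * total_cost              # 58.2, i.e. 60 percent of total cost
profit = revenue - total_cost                  # 3.0

# Effect of reducing material costs by one percent
profit_after_saving = profit + 0.01 * material_cost    # about 3.58

# Effect of a 20 percent revenue increase at a constant margin
profit_after_growth = revenue * 1.20 * margin          # 3.60

print(round(profit_after_saving, 2), round(profit_after_growth, 2))

Both levers raise the profit by roughly the same amount, which is the point made in the text.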

Especially in the German car industry, the vertical range of manufacture has decreased to below 25 percent in recent years.6

Figure 2: Vertical range of manufacturing in the German car industry, 1980-2001 (in per cent).
Source: VDA. 2003.

1 See Ittner and Larcker. 1997, pp. 13-23; Gupta and Sounder. 1998; Clark and Wheelwright. 1993.
2 See Eds. 2004.
3 See Peterson et al. 2003, p. 284.
4 See Wildemann. 2002, p. 2.
5 See Mohrstadt. 2001, p. 7.
6 See VDA. 2003.

Car manufacturers have recognized the necessity to invest in tools to improve new product development and reduce time-to-market. General Motors implemented a


CAD system and reduced its average engineering design cycle from 40 to 18

months.1 Ford recently reorganized its product development process in order to reduce its product development costs by about 25%.2 As part of this program,

Ford is investing approximately $100 million into a CAD system which is

lowering development and procurement costs. The specific task here was the integration of programming systems for measuring instruments into the product

design process by the automatic preparation of measurement programmes using CAD modeling.3 BMW initiated a reverse engineering project in 2004 which has three broad objectives: time, quality and costs. First, BMW wants to reduce the

planning time of product projects. Second, it tries to improve the quality of the

results of the planning process and, third, the planning costs for each product project should be reduced.4

The growing integration of suppliers in the value chain implies a higher responsibility for the suppliers. Products are to be developed and improved more and more by the supplier. This implies that they have to acquire the capabilities of modern tools such as reverse engineering in order to be able to meet the demands of their manufacturers. An example where the supplier can be a driving force for new solutions by developing standard systems and modules early in the development cycle is reverse system engineering.5 Instead of adapting standard products through developmental engineering, which adds back the costs saved by standardizing modules, the supplier provides standard modules that will meet the greatest number of customer needs and thus provide economies of scale. Car manufacturers consider these standard modules before deciding on the architecture of the vehicle and the partitioning requirements. They reverse engineer the modules to influence which concepts go forward.6

1 See Burt and Grant. 2002.
2 See Hanford. 2003.
3 See Breskin. 2003.
4 See GTMA. 2004.
5 See Solving automotive. 2002, p. 2.
6 See Solving automotive. 2002, p. 2.


Figure 3: A double approach to programme development: systems engineering is used to drive new technology, while "reverse" systems engineering is used to drive cost reduction.
Source: Solving automotive. 2002, p. 3.

By integrating the suppliers, car manufacturers are provided with known solutions

that have been pre-engineered, pre-sourced, and pre-assembled. This reduces not only costs but also valuable development time.1

A reason why carmakers have been reluctant to use standard modules is the fear of losing their brand image. Suppliers try to allay that fear by customizing parts, mixing standardized elements which have been pre-engineered. This provides a customized look while still using standard elements.2

A good example of the application of reverse system engineering is the case of Visteon, a North American OEM with $17.7bn revenues in 2003. Visteon identified a potential future market for 42-volt charging systems due to the growing use of electronics (e.g. telephone, navigation) as well as power features (e.g. power windows, power door locks). The solution is a unique system with mechanical, electrical and electronic componentry. Visteon used the approach of systems engineering for developing the technology and reverse systems engineering for reducing the costs and making it practical.3

1 See Peterson et al. 2003, pp. 284-299; Ragatz et al. 1997, pp. 190-202.
2 See Solving automotive. 2002, p. 3.
3 See Solving automotive. 2002, p. 4.


Figure 4: From concept to application (elements shown: technology bookshelf; project assumptions; concept of operation; interface diagrams; Pugh matrix; trade study; schematics; drawings; CAE simulations; reverse systems engineering).
Source: Solving automotive. 2002, p. 6.

In the first part of the systems engineering process, the requirements of the new technology are identified. The process then considers the best ways of delivering each requirement in order to find the optimum approach and develop a demonstrator of the new technology. Once the prototype has been developed, the reverse systems engineering approach searches for suppliers who make elements or technology similar to those already included in the prototype (e.g. rectifiers, high-mass flywheels, etc.). Their components are back-integrated into the current design with little or no impact on performance. As a result, cost-effective sources of materials are identified and the feasibility of the new technology is proven. The approach used by Visteon was very successful: it created a product with superior characteristics (fuel economy improved by 6-12%, emissions reduced by 10-15%) at a lower project cost (engineering changes reduced by 18%, prototype successfully demonstrated after a short development time, three patents issued on the product).1

1 See Solving automotive. 2002, pp. 5-8.


4 Summary

Reverse engineering has become an essential tool in the manufacturing process. Companies have to be able to compete in a highly competitive and fast-paced market. The market conditions are characterized by shorter life cycles, product obsolescence and incentives for the rapid introduction of products.1 In this environment, the rapid development of reverse engineering has significantly improved the design, development, tool making, repair, fabrication and manufacturing processes within companies.2 Especially the big car manufacturing companies like GM, Ford and BMW have already realized the necessity to implement reverse engineering in their production and development processes. Hereby, this process is more and more extended to the suppliers. Suppliers develop new systems and modules early in the development process and thus support the manufacturers in the value creation.3 The partnering between the manufacturers and their suppliers suggests a "co-destiny" relationship that is extended throughout the design and sourcing chains. The early integration of suppliers improves the efficiency of the process and at the same time increases flexibility and innovativeness.4 The growing integration of suppliers is connected with a growing responsibility. Besides learning new skills and adapting their organizations, suppliers will have to create new collaborative environments to speed up development and control R&D costs.5

As in the Visteon case described, suppliers try to make use of techniques such as "reverse" systems engineering to effectively reduce costs by applying pre-defined modules as solutions. They benefit from economies of scale by providing standard modules instead of adapting standard products through developmental engineering. These modules are reverse engineered to determine the constituent parts.6

1 See Carillo and Franza. 2004, p. 1.
2 See Mymudes. 2004, p. 24.
3 See VDA. 2003.
4 See Peterson et al. 2003, pp. 284-299; Ragatz et al. 1997, pp. 190-202.
5 See Solving automotive. 2002, pp. 5-8.
6 See Solving automotive. 2002, pp. 5-8.

In the future, the process of integrating the suppliers into the development and manufacturing process will continue. Suppliers consequently have to remain


flexible to enhance their competitiveness in a quickly changing environment. New technologies such as reverse engineering are meaningful tools for reducing costs and increasing innovativeness in order to meet the demands of the manufacturers and the markets.


ABC/XYZ ANALYSIS

TABLE OF CONTENT

1 INTRODUCTION.......................................................................................... 75
2 ABC ANALYSIS........................................................................................... 76
3 XYZ ANALYSIS........................................................................................... 79
4 COMBINING ABC AND XYZ ANALYSIS................................................ 81
5 CONCLUSION...............................................................................................82


1 Introduction

Since every company faces a scarcity of resources, it strives for an optimal use and exploitation of the assets at its disposal. This is true for each business unit along the value chain: sourcing, production, sales and services. A lot of companies struggle with a high amount of capital tied up in stock, and this is often caused by only a few types of material. This overview introduces not only the ABC Analysis but also the XYZ Analysis as helpful classification methods that allocate data according to certain criteria. This is done in the context of the procurement and material disposition activities of a company. Although both methods have a lot of similarities, they are used in slightly different ways. A combination of the two is also possible.

After a short introduction to material disposition, it is explained why requirements planning and the calculation of production demands are crucial for companies. Going further, the ABC Analysis is explained. After presenting the intention, application and procedure of this method, the same examination is done for the XYZ Analysis. The next part describes how the two analyses can be used in combination. Finally, the topic is summarized and conclusions are drawn.

Disposition of material and requirements planning

Material management comprises plenty of different tasks that are closely related to production planning. Thus, a central task of material management is the disposition of material, i.e. calculating the amounts of components and parts according to the given output of end products.1 In other words, material disposition means all planning activities that aim at a high availability of materials whilst minimizing stock. Delays in production or in the delivery of services can seriously harm customer relationships. Therefore, it is crucial that bottlenecks and shortages are avoided. The main focus of material disposition is to ensure that material is available in sufficient amounts whenever orders are received.

1 See Kurbel. 1998, p. 124.

Identifying the demand for end products, those products are broken down into components and individual parts, which the literature refers to as "secondary


demands".1 Secondary demand, in turn, is summed up into production orders and purchase orders. In general, there are two methods for calculating secondary demands: 1) demand-driven disposition and 2) consumption-driven disposition.

1) Demand-driven disposition

Here, the demand for an individual part is derived exactly from the demand figures of the superordinate parts. This method, called brutto-netto calculation, provides precise values. However, the exact calculation is rather laborious.2

2) Consumption-driven disposition

This calculation is less complicated than demand-driven disposition, albeit less accurate. Consumption-driven demand accrues for parts with low demand values and is derived from past figures by means of statistics. Here, a distinction is made between the concepts of Order Point (stock is replenished when inventory falls below a certain level) and Order Cycle (stock is replenished at regular periodic intervals, which requires fairly constant consumption); both policies are sketched briefly below.3

The major difference between the two concepts of disposition is the basis of the calculation. Whilst demand-driven disposition builds upon primary demands4

according to customer orders, consumption-driven disposition merely requires estimations of demand. Whether a part is disposed of in a demand-driven or a consumption-driven way is decided on the basis of the ABC Analysis.5
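The two consumption-driven replenishment policies mentioned above can be illustrated with a small sketch. It is purely illustrative and not prescribed by the text; the daily withdrawal series, the reorder level of 40 units, the order quantity of 50 units, the review period of 5 days and the target level of 60 units are hypothetical values.

# Hypothetical daily withdrawals from stock
withdrawals = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]

def order_point_policy(stock, reorder_level, order_qty):
    """Order Point: replenish whenever inventory falls below the reorder level."""
    order_days = []
    for day, demand in enumerate(withdrawals):
        stock -= demand
        if stock < reorder_level:
            order_days.append(day)      # an order is triggered on this day
            stock += order_qty
    return order_days

def order_cycle_policy(stock, period, target_level):
    """Order Cycle: replenish at fixed intervals up to a target level."""
    order_days = []
    for day, demand in enumerate(withdrawals):
        stock -= demand
        if (day + 1) % period == 0:
            order_days.append(day)
            stock = target_level
    return order_days

print(order_point_policy(stock=60, reorder_level=40, order_qty=50))   # [1, 6]
print(order_cycle_policy(stock=60, period=5, target_level=60))        # [4, 9]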

2 ABC Analysis

The ABC Analysis concept was introduced by H. Ford Dickie, a senior administrator at General Electric, in the early 1950s.6 Nowadays, it is an important and simple tool for getting a picture of the actual stock situation.

1 See Kurbel. 1998, p. 124.
2 See Kurbel. 1998, pp. 126-127.
3 See Kurbel. 1998, pp. 126-127.
4 See Kurbel. 1998, p. 124.
5 See O.S.E., www.o-s-e.de/PPS_SCHULE/material.htm.
6 See www.ABC-Analyse.info: The concept was included in his article "Shoot for Dollars, not for Cents" in 1951. The "80/20 Rule" of Vilfredo Pareto and the findings of Max O. Lorenz formed the basis for Dickie's ideas. His argumentation led to the ABC classification.


There are a lot of different parts in stock. While some of them are less important, others may represent a much higher value. The consequence is that these parts should not be treated equally, but in accordance with their importance.1 An investigation of inventory often reveals that the major part of the company's performance is achieved by only a minor portion of components. On the contrary, other stock might contribute only little to the overall turnover.2 Thus, the ABC Analysis attempts to identify an input-outcome ratio. Hence, quantities and values are related to each other, pointing out the share of one part in both the total number of parts and the total turnover.3 In general, this analysis can be understood as a method to organize a huge quantity of data, goods or processes. Given criteria such as turnover, profit, wholesale price, annual consumption or production needs are used to group the respective data into three categories representing high, middle and low consumption value of the goods or processes.4 The ABC Analysis can therefore be considered the practical translation of the Pareto allocation. The Pareto principle describes how a small number of weighted elements contributes the predominant part of the total value. In particular, Vilfredo Pareto examined the national wealth of Italy. He found that 80 per cent of the wealth is concentrated in only 20 per cent of the Italian families. This is why his ideas are also known as the "80/20 rule".5 Applied to materials management, the ABC Analysis emanates from the same assumptions.

The ABC Analysis aims at getting a picture of the actual situation in stock. It shall be found out which groups of components and single parts should be paid special attention to. As a result, the ABC Analysis enables material managers to distinguish between essential and nonessential stock. At the same time, the analysis identifies hints and starting points for enhancements such as rationalization and better utilization of stock, which can finally lead to considerable cost savings due to a reduction of tied-up capital. Also, economic efficiency can be increased by minimizing any efforts of low economic impact.6

1 See Kurbel. 1998, p. 125.
2 See Neumann. 1998, p. 27.
3 See Biehler et al. 1992, p. 51.
4 See 4managers, www.4managers.de, key word "ABC-Analyse".
5 Wikipedia, de.wikipedia.org, key word "Pareto-Verteilung".
6 See www.ABC-Analyse.info.

Altogether, the ABC Analysis helps to turn the attention to the important category


groups. Hence, the company can take more purposeful steps. It assists in focusing resources on those items that promise to provide the highest returns (prioritization).

To prioritize the parts and components in stock, they are allocated to three categories. The limits for each class are based on experience values of the respective company. That is why they can vary between different companies. The assortment is arranged into class A, class B and class C. A represents those parts that are most important, of higher value or with a higher-than-average contribution to turnover. On the contrary, the C class contains all parts that are less important, being of lower value with only a minor share in the overall turnover of the company.1 The B class parts hold an average relevance. Their value contribution to the turnover corresponds approximately proportionally to their quantity share. Figure 1 clarifies the proportioning.

Figure 1: Classification of stock.

Class    Value contribution    Quantity contribution
A        approx. 60 - 85 %     approx. 10 %
B        approx. 10 - 25 %     approx. 20 - 30 %
C        approx. 5 - 15 %      approx. 70 - 80 %

Source: www.ABC-Analyse.info

These class limits are only guidelines. The spans rather rely on experience values

and refer to past stock allocations. Therefore, the spreads of different companies diverge. When the spreads are charted in a coordinate system, the points form a Lorenz curve as shown in Appendix 1. It depicts the unequal contribution of the categories to the overall stock value and quantity. It clearly shows how a minority of parts has a major impact on the overall value. For applying an ABC Analysis, the literature suggests three stages.

1 See www.ABC-Analyse.info.


In a first step, the criteria to be evaluated are determined. In most cases this is the annual turnover of the respective parts. Then the relevant stock items are listed and sorted in descending order of their annual turnover value.

In the second step, each position's share of the annual turnover is calculated in per cent. Additionally, its share of the total quantity of parts is determined. Then the portions of value and quantity are accumulated.

Thirdly, the parts are assigned to the respective class A, B or C according to the criteria that were decided on in step one.1 Finally, the findings can be presented graphically (see above).
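These three steps translate directly into a short procedure. The following sketch only illustrates the steps described above; the part numbers and turnover figures are hypothetical, and the cumulative class limits of 80 % and 95 % are example choices, since the text stresses that the limits are company-specific.

# Hypothetical annual turnover per part (part number -> value)
turnover = {"P1": 48000, "P2": 21000, "P3": 9000, "P4": 7000,
            "P5": 6000, "P6": 4000, "P7": 3000, "P8": 2000}

def abc_classification(values, a_limit=0.80, b_limit=0.95):
    """Step 1: sort descending by value. Step 2: accumulate the value shares.
    Step 3: assign classes A, B or C according to the chosen cumulative limits."""
    total = sum(values.values())
    ranked = sorted(values.items(), key=lambda item: item[1], reverse=True)
    classes, cumulative = {}, 0.0
    for part, value in ranked:
        cumulative += value / total
        if cumulative <= a_limit:
            classes[part] = "A"
        elif cumulative <= b_limit:
            classes[part] = "B"
        else:
            classes[part] = "C"
    return classes

print(abc_classification(turnover))

Charting the accumulated value share against the accumulated quantity share of this ranking yields the Lorenz curve mentioned above.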

Now that the parts and components are classified, what does this mean for material disposition? On the basis of the classification, several conclusions can be drawn.

A class parts demand the highest attention. Since imprecise planning results in high storage costs and costs due to shortfalls, it is advisable to dispose A class parts demand-driven. The material manager should focus on them. Close monitoring of those important parts can even lead to cost savings due to a more exact disposition. On the contrary, C class parts are to be disposed of on a consumption-driven basis. Here, unneeded workload should be avoided. Whilst for A class parts "just-in-time" disposition might be suitable, C class parts should be purchased at optimal cost, i.e. by exploiting economies of scale. The attention drawn to B class parts should be balanced somewhere in the middle between A parts and C parts. Still, they play an important role. Therefore, a demand-driven disposition might be suitable, albeit combined with a more formal process than for A parts.2

1 See 4managers.
2 See www.ABC-Analyse.info and O.S.E.

3 XYZ Analysis

Another useful tool for the disposition of material is the XYZ Analysis. Sometimes referred to as RSU Analysis, it is similar to the ABC Analysis. However, whilst the latter searches for the most successful products, the XYZ Analysis says something about the consumption of parts in stock. The


XYZ Analysis classifies parts and components according to their usage. Unlike the ABC Analysis, here the criteria are not defined exactly. That means that the XYZ Analysis is very flexible to apply. Possible fields of application are: volume analyses, fluctuation in consumption, and accuracy of prediction. As an example, parts could be categorized according to their fluctuation of consumption.1 This tells whether parts are consumed constantly or rather irregularly. Consumption figures and prediction can even be coupled. Whereas the ABC Analysis uses calculated classifications, the XYZ Analysis is built upon more indirect measures, such as statistics of past withdrawals, the calculation of the variation coefficient, and the normal distribution.2 In materials management, the XYZ Analysis allocates the respective material positions according to their fluctuation of withdrawal, i.e. their variances of stock in a certain period of time. On the basis of these findings, specific patterns of withdrawal can be derived. In turn, these patterns allow for conclusions about and anticipation of future demand and order behaviour. As a result, materials management can decide what quantities to source in what time intervals. So, the XYZ Analysis measures quantitative effects whilst the ABC Analysis deals with the impact of the material on the overall turnover. Figure 2 shows how

the classes X, Y and Z could be characterized and how the degree of fluctuation allows for statements about the predictability of future consumption.

Figure 2: The classification of material into classes X, Y and Z.

Class    Consumption                                                    Predictability
X        Constant; fluctuation rarely occurs                            High
Y        Moderate fluctuation, often for seasonal reasons or trends     Middle
Z        Completely irregular                                           Low

Source: www.ABC-Analyse.info.

1 See Bichler et al. 1992, p. 52.
2 See www.ABC-Analyse.info and www.unister.de.


The capital tied up in stock increases from X class parts to Z class parts. This result can be put into action by thinking about just-in-time purchasing for X parts, since their withdrawal can be predicted precisely. On the contrary, Z parts' withdrawal cannot be predicted easily. Therefore, a certain amount should always be held in order to avoid any shortfalls. The fluctuation of Y parts ranges between these two extremes. Therefore, the average stock level of those parts can be lower than that of Z parts.1

1 See www.unister.de.
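In practice, the fluctuation of withdrawals is often measured by the variation coefficient mentioned above. The following sketch is only illustrative; the monthly withdrawal series and the class limits of 0.25 and 0.50 are hypothetical, as the text does not prescribe fixed thresholds.

import statistics

# Hypothetical monthly withdrawals for three parts
withdrawals = {
    "P1": [100, 102, 98, 101, 99, 100],   # almost constant
    "P2": [60, 140, 70, 150, 50, 130],    # seasonal swings
    "P3": [0, 300, 10, 0, 150, 5],        # completely irregular
}

def xyz_class(series, x_limit=0.25, y_limit=0.50):
    """Classify a withdrawal series by its coefficient of variation."""
    cv = statistics.pstdev(series) / statistics.mean(series)
    if cv <= x_limit:
        return "X"
    if cv <= y_limit:
        return "Y"
    return "Z"

print({part: xyz_class(series) for part, series in withdrawals.items()})
# {'P1': 'X', 'P2': 'Y', 'P3': 'Z'}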

4 Combining ABC and XYZ Analysis

The ABC Analysis and the XYZ Analysis can often be combined. The classes of both methods are mapped in one diagram. For this purpose, the classes A, B and C can, for example, be mapped along the x-axis whilst the classes X, Y and Z are mapped along the y-axis. The resulting matrix would look like this:

Figure 3: A combined ABC/XYZ Analysis.

      A     B     C
X     AX    BX    CX
Y     AY    BY    CY
Z     AZ    BZ    CZ

Source: Biehler et al. 1992, p. 53.

This combined version illustrated in figure 3 enhances the efficiency of a pure

ABC Analysis or a pure XYZ Analysis. It can tell something not only about the disposition, but also about the selection of suppliers and the general order processing. The most remarkable combinations are AX parts and CZ parts. When choosing a vendor for AX parts, his reliability is of utmost importance, since just-in-time ordering would not work otherwise. Additionally, the high importance of those parts should prompt the material manager to check alternative suppliers. The material manager should take time for intense price negotiations. Once a particular vendor has been selected, he should go for long-term partnerships. In doing so, production will not be disturbed by delays in supply.

On the other hand, the opposite is true for CZ parts. For them, simple processes should be in the foreground. Here, frame contracts are sufficient, declaring a certain amount to be retrieved e.g. on an annual basis. Economies of scale should be used. Since those parts do not greatly contribute to the overall turnover, ordering processes for them can be kept simple. The efforts for purchasing should not exceed the low contribution to benefits.1
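Combining the two classifications amounts to concatenating the class letters per part. A minimal sketch with hypothetical class assignments (in practice these would come from the two analyses sketched above):

# Hypothetical results of the two separate analyses for the same parts
abc = {"P1": "A", "P2": "B", "P3": "C"}
xyz = {"P1": "X", "P2": "Y", "P3": "Z"}

# Each part ends up in one of the nine cells of the matrix in figure 3
combined = {part: abc[part] + xyz[part] for part in abc}
print(combined)                    # {'P1': 'AX', 'P2': 'BY', 'P3': 'CZ'}

# AX parts: reliable suppliers, just-in-time ordering, intense negotiations
ax_parts = [p for p, cell in combined.items() if cell == "AX"]
# CZ parts: simple processes, frame contracts, economies of scale
cz_parts = [p for p, cell in combined.items() if cell == "CZ"]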

5 Conclusion

Since no company can afford to waste resources and capacities, cost-benefit analyses are useful means to categorize data. In general, every process can be analyzed regarding its priority and economic significance. Especially in materials management, the ABC and XYZ Analyses are useful. However, these methods can also be used to classify customer groups, sales regions and suppliers. These tools hold a lot of advantages. Not only do they analyze complex problems with comparably moderate effort, they are also relatively simple to apply. The graphical illustration of the findings is clear and easy to understand. Nevertheless, the rather rough classification can be a disadvantage. The ABC Analysis in particular requires the provision of consistent data, which might be difficult to obtain.2 Altogether, the analyses provide a good picture of the actual situation and help to identify important key accounts.

1 See www.ABC-Analyse.info.
2 See 4managers.


SIMULTANEOUS ENGINEERING (SE)

TABLE OF CONTENT

1 INTRODUCTION AND DEFINITION......................................................... 84
2 FUNCTIONALITY OF SIMULTANEOUS ENGINEERING................... 85
3 SUPPLY RELEVANCE OF SIMULTANEOUS ENGINEERING............ 88
4 APPLICATION OF SE IN THE INDUSTRY PRACTICE........................ 91
5 SUMMARY AND CONCLUSION.............................................................. 93


1 Introduction and definition

In many industries, time-to-market has been discovered as a key source of success

and potential competitive advantage during the beginning of the 1990s. In the early stages of the last decade, several studies proved that companies in various

industries were able to considerably reduce their development times by

overlapping the related activities.1 Especially the fact that more than 70% of all product-related costs were already occurring throughout the conceptual stages of product development created a significant potential for improvement and value

creation. It was claimed that with the help of this concept up to 30% in development costs could be saved in comparison to conventional processes.2 The method of overlapping development-related activities as well as the

surrounding organizational activities is usually referred to as simultaneous engineering. It is an approach to product design and development which tries to

bundle the knowledge of design engineers, process engineers and production

engineers with the ideas of other business areas, for example purchasing, marketing, suppliers and customers. This methodology shall enhance the

oftentimes lengthy product development process in terms of time and cost, as well

as in terms of quality.3 Instead of dividing the entire development value chain into

sequential modules, partial processes are to be formed which can be undertaken simultaneously or at least overlapping. While the necessary timeframe in a sequential process is dependent on all partial processes, in simultaneous

engineering this timeframe only depends on the duration of the partial processes on the ‘longest path’ through the whole process network.4 This leads to time savings in the work progress itself, as well as in unplanned ‘backward leaps’,

which are supposed to be eliminated entirely after some time. By having all

related business functions work parallel from the very first development stage, they will also be able to participate with their knowledge much earlier, therefore reducing the need for double work. In practice, the development of parts that

could not be engineered cost-efficiently and had to be re-designed afterwards has always been a frequent cause of distress in sequential processes.

1 See Terwiesch and Loch. 1999, p. 455.
2 See Hahn and Kaufmann. 2002, p. 64.
3 See Saunders. 1997, p. 22.
4 See Bogaschewsky and Rollberg. 1998, p. 106.

In simultaneous


engineering, the work happens in interdisciplinary teams under the close

supervision of a project manager.1 The described problems of the formerly practiced “over the wall approach” should at least be reduced significantly, given

a proper implementation. In addition to that, the concept is supposed to benefit the

company through faster order processing, more transparent processes with shorter

decision paths, lower distress costs and a higher quality. It also consolidates the

knowledge that is still spread across divisions and leads to shorter reaction times on customer needs. The literature on simultaneous engineering additionally often

distinguishes between time and information concurrency. While the former refers to activities that are performed in parallel by different people or groups, the latter

refers to the degree to which those groups share information among each other. And while time concurrency is certainly the prerequisite for a simultaneous

engineering process, one cannot be effective without a large degree of information

concurrency either.4 All in all, the literature claims that after implementing simultaneous engineering,

the best practice companies manage to complete their development process two thirds faster than the average of their peers. On top of that, the best firms also incur 84% fewer construction modifications in the first year of production.5

2 Functionality of simultaneous engineering

The final goal of simultaneous engineering is the integration of R&D, product

design, process planning, manufacturing, assembly and marketing into one

common activity. The centre of this process is formed by the interdisciplinary design and production team, which is responsible for the co-ordination of comments and redesign proposals from all domain experts involved. A crucial success factor at this stage is the degree of communication and information flow among them. From a very elevated point of view, the process can be seen as a sequence of roundtable discussions.

1 See Pepels. 1998, p. 523.
2 See Loch and Terwiesch. 1998, p. 1033.
3 See Pepels. 1998, p. 523.
4 See Loch and Terwiesch. 1998, p. 1033.
5 See Strub. 1998, p. 339.

At first, a conceptual design is forwarded to all domain experts, who then start to comment on and improve it from their


individual domain’s point of view. Throughout several rounds of redesign, these experts conceptualize the product and continually improve it until a broad

consensus is reached. The careful analysis by each domain in the very early product development stage ensures a clear understanding of the production and

assembly process and thus a manufacturable design.1

A less distant perspective on this process shows three elementary guidelines, which direct the design of the new product development process: parallelization,

standardization and integration. Parallelization in this context means shortening or optimizing the process time and first of all involves a concurrency of process steps that are not dependent on each other. In the case of such dependencies, the

dependent step will at least already commence prior to the finalization of its predecessor. However, companies have to be aware that this speed increase will usually come at the expense of higher decision complexity. An increased

information flow across the related divisions and the larger share of uncertain or incomplete information can cause potential distress if not properly managed. To

cope with this added complexity, companies usually have to rely on an adequate

information technology infrastructure that ensures proper synchronization across

the autonomous teams with the help of modern information and communication technology.3
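The time gain from parallelization can be made concrete with a small longest-path calculation over partial processes, as described in the introduction. The sketch below is purely illustrative; the task names, durations and dependencies are hypothetical and not taken from the text.

from functools import lru_cache

# Hypothetical partial processes: (duration in weeks, predecessors)
tasks = {
    "concept":        (4, []),
    "product design": (8, ["concept"]),
    "process design": (6, ["concept"]),            # runs in parallel to product design
    "tooling":        (5, ["process design"]),
    "pilot run":      (3, ["product design", "tooling"]),
}

@lru_cache(maxsize=None)
def finish_time(name):
    """Earliest finish of a task: its duration plus the latest finish of its predecessors."""
    duration, predecessors = tasks[name]
    return duration + max((finish_time(p) for p in predecessors), default=0)

sequential = sum(duration for duration, _ in tasks.values())   # 26 weeks one after another
overlapped = max(finish_time(task) for task in tasks)          # 18 weeks on the longest path
print(sequential, overlapped)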

The standardization of the product development process can in this context be

defined as a permanent description and regulation of different process aspects (technical, procedural, organizational) that are independent of individual persons

or events. This is supposed to eliminate unnecessary repetitions of certain process

steps and create a learning effect from past process experiences.4 Last but not least, the integration of the product development process implies the already mentioned involvement of multiple functional domains in the process. Again, the

more domains become integrated, the larger the number of interfaces that need to be managed and the larger the danger for information or co-ordination


deficiencies at domain interfaces.

1 See Shenas and Derakhshan. 1994, p. 32.
2 See Bullinger. 1997, pp. 15-16.
3 See Bogaschewsky and Rollberg. 1998, p. 178.
4 See Bullinger. 1997, p. 15.

Yet, it is important to ensure a joint effort towards the organizational goal rather than the individual domain objectives.1

As the concept of simultaneous engineering represents a dissociation from the traditional sequential effort, it ideally also involves the introduction of a modified organizational structure. An example of this structure can be observed in figure 1.2

The necessary degree of changes in the institutional arrangement depends on a number of factors such as the sophistication of the product, the organizational

capabilities, the availability of resources, the importance of a short product life-cycle and the company's commitment to the simultaneous engineering concept.

The lowest scale at which it can still be valid merely involves the described

interfaces between the different domain experts and the use of liaisons between

certain stages of production, who act as go-betweens among the different functional domains. However, this approach is far from ideal and will not be able to fully exploit the total potential of the simultaneous engineering concept. A

much more integrated approach would involve the formation of a single department responsible for both product and process and thus really require a

substantial structural change in the company’s organizational structure. This practice would really foster cross-functional learning through the day-to-day work and greatly increase the opportunity for innovative experiments and technological

synergies.3

1 See Bullinger. 1997, pp. 15-16.
2 See Hahn and Kaufmann. 2002, p. 342.
3 See Shenas and Derakhshan. 1994, p. 34.


Figure 1: A simultaneous engineering organisation. The figure shows superordinate decision gremia dividing the project goals into topic group goals (e.g. technology, deadline, quality, environment and costs) and coordinating the overall project; a project leader and team dividing the goals into manageable parts and reporting project progress; topic groups coordinating their workload and deriving solutions autonomously in line with the defined goals; and simultaneous engineering teams drawing on departments and systems suppliers, with status reports, problem solutions, deviations from predetermined specifications and areas of conflict flowing between the levels.
Source: Hahn and Kaufmann. 2002, p. 342.

It also highly relies on the interpersonal and technical skills of the department

members, who obviously stem from a former line division. Therefore, simultaneous engineering often comes with a stronger focus on the importance of human resources. For the concept to be successful, it is of crucial importance that the right people are assigned to the right places and that certain employee skills are cultivated. Active personnel development can help to prepare employees for the changes in their working environment and improve their ability for interdisciplinary work together with their understanding of different domain perspectives. Job rotation can also help in developing employees capable of teamwork. This learning-by-doing approach in addition increases the cross-functional expertise of individual employees and can support a smoother process flow later on.1

1 See Bullinger. 1997, pp. 25-26; Shenas and Derakhshan. 1994, p. 34.

3 Relevance of simultaneous engineering

The increased speed with which new technologies and competition are permanently reshaping the market structure in many industries has forced manufacturers to find ways to deal with new challenges. In product development and design


processes, the OEMs now have to rely increasingly on their vendors. Increased technological sophistication in addition leaves the companies unable to

manufacture complex products without outside knowledge, or at least unable to do so efficiently. Also an increasing need for customized, high performance and

quality parts has elevated the importance of suppliers for the overall company success. Moreover, the introduction of fusion technologies has led to new

challenges for the research and development and design teams of companies involved in high technology businesses. There is an identifiable trend that product development increasingly relies on the suppliers’ R&D and design staff for a significant part of their innovation and development processes, as they simply

lack many of their vendors’ competencies.1 In the light of these facts, supplier selection and evaluation decisions obviously have a very strong long-term effect on the purchasing company’s processes,

relationships and, above all, profitability. Especially the decision for suppliers of strategically important parts with a high dollar-share of the overall product can

have substantial implications for the firm’s ability to generate future profits.

Certain predefined decision criteria as well as the experience of the members of

the simultaneous engineering team can help to generate a list of potential suppliers. These suppliers are then pre-evaluated according to basic criteria, like their product knowledge, development capacity, their expertise in key

technologies, their flexibility and their degree of innovation. Vendor workshops can offer the remaining suppliers a chance to present their case and also provide a ballpark figure of the estimated costs. At the end of this meticulous process, the simultaneous engineering team finally can announce the suppliers they want to

enter a development contract with. This should involve a means of cost control for all suppliers along the predetermined specifications.

1 See Shenas and Derakhshan. 1994, pp. 37-38.
2 See Zsidisin, Ellram and Ogden. 2003, p. 144.
3 See Hahn and Kaufmann. 2002, pp. 343-347.

Taking the previously mentioned factors into account, the reason for engaging in such a time-consuming process should not need much explanation. In the context of a simultaneous engineering process, it should be in the company's best interest to develop a number of systems suppliers, to whom it can grant a certain


degree of own responsibility in the area of product expertise development and the

solution to project specific problems. In many cases, the supplier is even

integrated into his client’s development process and forms a team together with him.1 This fundamental change in the relationship between companies and their

vendors obviously also leads to a much higher interdependence, where suppliers

cannot be replaced very easily. A careless selection process can thus cause much larger problems later on.2

The co-ordination and transmission of technical information will usually involve the formation of cross-functional teams, also across companies. As in a solely internal simultaneous engineering process, this obviously requires the participation of domain experts of both OEMs and their suppliers. Interfacing will

not only take place at the boundaries of an organization, but very often in its centre, depending on the necessary degree of customization and complexity of the product. This close co-operation can lead to better designs with improved quality, manufacturability and ease of assembly. It also helps to reduce the need for time-consuming redesigns afterwards.3

To accomplish those benefits, the requirements (e.g. quality, time, technology) of all parties involved in the development process should be clarified early on. In the described form of a simultaneous engineering organization, these parties would obviously include the organizationally integrated systems suppliers, as can be seen in figure 2. This close and very early integration of suppliers into the simultaneous engineering team helps to reduce complexity, as it firstly limits the number of contacts to a necessary minimum and secondly helps the involved parties to create a common routine together. At the same time, it also benefits both companies due to a significant further reduction of development time and costs (the prime goal of simultaneous engineering), an early integration of customer needs and high-quality products due to constructions and construction processes that are very well thought through.4

1 See Arnolds, Heege and Tussing. 1998, p. 267.
2 See Shenas and Derakhshan. 1994, p. 38.
3 See Shenas and Derakhshan. 1994, p. 38.
4 See Hahn and Kaufmann. 2002, p. 347.


4 Application of simultaneous engineering in industry practice

The simultaneous engineering approach is increasingly becoming an integral feature of modern manufacturing practice in many development processes. On the other hand, firms are also finding it difficult to implement the concept, as there is some resistance to the large degree of organizational change the concept is accompanied by.1 An industry in which simultaneous engineering has been quite strong from the beginning (largely because of the short time-to-market of most Japanese industry players) is the automotive sector, where the overlapping of activities has really managed to speed up development processes, although not always to the same extent. An important success indicator seems to be the nature of the industry, which plays an important part in determining whether the concept can be assumed suitable or not. In the different computer industries, for example, where the approach was also implemented, there are some significant differences across industry sectors.

1 See Shenas and Derakhshan. 1994, p. 30.

While the comparatively stable and mature mainframe and


microcomputer industry firms were indeed able to accomplish considerable reductions in their time-to-market, this could not be observed in more rapidly changing markets like printers and personal computers, where this compression strategy did not provide a noteworthy acceleration. The potential of simultaneous engineering hence also seems to depend on industry circumstances, and the concept cannot be applied uniformly.1

To demonstrate the large practical relevance of this topic, some examples of firms actually implementing the concept will be outlined in the following: when BMW decided to construct their new research engineering site, the 'Fiz centre' in Munich, they designed it specifically in pursuit of simultaneous engineering principles. Its conception allows a first-concept designer to easily discuss the manufacturing practicalities of even an outline design idea with production line engineers. The building design has been based on the assumption that if the physical distance between two design engineers is larger than 150 meters, an easy idea interchange or problem discussion is discouraged. This degree of commitment to simultaneous engineering principles has led to a decrease in the development cycle by two full years.2

1 See Terwiesch and Loch. 1999, p. 456.
2 See Shenas and Derakhshan. 1994, p. 35.

Most extensively, however, simultaneous engineering is practiced in Japan, under

the name 'doki-ka'. It is utilized by a lot of companies, including Sony, Fujitsu and Matsushita, and even goes beyond the mere technical activities to also include marketing. This very different domain often provides the design engineers with new product ideas and also receives a backward information flow on whether a new product can enter the market testing stage or prior ideas cannot be realized. A market leader in this field is the company Toyota Motors. Its central research and development department is responsible for the whole of design, development, pilot production as well as testing of the latest Toyota automobile prototypes. And while pilot production is done by the department itself, the real production facilities are located several miles away. Perhaps the most important aspect of the company's system is the utilization of multi-functional teams along with


task rotation. Here, marketing engineers actively participate in the teams and transmit marketing data to the joint design and production department.1

A prominent example from another industry is the German chemical company BASF, which has introduced simultaneous engineering to speed up its development process for new chemicals. The lengthy process, which could formerly last more than a decade, has been reduced by more than seven years as a result of leapfrogging the pilot stage. BASF's typical linear development approach used to involve several stages from research, process engineering, evolving the process and translating it into production scale to planning, construction and finally production. This process could now be shortened considerably by paralleling the second, third and fourth stages, with more to follow. Although this method obviously requires a rigorous project management effort, the results should really speak for themselves.2

There are obviously companies who manage to utilize the potential of simultaneous engineering better than others. In the automotive industry alone, the Japanese players (especially Toyota Motors) seem years ahead, especially of their American peers. A close look into company practice might show that there are indeed differences, not in the concept itself but in the degree of organizational adaptation. It is apparently the honest cross-functional commitment found at Japanese manufacturers that simply elevates their process efforts to a higher level, not differences in the concept itself.3

5 Summary and conclusion

The simultaneous engineering concept certainly has a very large success potential for many manufacturing companies in times when short development cycles become increasingly crucial. While this is not the primary purpose, the process not only results in shorter product development times, but also in higher-quality products which are produced in a cost-effective manner.4

1 See Shenas and Derakhshan. 1994, pp. 35-36.
2 See Whitworth. 1998, pp. 29-30.
3 See Vasilash. 2001, p. 8.
4 See Ziemke and McCollum. 2001, p. 14.

Through its cross-functional co-operation efforts, the concept also puts other jargon phrases such as


'design for economic manufacture', 'design for assembly' and 'design for quality' into effect and can obviously serve to create a stronger awareness of these important issues in firms that were not really focusing on them before.1 All in all, the concept provides a useful framework within which the required product introduction procedures can be developed. An individually suited solution should adhere to the general principles of simultaneous engineering as much as possible, but also needs to find a way of tackling the new issues that these procedures create, like a much larger degree of complexity and a higher need for a modern technology infrastructure.2

But while a lot of companies manage to successfully adopt the concept and benefit from the merits of simultaneous engineering, others are facing difficulties in implementing its techniques. Especially the fact that an implementation of this

approach requires a stronger focus on new and innovative human resource management and a significant organizational redesign makes the process

dangerous for firms that are not really committed to the concept.3 Here, the overlapping activities can also come at the expense of development rework,

especially if the involved uncertainty is not resolved early during a project. Such rework may in some cases certainly even outweigh the benefits of concurrent task execution. As discussed before, simultaneous engineering is also not uniformly

suited for all industries and should probably not be applied in those with a large degree of uncertainty and which are subject to permanent and rapid change.4

With regard to the supply relevance of the concept, there are also some beneficial side effects of its implementation. The concept forces many manufacturers to reconsider their purchasing organization, its processes, the related education and motivation of employees, and especially their supplier relationships. The necessity to increasingly integrate suppliers into the development process, as opposed to their traditional role in companies, creates a much larger awareness of these kinds of issues in firms.5

1 See Saunders. 1997, p. 153.
2 See Hindson, Kochhar and Cook. 1998, p. 258.
3 See Shenas and Derakhshan. 1994, p. 33.
4 See Terwiesch and Loch. 1999, pp. 455-456.
5 See Strub. 1998, p. 362.


All in all, simultaneous engineering certainly provides a good conceptual framework with a very large upside potential that has already been captured by a multitude of companies in different industries around the world. As is probably

the case with most such concepts, it can obviously not provide a guarantee for success and firms will still have to put a lot of commitment into it before being

able to really make use of this potential.1

1 See Vasilash. 2001, p. 8.


TOTAL COST OF OWNERSHIP (TCO)

TABLE OF CONTENT

1 INTRODUCTION......................................................................................... 97
2 THEORETICAL FRAMEWORK................................................................. 97
3 TCO IMPLEMENTATION.......................................................................... 100
4 CASE STUDY: THE REAL COST OF LINUX..........................................103
5 CONCLUSION.............................................................................................104


1 Introduction

The total cost of ownership is one of the most important topics managers consider in the field of purchasing management today. Cost is a measure of resource consumption. Resources themselves are sometimes hard to define and measure and hence cost can be used as a useful proxy for them. Therefore, cost is usually a key decision variable. It reduces the issue of resources to a common metric.1

"Total cost of ownership represents an innovative philosophy aimed at developing an understanding of the "true cost" of doing business with a particular supplier for a particular good or service. TCO looks beyond price to include other major cost issues which affect critical purchases."2 Acquisition cost is just one component of the total cost of a product, material, or service. It is in the utmost interest of the company not to overemphasize the acquisition cost and ignore other significant ownership and post-ownership costs. The total cost of ownership requires a real understanding of all supply chain-related costs of doing business with a particular supplier for a particular service or good.3 The total cost of ownership concept helps companies to establish cash requirements for an operation or project on a long-run basis, helps them to estimate the revenue required for project success and to determine strategies such as make-or-buy decisions, choice of process, design, technology and acquisition/selling strategies. The objective of this overview is to discuss the total cost of ownership (TCO) concept, the types of TCO analysis and the potential bases for classifying total cost of ownership models.

1 See http://msll.mit.edu/356_1998/costl/CostModeling01a2.htm.
2 See Ellram. 1994, p. 1.
3 See Ellram. 1994, p. 161.

2 Theoretical framework

The total cost of ownership is the sum total of all costs associated with the acquisition, use and maintenance of a product or service and not just the purchase

price. “The TCO concept requires firms to consider the activities they undertake

that cause them to incur costs. By analyzing flows and activities within each process, a firm can identify which activities add value, and which do not. Further, a firm can determine explicitly which activities it performs and pays for

internally, versus activities performed by others that increase the price of


purchased goods and services. Thus, a firm can identify the true cost of any

activity, rather than simply the costs allocated or paid externally for that activity."1

Figure 1: Purchasing activities contributing to the total cost of ownership.
Source: Ellram and Siferd. 1993, p. 164.

TCO and NPV concept

The TCO concept can be expressed in terms of the Net Present Value (NPV) concept. The total cost of ownership is the sum of the acquisition cost and the net present value of the future costs incurred in the maintenance, handling and use of the product or service. TCO, like NPV, requires an analysis of the holding period of the asset.2

TCO = A + P.V. x (Ti + Oi + Mi + Sn)

where

TCO = Total cost of ownership
A = Acquisition cost
P.V. = Present value
Ti = Training costs in year i
Oi = Operating costs in year i
Mi = Maintenance costs in year i
Sn = Salvage value in year n

1 See Ellram and Siferd. 1993, p. 164.
2 See Burt and Dobler. 2003, p. 171.
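Read as a year-by-year discounting of the cost elements listed above, the formula can be sketched in a few lines. The figures below (a five-year holding period, a 10 % discount rate and the individual cost streams) are purely hypothetical, and the salvage value is treated as a recovery in the final year, which is one common reading of the formula.

def tco(acquisition, training, operating, maintenance, salvage, rate):
    """Acquisition cost plus the present value of yearly training,
    operating and maintenance costs, less the discounted salvage value."""
    years = len(operating)
    pv_costs = sum((training[i] + operating[i] + maintenance[i]) / (1 + rate) ** (i + 1)
                   for i in range(years))
    pv_salvage = salvage / (1 + rate) ** years
    return acquisition + pv_costs - pv_salvage

# Hypothetical machine purchase over a five-year holding period
print(round(tco(acquisition=100000,
                training=[5000, 1000, 0, 0, 0],
                operating=[12000] * 5,
                maintenance=[2000, 2000, 3000, 3000, 4000],
                salvage=15000,
                rate=0.10), 2))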


Benefits of TCO analysis

TCO is beneficial in understanding and managing purchasing costs. It can be used to measure the performance of firms and suppliers and lays a good foundation for evaluating suppliers. It is an excellent tool for benchmarking and a concrete way to measure the results of quality improvement efforts. TCO analysis can support efficient decision-making processes such as supplier selection decisions. It is an excellent communication vehicle between firms and suppliers and a way to get other functions involved in purchasing decisions. TCO analysis also provides excellent data for trend analysis on costs and critical data for target pricing. It creates awareness of the most significant non-price factors that contribute to TCO and identifies where suppliers should focus improvement efforts. Lastly, TCO analysis helps identify cost-saving opportunities and forces firms to look at internal issues, i.e. how their own requirements and specifications may actually increase costs.1

Barriers to TCO implementation

There are several issues and difficulties that companies face when implementing TCO. The first critical issue is that the firm must understand and manage its costs; this requires a deep understanding of the cost structure and the projection of future costs into the present. Secondly, it is difficult to obtain accurate information for the pre-transaction, transaction and post-transaction cost components. Thirdly, firms need to define where to begin their TCO efforts: they have to choose between starting with one item, a family of items, or items that fit into different buying categories, such as a component or a piece of capital equipment. Fourthly, it also has to be decided how and where TCO will be used in the organization, i.e. whether it will be a tool reserved for critical items or more widely used.

Apart from the above-mentioned barriers to TCO implementation, there are cultural issues that relate to general resistance to change and the ‘not invented here’ syndrome. TCO implementation also requires education and training of purchasing managers, including providing them with the necessary tools to use and understand TCO. Resource allocation is also one of the biggest issues in TCO implementation. The major resource problem is a lack of computer systems to support TCO efforts. TCO is thus labor-intensive and requires many man-hours for implementation, which can initially create unrest and a high level of frustration among managers.1

1 See Ellram. 1994, p. 180. 2 See Cavinto and Kaufmann. 1999, p. 493-495.

3 TCO implementation

There are two approaches to performing TCO analysis, namely a one-time project analysis and a computerized, ongoing system. The former is used to support a specific decision-making action. “Decisions that are well supported by this one-time project analysis are outsourcing, reducing supply base, forming alliances, looking for areas for improvement, selecting key suppliers. This approach is used more commonly than the other type of TCO analysis because it is customized to fit the specific situation.” The other type of TCO analysis is the ongoing computerized system. It is ideal for analyzing recurring purchases, such as raw materials, packaging and supplies.

There are numerous other ways in which TCO analysis can be classified. Total cost of ownership models can be classified based on a number of factors such as corporate culture, the reason for developing a TCO approach, the importance of various types of purchases to the firm, the firm’s experience in implementing TCO, the complexity of the purchased items of interest, the variability of purchased items within a purchase category, the availability of computer systems to support the model, and what decisions the model will support. As the complexity of the items goes up, the TCO also increases.

TCO in manufacturing

The total cost of ownership includes a variety of costs lumped together and allocated based on production units, labor hours or some other factor. In the case

of product costing, the TCO includes costs such as storage costs, labor costs, delivery costs, tariffs or duties, and other costs based on activities involving the product such as order placement, supplier search and qualification, inspection, warehousing etc. TCO includes costs based on the relative importance or magnitude of those costs for the items purchased.1 In the case of service firms, which provide “intangible” products to satisfy human needs, the total cost of ownership includes costs such as capital equipment procurement costs and costs of products and services, as well as salaries, wages and benefits such as health and life insurance provided to their employees.2

1 See Ellram. 1994, p. 172. 2 See Cavinto and Kaufmann. 1999, p. 489-490. 3 See Ellram. 1994, p. 173.

TCO in retail businesses

“Retail businesses sell a product that often must be ordered, received,

inventoried, sold and perhaps, delivered to the customer. The choice of a system that facilitates the processes involved in inventory ordering and turnover will

influence the total cost of inventory ownership. Many major retailers have empowered select suppliers to manage their product inventory for them, thus reducing purchasing overhead and inventory carrying costs without necessarily increasing the product cost. Embracing the Just-in-Time philosophy is another

way to improve quality, cost and time while reducing the total cost of ownership. Lowering the cost of goods sold and the overhead costs associated with procurement, inventory carrying costs and sales improve the bottom line. It is

often easier to lower costs than to increase sales in a competitive business environment.”

TCO in supply chain/supply network

“Even a supply management professional or organization can apply the

philosophy and practice of TCO to the strategic optimization of costs within the supply chain.”4 In TCO analysis for the entire supply chain, the cost involved in

each phase or part of the supply chain has to be considered and added in the TCO.

TCO includes not only ownership costs but also post-ownership costs. Ownership costs are those costs that are associated with the ongoing use of a purchased product, material or item of equipment after the initial purchase. The ownership costs include costs that are both qualitative and quantitative in nature. Quantitative costs are costs such as energy usage, downtime, scheduled maintenance, repair and financing. Qualitative costs are costs that are difficult to quantify but are important in the decision-making process. These are costs like ease of use, aesthetics (psychologically pleasing to the eye) and ergonomics (maximizing productivity, reducing fatigue).

1 See Ellram. 1994, p. 171. 2 See Burt and Dobler. 2003, p. 162. 3 See Burt and Dobler. 2003, p. 162. 4 See Burt and Dobler. 2003, p. 162.

The other costs that are included in the total cost of ownership in

the entire supply chain are downtime costs (costs due to the reduced production volume and idle resources), risk costs (risks when purchasing from new suppliers,

using new materials, processes or equipment), cycle time costs (reducing new

product’s time to market can increase profitability of the firm), conversion costs

(buying wrong material whether in quality, form or design can increase the cost of conversion), non-value-added costs (maintaining complex and tedious operating

procedures, duplication of efforts, increased wastage), supply chain or supply network costs (costs due to the ins and outs of material flow from one organization to another, leading to wastage) etc.
The post-ownership costs are the costs that could be estimated as cash inflows (such as the sale of used plant and equipment) or outflows (such as the demolition of an obsolete facility). The post-ownership costs also include costs such as

environmental costs (costs due to the damage to the environment), warranty costs (poorly designed product has to be repaired or replaced by the manufacturer to the customer), product liability costs (liability due to poorly designed or produced goods) and customer dissatisfaction costs (negative publicity frequently results in

lost sales and loss of customer base).1

TCO in IT industry

With the growth of information technology, TCO is becoming an important concern for IT purchasing managers. There are many hidden and ongoing costs that frequently are not considered when an IT project is originally proposed. The TCO concept tries to overcome this problem and evaluates the total cost of the IT project to the firm. Most of the time, it is easy to calculate the purchase price of the hardware or the license fees for the vendor's software. IT projects, however, typically also include the costs of conversion to the new hardware or software. The Gartner TCO model2 utilizes two major categories to organize costs, namely direct costs and indirect costs. Direct costs are the costs that are related to the visible IT and support investments and expenses. Examples are hardware and software costs (the initial purchase or lease costs, operations, storage, network equipment costs etc.), operation costs (including the labor costs for technical operations and support as well as the help desk), and administrative costs (including an appropriate allocation of finance, HR, administration and procurement department costs). Indirect costs are those costs that are less visible and are usually dispersed across the business operations and organization. The indirect costs include end-user operation costs (costs incurred when individuals gradually evolve to become part of the support structure) and downtime costs (which occur when end users are interrupted in their regular work because things break or something goes wrong with the system).1

1 See Burt and Dobler. 2003, p. 166. 2 See Bensberg. 2003, p. 1.
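A minimal sketch of this direct/indirect split is shown below. The cost categories and all figures are assumed example values loosely following the categories described above; they are not data from the Gartner model or the cited sources.

```python
# Illustrative grouping of IT costs into direct and indirect buckets
# (all category names and amounts are assumed example values).

direct_costs = {
    "hardware_and_software": 250_000,   # purchase/lease, storage, network gear
    "operations_and_support": 120_000,  # technical operations, help desk labor
    "administration": 40_000,           # finance, HR, procurement allocation
}

indirect_costs = {
    "end_user_operations": 60_000,      # peer support and self-support time
    "downtime": 35_000,                 # lost productivity when systems fail
}

direct_total = sum(direct_costs.values())
indirect_total = sum(indirect_costs.values())
tco = direct_total + indirect_total

print(f"Direct costs:   {direct_total:>9,}")
print(f"Indirect costs: {indirect_total:>9,}")
print(f"Total cost of ownership: {tco:,} "
      f"({indirect_total / tco:.0%} of it in the less visible categories)")
```

The point of the sketch is simply that the less visible categories can make up a substantial share of the total, which is why a purchase-price comparison alone understates IT costs.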

4 Case study: The real cost of Linux

“After years of experimentation with Linux in the enterprise, customers, analysts

and vendors are starting to sing a consistent tune about where Linux makes financial sense and where it doesn’t.” The question is whether the TCO of Linux is really lower than that of Unix or Windows. The answer is that the more fully an enterprise adopts Linux across its infrastructure, the more financial leverage it is likely to get out of upfront investments in the operating system. Laef Oslon, an economist, decided to build a TCO model for Linux evaluation. It was indeed a very tedious task to break Linux TCO down because of the fluid

environment. After the TCO modeling, it was found out that financial benefit of

switching to Linux from Unix or Windows is driven by four main cost categories:

acquisition, migration, management, and support. The acquisition costs are mainly the hardware and software costs. On both the hardware and software side, an often overlooked cost advantage of Linux is the flexibility it provides in terms of future

migration. Migration costs include code that may have to be rewritten, data that must be migrated, integration work to back-end systems and software that must be purchased to replicate a capability that already exists on the platform Linux is

displacing. The biggest cost in most Linux TCO studies is the staffing required for

ongoing operational systems management.

1 See Bailey and Heidt. 2003, http://www.darwinmag.com/read/! 10103/question74.html.

The final major cost item in the debate over Linux TCO is support. Linux proponents claim Linux support is cheaper and

available from a more diverse vendor population, and that Linux machines run for years without so much as a reboot anyhow. There are other contingent costs

associated with Linux’s ownership and development path, such as the risk that the Linux upgrade path will fragment as vendors develop proprietary flavors. The following table shows where and when switching to Linux makes more sense based on TCO analysis:1

Figure 2: Decision for Linux
Switching to Linux... Now that Linux has an enterprise record, technologists can evaluate when and where migration makes sense.

Cost factor | What to consider | Biggest winners with Linux
OS acquisition | What's included in the Linux distribution? What's the cost of the full stack of software on an apples-to-apples basis? | Rapidly growing enterprises benefiting from lower acquisition costs per machine
Hardware acquisition | Can you save money by replacing big Unix boxes with cheaper Intel clusters, thereby avoiding "hardware lock-in"? | Current Unix shops, rapidly growing enterprises, and horizontally scaled environments
Support | Is the support offered by your key ISVs on Linux as strong as their support for other platforms? Can you use "Google service"? | Enterprises with less complex support needs or greater internal Linux expertise
System administration and management | Are key tools you need (management, back-up, etc.) available on Linux? Will you benefit from management economies as you scale? | Unix shops with a built-in base of Linux-ready talent, and enterprises with a multitude of OS platforms that can be consolidated
Security-related costs and risks | What is the implied security cost - in risk and in patch management resources required - of each platform? | Enterprises switching from patch-intensive Windows environments
One-time migration and transition | Is a major migration inevitable? What will it cost to retrain? | Shops already running their key applications on application servers that enable easier application migration
One-time or ongoing integration | What will it cost to integrate existing Unix, Windows, or legacy apps into the Linux environment? | Enterprises with a larger Unix rather than Windows applications base

Source: Margulius. 2003, p. 40

5 Conclusion

The TCO approach has indeed gained a lot of importance from the purchasing

managers in the recent years. “The TCO approach represents a progressive, systematic approach to understanding, analyzing, managing, and reducing the

total cost of ownership of materials, services, capital goods or any purchased item.”1 The TCO concept can be easily linked to the Net Present Value concept.

1 See Margulius. 2003, p. 37-42.

Though TCO analysis has major advantages such as efficient decision making, cost reduction, benchmarking and quality control, the implementation of TCO is not easy. It has to be widely accepted and requires a change in corporate climate, which might be resisted by managers. It also requires education and training of purchasing managers, apart from identifying each cost associated with the system, which can be highly tedious and time-consuming. Resource allocation might also be an important barrier to TCO implementation. TCO analysis has become an active part of everyday decision making in almost all industries today. It has a wide application in the retail, manufacturing, supply chain and IT industries. The

complexity of TCO models used in any industry depends on the corporate culture, the reason for developing a TCO approach, the importance of various types of

purchases to the firm, the firm’s experience in implementing TCO, the complexity

of the purchased items of interest etc. The case of Linux illustrates an ideal application of TCO analysis to the real world. It clearly brings out the situations and industries where switching to Linux makes more sense based on TCO

analysis.

1 See Ellram and Siferd. 1993, p. 183.


The philosophy of Total Quality Management

Total Quality Management (TQM) is a “generic title for the process of quality improvement”1 which emerged from the extensive attention the issue received during the 1970s and 1980s. The TQM wave, which originated in Japan, has been largely influenced by the findings of W. Edwards Deming who, already in the 1950s, emphasized that quality and productivity are not mutually exclusive. He further argued that bad quality in the form of defective products or poor service is both expensive, as it requires inspection and leads to scrap, rework or even legal suits, and unnecessary, as it is grounded in avoidable failures in the system. Other quality gurus such as Philip B. Crosby, who defined zero defects as the only performance standard, Armand V. Feigenbaum, who stressed the importance of aligning production with customer requirements, and Joseph M. Juran, who emphasized the understanding of quality as fitness for use, further influenced the philosophy of TQM.3

A universally accepted definition of TQM does not exist, as every company

interprets it in a slightly different way. It can, for example, be referred to as “A management approach to an organization centered on quality, based on the participation of all its members and aiming at long-term success through customer

satisfaction, and benefits to the members of the organization and to society”4 or,

more generally speaking, as “A way of managing an organization so that every job, every process, is carried out right, first time and every time“.5 The declared

aim is to achieve perfection for each single step in the production process by

continuously improving the degree of quality perceived by the customer.6 Three

general tenets constitute the pillars of TQM. Firstly, product improvement has to be regarded from a customer perspective. Similar to the Just-in-Time supply chain under the paradigm of lean manufacturing, the quality chain links all subsequent

steps of the process, whether internal or external, and thus constantly provides

1 Sriparavastu and Gupta. 1997, p. 1215. 2 See Oxford University Press. 2004, pp. 94-96 3 See Lysons and Gillingham. 2003, p. 209. 4 Dobler and Burt. 1996, p. 452 5 Lysons and Gillingham. 2003, p. 206 6 See Dobler and Burt. 1996, p. 452; Lysons and Gillingham. 2003, p. 206; Oxford University Press. 2004, p. 94


feedback from the demand side. Consequently, each activity is permanently

adapted to the true requirements of the direct and, via the feedback loop, eventually the final customer. The second tenet refers to the Japanese concept of continuous improvement, Kaizen, and claims that responsibility for this process of

product quality improvement shall be shared among employees at all levels of the organization. Quality consciousness becomes a fundamental element of corporate

culture which implies that quality training and feedback are provided for all employees. In this context, Deming emphasizes that all barriers between

departments have to be broken down in order to commonly detect, discuss and solve quality problems. Thirdly, a management information system has to be

created in order to assure adequate planning, control and performance evaluation with regard to the quality processes.1 Even though, in the first place, TQM concerns the interaction between different production steps within the

organization, it is particularly significant for the relationship with the supplier and thus for the purchasing function. Whenever a company implements TQM, the same quality requirements that apply internally also have to be met by all suppliers and the three tenets have to be extended to the whole supply chain. Under the TQM paradigm, the supplier becomes a long-term partner who shares

the goals of the buying company and who assures the conformance to the required quality standards without inspection or rework. According to Deming, the costs of

noncompliance are so important that an exclusive long-term supplier relationship assuring adequate quality leads to minimum total costs, although the buying firm no longer awards business based on the price tag.2

2 The application of the TQM concept as a supply technique

In contrast to the traditional view that quality problems originate from the shop

floor, TQM grounds on the understanding that a holistic approach to the improvement of the company’s products and processes is necessary to achieve

quality and productivity excellence.3 A top-down approach with the permanent involvement of top management in the implementation as well as in the resulting Kaizen process therefore represents one of the crucial prerequisites for the successful implementation of TQM.1

1 See Lysons and Gillingham. 2003, pp. 206-210; Oess. 1993, pp. 89-93; Schmidt and Finnigan. 1993, pp. 4-9. 2 See Lysons and Gillingham. 2003, p. 210; Roethlein and Mangiameli. 1999, pp. 71-73. 3 See Sriparavastu and Gupta. 1997, p. 1216.

TQM represents the philosophy of

establishing a quality culture in the organization rather than an isolated tool. It usually implies the formation of cross-functional quality teams and requires that every member of the organization has to constantly evaluate every activity in the production process he or she is concerned with. Thereby each restriction to

customer value creation has to be detected and its causes have to be identified.

The next step refers to the notion of continuous improvement and aims at the

generation, evaluation and implementation of possible solutions to each problem. The permanent move through Deming’s Plan-Do-Check-Act cycle assures that the new processes are constantly monitored after their implementation and that

continuous adaptations lead to constant improvement. When an activity has

reached the state of perfection, it becomes the new standard procedure. By

continuously evaluating every step in the process, the overall quality of products and services is incessantly improved. The effective application of TQM requires extensive cross-functional communication and cooperation along the production chain in order to create full awareness of the customer requirements as well as a high degree of empowerment in order to allow employees to take immediate

corrective actions when a problem occurs. Responsibility for process quality is delegated to self-managed teams who are assigned specific goals on the basis of

customer needs and who receive an ongoing training in quality-related issues. The creation of these ad hoc problem-solving teams represents one of the most

commonly used techniques with regard to TQM. Another widespread tool is the constant benchmarking of processes with best practice organizations in order to learn about alternative processes and to formulate quality goals. Continuous

evaluation of all activities allows management to detect and solve problems before they become major crises.4 The TQM concept is not only applicable with

regard to internal processes, but it is particularly important in view of supplier relationships. Vertical disintegration and Just-in-Time supply have created a 1 2 3 4

See Brown. 1992, pp. 151-152; Douglas and Judge. 2001, p. 158; Kanji and Barker. 1990, p. 376. See Oess. 1999, pp. 94-96; Sila and Ebrahimpour. 2003, p. 236; Stewart. 1993, p. 78. See Giordan and Ahem. 1994, pp. 34-36; Oess. 1993, p. 102. See Oxford University Press. 2004, pp. 112-116.

Page 110

SMI EUROPEAN BUSINESS SCHOOL

MPT IV MANAG» MINI

I NS UI I H

Jnumwlmnal Voiomi«« JwhlnS K«-S>jtn»n«.u-.n

situation in which competition takes place between supply chains rather than

between single companies. Consequently, the dependence on the supplier, particularly in terms of quality, has increased. Crosby estimates that the average

manufacturing firm spends 55% of sales revenues on purchased goods and that

suppliers are responsible for at least 50% of product-related quality problems. He therefore emphasizes that supplier quality management is absolutely crucial to the TQM concept and that poor supplier quality can quickly undermine the whole TQM strategy of a company. Given the high costs of noncompliance, e.g. loss of

reputation, the focus has to be set on the prevention of any defects. The reliability

of suppliers is particularly important, as the increasing complexity and specificity of supplied inputs - in many cases the production of whole subassemblies or even entire finished products is outsourced - lead to a situation in which manufacturers

have to create close long-term relationships with few suppliers. Extensive communication and cooperation with suppliers are an important prerequisite for

the application of TQM, as they assure that customer requirements are constantly understood and addressed along the complete supply chain and that the suppliers

are able to react flexibly to changing expectations. The consistent conformance of supplied products with customer requirements has to be assured for current

supplies as well as for the future. Quality thereby does not only refer to product attributes, but the supplier also has to perform in terms of delivery, after-sales

service and cost management.1 Every member of the supply chain should be

continuously trying to contribute to an optimization of the total process, no matter where the problems occur. Based on the application of TQM principles to supplier

relations, Wong developed a model for achieving supply chain management excellence (figure 1).2

1 See Malomy and Kassebohm. 1994, p. 349; Monczka et al. 2002, pp. 267-271; Spencer and Loomba. 2001, p. 689; Wong. 2003, p. 151. 2 See Wong. 2003, p. 151-156.


Figure 1: Supply Chain Management Excellence Model

Source: Wong. 2003, p. 152.

However, the model indicates that close cooperation is only a prerequisite for

quality improvements. According to a study among purchasing managers, the perceived performance of suppliers in terms of responsiveness to changes, product

quality and delivery is rather average than close to excellence. Consequently,

continuous monitoring and improvement of supplier quality has to be conducted in order to reach perfect quality which is the declared goal of the TQM paradigm.1 An important condition for successful supplier quality management is the explicit,

comprehensible and mutually acceptable specification of quality requirements, as most supplier quality problems originate from misunderstandings. Furthermore, a

manufacturer expecting high supplier performance shall, in return, be a good

customer which means that purchase orders have to be clear and complete,

specifications and volumes should remain stable after the order has been sent out and payment has to be timely and reliable. In order to trigger the actual process of continuous improvement in supplier quality, it is firstly suitable to rationalize the

supplier base to an optimal size where all remaining suppliers are high performers and optimal supply chain coordination is possible, whereas dependence is still

moderate. In a second step, the purchasing department has to develop a

measurement process for supplier performance in order to assess the necessity of

1 See Monczka et al. 2002, p. 270.


improvement, to detect concrete improvement potentials and to select and support

the best suppliers. The assessment, which constitutes the foundation for all further TQM activities, can be conducted in different ways, for example with monthly qualitative assessments including site inspections or through the permanent

calculation of the total costs of noncompliance. Based on the performance measurement as well as on benchmarking activities, the purchasing department

has to determine aggressive improvement targets for the suppliers in order to get rid of remaining weaknesses and to achieve faster progress than competing supply

chains. The targets thereby have to be challenging but still achievable. In most

cases, the buying company sets the same standards for its suppliers as it does in its internal TQM efforts. Permanent performance measurement and benchmarking lead to ever rising objectives and hence induce the achievement of the final goal of continuous improvement. Successful improvement by the supplier shall

moreover be rewarded by the buying manufacturer. If no reward is granted, suppliers might not share their accomplishments but keep the resulting benefits

for themselves. Chrysler implemented an online tool which allows suppliers to suggest improvements and to benefit from the ensuing savings. Besides giving the supplier a share of the benefits, the buyer can also use longer contracts, higher purchasing volumes, access to technology or any kind of award as an incentive for

quality improvement. Granting awards goes hand in hand with the certification of

supplier quality. Certification indicates that the supplier’s performance complies with specified standards which usually implies that incoming goods do not need to be inspected. Many large purchasing departments have established their own certification systems. However, as this forces suppliers to conform to different

requirements with regard to different customers, independent quality auditing and certification systems have been developed. The most widespread auditing

framework is the ISO 9000 series, which represents a required criterion within the European Union. The series is composed of three certifications, ISO 9001, ISO

9002 and ISO 9003, whereby ISO 9003 is the least restrictive. The ISO 9000

standards are helpful for a purchasing department to assure a certain minimum input quality and to choose among a range of suppliers. However, as they are

limited to some dimensions of the TQM concept and include only process

requirements, the framework can only be seen as a necessary but not as a


sufficient condition for the realization of TQM. In the United States, the Malcolm Baldrige National Quality Award goes much further in specifying quality

requirements. It is based on the notion of continuous improvement and comprises seven separate categories of criteria. Due to their completeness, the criteria of the

award have become an operational definition of TQM, but, as the implementation process of an appropriate quality system takes on average eight to ten years, the

number of applications for the award is constantly decreasing. Despite the

shortcomings of the certification systems, it can be concluded that initial certification and constant reassessment are central to a supplier TQM system.

Further steps in the TQM process are the active support of supplier development by providing knowledge or sharing resources and the involvement of key

suppliers in the whole product development process. These points refer to the understanding that, under the supply chain management paradigm, the buyer

purchases the capabilities of the supplier for specified production rather than a

concrete existing product. This implies that the whole production process becomes a common effort of both actors. The buyer helps manage supplier capabilities and the supplier contributes to the product design. Early involvement in the form of cross-functional design teams avoids a misfit between product

specifications and supplier capabilities. This firstly allows a maximization of

quality by including all value-creating capabilities into the design and secondly assures that the specifications are achievable and that no subsequent readjustments

will be necessary. Supplier contribution to product design is particularly

important, as most product specifications are already determined in this phase and thus early involvement is significantly cheaper than eventual noncompliance. The elements of the described TQM process are illustrated in figure 2.1

1 See Lysons and Gillingham 2003, p. 218 & pp. 238-245; Monczka et al. 2002, pp. 268-291.


Figure 2: The process of achieving high supplier quality (the figure plots procurement/supply chain management activity against the expected rate of quality improvement, from gradual to accelerated)
Source: Monczka, Trent and Handfield. 2002, p. 278.
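The measurement step described above (monthly qualitative assessments or a permanent calculation of the total costs of noncompliance) can be operationalized quite simply. The sketch below combines a weighted supplier scorecard with a rough cost-of-noncompliance figure; the criteria, weights and all numbers are assumptions made for illustration and do not come from the sources cited in this chapter.

```python
# Minimal sketch of a supplier quality assessment: a weighted scorecard plus
# a simple cost-of-noncompliance figure. All categories, weights and numbers
# are illustrative assumptions.

def weighted_score(ratings, weights):
    """Aggregate 0-100 ratings per criterion into one weighted score."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

def cost_of_noncompliance(defective_units, unit_rework_cost,
                          late_deliveries, cost_per_late_delivery):
    """Sum the avoidable costs a supplier caused in a period."""
    return (defective_units * unit_rework_cost
            + late_deliveries * cost_per_late_delivery)

weights = {"product_quality": 0.4, "delivery": 0.3,
           "responsiveness": 0.2, "cost_management": 0.1}
ratings = {"product_quality": 82, "delivery": 74,
           "responsiveness": 90, "cost_management": 68}

score = weighted_score(ratings, weights)
noncompliance = cost_of_noncompliance(defective_units=120, unit_rework_cost=35.0,
                                      late_deliveries=4, cost_per_late_delivery=1_500.0)

print(f"Supplier score: {score:.1f} / 100")
print(f"Cost of noncompliance this quarter: {noncompliance:,.0f}")
```

Tracked over time, such figures are what allow the purchasing department to set the "aggressive but achievable" improvement targets discussed above.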

In his supplier quality assurance approach, Van Weele largely addresses the same ideas. However, he argues that, prior to placing the order, all potential suppliers have to be assessed on the basis of samples and preproduction series and that

long-term contracts, together with a quality certification, should only be granted if

the standard of zero-defects is attained.1 He moreover determines several prerequisites which have to be fulfilled within the purchasing department in order

to induce the TQM process described above. Before the suppliers are integrated into the process, objectives like the quality standards and selection criteria, the

number of suppliers and the number of certifications granted have to be agreed upon internally. Furthermore, the purchasing department has to take the internal responsibility for supplier quality and to be willing to provide constructive

feedback to the suppliers after delivery in order to allow them to take actions for improvement. It can be concluded that the application of the TQM principle has to be adapted to the specific circumstances of the company. Still, although the definition allows a variety of different approaches, a certain set of activities is inherent to most TQM programs. Despite the variability of the concept, it has to

be stated that the underlying characteristics customer focus, top management

involvement, delegation of quality-related responsibility to every member of the organization,

superiority

of prevention

over

detection

and

continuous

improvement have to be taken into consideration for every TQM project.3

1 See Van Weele. 2002, pp. 194-201. 2 See Van Weele. 2002, pp. 201-203. 3 See Kanji and Barker. 1990, p. 376.


3 Application of Total Quality Management in practice

The study conducted by Sriparavastu/Gupta among U.S. manufacturing firms

indicates that, in 1997, 62.8% of the respondents had a TQM programme in place. Another 5.2% had implemented one but abandoned it later (figure 3).1 TQM can thus be interpreted as an almost ubiquitous concept in today’s production

organizations. However, it has also been successfully transferred to service industries, government agencies, health-care organizations and education.

Figure 3: The application of the TQM principles in U.S. manufacturing firms
Source: Sriparavastu and Gupta. 1997, p. 1222.

The City of Virginia Beach has applied the principles of TQM in order to become

more oriented to visitors’ needs. Sadikoglu’s examination even argues that implementation success and acceptance of TQM are totally independent from the

industry. Small companies, however, often do not use the whole bunch of TQM principles but adapt the concept to their specific situation by picking only some ideas.2 Although TQM has been invented by Japanese companies, European and

American organizations followed soon and today the concept is applied all over

the world. TQM is not used at a specific point in time, but it represents a concept which has to be embedded into corporate culture and into every supplier relationship over time. Once it is implemented, TQM becomes a permanent

philosophy which underlies every activity of the company. Although competitive pressure and customer dissatisfaction are often the motivators for TQM

1 See Sriparavastu and Gupta. 1997, pp. 1219-1223. 2 See Sadikoglu. 2004, pp. 364-366; Spencer and Arvinder. 2001, pp. 689-692; Stewart.1993, pp. 78-79.


implementation, the concept can be established at every point in time in every company exposed to any product or service quality expectations.1

Case example: Whirlpool2

In 1993, the world leader in household appliance manufacturing and marketing, Whirlpool, decided to implement TQM principles. Besides internal changes, this

decision particularly concerned Whirlpool’s suppliers. Whirlpool defined its

strategy as selling world-class products which exceed customer expectations and as being committed to continuous improvement. It announced a drastic reduction of the size of its supplier base and required from every potential supplier to prove

the ability to provide high-quality products at low cost as well as additional value-

added services, to react totally flexibly to changing requirements and to achieve continuous improvement. Each potential supplier, regardless of past relationships,

had to invest into all relevant changes before Whirlpool decided on who becomes a “concerned business partner” with exclusive long-term contracts and who loses his business. Although cost limits were still imposed, price was no longer the

significant selection criterion. Instead, compliance with requirements became crucial, whereby performance was assessed on the basis of the Malcolm Baldrige

National Quality Award Criteria for quality excellence. In order to be able to concentrate the supplier base, Whirlpool required many suppliers to develop capabilities for the additional production of other inputs related to those provided before. Furthermore, suppliers would have to

relation to those provided before. Furthermore, suppliers would have to

extensively contribute to the design phase of Whirlpool products in order to avoid expensive rework and redesign from the beginning. The supplier should not limit

his efforts to his own components but consider all aspects of the final product and

be committed to its perfection. Therefore cross-functional teams from Whirlpool

and all of its suppliers met regularly in order to develop improvements for the product. The company furthermore aimed at the permanent availability of all

products for the customer, although inventories had to be reduced to two days of input. The suppliers hence had to adapt their production to flexible daily delivery.

1 See Brown. 1992, p. 148. 2 Roethlein and Mangiameli. 1999, pp. 71-81.


Whirlpool set the product quality goal as reducing the service-incidence rate from 20% to 2% and defined a maximum defect rate for each component. Application tests on all completed assemblies prior to delivery were furthermore required.

Moreover, production costs had to be reduced by 5% per year after the initial implementation. Those suppliers that were finally chosen as concerned business partners received continuous feedback, but additionally had to conduct a detailed quarterly self-assessment.

4 Conclusion

The term Total Quality Management stands for a management

philosophy rather than a concrete tool. Based on the theories of Deming and other

quality gurus, the concept includes all activities which contribute to the achievement of product or service perfection. Perfect quality is thereby defined as

meeting or exceeding all customer requirements. Besides the customer focus, the concept

of continuous

improvement,

the

delegation

of quality-related

responsibilities to all employees, extensive and constant quality training, top management involvement and the establishment of quality teams are important

pillars of the TQM philosophy. Given the increasing share of purchasing volume in total sales, the quality

standards of suppliers become crucial to a company’s product or service quality

and hence the purchasing function is particularly concerned with TQM. TQM as a supply technique aims at assuring constantly and reliably 100% quality of supplied goods. TQM is not limited to product quality but also implies perfect

processes which includes the minimization of waste and the maximization of flow efficiency through e.g. Just-in-Time supply. Supplier quality management

requires extensive communication and coordination between supplier and buyer

and thus the concentration of the supplier base to few reliable long-term partners. Deming claims that one supplier per component is sufficient whereas the Japanese interpretation argues that the choice between two alternative suppliers

significantly reduces production risks. All actors are expected to contribute to the success of the whole supply chain by proposing process improvements. Suppliers


are involved in the product design phase in order to guarantee realizable specifications, as most product-related costs are determined in this phase.

Supplier performance has to be regularly assessed which is usually done with the help of certifications like ISO 9000 or the Malcolm Baldrige National Quality

Award. Aggressive internal and external improvement targets have to be set and

mutual support has to be rendered. Most companies using TQM are able to significantly reduce their costs of

noncompliance and thus their total costs while quality and hence customer

satisfaction and reputation increase. At the same time, internal processes can be rationalized and optimized with the help of TQM. The cost-saving potential

resulting from TQM has led to a situation in which most companies in

competitive markets have developed quality management policies.


VALUE ANALYSIS/VALUE ENGINEERING (VA/VE)

TABLE OF CONTENT

1 INTRODUCTION ........................................................................................ 121
2 FUNCTIONALITY ...................................................................................... 122
3 REAL WORLD APPLICATION ................................................................. 126
4 SUMMARY & CONCLUSION ................................................................... 129


1 Introduction

The original concepts of Value Analysis (VA) and Value Engineering (VE) as a technique to constantly evaluate a product throughout its whole lifecycle were developed by Lawrence D. Miles around 1947 at the General Electric facility in the state of New York.1 Since then, the concept has been implemented in companies around the world, reaching approximately 77% of all purchasing departments by 1981.2 Afterwards, the rise of VA/VE slowed down, leading to a decline in the use of a concept that has “never [...] reached its predicted potential”3 completely. A poll in 1995 found only 55% of all companies still having a VA program of some kind.4

Defining Value Analysis and Value Engineering

“Value Analysis or engineering is a complete system for identifying and dealing

with the factors that cause uncontributing cost or effort in products, processes or services.”5 With this definition, the concept of VA is to be understood as a

structured and organized approach to efficiently identify the parts6 that do not create any value, i.e. that neither increase the performance nor decrease the cost of a product.7 In other words, VA or VE tries to find exactly the specific components that fulfil a certain function in such a way that costs are minimized without causing a corresponding reduction in quality.8 Therewith, VA tries to achieve the provision of the important functions at a lower cost or, alternatively, the provision of higher-performance or higher-quality parts at the same cost.9 To put it as simply as possible, VA and VE try to maximize the “value equation”10, i.e.

Value = Function / Cost → MAX

This definition is to be understood in a very general sense, as “parts” or “function” can essentially mean rather different specific issues in particular

1 See Romani. 1997, p. 27. 2 See Morgan. 1995, p. 34. 3 See Romani. 1997, p. 27. 4 See Morgan. 1995, p. 34. 5 Miles. 1989, p. 24. 6 Understood here in the most general sense, i.e. meaning product parts, processes, costs or else. 7 See Miles 1989, p. 4f. 8 See Romani. 1997, p. 28. 9 See Hartley. 2000, p. 28. 10 See Sperling. 2001, pp. 46-48.


circumstances. With that, the value equation used can be applied to almost

everything that performs a specifiable function. While it should be remarked that,

concerning a widely accepted definition, “the terminology is generally vague”1, the difference between VA and VE will be explained in the next part of this paper, followed by the explanation of the exact functionality of the general concept.

2 Functionality

Value Analysis versus Value Engineering

The concept underlying both VA and VE is essentially the same. Both follow the specific definition derived above and try to achieve the same goals alongside the

process. However, while VA is generally understood to be applied to parts already in use, in production, in sale or similar, VE is used for the procedure of applying

the same algorithm to a part that is in the design or development stage. In other words, “VA refers to making improvements to existing products and processes, while [...] VE applies the VA process to new product design”2, or “VE = Cost

avoidance or cost prevention before production [and] VA = Cost reduction during production”3. The need to apply both concepts to the same parts essentially arises from two

factors: First, the dynamic environment, i.e. the available components, technologies, customer demands etc., changing continuously even after the initial

design of a certain part, will lead to the potential of value enhancement through a process of VA even after the original design has gone through a similar VE process.4 Second, the necessity of finishing the design of a certain part as soon as

possible, e.g. because of strong competition or market demands for certain parts or for end-product the part is required for, can cause the initial VE process to be applied in an imperfect fashion. In addition to these two factors, other potential

causes for the application of VA after VE has already been used are the lack of accurate measures for the enhanced value, human factors and the new information

about parts gained from their use in practice.5 In the remainder of this paper, no


further differentiation between VA and VE will be applied, i.e. as the issues

covered are concerned with the general concept and not the different points in time at which either VA or VE is applied, this paper will from now on refer to the

general concept as VA/VE. The concept of VA/VE can be understood as a six-phased process. In the original

sense of the concept, the process of VA/VE starts with the “Information” phase, i.e. the decomposition of a certain product into its relevant parts, while defining the functions performed by the specific parts. These functions can then be differentiated among primary or use functions and secondary functions. For example, “hold tie” can be defined as the primary function of a tie clip, while

“good appearance” is a necessary secondary function for the tie clip to be

saleable,1 or the mechanical primary function of a control device and the

secondary function of conducting electric current.2 Afterwards, the “speculation” phase describes the process of identifying possible

alternatives of providing the necessary functions to the product in the same way, for example the exchange of nonferrous material with steel to conduct a small amount of electric current in a control device.3 Next, the “analysis” phase

describes the process of learning about the economic and quality value of the identified alternatives to identify the best or bests among them. Following the example, this phase led to the discovery of steel being far more economically

useable than nonferrous material. Then, during the “decision” phase, the now valued alternatives are ranked and the ones to pursue are chosen, followed by obtaining approval from both company and customers (as far as possible) before the execution of the chosen alternatives in the “action” stage. Lastly, in the “evaluation” phase, the results of the changes made are evaluated and the potential

for additional improvements are measured. In practice, such results can be 25% to 35% and more in cost reductions4, without taking the additional value of e.g. the

learning from the supplier’s expertise etc. into account.

1 See Morgan. 1995, p. 42, for that example. 2 See Miles. 1989, p. 6. 3 See Miles. 1989, p. 6. 4 See Pooler and Pooler. 1997, p. 253.
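The “analysis” and “decision” phases described above amount to comparing alternatives by the value ratio introduced earlier, i.e. function delivered per unit of cost. The sketch below ranks a few alternatives that provide the same function; the alternative names, function scores and costs are assumed example values (loosely echoing the nonferrous-versus-steel illustration), not data from the cited cases.

```python
# Illustrative sketch of the "analysis" and "decision" phases: alternatives
# providing the same function are compared by their value ratio
# (function performance per unit of cost). All names, scores and costs
# are assumed example values.

alternatives = [
    # (name, function score 0-100, unit cost)
    ("nonferrous contact strip", 90, 4.20),
    ("steel contact strip", 88, 1.10),
    ("coated aluminium strip", 85, 1.60),
]

def value_ratio(function_score, cost):
    """Value = Function / Cost, the ratio VA/VE tries to maximize."""
    return function_score / cost

ranked = sorted(alternatives, key=lambda a: value_ratio(a[1], a[2]), reverse=True)

for name, score, cost in ranked:
    print(f"{name:<26} value ratio = {value_ratio(score, cost):6.1f}")

best = ranked[0]
print(f"\nAlternative to pursue in the 'decision' phase: {best[0]}")
```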


Figure 1: VA process

VALUE ANALYSIS CAN BE DESCRIBED AS A SIX-PHASED PROCESS Gather all pertinent information and define the basic and supporting functions of the product

Generate ideas and alternatives through the use of creativity techniques

Conduct feasibility and cost analysis to identify the most promising alternatives

Prioritize and select the alternatives to pursue

Obtain company and customer approvals

Measure the outcomes and identify opportunities for further improvements

Source: Hartley. 2000, p. 28.

It should be mentioned that, in the existing literature, the six-phased process of

VA/VE is sometimes extended, to include more specific actions as “preparation”,

i.e. the set-up of a particular team and leadership-structures to perform the task, or “presentation”, i.e. the specific task of presenting the gained insights and options

to a decision-entity. In addition, the six phases of the original process sometimes

have different names, although their functions are usually described in the same way.1

The revival of VA/VE in the context of modern supply management

While the concept in its original sense was not necessarily focused on purchasing or supply management, it already led to its application in purchasing departments.

Today, in the context of modern supply management and real net output ratios in companies of around or even below 50%, purchasing departments especially are again in the need of cutting costs dramatically.4 Moreover, with every 1$ saved in purchasing being one more 1$ at the bottom line and having the same impact as an increase in sales of 20$1, programs like VA/VE have huge potential benefits. Especially, this is true as the “challenge to purchasing from the executive suite is to reduce material costs and to conduct the total acquisition process more efficiently”2.

1 See Sperling. 2001, p. 46. 2 See Jahns. 2003, pp. 32-39. 3 See Jahns. 2003, p. 32. 4 See Morgan. 2003, p. 41.

Figure 2: VA key questions/principles
SINCE THE ORIGINAL VA CONCEPT, 10 KEY QUESTIONS/PRINCIPLES GUIDE PURCHASING PEOPLE IN EVALUATING PRODUCTS

1. Does its use contribute value?

2. Is its cost proportionate to its usefulness? 3. Does it need all its features? 4. Is there anything better for the intended use?

5. Can a usable part be made by a lower-cost method?

6. Can a standard product be found that will be usable? 7. Is it made on proper tooling - considering quantities made?

In-depth evaluation of product and product-components

8. Do materials, reasonable labor, overhead, and profit total its cost?

9. Will another dependable supplier provide it for less? 10. Is anyone buying it for less?

Source: Morgan. 2003, p. 42.

The most important feature of VA/VE in the context of the modern understanding of

SM and SCM is the leveraging of suppliers' expertise in order to maximize the value equation used in the described concept. Here, the direct exchange of ideas and concepts to optimize the interdependent processes as well as the mere

functional parts of the final output can lead to a significant decrease in costs. Also

contributing to this point, the use of a cooperative VE approach includes the suppliers and their expertise already in the very early design stage of a product,

providing opportunities to significantly benefit right from the beginning.4 In

1 The example assumes a sales-margin of 3% and a real net output ratio of 60%. It is taken from Jahns. 2004, p. 40. 2 Lacy; Smith and Williams. 1992, p. 1. 3 See Romani. 1997, pp. 28. 4 See Saunders. 1997, p. 193.


addition “a partnership is a two-way street”1, which is especially true in the

context of modern SCM. The concept of VA/VE provides a highly responsive means of applying exactly that point of view in a structured way to modern purchasing. Therewith, in the modern understanding even more than in its original

application, the “10 Key Questions” of VA/VE can be used by the professionals applying the concept to their work. Especially, purchasing people can use these

questions as guidelines in evaluating products they buy from suppliers.

3 Real world application

In general, two broad possibilities of applying the VA/VE concept are existent:

First, the use in the general design or redesign of a product and in the process of

identifying and developing more adequate (i.e. higher value) parts to be used in

certain products. Second, the use of VA/VE steps and processes in the context of purchasing and supply management. Despite that, the application of the VA/VE

concept - unlike other and more theoretically sound procedures - is, generally speaking, almost intuitive, as the described process as well as the named

questions can be used immediately in practice. No difficult transformation or even translation of a theoretical approach into practice is necessary. To cover broadly

both possible ways of application and to cover the broad variety of possible usages in practice, two different real-life case examples will be covered in turn.

Case example 1: The automotive industry

The first case example covers first-tier suppliers in the automotive industry.2 This

particular set of companies is of certain interest for a description of the application

of VA/VE techniques in practice: First of all, automotive suppliers do buy a significant amount of their products as components or subassemblies and

therefore incur a majority of their costs via purchasing. Second, the cost-reduction pressure on the automotive suppliers by their customers, the large automotive

companies (such as GM, BMW, Daimler Chrysler etc.) is particularly high, leading to the superiority of a VA/VE concept over traditional “cost-plus approaches” in reaching a competitive price.3 And thirdly, the high technological level of the subassemblies and components as well as the final assembly done inside the automotive suppliers increases the possibility of in-depth cooperation between suppliers and customers, providing an example of a major source of value in the VA/VE process, i.e. the leveraging of the expertise of suppliers.

1 Shorr. 1998, p. 157. 2 See Hartley. 2000, pp. 28-32. 3 See Saunders. 1997, p. 193.

Figure 3: VA benefits and barriers
IN THE CASE OF THE AUTOMOTIVE-SUPPLIER INDUSTRY, VA DID PROVIDE CERTAIN BENEFITS, BUT ALSO REQUIRED COPING WITH BARRIERS
(The figure compares four companies, A to D, along three dimensions. Recoverable entries include benefits such as immediate savings even on products that have already gone through VE, easy facilitation of the process, enhanced cross-functional communication, trust built with suppliers, many ideas generated quickly, cross-functional teams reaching consensus and suppliers gaining systems understanding; barriers such as difficulty completing implementation, the three-day time commitment for a VA workshop, cost sharing among multiple suppliers, difficulty obtaining internal resources, difficulty interpreting suppliers' financial information and difficulty obtaining customer approval; and coping mechanisms such as scheduling VA with planned tooling changes, altering the process to fit the company's culture and top-management support.)

Source: Hartley. 2000, p. 30. Concerning the general approach to VA/VE, the companies in this example used

Concerning the general approach to VA/VE, the companies in this example used the typical combination of engineers, marketing people etc. to perform the “cross-functional meetings referred to as VA workshops”1. In addition, the companies relied on engineers to continuously perform the core VA task of analysing the functions of certain components. Concerning the major potential source of value in the VA/VE concept mentioned above - the direct involvement of suppliers in the process - the practical application does not correspond completely to the theoretical foundation: Although the level of supplier involvement differed between companies, the general concern about involving an outside entity in a rather proprietary process usually outweighed the potential benefits.

1 Hartley. 2000, p. 29.


Therefore, relatively few suppliers were involved in the VA/VE process, and only on an irregular basis.
Finally, it is interesting to note that in all of the examined companies the VA/VE process was treated as a specific project: a particular focus was chosen and a new VA/VE project run was started inside the company each time. This is somewhat at odds with the notion that VA/VE can and should be used as a continuous process within organisations. Even more contradictory, this procedure opposes some of the existing theory on VA/VE and how the concept can best be implemented: especially in the context of “today’s cross-functional team approaches”1, using a technique such as VA/VE for continuous improvement can be even more valuable than applying the concept only in general planning processes and higher-level projects. In other words, “just because most of the emphasis is on up-front planning, that’s no reason for abandoning techniques that can enhance continuous improvement”2.
Case example 2: The U.S. Department of Defense

The second case example covers the use of VA/VE projects in the context of military equipment and the United States Department of Defense (DoD).3 This example is particularly relevant for the development of VA/VE, as it was the implementation of this program by the DoD that eventually led to its spread internationally.4 In addition, military equipment lends itself particularly well to the VA/VE concept: here, the specific functions of parts of e.g. vehicles or weapon systems, and even the function of the overall system as a whole, can be clearly defined. Moreover, in the context of military equipment the mere function is of exclusive importance. The DoD therefore started to use VA/VE programs itself in the 1950s and, after realizing their potential benefits, began in the 1960s to provide significant benefits to contractors for performing VA/VE themselves.

As a critique, it becomes obvious that the application of the VA/VE concept in the context of the DoD does not make use of most of the specific advantages the theory has to offer. In particular, the DoD focuses almost exclusively on applying VE processes at the time of the initial development of a new system, even though cost-reducing modifications can also be proposed by suppliers later on, a practice that grew in importance in times of fewer completely new system developments. Moreover, despite the fact that the DoD initiated and later required VA/VE programs to be performed by its contractors, there was no cooperation between supplier and buyer to the extent the VA/VE process would have allowed. Nevertheless, the application in the context of the DoD provides an example of a quite “straight-forward” use of the concept of maximizing the “value equation” and of focusing on function.
1 Morgan. 1995, p. 2.
2 See Morgan. 1995, p. 2; Saunders. 1997, p. 21.
3 See Romani. 1997, p. 27; Sperling. 2002, p. 52; Institute for Defense Analysis. 2004: DoD Value Engineering Program, http://ve.ida.org/ve/ve.html.
4 See Sperling. 2002, p. 52.
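The “value equation” referred to here can be illustrated with a small worked sketch. The snippet below is a minimal Python illustration, assuming value is operationalised as the ratio of a function score to cost; the alternatives, scores and costs are invented for the example.

# Minimal sketch of the VA/VE "value equation": value = function / cost.
# All alternatives, function scores and costs are invented for illustration.

def value_ratio(function_score: float, cost: float) -> float:
    """Value rises when function improves or cost falls."""
    return function_score / cost


alternatives = {
    "current design":    {"function_score": 8.0, "cost": 120.0},
    "simplified design": {"function_score": 8.0, "cost": 95.0},   # same function, lower cost
    "supplier proposal": {"function_score": 9.0, "cost": 110.0},  # better function, slightly lower cost
}

for name, attrs in alternatives.items():
    print(f"{name}: value = {value_ratio(**attrs):.3f}")

best = max(alternatives, key=lambda name: value_ratio(**alternatives[name]))
print("Highest value per the VA/VE criterion:", best)

In a setting such as the DoD example, where the required function is fixed by specification, maximizing this ratio effectively reduces to lowering cost while holding the function score constant.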

4 Summary & conclusion

In summary, the concept of VA/VE represents a rather old piece of theory for improving value or reducing costs in production and purchasing processes. Nevertheless, despite the fact that the original concept never really reached its predicted potential1 in its classic usage, the modern theory of SCM provides a new basis for applying this rather old concept in up-to-date environments. As the trend towards ever lower real net output ratios in many companies is likely to continue, the value of VA/VE to purchasing professionals and today’s supply chain managers is likely to increase in turn. This holds most of all if the VA/VE concept is used deliberately to communicate the potential benefits of, for example, closer cooperation with suppliers, rather than merely finding its major application “by chance” in the purchasing departments of corporations. To sum up and conclude this paper: VA/VE is a powerful tool that directly influences the cost and return structure - through more efficient and effective internal design processes, cost reduction by removing unnecessary parts, and improvement of function without higher costs - and that indirectly creates “value” by serving as a communication tool and a general framework for structuring SCM activities in terms of function and cost. To close, “VA is a supplement to cost reduction and good buying, not a substitute for them”2.

1 See Romani. 1997, p. 27.
2 Pooler. 1997, p. 251.


REFERENCES

4Managers, www.4managers.de, key word “ABC-Analyse“, accessed 13.11.2004.
Accounting change: A structural equation approach, in: Accounting, Organizations and Society, Vol. 28 No. 7-8, pp. 675-698.
Albright, K. S. 2004: Environmental scanning: Radar for success, in: Business & Industrial Marketing, Vol. 10 No. 1, 2004, pp. 16-23.
ANAO Better Practice Guide. 2001: Life-Cycle Costing, http://www.anao.gov.au, accessed 15.10.2004.

Anonymous, a. 2000: Internet: Neue Studie deckt Schwächen virtueller B2B-Plattformen auf und wagt einen Blick in die Zukunft - E-Märkte “vom Ideal weit entfernt“, in: VDI, No. 48, 01.12.2000, p. 26.
Anonymous, b. 2001: E-cl@ss, White Paper, available under www.eclass.de.
Anonymous, c. 2002: Online Information Services GmbH WebServices: Die beste Erfindung, seit es Internet gibt, in: Password, No. 9, 01.09.2002, p. 8.

Anonymous, d. 2003: Rapid prototyping helps doctors separate conjoined twins. In: Manufacturing Engineering. Vol. 130 No. 6, 2003, p. 21.

Arnaout, A. 2001: Target costing in der deutschen Unternehmenspraxis, München, Diss.
Arnaout, A., Hildebrandt, J. and H. Werner. 1998: Einsatz der Conjoint-Analyse im Target Costing, in: Controlling, Vol. 10 No. 5, 1998, pp. 306-315.
Arnolds, H., Heege, F. and W. Tussing. 1998: Materialwirtschaft und Einkauf - Praxisorientiertes Lehrbuch, 10., durchges. Aufl., Wiesbaden.
Automobilindustrie, Frankfurt am Main, Diss.

Bailey, J. T. and S. R. Heidt. 2003: “Why is Total Cost of Ownership Important?”, http://www.darwinmag.com/read/110103/question74.html, accessed 18.9.2004.
Bainbridge, D. 1997: Spend Today, Save Tomorrow? Life Cycle Costing Can Change the Way We Build, in: San Diego Earth Times, Dec. 1997, pp. 13-18.
Baismeier, P. W. and J. V. Wendell. 1997: Rapid prototyping: State-of-the-art manufacturing, in: Industrial Management, Vol. 39 No. 1, 1997, pp. 1-4.

Banham, R. 2000: Off Target, in: CFO The Magazine for Senior Financial Barney, J. 1991: Firm Resources and Sustained Competitive Advantage, in: Journal of Management. Vol. 17 No. 1, 1991, pp. 99-120. Bartolo, P. J. and G. Mitchell. 2003: Stereo-thermal-lithography: A new principle for rapid prototyping, in: Rapid Prototyping Journal. Vol. 9 No. 3, 2003, pp. 150-156.

Beckmann, D. 2002: Projektorientiertes Target Costing am Beispiel des Bauträgergeschäfts, in: krp - Kostenrechnungspraxis, Vol. 46 No. 2, 2002, pp. 67-73.


Bellehumeur, C., L. Li, Q. Sun, P. Gu. 2004: Modeling of Bond Formation Between Polymer Filaments in the Fused Deposition Modeling Process, in: Journal of Manufacturing Processes, Vol. 6 No. 2, 2004, pp. 170-178.

Bensberg, F. 1993: “TCO VOFI for eLearning Platforms”, www.campussource.de/org/opensource/docs/bensbergVor.doc.pdf , accessed 18.11.2004. Biehler, K., Kalker, P. and E. Wilken. 1992: “Logistikorientiertes PPS-System: Konzeption, Entwicklung und Realisierung“, Wiesbaden.

Bogaschewsky, R. and R. Rollberg. 1998: Prozeßorientiertes Management, Berlin et al. Breskin, I. 2003: IBM deal will help ford cut design cost. The Detroit News, February 6.

Brown, B. and T. Sharpe. 2001: Designing Forward and Back, in: Tooling and Production, Vol. 67 No. 2, pp. 75-77. Brown, A. 1992: Industrial Experience with Total Quality Management, in: Total Quality Management, Vol. 3 No. 2, 1992, pp. 147-156.

Bruhn, M. and W. Masing. (Eds.) 1994: Handbuch Qualitätsmanagement. 3. Aufl. München and Wien.

Buggert, W. and A. Wielpütz. 1995: Target Costing: Grundlagen und Umsetzung des Zielkostenmanagements, München.

Bullinger, H. J. 1997: Technologiemanagement - Wettbewerbsfähige Technologieentwicklung und Arbeitsgestaltung, in: Forschungs- und Entwicklungsmanagement - simultaneous engineering, Projektmanagement, Produktplanung, rapid product development. Eds. Hans-Jörg Bullinger und Joachim Warschat, Stuttgart. Burt D. N., D. W. Dobler and S. L. Starling. 2003: World Class Supply Management, 7th ed., New York.

Burt, T. and J. Grant. 2002: Chrysler seeks savings through efficiency. Financial Times, June 19, 2002. Carrillo, J. E. and R. M. Franza. 2004: Investing in Product Development and Product Capabilities: The Crucial Linkage between Time-To-Market and Ramp-Up Time, in: European Journal of Operational Research, n.a., n.a., pp. 1-24.

Cavinato J. L. and R. G. Kauffman. 1999: The Purchasing Handbook, 6th ed., New York. Certified automotive parts association. 2002: Car Company Quality: A Vehicle Test Fit Study of 1,907 Car Company Service Parts, Study Dates: March 1999-March 2002. Chaneski, W. S. 1998: Reverse Engineering: A Valuable Service, in: Modem Machine Shop, Vol. 70 No 9, pp. 50-52. Chivate, P. N. and A. G. Jablokow. 1995: Review of surface representation and fitting for reverse engineering, in: Computer Integrated Manufacturing Systems, Vol. 8 No. 3, pp. 193-204.


Chuk, R. N. and V. J. Thomson. 1998: A comparison of rapid prototyping techniques used for wind tunnel model fabrication, in: Rapid Prototyping Journal. Vol. 4 No. 4, 1998, pp. 185-193.

Cole, R.J. and E. Sterner. 2000: Reconciling Theory and Practice of Life Cycle Costing, in: Building Research & Information, Vol. 28, 2000, pp. 368-375. Cooper, R., Chew, W. B. and B. Avishai. 1996: Control tomorrow’s costs through today’s designs, in: Harvard Business Review, Vol. 74 No. 1, pp. 88-99.

David L. M. 2003: “The Real Cost of Linux”, www.infoworld.com.

Dervitsiotis, K. N. 2002: The importance of conversations-for-actions for Dobler, D. W. 1996: Purchasing and Supply Management - Text and Cases. 6th edition, New York.

Dobler, D. W. and D. N. Burt. 1996: Purchasing and Supply Management: Text and Cases, 6th ed., New York et al.
Dolenc, A. 1993: Software Tools for Rapid Prototyping Technologies in Manufacturing, Diss., Helsinki.
Douglas, T. J. and W. Q. Judge Jr. 2001: Total Quality Management implementation and competitive advantage: The role of structural control and exploration, in: Academy of Management Journal, Vol. 44 No. 1, 2001, pp. 158-169.
EDS 2004: http://www.unigraphics.de/pdf/ueber_uns/material/branchen/automotive.pdf, accessed 05.11.2004.

Einsporn, Th. 2001: Marktplätze als Träger der zukünftigen Entwicklung des ECommerce, available under www.eclass.de.

Eisenberg, B. 2004: Thinking In Prototypes, in: Product Design & Development, Vol. 59 No. 1, 2004, p. 28.

Ellram L. M. 1992: The Role of Purchasing Function in Purchasing Cost Savings Analysis, in: International Journal of Purchasing and Materials Management, Vol. 28 No. 3,1992, pp. 26-33 Ellram L. M. 1994: A Taxonomy of Total cost of Ownership Models, in: Journal of Business Logistics, Vol. 15 No. 1, 1994, pp. 171-183 Ellram L. M. and S. P. Siferd. 1993: Purchasing: The Cornerstone of the Total Cost of Ownership Concept, in: Journal of Business Logistics, Vol. 14, No. 1, 1993, pp. 163-183. Evans, M. A. and R. I. Campbell. 2003: A comparative evaluation of industrial design models produced using rapid prototyping and workshop-based fabrication techniques, in: Rapid Prototyping Journal. Vol. 9 No. 5, 2003, pp. 344-351.

Ewert, R. and C. Ernst. 1999: Target costing, co-ordination and strategic cost management, in: European Accounting Review, Vol. 8 No. 1, 1999, pp. 23-50.


Flanagan, R., Ken dell, A., Norman, G. and G.D. Robinson. 1987: Life Cycle Costing and Risk Management, in: Construction Management and Economics, Vol. 5,1987, pp 53-71. Gagne, M. L. and R. Discenza. 1995: Target costing, in: Journal of Executives, Vol. 16 No. 6, 1995, pp. 127-130.

Gartner Group, Inc. 1997: A White Paper on GartnerGroup’s Total Cost of Ownership Methodology, www.gartnergroup.com

Gebhardt, A. 2000: Rapid Prototyping - Werkzeuge fur die schnelle Produktentwicklung. München. Gilchrist, W. 2000: Modelling failure modes and effect analysis, in: The International Journal of Quality & Reliability Management Bradford. Vol. 10 No. 5,2000, pp. 16-24.

Giordan, J. C. and A. M. Ahern. 1994: Self-managed teams: Quality improvement in action, in: Research Technology Management, Vol. 37 No. 3, 1994, pp. 33-37. Gleich, R. 1996: Target Costing fur die montierende Industrie, München. Granada Research. 2001: Using the UNSPSC, United Nations Standard Products and Services Code, White Paper, Why Coding and Classifying is Critical to Success in Electronic Commerce, www.unspsc.org/documentation.asp .

GTMA 2004: Reverse Engineering theme for GTMA, in: Metalworking Productions, 16th March 2004.

Hahn, D. and L. Kaufmann. 2002: Handbuch industrielles Beschaffungsmanagement - Internationale Konzepte - Innovative Instrumente - Aktuelle Praxisbeispiele, 2., überarb. und erw. Aufl., Wiesbaden.

Hanford, D. 2003: Ford’s realignment to accelerate new products, cost cuts. Dow Jones News Service, February 20, 2003. Hantusch, Thomas. 2001: Trends im ERP-Markt - XML, UDDI und BMEcat sind auf dem Vormarsch - Web-Standards erleichtern ERP-Vernetzung, in: Computerwoche, No. 13, 30.03.2001, p. 76-77. Harriman N. F. 1998: Principles of Scientific Purchasing, 1st ed. New York. Hartley, J. L. 2000: Collaborative Value Analysis: Experiences from the Automotive Industry, in: The Journal of Supply Chain Management, Vol. 39 No. 4, 2000, pp. 27-32. Heinrich, W. M. 1996: Einführung in das Qualitätsmanagement. 1. Aufl. Eichstätt.

Herbst, St. 2001: Umweltorientiertes Kostenmanagement durch Target Costing und Prozeßkostenrechnung in der Automobilindustrie, Köln, Diss. Hieu, L. C. et al. 2003: Design for medical rapid prototyping of cranioplasty implants, in: Rapid Prototyping Journal. Vol. 9 No. 3, 2003, pp. 175-186.

Hindson, G. A., Kochhar, A. K. and P. Cook. 1998: Procedures for effective implementation of simultaneous engineering in small to medium enterprises,


in: Proceedings of the Institution of Mechanical Engineers, Vol. 212 No. 4, 1998, pp. 251-258.
Hiromoto, T. 1988: Another hidden edge: Japanese management accounting, in: Harvard Business Review, Vol. 66 No. 4, 1988, pp. 22-25.
Holley, D. 1997: Life Cycle Costing: What Does it Mean?, in: The Buyers Network, Vol. 7 No. 5, 1997, pp. 6-9.
Horvath, P., Niemand, S. and M. Wolbold. 1993: Target Costing - State of the Art, in: Horvath, Peter (Hrsg.): Target Costing, Stuttgart, S. 1-27.
Hutton, R. B. and W. L. Wilkie. 1980: Life Cycle Cost: A New Form of Consumer Information, in: Journal of Consumer Research, Vol. 6 No. 3, 1980, pp. 45-56.
Institut der Deutschen Wirtschaft Köln Consult GmbH. 2000: E-cl@ss, Standard für Materialklassifikation und Warengruppen, Anwendung im E-Commerce, www.eclass.de.

Ittner, C. D. and D. F. Larcker. 1997: Product Development Cycle Time and Organizational Performance, in: Journal of Marketing Research, Vol. 34, 1997, pp.13-23. Jacobs, K. J. and J. D. Mercer. 2004: Shrinkwrap Licenses after Bowers: A Final Blessing on the Prohibition of Reverse Engineering?, in: The License Journal, n. a., p. 1-10.

Jahns, C. 2003: Paradigmenwechsel vom Einkauf zum Supply Management, in: Beschaffung Aktuell. Vol. 4, 2003, pp. 32-39. Jahns, C. 2004: Supply Risk Management, in: Beschaffung Aktuell. Vol. 4, 2004, pp. 38-44.

Jakob, F. 1993: Target Costing im Anlagenbau - das Beispiel der LTG Lufttechnische GmbH, in: Horvath, Peter (Hrsg.): Target Costing, Stuttgart, pp. 155-190.

Jones, S. 2004: Understanding Six Sigma, in: Quality, Vol. 43 No. 3, 2004, p. 24.
Kamiske, G. F. and J. P. Brauer. 1999: Qualitätsmanagement von A bis Z: Erläuterungen moderner Begriffe des Qualitätsmanagements, 3. Aufl., München and Wien.

Kanji, G. K. and R. L. Barker. 1990: Implementation of total quality management, in: Total Quality Management, Vol. 1 No. 3, 1990, pp. 375-389.

Kim, L, A. Shahid, Bell, J. and D. Swenson. 2002: Target costing practices in the United States, in: Controlling, Vol. 14 No. 11, 2002, pp. 607-614. Kochan, A. 2003: Rapid prototyping helps Renault Fl Team UK improve championship prospects, in: Assembly Automation. Vol. 23 No. 4, 2003, pp. 336-339.

Koster, A. 1994: Being best in the customer's eyes - A Mercedes-Benz perspective, in: Managing Service Quality. Vol. 4 No. 5,1994, pp.26-30.


Kroll, K. M. 1997: On target, in: Industry Week, Vol. 246 No. 11, 1997, pp.16-22.

Kurawarwala, A. A. and H. Matsuo. 1993: Cost of delay in time-to-market and capacity restriction. Working Paper 93-01-02, University of Texas, Austin.

Kurbel, K. 1998: “Produktionsplanung und -Steuerung. Methodische Grundlagen von PPS-Systemen“, in Endres, Albert, Krallmann, Hermann and Schnupp, Peter (eds.): “Handbuch der Informatik“, ed. 13.2, München and Wien. Lacy, S., Smith, W. C. and A. J. Williams. 1992: Purchasing’s Role in Value Analysis: Lessons from Creative Problem Solving, in: International Journal of Purchasing and Management, Vol. 28 No. 2,1992, pp. 37-42.

Leonard-Barton, D.; Wilson, E. and J. Doyle. 1994: Commercializing Technology: Imaginative Understanding of User Needs, Harvard Business School Publishing, Boston, MA.

Linkweiler, Ingo. 2002: Eignet sich die Skriptsprache Python fur die schnelle Entwicklungen im Softwareentwicklungsprozess. Dortmund. Loch, C. H. and C. Terwiesch. 1998: Communication and Uncertainty in Concurrent Engineering, in: Management Science, Vol. 44 No. 8, 1998, pp. 1032-1048. Lopez, S. M. and P. K. Wright. 2002: The role of rapid prototyping in the product development process: A case study on the ergonomic factors of handheld video games, in: Rapid Prototyping Journal. Vol. 8 No. 2, 2002, pp. 116-125. Lund, R. T. 1978: Life Cycle Costing: A Business and Societal Instrument, in:

Management Review, April 1978. n.a.

Lysons, K. and M. Gillingham. 2003: Purchasing and Supply Chain Management, 6th ed., Harlow and Essex. Malorny, C. and K. Kassebohm. 1994: Brennpunkt TQM: Rechtliche Anforderungen; Führung und Organisation; Auditierung und Zertifizierung nach DIN ISO 9000 ff., Stuttgart. McDonnell Douglas Astronautics : Independent Orbiter Assessment. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19900001652_1990001 652.pdf. accessed 05.11.2004.

McDermott, C. M. and A. S. Marucheck. 1995: Training in CAD: An Exploratory Study of Methods and Benefits, in: IEEE Transactions on Engineering Management, Vol. 42 No. 4, 1995, pp. 410-418.
Menon, A. and P. R. Varadarajan. 1992: A model of marketing knowledge use within firms, in: Journal of Marketing, Vol. 56 No. 4, 1992, pp. 53-71.

Mensch, G. 1998: Kosten-Controlling: Kostenplanung und -kontrolle als Instrument der Untemehmensfuhrung, München etc.

Miles, L. D. 1989: Techniques of Value Analysis and Engineering, 3rd ed., New York.


Minett, S. and C. Taylor. 2000: Life Cycle Costing - A New Book for Pump Purchasers, in: Coal International, Vol. 248 No. 6, 2000, n.a.
Mironov, V. 2003: Beyond cloning: Toward human printing, in: The Futurist, Vol. 37 No. 3, 2003, pp. 34-36.
Möhrstädt, D. G. 2001: Electronic Procurement planen, einführen, nutzen, Stuttgart.
Monczka, R. et al. 2002: Purchasing and Supply Chain Management, 2nd ed., Cincinnati/Ohio.

Morgan, J. 1995: Where Has VA Gone?, in: Purchasing, Vol. 118 No. 9, 1995, pp. 34-36. Morgan, J. 2003: Value Analysis Makes a Comeback, in: Purchasing, Vol. 132 No. 18, 2003, pp. 41-44.

Moussatche, H., Languell-Urquhart, J. and C. Woodson. 2000: Life Cycle Costs in Education: Operations & Maintenance Considered, in: Facilities Design & Management, Vol. 19 No 9, 2000, pp. 34-45. Müller, D. H. et al. 2002: Beschreibung ausgewählter Rapid Prototyping Verfahren. http://www.ppc.biba.uni-bremen.de/projects/rp/Download/ Beschreibung_RPV.pdf. Mymudes, S. 2004: Reverse Engineering, in: T&P: Tooling and Production, Vol. 70 No. 4,2004, pp. 24-26.

Neumann, K. 1996: Produktions- und Operationsmanagement, Berlin. O.S.E. Objektorientierte Software Entwicklung, www.o-s-e.de/PPS_SCHULE/ material.htm, browsed 15th November, 2004. Oess, A. 1993: Total Quality Management: Die ganzheitliche Qualitätsstrategie, 3rd ed., Wiesbaden. Oxford University Press (ed.) 2004: An Introduction to Organizational Behaviour, Oxford.

Pelaccio, D. G. and J. R. Fragola. 1998: At what risk is it acceptable to commit a manned mars mission? in: Mars Society Conference Proceedings. www.marssociety.org/content/proceedings 1998mar98068.htm. accessed 01.11.2004.

Pepels, W. 1998: Produktmanagement - Produktinnovation, Markenpolitik, Programmplanung, Prozeßorganisation, München and Wien.

Pesonen, L. T.T. 2001: Implementation of Design to Profit in a Complex and Dynamic Business Context, Academic Dissertation at University of Oulu, http://herkules.oulu.fi, accessed 09.11.2004 Peterson, K. J., Handfield, R. B. and G.L. Ragatz. 2003: A Model of Supplier Integration into New Product Development, in: Journal of Product Innovation Management, Vol. 20 No. 4, 2003, pp. 284-299. Pfeifer, T. 2001: Qualitätsmanagement - Strategien, Methoden, Techniken. 3rd ed. München and Wien.


Pham, D.T. and R.S. Gault. 1998: A comparison of rapid Prototyping Technologies, in: International Journal of Machine Tools & Manufacture, Vol. 38, 1998, pp. 1257-1287.

Pooler, V. H., Pooler, D. J. (1997): Purchasing and Supply Management Creating the Vision, New York.

Puschmann, T. and R. Alt. 2001: Benchmarking E-Procurement, Institut für Wirtschaftsinformatik, Universität St. Gallen.
Ragatz, G. L., Handfield, R. B. and T. V. Scannell. 1997: Success Factors for Integrating Suppliers into New Product Development, in: Journal of Product Innovation Management, Vol. 14, 1997, pp. 190-202.

Rebitzer, G., Hunkeler, D. and O. Jolliet. 2003: LCC-The Economic Pillar of Sustainability: Methodology and Application to Wastewater Treatment, in: Environmental Progress, Vol. 22 No. 4, 2003, pp. 241-249.

Reinhart, G. 1996: Qualitätsmanagement - Ein Kurs fur Studium und Praxis. Ist ed. Berlin etc. Renner, T. 2003: Mittelstand spürt Kundendruck, in: Computer Zeitung, No. 8, 17.02.2003, p. 17. Reuter, V. G. 1986: What Good Are Value Analysis Programs?, in: Business Horizons, Vol. 29 No. 2, 1986, pp. 73-79. Rinne, H. and H. J. Mittag. 1995: Statistische Methoden der Qualitätssicherung. 3rd ed. München and Wien.

Roethlein, C. J. and P. M. Mangiameli. 1999: The Realities of Becoming a Long-Term Supplier to a Large TQM Customer, in: Interfaces, Vol. 12 No. 29,1999, pp. 71-81. Romani, P. N. 1997: The Resurrection of Value Engineering, in: Manage, Vol. 49 No. 1, 1997, pp. 27-29. Rösler, F. 1996: Target Costing für die Automobilindustrie, Wiesbaden.

Rüdrich, G. W. Kalbfuß and K. Weißer. 2004: Materialgruppenmanagement Quantensprung in der Beschaffung. 2nd ed., Wiesbaden. Sadikoglu, E. 2004: Total Quality Management: Context and Performance, in: The Journal of American Academy of Business, Cambridge, September 2004, pp. 364-366.

Sakurai, M. and P. J. Keating. 1994: Target costing and activity-based costing, in: Controlling, Vol. 6 No. 2, 1994, pp. 84-91.

Sauer, M. 2003: IT im Maschinenbau - Großfirmen drängen Zulieferer zur einheitlichen Produktklassifizierung - 160 Standards - welcher ist der richtige?, in: Computerwoche, No. 15, 11.04.2003, p. 40.
Saunders, M. 1997: Strategic Purchasing and Supply Chain Management, 2nd ed., Harlow et al.

Schmidt, F. R. 1999.: Life Cycle Target Costing: Ein Konzept zur Integration der Lebenszyklusorientierung in das Target Costing, Aachen, Diss.


Teng, S. and Shin-Yann H. 1996: Failure mode and effects analysis - An integrated approach for product design and process control, in: International Journal of Quality & Reliability Management, No. 5, pp. 8-26.

Terwiesch, C. and C. H. Loch. 1999: Measuring the Effectiveness of Overlapping Development Activities, in: Management Science, Vol. 45 No. 4, 1999, pp. 455-465.

Timischl, W. 1995: Qualitätssicherung: statistische Methoden. 1st ed. München and Wien. Trapp, P. R. 1991: Rapid Prototyping as an Alternative to the Pilot, in: Modem Office Technology, Vol. 36 No. 10,1991, pp. 74-75.

Ulrich, D. 1997: Human Resource Champions, Harvard Business School Press, Boston.
Umfassende Einführung aus managementorientierter Sicht, Wiesbaden.
Unister, www.unister.de, key word “XYZ Analyse”, accessed 17.11.2004.
Van Weele, A. J. 2002: Purchasing and Supply Chain Management: Analysis, Planning and Practice, 3rd ed., London.
Varady, T., R. R. Martin and J. Cox. 1997: Reverse Engineering of Geometric Models - an Introduction, in: Computer-Aided Design, Vol. 29 No. 4, 1997, pp. 255-268.

Vasilash, G. S. 2001: Best Potential, in: Automotive Design & Production, Vol. 113 No. 10, 2001, p. 8.
Verband der deutschen Automobilindustrie 2003: http://www.vda.de/de/service/jahresbericht/auto2003/auto+maerkte/g_50.html#toppage.

Weber, D. and T. Wunder. 2002: Was kommt nach den Zielkosten? Strukturiertes Kostenkneten in der Wehrtechnikbranche, in: krp Kostenrechnungspraxis, Vol. 46 No. 4,2002, pp. 240-248. Whitworth, B. 1998: Formula for speed to market, in: Professional Engineering, Vol. 11 No. 18, 1998, pp. 29-30.

Wikipedia, www.de.wikipedia.org, key word “Pareto-Verteilung“, accessed 16.11.2004. Wildemann, H. 2002: Einkaufspotentialanalyse. Programme zur partnerschaftlichen Erschliessung von Rationalisierungspotentialen. München. Wohlers, T. T. 2002: Wohlers Report 2002 - Rapid Prototyping & Tooling State of the Industry - Annual Worldwide Progress Report. Fort Collins.

Wong, A. 2003: Achieving supply chain management excellence, in: Total Quality Management, Vol. 14 No. 2,2003, pp. 151-159. Wraige, H. 2002: Back(wards) to the future, in: Professional Engineering, Vol. 15 No. 11, pp. 39-41. Wu, T. 2001: Integration and Visualization of Prototyping and Reverse Engineering, Diss., University of Windsor, Ontario


Yaxiong, L. et al. 2003: The customized mandible substitute based on rapid prototyping, in: Rapid Prototyping Journal. Vol. 9 No. 3,2003, pp. 167-174 Ziemke, M. K. and J. K. McCollum. 2001: Simultaneous Engineering: Innovation or Resurrection, in: Business Forum, Vol. 15 No. 1, 2001, pp. 14-17.

Zsidisin, G. A. 2002: E-Procurement: From Strategy to Implementation, in: Journal of Supply Chain, Vol. 38 No. 3,2002, p. 58. Zsidisin, G. A., Ellram, L. M. and J. A. Ogden. 2003: The Relationship Between Purchasing and Supply Management’s Perceived Value and Participation in Strategic Supplier Cost Management Activities, in: Journal of Business Logistics, Vol. 24 No. 2,2003, pp. 129-154.
