English; Pages: 141 [152]; Year: 2017
APPLIED STRUCTURAL EQUATION MODELLING FOR RESEARCHERS AND PRACTITIONERS Using R and Stata for Behavioural Research
APPLIED STRUCTURAL EQUATION MODELLING FOR RESEARCHERS AND PRACTITIONERS Using R and Stata for Behavioural Research BY
INDRANARAIN RAMLALL University of Mauritius, Mauritius
United Kingdom North America Japan India Malaysia China
Emerald Group Publishing Limited
Howard House, Wagon Lane, Bingley BD16 1WA, UK

First edition 2017

Copyright © 2017 Emerald Group Publishing Limited

Reprints and permissions service. Contact: [email protected]

No part of this book may be reproduced, stored in a retrieval system, transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without either the prior written permission of the publisher or a licence permitting restricted copying issued in the UK by The Copyright Licensing Agency and in the USA by The Copyright Clearance Center. Any opinions expressed in the chapters are those of the authors. Whilst Emerald makes every effort to ensure the quality and accuracy of its content, Emerald makes no representation implied or otherwise, as to the chapters' suitability and application and disclaims any warranties, express or implied, to their use.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-1-78635-883-7
ISOQAR certified Management System, awarded to Emerald for adherence to Environmental standard ISO 14001:2004. Certificate Number 1985 ISO 14001
Dedicated to my parents and GOD.
Contents

Preface
1. Definition of SEM
2. Types of SEM
3. Benefits of SEM
4. Drawbacks of SEM
5. Steps in Structural Equation Modelling
6. Model Specification: Path Diagram in SEM
7. Model Identification
8. Model Estimation
9. Model Fit Evaluation
10. Model Modification
11. Model Cross-Validation
12. Parameter Testing
13. Reduced-Form Version of SEM
14. Multiple Indicators Multiple Causes Model of SEM
15. Practical Issues to Consider when Implementing SEM
16. Review Questions
17. Enlightening Questions on SEM
18. Applied Structural Equation Modelling Using R
19. Applied Structural Equation Modelling Using STATA
Appendix
Bibliography
About the Author
Preface
Structural equation models permeate virtually every field of research. Despite their deep-rooted origins in psychology, structural equation models have gained considerable attention in fields as diverse as biology, engineering, environmental science, education, economics and finance. The main power ingrained in these models pertains to their ability to cater for various levels of interaction among variables, such as bi-directional causality effects and, most importantly, the effects of unobserved or latent variables. To date, there exist some well-developed textbooks on structural equation models. However, most of them address the subject in a manner which may not really befit the needs of researchers who are new to this area. Hence the main aim of this book: to explain, in a rigorous, concise and practical manner, all the vital components embedded in structural equation modelling. The book is structured to unfold these vital elements in a smooth, quick-to-learn manner for the inquisitive reader. In essence, this book substantially shortens the learning curve for novice researchers in the area of structural equation models. Overall, it addresses the needs of researchers at the beginning or intermediate level of the structural equation modelling learning curve. LISREL and AMOS are nowadays deemed the workhorses for implementing structural equation models. This book, however, clings to two different software packages, namely R (free software) and STATA. R is used to explain the models through its lavaan package without going into too much sophistication. The STATA implementation of structural equation models is also explained. In fact, STATA 13 is now upgraded with enough power to implement structural equation models without much ado with respect to the programming problems which usually characterise LISREL. In a nutshell, STATA 13 is powerful enough to perform the different types of structural equation models. This book can be used at graduate level for a one-semester course on structural equation modelling, and the way it has been written makes it highly convenient as a self-learning tool for any interested reader. I hope this book proves particularly useful to all researchers who are new to the path of structural equation modelling.
1. Definition of SEM

1.1. Introduction
Known as causal models, with a conspicuous presence in the field of consumer psychology, the structural equation model (SEM) allows complex modelling of correlated multivariate data in order to sieve out the interrelationships among observed and latent variables. SEM constitutes a flexible and comprehensive methodology for representing, estimating and testing a theoretical model, with the objective of explaining as much of the variance in the modelled variables as possible. In simple terms, SEM is nothing more than an analysis of the covariance structure. SEM incorporates various statistical models such as regression analysis, factor analysis and variance/covariance analysis. Under SEM, a clear demarcation line is established between observed and latent variables. SEM can handle complex relationships as it can simultaneously factor in a measurement equation and a structural equation. Moreover, SEM represents a large-sample technique, widely known for its rule of thumb of having at least 10 observations per variable. SEM represents a vital multivariate data analysis technique, widely employed to answer distinct types of research questions in statistical analysis. Other names are associated with SEM, such as simultaneous equation modelling, path analysis, latent variable analysis and confirmatory factor analysis. Technically speaking, SEM can be defined as a combination of two types of statistical technique, namely factor analysis and simultaneous
equation models. SEM is so much coveted by researchers that there is even a journal in this area, namely, Journal of Structural Equation Modelling. In a nutshell, SEM can best be described as a powerful multivariate tool to study interrelationships among both observed and latent variables.
SEM = 2 types of statistical techniques: Confirmatory Factor Analysis + Simultaneous Equation Models
While the objective of the measurement model is to relate the latent variables to the observed variables, the structural equation focuses on the relationship between dependent and independent latent variables, that is, the effects of explanatory (independent) latent variables on outcome (dependent) latent variables. Alternatively stated, the sieving of observed variables to capture latent variables is effected in the measurement equation. In that respect, the measurement equation in SEM constitutes a confirmatory tool, more specifically, a confirmatory factor analysis tool. Under SEM, there is the need to compare several structural equations via model comparison statistics to sieve out the most appropriate model. Latent variables are also known as unobserved variables, intangible variables, 'directly unmeasured' variables, unknown variables or simply constructs, whereas manifest variables are called observed variables, tangible variables, indicator variables or known variables. Latent variables are inherently linked to observed variables, as they can only be captured by observed variables or indicators: latent variables are inferred from observed variables. This is best seen in the LISREL software, where observed variables must be input so as to respect the order of the data input, while latent variables can be defined in any order. Moreover, ellipses or circles are associated with latent variables, while rectangles are inherently associated with observed variables. Latent variables can be dependent or
independent variables. As a matter of fact, SEM comprises observed and latent variables, whether dependent or independent. Overall, SEM includes observed variables, latent variables and measurement error terms. Pure latent variables are those which are uncontaminated by measurement error. Second-order latent variables are functions of other latent variables, while first-order latent variables do not depend on any other latent variables. Examples of latent variables are intelligence, market psychology, achievement level and economic confidence. Examples of observed variables are economic performance, scores obtained and number of items sold. Various versions of SEM prevail, with the most basic being the linear SEM. Other types of SEM consist of Bayesian SEM, non-linear SEM and hierarchical SEM. The main ingredient used in SEM application pertains to the covariance or correlation matrices, and it is of paramount significance to gain a proper insight into their inherent difference. In essence, covariance constitutes an unstandardised¹ form of correlation. Factor analysis, in its exploratory form, gauges the factors that underlie a set of variables and assesses which items should be grouped together to form a scale. While exploratory factor analysis allows all the loadings to vary freely, confirmatory factor analysis constrains certain loadings to be zero. SEM is widely preferred to regression analysis by virtue of its powerful distinctive features, such as the ability to incorporate multiple independent and dependent variables, the inclusion of latent constructs and the due recognition of measurement errors. SEM is widely applied to non-experimental data (Figure 1.1).
1. Under STATA SEM estimation, unstandardised estimates pertain to covariances and standardised estimates to correlation coefficients.
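The footnote's distinction can be checked numerically. Below is a minimal sketch (with made-up numbers) of the point that covariance is an unstandardised correlation: dividing the covariance by the two standard deviations recovers the correlation coefficient.

```python
import math

# Made-up sample data for illustration only.
x = [2.0, 4.0, 6.0, 8.0, 10.0]
y = [1.0, 3.0, 2.0, 5.0, 4.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Sample covariance: unstandardised co-variation between x and y.
cov_xy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

# Standard deviations rescale the covariance into the unit-free correlation.
sx = math.sqrt(sum((a - mx) ** 2 for a in x) / (n - 1))
sy = math.sqrt(sum((b - my) ** 2 for b in y) / (n - 1))
corr_xy = cov_xy / (sx * sy)

print(cov_xy, corr_xy)
```

For these numbers the covariance is 4.0 and the correlation 0.8, i.e. the same quantity before and after standardisation.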
Figure 1.1: Variable Types under SEM. Two main variable types: observed (indicator) variables, which are directly observable or measured, and latent (construct) variables, which are 'directly unmeasured' and captured through several observed indicators (e.g. Intelligence captured via Achievement1 and Achievement2; a mediating latent variable may also intervene).
1.2. Regression and SEM
Compared to regression analysis, SEM is imbued with various distinctive features that render it particularly appealing. SEM incorporates multiple independent and dependent variables, while a multivariate regression analysis can have only one dependent variable. SEM factors in hypothetical latent constructs, treating the proxies employed in studies as equal to the latent variables plus measurement error; proxies measure the impacts of many different attributes. While regression analysis is directed towards estimating a causal relation, SEM pertains to modelling a causal relation. SEM is flexible enough to account for measurement errors, whereas regression assumes perfect measurement. SEM is more powerful than regression analysis as it can simultaneously handle indirect, multiple and reverse relationships. Under SEM, researchers are interested in a large p-value for the overall model fit test, and not a small p-value as is the case for regression coefficients. Causal assumptions are made explicit in SEM, rather than remaining implicit as in the case of regressions. SEM tools are not only more rigorous but also more flexible than conventional regression analysis.
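The measurement-error point above can be illustrated with a small simulation (all numbers and variable names are hypothetical, not from the book): regressing an outcome on a noisy proxy attenuates the slope, which is precisely the bias that explicit latent variables guard against.

```python
import random

random.seed(42)

# True latent scores F and an outcome y = 1.0 * F + noise.
n = 20000
F = [random.gauss(0, 1) for _ in range(n)]
y = [f + random.gauss(0, 0.5) for f in F]

# Observed proxy x = F + measurement error (regression treats x as if it were F).
x = [f + random.gauss(0, 1) for f in F]

def ols_slope(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

# The slope on the error-free latent variable is close to the true 1.0;
# the slope on the noisy proxy is attenuated towards
# var(F) / (var(F) + var(error)) = 0.5 in this setup.
print(ols_slope(F, y), ols_slope(x, y))
```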
1.3. Data Pre-Processing in SEM
Under SEM, the models are conceptually derived first. To test the specified theory/model, data is required, in particular the correlation/covariance matrices; indeed, the heart of SEM is the covariance/correlation matrix. In that respect, data is needed to generate these matrices. Consequently, prior to deriving the covariance/correlation matrices, it is of paramount significance to undertake data screening to remove outliers, ensure that normality conditions are fulfilled and ensure that there is no missing data. Data imputation techniques can be used to deal with missing data.
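A minimal data-screening sketch of the steps just described (mean imputation, a simple z-score outlier rule and a crude skewness check); the column, thresholds and rules are illustrative choices, not prescriptions from the book.

```python
import statistics

# Hypothetical raw column with an outlier (9.9) and a missing value (None).
raw = [4.1, 3.9, 4.3, 4.0, None, 3.8, 9.9, 4.2]

# 1) Impute missing data with the mean of the observed values.
observed = [v for v in raw if v is not None]
mean = statistics.mean(observed)
imputed = [mean if v is None else v for v in raw]

# 2) Flag outliers with a simple |z| > 2 rule (thresholds of 2-3 are common).
mu, sd = statistics.mean(imputed), statistics.stdev(imputed)
screened = [v for v in imputed if abs((v - mu) / sd) <= 2]

# 3) Crude normality check: standardised skewness of the screened column.
m, s = statistics.mean(screened), statistics.pstdev(screened)
skew = sum(((v - m) / s) ** 3 for v in screened) / len(screened)
print(len(screened), round(skew, 3))
```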
1.4. Confirmatory Factor Analysis versus Exploratory Factor Analysis
Confirmatory factor analysis, developed by Karl Jöreskog in the 1960s, is extremely important in SEM applications as it is used to test the measurement model. Under the measurement model, the objective is geared towards testing the relationship between a latent variable and its corresponding observed variables. Only after the measurement model has been derived is the focus shifted towards the structural model estimation, also known as path analysis. Confirmatory factor analysis is not the same as exploratory factor analysis. The underlying rationale is that, in the case of exploratory factor analysis, no a priori hypothesis is laid out when it comes to establishing relationships among the variables, so that principal components are applied. Exploratory factor analysis is deficient in that it tends to be guided by ad hoc rules and intuition, whereas confirmatory factor analysis is imbued with robust dual tests for both the parameters (factor loadings) and the quality of the factor.
Confirmatory Factor Analysis ≠ Exploratory Factor Analysis
CFA: based on model specification (theory-based approach)
EFA: based on principal component analysis (data-driven approach)
Two main model types in SEM: Measurement model and structural model
Anderson and Gerbing (1988) pointed out a dual approach in SEM modelling as encompassing the measurement model (under confirmatory factor analysis) and the structural model. Under the confirmatory factor analysis, there is the need to develop an acceptable measurement model that adequately describes the relationship between a number of latent variables and their corresponding observed variables. Under the measurement model, no causal relationships are established among the latent variables of interest. Under the structural model (also known as the causal model), the relationships among the latent variables are shown, usually between the dependent and independent variables.
1.5. Measurement and Structural Models
The two components of SEM are:
(a) the structural model, capturing the causal relationships between the endogenous and exogenous variables;
(b) the measurement model, showing the relationships between the latent factor variables and the observed variables.
Examples of measurement and structural models are depicted below with their respective components.

(i) Measurement model:

V_i = λ_i F_i + e_i    (1.1)

The measurement model links the observed variables to the latent variables.
V_i: vector of observed variables
F_i: vector of latent constructs
λ_i: vector of parameters (loadings)
e_i: vector of measurement errors

(ii) Structural (causal) model:

η_i = β_i M_i + Γ_i F_i + d_i    (1.2)

β_i and Γ_i: parameter vectors
η_i: endogenous latent variables
M_i: mediating variables
F_i: exogenous latent variables
d_i: residual terms

Latent-independent variables are measured by observed independent variables through a confirmatory factor analysis measurement model (Figure 1.2).
Figure 1.2: Confirmatory Factor Analysis under SEM. Latent-independent variables are measured by observed independent variables, and latent-dependent variables by observed dependent variables, in both cases via confirmatory factor analysis.
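The measurement equation (1.1) can be simulated to show how a latent factor leaves its fingerprint on the covariance matrix. This is a hypothetical one-factor example: the model implies cov(V_j, V_k) = λ_j λ_k var(F) for j ≠ k, which the simulated covariances approximately reproduce.

```python
import random

random.seed(1)

# Hypothetical one-factor measurement model: V_k = lambda_k * F + e_k.
n = 50000
lam = [1.0, 0.8, 0.6]          # loadings (first fixed to 1 to set F's scale)
F = [random.gauss(0, 1) for _ in range(n)]
V = [[l * f + random.gauss(0, 0.5) for f in F] for l in lam]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

# With var(F) = 1, the implied covariance of V_j and V_k is lambda_j * lambda_k.
print(cov(V[0], V[1]), lam[0] * lam[1])
print(cov(V[1], V[2]), lam[1] * lam[2])
```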
1.6. Software for SEM
The following constitutes a list of software widely used in SEM.
• Open-source software. R has several contributed packages dealing with SEM (e.g. ‘lavaan’). Lavaan was developed by Rosseel (2012).
• STATA 13 (powerful and flexible enough to do sound SEM estimation)
• LISREL (linear structural relations model) first developed in 1973 by Jöreskog and Sörbom.
• AMOS: Arbuckle (2011)
• SAS PROC CALIS
• OpenMx: Neale, Boker, Xie, and Maes (2003)
• EQS: Bentler (2004)
• Mplus: Muthén and Muthén (2010)
• SEPATH
• RAMONA
1.7. Non-Experimental Data (Absence of Treatments and Controls)
SEM is mainly used in the case of non-experimental (observational) data. Experimental data pertains to data generated by a formal measurement approach where controls and treatments are introduced. Non-experimental data are widely used in social science research by virtue of the fact that experimental data analysis can be very costly. Because the researcher cannot control assignment of subjects to the treatment and control groups, nonexperimental data are more difficult than experimental data to analyse. Instances of non-experimental data include survey data.
1.8. Theoretical Construct
SEM represents a statistical technique geared towards gauging the extent to which a theoretical model is endorsed by the data via correlations. Indeed, if the data endorse the theoretical model, it becomes possible to test more complicated theoretical models. If the data do not corroborate the theoretical model, there is the need either to change the model or to develop other theoretical models. After all, any theory holds when the relationships among variables (captured through correlations/covariances) are compatible with the propositions laid down by the theory.
1.9. Conclusion
SEM constitutes a correlational research method which has been widely used in consumer psychology and the social and behavioural sciences, but is now gaining ground in other fields such as finance (Chang, Lee, & Lee, 2009), as it is endowed with the ability to frame and answer complex questions relating to the data. Other names used to capture SEM include simultaneous equation modelling, path analysis, latent variable analysis and confirmatory factor analysis. A latent-dependent variable is determined by other independent latent variables; alternatively stated, it is a latent variable that is predicted by other independent latent variables in SEM. Any latent variable (whether dependent or independent) is always measured by observed variables through confirmatory factor analysis. As a matter of fact, under SEM, measurement models are developed to define latent variables.
1.10. Key Elements to Retain Whenever Working with SEM
Variable definitions follow the LISREL convention.
ξs (Ksis): latent-independent variables
Xs: observed-independent variables
ηs (Etas): latent-dependent variables
Ys: indicators for the latent-dependent variables
SEM requires a sample size of more than 200.
General rule for SEM: 5 to 10 observations are needed for each model parameter estimated.
Sample covariance matrix versus implied covariance matrix as generated by the theoretical model.
The correlation matrix amounts to a standardised scaling among the observed variables.
S: sample/observed covariance matrix
Σ(θ): implied covariance matrix based on parameters estimated on theoretical foundations
S − Σ(θ): residual matrix, also called the fitted residuals in LISREL
Examine the residual values to ensure that they are small.
When working with SEM, there is the need to report both model fit and parameter estimates. The problem is that, though a model may generate sound model fit, its parameter estimates may be problematic. Even if the SEM model fits the data, this does not guarantee the existence of causal inference. Consequently, it can be argued that, under SEM modelling, the statistical significance of the parameters predominates over the model fit: despite the fact that a model fits the data well, if some of the parameters are not significant, this would blow up or discontinue the theory. A correlation matrix simplifies the interpretation of results by rescaling all variables to have unit variance. A structural model, though it links only the latent variables, can also contain independent observed variables; this can also be the case for a measurement model. By default, the measurement model links the latent variables to the dependent observed variables. However, additional observed variables can still be incorporated as independent variables in a measurement model which purports to measure a dependent latent variable. SEM is not always scale free, so a model that fits the correlation matrix may not fit the covariance matrix; in this case, it is recommended to have recourse to the covariance matrix. Two major types of statistical tests are used in SEM: (a) the χ2 likelihood test for goodness of fit, which assesses the overidentifying restrictions placed on the model; (b) t-tests for the individual parameters.
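The fitted-residual check S − Σ(θ) can be sketched as follows; both matrices here are made-up numbers, shown only to illustrate the computation.

```python
# Sample covariance matrix S and a model-implied matrix Sigma(theta);
# both are invented numbers for illustration.
S = [[1.00, 0.42, 0.35],
     [0.42, 1.00, 0.29],
     [0.35, 0.29, 1.00]]
Sigma = [[1.00, 0.40, 0.36],
         [0.40, 1.00, 0.30],
         [0.36, 0.30, 1.00]]

# Fitted residual matrix S - Sigma(theta): small entries suggest the
# theoretical model reproduces the observed covariances well.
residual = [[round(s - m, 2) for s, m in zip(rs, rm)]
            for rs, rm in zip(S, Sigma)]
largest = max(abs(v) for row in residual for v in row)
print(residual, largest)
```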
The real interpretation of correlation analysis under SEM: the correlation coefficient Corr(X, Y) admits three distinct readings:
(i) X → Y
(ii) Y → X
(iii) Z → X and Z → Y (a third variable drives both; such mediating effects are usually overlooked)
Some of the benefits of SEM over regression analysis:
1. Regression analysis does not control for measurement errors and can deal with only one dependent variable at a time.
2. SEM controls for measurement errors.
3. SEM can handle several dependent variables.
4. SEM allows several independent variables without causing multicollinearity problems.
5. Regression analysis assumes that the observable proxies are exact measures of the theoretical constructs, which may not be true due to measurement errors. Alternatively stated, regression analysis deals only with observable variables; SEM also accommodates latent constructs.
6. The assumptions of regression analysis can easily be violated. However, under SEM, the normal distribution required by maximum likelihood can be achieved by a normal score transformation.
A separate chapter of the book probes further into the benefits attached to SEM.
2. Types of SEM
There are different types of SEM. This chapter provides a brief description of the four main types of SEM, namely, multilevel SEM, non-linear SEM, Bayesian SEM and non-parametric SEM.
2.1. Multilevel SEM
Multilevel modelling is often known as hierarchical linear modelling. Under multilevel SEM, the independence assumption among units is infringed, so that units become nested in clusters. In essence, multilevel SEM constitutes a combination of multilevel modelling and SEM. The following are some important recurring facts pertaining to SEM:
• SEM is a powerful statistical technique when used appropriately.
• Need sample size > 100.
• A test of model fit is vital under SEM.
• If model modification is done, cross-validation is recommended.
• Avoid cross-loadings, that is, allowing one item to be an indicator of more than one factor.
2.2. Non-Linear SEM
• Non-linear SEMs are formulated with a measurement equation that is the same as in linear SEMs, and a structural equation that is non-linear in the explanatory latent variables.
Non-linear SEM is motivated by the fact that non-linear relationships prevail among latent variables.
2.3. Bayesian SEM
The Bayesian approach can be applied to deal efficiently with:
(1) non-linear SEMs;
(2) SEMs with mixed discrete and continuous data;
(3) multilevel SEMs;
(4) finite mixture SEMs;
(5) SEMs with missing data.
Bayesian approach → very effective for dealing with complex SEMs. The Bayesian approach treats the unknown parameter vector θ in the model as random and analyses the posterior distribution of θ, which is the conditional distribution of θ given the observed data set.
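A toy illustration of this idea, assuming the simplest conjugate case (a normal mean θ with known unit variance and a N(0, 1) prior): the posterior of θ is again normal, with a precision-weighted mean. The data values are invented for the example.

```python
import statistics

# Illustrative observations assumed to be N(theta, 1) with theta unknown.
data = [1.2, 0.8, 1.5, 1.1, 0.9]
n, xbar = len(data), statistics.mean(data)

prior_mean, prior_var = 0.0, 1.0   # prior belief: theta ~ N(0, 1)
like_var = 1.0                     # known observation variance

# Standard conjugate-normal update: posterior precision is the sum of the
# prior precision and the data precision; the mean is precision-weighted.
post_var = 1.0 / (1.0 / prior_var + n / like_var)
post_mean = post_var * (prior_mean / prior_var + n * xbar / like_var)

print(post_mean, post_var)
```

The posterior mean (about 0.92 here) is shrunk from the sample mean 1.1 towards the prior mean 0, and the posterior variance shrinks as more data arrive.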
2.4. Non-Parametric SEM
Non-parametric SEM is used when researchers have no idea about the functional relationships among the outcome and explanatory latent variables. Bayesian P-splines are used to formulate the non-parametric structural equation.
3. Benefits of SEM
Fabrigar, Porter, and Norris (2010) point out ‘we do not think that causal language of any sort is always inappropriate when using SEM’.
• SEM is highly powerful compared to regression analysis in terms of controlling for measurement errors. Regression analysis assumes that the observed proxies constitute exact measures, or perfect substitutes, of the latent variables/constructs, which may not hold true in practice due to measurement errors. SEM obviates this problem by giving due consideration to latent variables: by using latent variables, SEM effectively controls for measurement errors, so that more valid coefficients are obtained.
• SEM is able to deal with many dependent variables compared to regression analysis, which usually has only one dependent variable. As a matter of fact, compared to regression analysis that tends to estimate distinct parts of an overall model, here, SEM is able to simultaneously estimate all effects in the model. In that respect, SEM can estimate parameters with full-information maximum likelihood that generates consistent and asymptotically efficient estimates.
• SEM is more flexible than regression analysis, because SEM is able to test competing models that make different assumptions about the causal directions. In essence, SEM is able to tackle the problem posed by causal inference in non-experimental data, where the hypothesised causal variable may covary with other causal variables, by accounting for random measurement errors. SEM is also flexible enough to reverse causal assumptions among variables in a model. In that respect, SEM provides more accurate estimates of the impact of a hypothesised causal variable while controlling for the effects of other potential causal variables. Alternatively stated, the use of latent variables enables more accurate estimates to be generated.
• The normality requirement can be fulfilled in SEM when applying maximum likelihood by using a normal score transformation. Such normality fulfilment is not easily possible in the case of regression analysis.
• SEM allows several observed indicator variables to capture the latent variables without running into the multicollinearity problems that tend to buffet OLS regressions. Hence, SEM is immune to the multicollinearity problem by virtue of the fact that multiple measures are needed to describe a latent construct.
• SEM constitutes a maturing tool in causal modelling which enables complex relationships to be analysed. For instance, SEM can handle multilevel nested models, a large number of endogenous and exogenous variables, indirect, multiple and reverse relationships, and time series with auto-correlated error, non-normal and incomplete data. Multigroup models are useful under SEM when the focus is to know whether the same model holds in different populations. When a multiple group model runs, the estimation for all the groups proceeds simultaneously.
• SEM can be used in both non-experimental and experimental data analysis. Experimental research is based on a series of tests, controls, treatments and data recording. Non-experimental research is based on data from other sources such as online databases. SEM is particularly useful when experiments are not feasible and strong causal claims must otherwise rest on non-experimental data.
• The chief strength of SEM is that it has encompassing possibilities; being able to undertake both single causal interpretation at one end of the continuum, as well as all causal assumptions simultaneously at the other end of the continuum (Figure 3.1).
Figure 3.1: Encompassing Possibilities Embedded in SEM. The power of SEM spans a continuum, from all causal assumptions being equally plausible at one end to a single causal interpretation being plausible at the other.
• SEM is highly robust as it can estimate parameters with full-information maximum likelihood, which provides consistent and asymptotically efficient estimates. Moreover, SEM exhibits higher accuracy in its parameter estimates when compared to the performance of competing models.
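The normal score transformation invoked in this chapter can be sketched as a rank-based (Van der Waerden style) mapping; the skewed input data below are simulated purely for illustration.

```python
import random
import statistics
from statistics import NormalDist

random.seed(7)

# Heavily skewed illustrative data (exponential-like), far from normal.
data = [random.expovariate(1.0) for _ in range(1000)]

# Rank-based normal score transformation: replace each value by the
# standard normal quantile of its rank position i / (n + 1).
n = len(data)
rank = {v: i + 1 for i, v in enumerate(sorted(data))}
scores = [NormalDist().inv_cdf(rank[v] / (n + 1)) for v in data]

# The transformed scores are approximately standard normal:
# mean near 0 and standard deviation near 1, whatever the input shape.
print(round(statistics.mean(scores), 2), round(statistics.stdev(scores), 2))
```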
4. Drawbacks of SEM
Like any technique employed in data analysis, SEM, too, presents certain deficiencies.
• The main limitation of SEM relates to the underlying assumptions made. In SEM, it is usually assumed that the sample data follows a multivariate normal distribution so that the means and covariance matrices contain all the information. In practice, this is less likely to hold true.
• The use of too many latent variables in the equation may lead to poor instruments. One avenue for dealing with this problem is to apply the multiple indicators and multiple causes (MIMIC) model to scale down the number of latent variables.
• The scale of the latent variable is unknown. Alternatively stated, the latent variables, by virtue of being imaginary, do not have set scales and are thereby assigned arbitrary units. This is known as factor indeterminacy, a common problem embedded in structural equation modelling. Factor indeterminacy signifies that an infinite number of parameter estimates can be derived from the reduced form by arbitrarily changing the scale of the latent variables. The indeterminacy problem can be solved as follows:
(a) Normalisation: a unit variance is assigned to each latent variable (set the variance of the latent variable equal to one). Variables are normalised before the analysis by transforming the data into normal scores so that the maximum likelihood technique can be applied; maximum likelihood presumes normality to estimate the parameters.
(b) Fix a non-zero coefficient at unity for each latent variable.
(c) Pick a good indicator of the latent construct.
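Remedies (a) and (b) exist because of the scale freedom just described, and a few lines of arithmetic make the indeterminacy concrete. In a one-factor model the implied covariances are λ_j λ_k φ, so rescaling the factor variance while counter-scaling the loadings changes nothing observable (all numbers here are arbitrary).

```python
# Factor indeterminacy in miniature: a one-factor model with loadings lam
# and factor variance phi implies cov(V_j, V_k) = lam_j * lam_k * phi.
# Rescaling the latent variable (phi -> c^2 * phi, lam -> lam / c) leaves
# every implied covariance unchanged, so the scale must be fixed by hand.
lam = [0.9, 0.7, 0.5]
phi = 4.0
c = 2.0

implied = [[lj * lk * phi for lk in lam] for lj in lam]
rescaled = [[(lj / c) * (lk / c) * (c * c * phi) for lk in lam] for lj in lam]

print(implied == rescaled)
```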
• There is a need to report both model fit and parameter estimates in SEM. The hitch is that, even though a model may produce sound model fit, its parameter estimates may be logically implausible. This problem buffets many researchers in practice.
• SEM always needs large sample sizes. • By being a correlation-based analysis, SEM is highly sensitive to measurement scale, missing data, outliers, nonlinearity, non-normality and restricted data range. It is hard to simultaneously deal with these issues.
• SEM tends to use maximum likelihood, which is a full-information technique, unlike ordinary least squares, which is a partial-information technique. As a result, an error in an upstream parameter propagates, triggering further errors downstream.
• When implementing SEM, many coefficients can be constrained to be equal to zero in order to have a model which can be identified.
• The chi-square statistic is sensitive to the sample size.
• A well-fitting model may still omit vital variables. Hence, SEM is impotent in terms of detecting possible omitted-variable bias in model estimation; herein lies the power of regression analysis.
• The main power of SEM pertains to its ability to 'assess the fit of theoretically derived predictions to the data'. The fit of a model to the data conveys no information about the theory's validity; that is, even if the model fits the data, this does not guarantee the existence of causal inference.
5. Steps in Structural Equation Modelling

5.1. Specification
Specification involves specifying a theoretical model, that is, specifying a causal model from theory by building a path diagram of causal relations. In a nutshell, under specification, the aim is to convert the path diagram into a set of structural and measurement models, all based on theoretical foundations. The model is properly specified if the sample covariance matrix is amply reproduced by the implied theoretical model; the aim is to fit a model that fits the covariance structure. Mis-specified models trigger biased parameter estimates, known as specification error. It is vital to note that theory can also assist in making a model parsimonious. Consider the case of a regression model having 17 candidate independent variables and 1 dependent variable; this would signify around 2^17 = 131,072 possible regression models. Not all of them would be theoretically meaningful: theory dictates that certain core variables be retained while peripheral variables are removed. SEM is inherently a confirmatory technique. The best way to describe specification under SEM is as follows: develop the model and convert the information into a computer programme.
Model specification is usually guided by theory, as pointed out by Bollen (1989):

Σ = Σ(θ)

where Σ: observed population covariance matrix; =: if the equality sign holds, the model fits the data; Σ(θ): covariance matrix implied by the model; θ: vector of model parameters; Σ − Σ(θ): residual matrix. Every latent variable needs to be assigned a scale; this is done by fixing one of its loadings to one. The latent variables need to relate to a few other things to allow their identification.
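To make the notation concrete, the sketch below builds the covariance matrix implied by a hypothetical one-factor model and its residual matrix against a sample matrix S. All loadings, error variances and sample values are made up for illustration; they are not from the book.

```python
import numpy as np

# Hypothetical one-factor model with three indicators:
# x_i = lambda_i * F + e_i, with Var(F) fixed to 1 to scale the latent variable.
lam = np.array([1.0, 0.8, 0.6])        # loadings (first fixed to 1)
theta = np.diag([0.3, 0.4, 0.5])       # error (unique) variances

# Covariance matrix implied by the model: Sigma(theta) = lam lam' + Theta
sigma_implied = np.outer(lam, lam) + theta

# Illustrative sample covariance matrix S (made-up numbers)
S = np.array([[1.32, 0.78, 0.61],
              [0.78, 1.02, 0.49],
              [0.61, 0.49, 0.88]])

# Residual matrix S - Sigma(theta): small entries indicate good fit
residual = S - sigma_implied
```

The model fits to the extent that every entry of `residual` is close to zero.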
5.2. Identification
The main aim underpinning identification is to derive a unique set of parameter estimates from the sample covariance matrix and the theoretical model. If all parameters are identified, the whole model is identified; if one or more parameters are not identified, the entire model is not identified. There are three main cases of identification, as discussed below:

(i) Under-identified: one or more parameters are not uniquely determined due to a lack of information in the covariance matrix; parameter estimates become unreliable and the degrees of freedom are negative.
(ii) Just identified: all parameters are uniquely determined because sufficient information is available in the covariance matrix. In practice, just identified models are not very useful.
(iii) Over-identified: there is more than one way of estimating one or more parameters, since more than enough information is available in the covariance matrix; in general no single solution reproduces the covariance matrix exactly, so estimates are chosen to give the best fit.

A model is identified if it is either just identified or over-identified. There are two conditions for establishing the identification of a model: (a) the order condition and (b) the rank condition. Order condition: the number of free parameters to be estimated must be ≤ the number of distinct values in the covariance matrix. To count the distinct values in the covariance matrix, only the diagonal variances and one set of off-diagonal covariance terms are counted; alternatively stated, only one off-diagonal triangle is counted, since the two off-diagonal triangles are symmetric. With p the number of variables, the number of distinct values in the covariance matrix is
p(p + 1)/2
Figure 5.1: Types of Parameters. Parameters can be made free (to be estimated), fixed (set to a value, 0 or 1) or constrained (set equal to one or more other parameters) (see Figure 5.1). There are various ways to avoid identification problems, as discussed below:
1. Start with a parsimonious model that has a minimum number of parameters, then successively incorporate variables deemed crucial.
2. Non-recursive SEMs have feedbacks and can be a source of identification problems. Recursive models do not exhibit feedbacks, so all structural relationships are unidirectional; causality flows in only one direction.

The degrees-of-freedom approach is widely used to assess model identification under SEM. As a matter of fact, a model is identified if there is a unique solution for every parameter. Consequently, an identified model must have non-negative degrees of freedom; that is, the number of estimated parameters should be less than or equal to the number of data points obtained from the sample covariance matrix. An over-identified model has positive degrees of freedom, whereas an under-identified model has negative degrees of freedom.
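The degrees-of-freedom rule just described can be sketched as a small helper. The function name and the example counts are illustrative, not from the book.

```python
# Classify a model's identification status from the order condition:
# df = distinct covariance-matrix elements - free parameters.
def identification_status(p, free_parameters):
    """p: number of observed variables; free_parameters: parameters to estimate."""
    data_points = p * (p + 1) // 2      # distinct elements in the covariance matrix
    df = data_points - free_parameters
    if df > 0:
        return df, "over-identified"
    if df == 0:
        return df, "just identified"
    return df, "under-identified"

# Example: 6 observed variables give 21 data points; 13 free parameters leave df = 8
print(identification_status(6, 13))   # (8, 'over-identified')
```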
5.3. Estimation (Focus: Implied Covariance Matrix)
The objective under estimation is to choose the input matrix (covariance/correlation) type in order to estimate the proposed model, that is, to compute the parameters of the model. It is of paramount significance to note that, whenever dealing with SEMs, two covariance matrices are involved. The first is the sample covariance matrix, generated directly from the data. The second is the implied covariance matrix, generated from the pre-established or specified theoretical model under consideration; this is graphically accomplished in SEM. The estimates of the model parameters are derived from this implied covariance matrix based on the theoretical model. While the sample covariance
matrix is usually denoted by S, the implied covariance matrix based on the theoretical model is often denoted by Σ. Distinct estimation methods are used when fitting Σ to S, namely, ordinary least squares (also labelled unweighted least squares), generalized least squares and maximum likelihood (which is iterative in its approach). While generalized least squares and maximum likelihood are scale free, ordinary least squares is scale dependent. Scale free signifies that transformed and untransformed variables yield properly related estimates; in essence, the estimates are robust in capturing the true relationships among the variables, whatever transformation is applied to some of them. This benefit of maximum likelihood and generalized least squares comes at the cost of requiring large samples: both estimators have desirable large-sample properties (minimum variance and unbiasedness). Moreover, maximum likelihood and generalized least squares are highly useful when non-normality manifests, such as skewed, peaked, or both skewed and peaked distributions. Indeed, if ordinary least squares were applied to non-normal data, parameter estimates and standard errors would not be correct and would be subject to weak interpretation. However, maximum likelihood is robust only to slight deviations from normality.
5.4. Testing Fit (Fitting the Sample Covariance Matrix and the Implied Covariance Matrix)
There is perfect fit to the data when S equals Σ, which is unlikely to be the case in practice. When fitting the function, the aim is to minimize the difference between S and Σ, which is labelled the residual matrix. Hence, there is also the need to scan the residuals in the residual matrix to ensure that they are small.
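As one concrete fitting function, the sketch below implements the standard unweighted least squares discrepancy, F_ULS = ½·tr[(S − Σ)²], which is driven to zero at perfect fit. The matrices are made up for illustration.

```python
import numpy as np

# Unweighted (ordinary) least squares fitting function:
# F_ULS = 1/2 * trace[(S - Sigma)^2], minimized during estimation.
def f_uls(S, sigma_implied):
    resid = S - sigma_implied
    return 0.5 * np.trace(resid @ resid)

S = np.array([[1.0, 0.5],
              [0.5, 1.0]])
print(f_uls(S, S))                         # perfect fit: 0.0
print(f_uls(S, np.array([[1.0, 0.3],
                         [0.3, 1.0]])))    # positive, roughly 0.04
```

The further Σ drifts from S, the larger the discrepancy value.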
Testing involves evaluation of the model fit, which implies the use of various model fit indices (Figure 5.2). Model testing follows a dual approach: the fit of the entire model and the fit of the individual parameters.

Figure 5.2: Types of Model Fits.

Fit of the entire model: many fit indices exist in SEM, mainly based on comparing the model-implied covariance matrix to the sample covariance matrix. Fit of the individual parameters: check the t-value; check whether the sign is consistent with theory; ensure (a) non-negative variances and (b) correlations that do not exceed 1. The book dedicates a full chapter to probing the details of model fit.
5.5. Re-Specification
If the model is deemed sound in terms of model fit, then the results are interpreted. Otherwise, the model is modified. Indeed, if the original model fits poorly, that is, the difference between Σ and S is large, then the model needs to be altered. A priori, researchers would be motivated to remove statistically
insignificant parameters. But it is vital to note that parameters may not be significant under small samples. A modification index for a specific non-free parameter shows that, if the parameter were allowed to become free in a subsequent model, the chi-square goodness-of-fit value would be predicted to fall by at least the value of the modification index. The expected parameter change (EPC) statistic in LISREL is highly useful when the sign of a parameter violates its theoretical foundation: the EPC depicts the estimated change, in both magnitude and sign, of each non-free parameter if that parameter were set free and estimated. Standardized residuals > 2.58 indicate poor model fit (always examine the standardized residual matrix to check for large values). It is vital to bear in mind that, in SEM estimation, there is always the need to cross-validate the results on a new sample; this is akin to out-of-sample analysis. To this end, the whole sample is split into two parts, namely, the estimation sample and the validation sample.
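The |2.58| threshold above can be applied mechanically by scanning a standardized residual matrix. The matrix below is made up for illustration; only its flagging logic matters.

```python
import numpy as np

# Scan a standardized residual matrix for entries exceeding |2.58|,
# the poor-local-fit threshold quoted above (illustrative values).
std_resid = np.array([[0.00, 1.10, 2.80],
                      [1.10, 0.00, 0.40],
                      [2.80, 0.40, 0.00]])

# Keep only the upper triangle, since the matrix is symmetric
rows, cols = np.where(np.triu(np.abs(std_resid) > 2.58, k=1))
flagged = list(zip(rows.tolist(), cols.tolist()))
print(flagged)   # [(0, 2)]
```

Each flagged pair points to a covariance the model reproduces poorly, a natural candidate for re-specification.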
6. Model Specification: Path Diagram in SEM

6.1. Introduction
The path diagram constitutes one of the foundation stones of SEM implementation. In simple terms, a path diagram is a pictorial representation of the measurement and structural equations. Wright (1934) developed the path model. Path models rely mainly on correlation coefficients and regression analysis to model complex relationships among observed variables only. Path models are not that different from the simultaneous equation models widely used in applied econometrics; as a matter of fact, path analysis is also labelled a simultaneous equations model, a special case of SEM with exclusive focus on observed variables. The main reason for drawing a path diagram under SEM is to effectively communicate the basic conceptual ideas under study. Alternatively stated, the chief rationale for using a path diagram is to get a clear picture of the hypothesized model under scrutiny, usually guided by theory. Various conspicuous drawings are widely used under SEM: straight arrows capture the direct effects (regression coefficients) and curved arrows factor in the bidirectional 'correlational' relationships. An arrow that points to the latent dependent variable and an arrow that points out of the latent independent variable also reflect bidirectional effects, but at the
variable level. As depicted below, only observed and latent variables are enclosed; residuals, though incorporated into the path diagram, are not enclosed. Path analysis in SEM usually involves only the observed variables. Under path analysis, both the measurement and structural parameters are tested simultaneously. Figure 6.1 shows the symbols used in SEM: observed variables (x and y) are enclosed in rectangles, boxes or squares; latent variables (ξ, η) are enclosed in ovals, ellipses or circles; a straight arrow is a causal arrow; a curved arrow is a correlational arrow for labelling covariance structures among the error terms; a double-headed straight arrow denotes reciprocal effects.

Figure 6.1: Observed and Latent Variables. Note: Error terms are also known as residuals or unique factors.
Figure 6.2: An Overview of Path Analysis in SEM. Two latent variables, KD and BQ, are linked by the structural model; measurement models relate KD to three different measures (A, B, C) and BQ to three different measures (D, E, F). The structural model in Figure 6.2 consists of the two latent variables.
• Rectangles represent observed variables A, B, C, D, E, F. There is an error component for each observed variable.
• Ellipses represent unobserved latent variables. • Arrows represent the regression paths linking the latent factors and their observed measures; the arrows point from the latent constructs into the observed variables. In the current context, an arrow also captures the impact of one latent variable on another latent variable; the dependent latent variable thereby receives an error component, as depicted by the bottom arrow.
• Circles with an arrow pointing towards each observed variable represent the measurement error terms.
• Each latent factor can be connected to every other factor by a curved two-headed arrow, meaning that the factor is allowed to covary with the other factor.
Figure 6.3: An Overview of Path Analysis in SEM: Another Example. Notes: F1, F2: two latent variables, also known as common factors or unobserved variables because they are not directly measured; latent variables are usually represented in ovals, ellipses or circles. X1,…,X6: observed/manifest/indicator variables. E1,…,E6: residuals/unique factors.

SEM researchers should always keep in mind the following points:

1. The underlying rationale for drawing the path diagram is to display the conceptual ideas.
2. Residuals are included in the path diagram but are not enclosed; outside arrows pointing to the latent variable and the dependent variable capture random errors.
3. A one-way arrow depicts a direct effect of one variable on another; in essence, one-way arrows represent regression coefficients.
4. Two-way arrows show that the variables may be correlated. Alternatively stated, curved double-headed arrows represent covariances among the variables.
5. The coefficient associated with each arrow shows the corresponding parameter.
6. Nonexistence of an arrow between two variables signifies that these two variables are assumed not to be directly related.
7. The means (intercepts) may not be presented in the diagram.
8. A mediating variable may be present, which means there is no direct effect of the independent variables on the dependent variable; the independent variables first affect the mediating latent variable, which then impacts the dependent variable. The relationship between the independent and dependent variables is mediated by the latent variable.
9. The covariance matrix emanates only from the observed variables, not from the latent variables. This is of paramount significance: when computing the number of unique elements, only the number of observed variables should be used.
10. Factor analysis is akin to a path diagram by virtue of the fact that the observed variables are a function of both the common factors and the unique factors. SEM caters for the model under confirmatory factor analysis, not for the model used under exploratory factor analysis.
11. Under SEM analysis, interest is laid on a large p-value, not a small p-value.
12. Each latent variable is an unobserved variable that has no established unit of measurement. To define the unit of measurement of each latent variable, a non-zero coefficient is given to one of its observed variables as an indicator.
13. The structural model under SEM is used to examine causal relationships.
14. Path analysis in SEM involves only observed variables. Actually, despite the fact that latent variables are included when doing the analysis, the relationships boil down to assessing only the observed variables. The proof of this emanates when deriving the
covariances from the specified models. This is further bolstered by the fact that the unique elements of the covariance matrix are derived from the number of observed variables.
15. The proposed model under path analysis comprises a set of specified causal and non-causal relationships among the variables.
16. For ease of exposition, it is important to have one path diagram to represent the measurement equation and another path diagram to capture the structural equation.
17. Second-order factor analysis pertains to latent variables whose indicators are themselves latent variables.
18. Path models are more robust than regression analysis as they allow for many independent and dependent variables, so they are able to test complicated models.
19. SEM merges path models and factor models. Factor analysis constitutes a data reduction technique.
6.2. Benefits of Path Analysis
• Provides a test of overall model fit; tests the entire model at once.
• Allows for multiple independent and dependent variables.
• Allows testing relationships among observed and latent variables as a whole.
6.3. Drawbacks of Path Analysis
• Treats the observed variables as perfect substitutes for the constructs they represent. This is the basis for having recourse to SEM, which uses latent variables: a latent variable explains the relations among the observed variables that measure the construct.
• Latent variables, by virtue of being completely imaginary, do not have set scales, and thus need to be assigned some arbitrary units.
6.4. Solution to Latent Construct
1. Pick a good indicator of the latent construct. 2. Set the variance of the latent variable to 1.
6.5. Concepts under Path Analysis

Figure 6.4: Concepts under Path Analysis Generalized to SEM. The diagram links a variable R to four latent variables A, B, C and D, each measured by three indicators (A1–A3, B1–B3, C1–C3, D1–D3).
Based on the rectangular components, it can easily be seen that there are 13 observed variables in Figure 6.4. In a parallel manner, based on the circular forms, four latent variables are present in Figure 6.4.
13 observed variables: 13(13 + 1)/2 = 91 unique elements in the covariance matrix.
Unique elements in a covariance matrix = number of variances + number of covariances along only the lower or upper off-diagonal. It can easily be checked, under the order condition, that the model is over-identified. The sample covariance matrix is always symmetric. K observed variables lead to a (K × K) covariance matrix with K(K + 1)/2 unique elements, in which case there are K variances and [K(K + 1)/2] − K covariances. Variances are found on the diagonal; the covariances are located below/above the diagonal. Total number of distinct elements in a (5 × 5) matrix = 15 (5 variances and 10 covariances). The population covariance matrix is represented by

Σ =
| σ1²                     |
| σ12  σ2²                |
| σ13  σ23  σ3²           |
| σ14  σ24  σ34  σ4²      |
| σ15  σ25  σ35  σ45  σ5² |

The LISREL programme considers separate sets of equations for the measurement model and for the structural model.
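The counts above can be confirmed by enumeration rather than formula; the helper name below is illustrative.

```python
# Count the unique elements of a symmetric covariance matrix by enumeration,
# as a check on the K(K + 1)/2 formula used above.
def count_unique(K):
    variances = K                    # diagonal elements
    covariances = sum(range(K))      # one off-diagonal triangle: 0 + 1 + ... + (K - 1)
    return variances + covariances

print(count_unique(13))   # 91, matching the 13-variable example
print(count_unique(5))    # 15: 5 variances and 10 covariances
```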
6.6. Assumptions under Path Diagram
1. All causal relations are linear. 2. A path can have only one curved arrow.
3. A path cannot retreat or go backward once it has gone forward.
4. Causal closure is respected, that is, all causes of the variables are fully represented in the specified model. Most importantly, all causal relations between variables are represented, whether they exist or not.
5. A path cannot go through the same construct more than once.

In the measurement model: L.H.S.: observed variables; R.H.S.: latent variables. In the structural model: L.H.S.: dependent latent variables; R.H.S.: independent latent variables. It can also occur that an observed independent variable impacts the dependent latent variable in the structural model (Figure 6.5).
Figure 6.5: Equation Modelling under Path Analysis: Deriving the Covariance Structure of the Entire Matrix.

Measurement model:

I = β1 I1 + β2 I2 + β3 I3 + ε1    (6.1)

Structural model:

B = β4 I + εB    (6.2)
A priori, only two equations are required to estimate the above path analysis, since only two variables have one-way arrows pointing into them. It would be inappropriate to run these two equations separately but
instead these equations should be estimated in one go, leading to the notion of simultaneous equation modelling (measurement and structural equations run simultaneously) as an inherent feature of path analysis. As a matter of fact, if separate regression equations were fitted to the various parts of the model, there would be no way to gauge overall model fit. The observed relationships under path analysis are usually captured via the covariances, often via the sample covariance matrix, S. In practice, due to time and cost issues, it is not feasible to obtain the population covariance matrix, so recourse is made to the sample covariance matrix, which serves as a sound estimate of the population covariance matrix. Technically speaking, as the sample size scales up, the sample covariance matrix becomes a good estimate of the population covariance matrix. The model fit is evaluated by comparing the sample covariance matrix with an estimate of the population covariance matrix, denoted by Σ̂.

Path diagrams: unidirectional versus bidirectional arrows: (a) unidirectional arrows → causal relations (capture regression coefficients); (b) bidirectional arrows → non-causal/correlational relationships. When working with SEM, the arrows work in a particular way: when exogenous variables affect a variable, the arrows point out from those exogenous variables into it. This is one of the distinctive aspects of path analysis under SEM (Figure 6.6).
Figure 6.6: Modelling Example.
X and Y are correlated; X and Y are exogenous variables and Z is an endogenous variable. Now add a variable Q with the hypothesis that Q is caused by both X and Y (Figure 6.7).

Figure 6.7: Modelling Example.
6.7. From Path Analysis to Equation Analysis: Deriving Covariances from Path Analysis
In Figure 6.8, CASH, INV and STO point into AST, which in turn points into PRO.

Figure 6.8: Modelling Example. CASH: cash; INV: investment; STO: stocks; AST: asset structure; PRO: profitability.

It is of paramount significance to note that, though there are five variables in the specified path analysis, what really matters is the number of observed variables, that is, three variables in the currently
specified path analysis. The ultimate aim of the analysis is to derive the unique elements of the covariance matrix.

Covariance (CASH, AST)?

AST = β1 CASH + β2 INV + β3 STO + ε

Cov(CASH, AST) = Cov(CASH, β1 CASH + β2 INV + β3 STO + ε)

The covariance of CASH with itself is its variance, and Cov(CASH, ε) = 0, so:

Cov(CASH, AST) = β1 Var(CASH) + β2 Cov(CASH, INV) + β3 Cov(CASH, STO)

Note: The random error is not correlated with anything (Figure 6.9).
Figure 6.9: Modelling Example.

Covariance (CASH, PRO)? (Figure 6.10)

AST = β1 CASH + β2 INV + β3 STO + ε
PRO = β4 AST + μ
PRO = β4(β1 CASH + β2 INV + β3 STO + ε) + μ
= β4β1 CASH + β4β2 INV + β4β3 STO + β4ε + μ

Cov(CASH, PRO)
= Cov(CASH, β4β1 CASH + β4β2 INV + β4β3 STO + β4ε + μ)
= β4β1 Var(CASH) + β4β2 Cov(CASH, INV) + β4β3 Cov(CASH, STO) + β4 Cov(CASH, ε) + Cov(CASH, μ)
= β4β1 Var(CASH) + β4β2 Cov(CASH, INV) + β4β3 Cov(CASH, STO) + 0 + 0
= β4β1 Var(CASH) + β4β2 Cov(CASH, INV) + β4β3 Cov(CASH, STO)
Figure 6.10: Modelling Example.

Covariance (INV, AST)? (Figure 6.11)

AST = β1 CASH + β2 INV + β3 STO + ε

Cov(INV, AST) = Cov(INV, β1 CASH + β2 INV + β3 STO + ε)
= β1 Cov(INV, CASH) + β2 Var(INV) + β3 Cov(INV, STO) + Cov(INV, ε)
= β1 Cov(INV, CASH) + β2 Var(INV) + β3 Cov(INV, STO) + 0
= β1 Cov(INV, CASH) + β2 Var(INV) + β3 Cov(INV, STO)
Figure 6.11: Modelling Example.

Covariance (INV, PRO)? (Figure 6.12)

AST = β1 CASH + β2 INV + β3 STO + ε
PRO = β4 AST + μ
PRO = β4(β1 CASH + β2 INV + β3 STO + ε) + μ
= β4β1 CASH + β4β2 INV + β4β3 STO + β4ε + μ

Cov(INV, PRO) = Cov(INV, β4β1 CASH + β4β2 INV + β4β3 STO + β4ε + μ)
= β4β1 Cov(INV, CASH) + β4β2 Var(INV) + β4β3 Cov(INV, STO) + β4 Cov(INV, ε) + Cov(INV, μ)
= β4β1 Cov(INV, CASH) + β4β2 Var(INV) + β4β3 Cov(INV, STO) + 0 + 0
= β4β1 Cov(INV, CASH) + β4β2 Var(INV) + β4β3 Cov(INV, STO)
Figure 6.12: Modelling Example.

Covariance (STO, AST)? (Figure 6.13)

AST = β1 CASH + β2 INV + β3 STO + ε

Cov(STO, AST) = Cov(STO, β1 CASH + β2 INV + β3 STO + ε)
= β1 Cov(STO, CASH) + β2 Cov(STO, INV) + β3 Var(STO) + Cov(STO, ε)
= β1 Cov(STO, CASH) + β2 Cov(STO, INV) + β3 Var(STO) + 0
= β1 Cov(STO, CASH) + β2 Cov(STO, INV) + β3 Var(STO)
Figure 6.13: Modelling Example.

Covariance (STO, PRO)?

AST = β1 CASH + β2 INV + β3 STO + ε
PRO = β4 AST + μ
PRO = β4(β1 CASH + β2 INV + β3 STO + ε) + μ
= β4β1 CASH + β4β2 INV + β4β3 STO + β4ε + μ

Cov(STO, PRO) = Cov(STO, β4β1 CASH + β4β2 INV + β4β3 STO + β4ε + μ)
= β4β1 Cov(STO, CASH) + β4β2 Cov(STO, INV) + β4β3 Var(STO) + β4 Cov(STO, ε) + Cov(STO, μ)
= β4β1 Cov(STO, CASH) + β4β2 Cov(STO, INV) + β4β3 Var(STO) + 0 + 0
= β4β1 Cov(STO, CASH) + β4β2 Cov(STO, INV) + β4β3 Var(STO)

Gathering all the terms:

Cov(CASH, AST) = β1 Var(CASH) + β2 Cov(CASH, INV) + β3 Cov(CASH, STO)
Cov(CASH, PRO) = β4β1 Var(CASH) + β4β2 Cov(CASH, INV) + β4β3 Cov(CASH, STO)
Cov(INV, AST) = β1 Cov(INV, CASH) + β2 Var(INV) + β3 Cov(INV, STO)
Cov(INV, PRO) = β4β1 Cov(INV, CASH) + β4β2 Var(INV) + β4β3 Cov(INV, STO)
Cov(STO, AST) = β1 Cov(STO, CASH) + β2 Cov(STO, INV) + β3 Var(STO)
Cov(STO, PRO) = β4β1 Cov(STO, CASH) + β4β2 Cov(STO, INV) + β4β3 Var(STO)
Summarizing all the elements of the entire matrix:

Regression coefficients: β1, β2, β3, β4. Note: regression coefficients are also known as loadings. There are only four regression coefficients since there are only four directional arrows.
Variances: Var(CASH), Var(INV), Var(STO).
Covariances: Cov(CASH, INV), Cov(CASH, STO), Cov(INV, STO).

In fact, since there are three observed variables, this leads to [(3 × 4)/2] = 6 unique elements, as displayed by the three variances and three covariances. There are also two error terms, based on the independent arrows pointing to the two dependent variables.
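As a sanity check on the covariance algebra above, one can simulate data from this path model and verify numerically that the sample covariance of CASH and PRO matches β4β1 Var(CASH) + β4β2 Cov(CASH, INV) + β4β3 Cov(CASH, STO). All parameter values and the exogenous covariance matrix below are made up for illustration, not from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
b1, b2, b3, b4 = 0.5, 0.3, 0.2, 0.7   # illustrative path coefficients

# Correlated exogenous variables CASH, INV, STO
exo_cov = np.array([[1.0, 0.4, 0.2],
                    [0.4, 1.0, 0.3],
                    [0.2, 0.3, 1.0]])
cash, inv, sto = rng.multivariate_normal(np.zeros(3), exo_cov, size=n).T

ast = b1 * cash + b2 * inv + b3 * sto + rng.normal(0, 1, n)   # + eps
pro = b4 * ast + rng.normal(0, 1, n)                          # + mu

# Algebraic expression derived in the text versus the simulated covariance
implied = b4 * (b1 * exo_cov[0, 0] + b2 * exo_cov[0, 1] + b3 * exo_cov[0, 2])
observed = np.cov(cash, pro)[0, 1]
print(round(implied, 3), round(observed, 3))   # the two values nearly coincide
```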
6.8. Simple and Compound Paths
Path diagrams are useful to show the hypothesized relations. A simple path shows a direct relationship, such as X → Y. A compound path strings together two or more simple paths, such as X → Y → Z. The value of a compound path is the product of all its simple paths; for example, X → Y → Z comprises (i) X → Y and (ii) Y → Z.
The correlation between any two variables is the sum of the simple and compound paths linking the two variables.
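These two tracing rules can be sketched directly; the function names and coefficient values below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Tracing rules: a compound path's value is the product of its simple paths,
# and the correlation between two variables is the sum over all paths linking them.
def path_value(coefficients):
    value = 1.0
    for c in coefficients:
        value *= c
    return value

def correlation(paths):
    return sum(path_value(p) for p in paths)

# Hypothetical example: X -> Y (0.6), Y -> Z (0.5), plus a direct X -> Z (0.2)
print(round(path_value([0.6, 0.5]), 2))            # compound path X -> Y -> Z: 0.3
print(round(correlation([[0.2], [0.6, 0.5]]), 2))  # corr(X, Z) = 0.2 + 0.3 = 0.5
```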
6.9. SEM Analysis Based on Correlation Coefficients and Visual Display
Focus is laid on the correlation coefficients and the SEM model. Consider the correlation coefficients table for variables A, B and C (Table 6.1).

Table 6.1: Correlation Coefficients Table.

      A     B     C
A   1.00
B   0.50  1.00
C   0.65  0.70  1.00
The SEM model is displayed in Figure 6.14, where A and B affect C: path a runs from A to C, path b runs from B to C, and the curved arrow c links A and B.

Figure 6.14: Modelling Example.
Let c denote the correlation between A and B = 0.5 Let a denote the correlation between A and C = 0.65 Let b denote the correlation between B and C = 0.70
Writing down a series of structural equations to represent these relationships:

c = 0.50
a + cb = 0.65
b + ca = 0.70

In the case of a double-arrow link, the correlation coefficient is simply inserted as the value of the relationship, as is the case for c. In the case of a single-arrow link, the direct path is taken as the constant and the other linking correlations enter as products. For instance, for a + cb = 0.65: the correlation 0.65 is between A and C, so the direct path a is the constant and the compound path cb is the product, which yields the equation above. The gist of the equations is to take the direct link as the constant, then add the products of the other two links, the total being equal to the correlation coefficient.

Three equations and three unknowns → just identified → unique solution. The three equations fully define the correlation matrix; solving them gives:

a = 0.40
b = 0.50
c = 0.50

a, b and c constitute the path coefficients of the simple paths; they are also labelled beta weights, or the betas obtained from regression.
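Since c is known directly to be 0.50, the system becomes linear in (a, b, c) and can be solved in one step; the quick numerical check below (not from the book) confirms the values above.

```python
import numpy as np

# Solving the three path equations as a linear system in (a, b, c):
#   c           = 0.50
#   a + 0.50 b  = 0.65   (a + cb = 0.65 once c is known)
#   0.50 a + b  = 0.70   (b + ca = 0.70)
A = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.5, 0.0],
              [0.5, 1.0, 0.0]])
rhs = np.array([0.50, 0.65, 0.70])
a, b, c = np.linalg.solve(A, rhs)
print(round(a, 2), round(b, 2), round(c, 2))   # 0.4 0.5 0.5
```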
6.10. Minimizing Residuals between Observed and Estimated Covariances under SEM
The ultimate objective under SEM is to minimize the residuals between the observed covariances (which emanate from the sample covariance matrix) and the estimated covariances (implied by the model). In essence, the minimization hinges on these two matrices. Thus, if S denotes the sample covariance matrix while Σ̂ reflects the estimated covariances, the minimization can be encapsulated via a fitting function F, denoted F(S, Σ̂).
6.11. Saturated versus Independence Models
The degrees of freedom (df) of the model are determined by the number of distinct elements in the covariance matrix minus the number of model parameters to be estimated. If df equals zero, the model is saturated. In that case, no structure is placed on the covariance matrix; estimating the saturated model is the same as estimating the parameters of ordinary multiple regressions. At the other extreme, there is the independence model, in which no correlations prevail at all, so that all the off-diagonal values of the covariance matrix are equal to zero and only the variances remain (Figure 6.15).
Figure 6.15: Degrees of Freedom Assessment in SEM. The df value is lowest at 0 for the saturated model and highest for the independence model, in which no correlations prevail among the variables and no path exists.
Saturated model: no structure is imposed on the covariance matrix, so there are zero degrees of freedom. Here, every observed variable is related to every other observed variable. Independence model: assumes that all variables are uncorrelated with each other. The model-implied covariance matrix in this case is a diagonal matrix with all off-diagonal elements equal to zero; with five observed variables, only the 5 variances are estimated, so df = 15 − 5 = 10. The model is testable but not interesting. The chi-square value ranges from zero for a saturated model to a maximum value for the independence model with no paths included.
Figure 6.16: Analysis of Saturated and Independence Models. The saturated model (all paths in the model) has χ² = 0, a perfect fit with no difference between the sample covariance matrix S and the reproduced implied covariance matrix; the independence model (no paths in the model) has the maximum χ² value.
The theoretical model lies in between the saturated model (which occurs when the model is just identified) and the independence model (Figure 6.16). Objective in SEM: accomplish a parsimonious model with a few substantively meaningful paths and a non-significant chi-square value close to the saturated model value of 0.
6.12. Sample Size under SEM
By default, SEM constitutes a large sample size technique. Kline (2005) defined the required sample size as a function of the complexity of the model and the data (Figure 6.17).

Figure 6.17: Sample Size under SEM. Kline (2005) on sample size: for model complexity, ratios of 5:1 to 20:1; on the data side, larger samples are needed for non-normal data.
7. Model Identification

7.1. Introduction
Model identification plays a preponderant role in structural equation modelling. Technically speaking, the rationale underpinning model identification is to obtain unique values for the parameter estimates. Two tests are widely applied to test for model identification.
7.2. Types of Parameters in SEM
There are two types of parameters, namely, fixed and free. Fixed parameters are never estimated from the data since their values are fixed, either to zero or to one. To set up a scale for each latent variable, there is the need to fix the variance of each latent variable to one, or to fix one of its loadings to one. Free parameters are estimated from the data. Figure 7.1 depicts the types of parameters used in SEM.
Types of parameters under SEM: free parameters (those that are being estimated) and fixed parameters (fix the variance of each latent variable to 1, or fix the value of one loading to 1).
Figure 7.1: Types of Parameters under SEM.
7.3.
Types of Identification

Exact (just) identification: a unique value can be obtained for every parameter; degrees of freedom = 0.
Under-identification: unique values for the parameters cannot be obtained from the observed data; degrees of freedom are negative.
Overidentification: no exact solution, and a value for one or more parameters can be obtained in multiple ways; degrees of freedom > 0.

Figure 7.2: Types of Model Identification. A model is identified if it is feasible to derive a unique solution for every parameter. An identified model must have non-negative degrees of freedom; that is, the number of estimated parameters should be less than or equal to the number of data points derived from the sample covariance matrix, under the famous order condition. Figure 7.2 demonstrates the distinct types of model identification.
If a model is overidentified, the number of equations exceeds the number of unknowns, so no single exact solution exists; the parameters can instead be estimated in more than one way. The objective is thereby to select the model that generates the best fit to the data, which is why overidentified models are more likely to be of substantive interest to researchers than just-identified models. Under SEM, overidentification can be induced by fixing some parameters, typically to zero. A second way to induce overidentification is to impose a one-way causality, so that the reverse causal path is set to zero. The ideal outcome for researchers is an overidentified model, as it allows them to eventually select the model that provides the best fit to the data. The degrees of freedom are positive for an overidentified model.

If a model is just identified, it is saturated, with degrees of freedom equal to zero. There is only one unique solution, and the model always generates a perfect fit to the data, because the number of estimated paths equals the number of unique elements in the covariance matrix.

If a model is under-identified, the number of unknowns exceeds the number of equations, so there is no unique solution: the model parameters are not uniquely identified. The degrees of freedom are negative for an under-identified model.

Covariances can prevail not only among observed variables but also among latent variables. In a parallel manner, there are variances not only for the observed variables that capture the latent variables, but also for the latent variables that constitute dependent variables.
7.4.
Examples of Model Identification Explained
7.4.1.
Under-Identified Model

In Figure 7.3, two independent observed variables, A and B (linked by a covariance d), each predict a latent dependent variable C through the paths a and b.
Figure 7.3: Under-Identified Model. In practice, an overidentified model is needed. The channel to check for identification is the degrees of freedom.
Number of observed variables = 2
Hence, the number of unique elements in the covariance matrix is 2(2 + 1)/2 = 3
Number of model parameters = Number of regression coefficients + Number of variances + Number of covariances
Number of regression coefficients = 2 (based on single-headed arrows)
Number of variances = Variances of independent observed variables + variances of error terms = 2 + 1 = 3
Number of covariances among independent variables = 1
Thus, the degrees of freedom = number of unique elements in the covariance matrix minus the number of model parameters = 3 − 6 = −3 < 0 (under-identified model; df < 0)
It is plain from the above exercise that error terms arise in two ways. First, an error term automatically follows when gauging the effects of the independent variables on a given dependent variable. Second, error terms emanate from the observed variables that capture latent variables. Figure 7.3 depicts an example of an under-identified model in SEM.
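The parameter count for Figure 7.3 can be replicated in a few lines (an illustrative Python sketch mirroring the worked arithmetic above; the function name is ours):

```python
def model_df(n_observed, n_coefficients, n_variances, n_covariances):
    """df = unique covariance elements minus free model parameters."""
    data_points = n_observed * (n_observed + 1) // 2
    free_parameters = n_coefficients + n_variances + n_covariances
    return data_points - free_parameters

# Figure 7.3: 2 observed variables; 2 paths, 3 variances, 1 covariance
print(model_df(2, 2, 3, 1))  # -3: negative df, hence under-identified
```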
7.4.2.
Overidentified Model (The One to Be Used)

In Figure 7.4, a mediating latent variable C is measured by the indicator variables A and B (loadings a and b, with a double-headed arrow d between the indicators) and is predicted by the independent variables D and E through the paths c and e.
Figure 7.4: Over-Identified SEM, with 4 manifest (known) variables: A, B, D and E.
Number of variables = 4, so the number of unique elements in the covariance matrix is 4(4 + 1)/2 = 10
Number of model parameters = Number of regression coefficients + Number of variances + Number of covariances
Number of regression coefficients = 4 (based on single-headed arrows)
Number of variances = Variances of independent observed variables + variances of error terms = 2 + 1 = 3
Number of covariances among independent variables = 1 (double-headed arrow among the indicator variables)
Thus, the degrees of freedom = number of unique elements in the covariance matrix minus the number of model parameters = 10 − 8 = 2 > 0 (over-identified model; df > 0)
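The same counting rule confirms the over-identified example (illustrative sketch, reusing the arithmetic above):

```python
def model_df(n_observed, n_coefficients, n_variances, n_covariances):
    """df = unique covariance elements minus free model parameters."""
    data_points = n_observed * (n_observed + 1) // 2
    return data_points - (n_coefficients + n_variances + n_covariances)

# Figure 7.4: 4 manifest variables; 4 paths, 3 variances, 1 covariance
print(model_df(4, 4, 3, 1))  # 2: positive df, hence over-identified
```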
7.5.
Model Identification: Both Measurement and Structural Equations
Focus is laid on an SEM whose measurement and structural equations are formulated with an unknown parameter vector θ. The definition of identification is based on Σ(θ), the population covariance matrix of the observed variables in y. A model is said to be identified if, for any θ₁ and θ₂,
Σ(θ₁) = Σ(θ₂) implies θ₁ = θ₂
However, SEM comprises both a measurement model and a structural model, so an SEM is identified if both its measurement equation and its structural equation are identified. Let m(θ) denote the measurement model and s(θ) the structural model. The measurement model is identified if, for any θ₁ and θ₂, m(θ₁) = m(θ₂) implies θ₁ = θ₂. The structural model is identified if, for any θ₁ and θ₂, s(θ₁) = s(θ₂) implies θ₁ = θ₂.
8
Model Estimation

8.1. Introduction
The aim under model estimation is to select a fitting function that is then minimized to obtain the parameter estimates. The process is iterative: initial values are inserted for all the parameters, and the function is evaluated repeatedly until its value no longer changes from one iteration to the next. The most widely used estimation technique is the maximum likelihood function, whose constituents are the sample covariance matrix and the covariance matrix implied by the specified theoretical model. An overidentified model admits no single exact solution, which is why LISREL solves such models using iterative estimation (Figure 8.1).
1. Start with a guess of the parameter values.
2. Compute the implied covariance matrix.
3. Compare the implied covariance matrix with the actual/observed covariance matrix.
4. Stop if the model fit is deemed satisfactory; otherwise, update the guess and repeat.
Figure 8.1: Iterative Approach to Estimation in SEM. How does one know when to stop? The point at which computation ceases depends on the fitting criterion applied. Three common fitting criteria are ordinary least squares (OLS), generalized least squares (GLS) and maximum likelihood (ML):
OLS = ½ tr[(S − C)²]
GLS = ½ tr{[(S − C)S⁻¹]²}
ML = ln|C| − ln|S| + tr(SC⁻¹) − m
where
tr = trace (sum of the diagonal elements)
S = actual (observed) covariance matrix
C = covariance matrix implied by the model
m = number of observed variables
ln = natural logarithm
| | = determinant (an index of generalized variance) of a matrix
OLS is a partial-information technique; ML and GLS constitute full-information techniques.
• Each criterion tries to minimize the differences between the implied and observed covariance matrices. The most widely used estimation technique is ML, because it is consistent and asymptotically efficient in large samples. Note: with a large sample and the normality assumption, the χ² test is reasonable; with a large sample but reluctance to assume normality, use GLS estimation. The benefit of OLS is that, as a partial-information technique, it estimates each path independently of the others, so there is no onward spreading of errors. ML and GLS are full-information techniques: all the parameters (path values) are estimated simultaneously, so an error in one value will be reflected in every parameter being estimated. The covariance matrix has covariances in its off-diagonal elements and variances on the main diagonal. A correlation matrix is a standardized covariance matrix, which removes the scale of measurement and simplifies the interpretation of results by rescaling all variables to have unit variance. However, SEM is not always scale-free, so a model that fits the correlation matrix may not fit the covariance matrix. It is thereby recommended to use the covariance matrix.
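The three fitting criteria can be sketched directly from their formulas (an illustrative NumPy translation; the toy matrix values are ours, not taken from the book):

```python
import numpy as np

def f_ols(S, C):
    """Ordinary least squares: half the trace of the squared residual matrix."""
    d = S - C
    return 0.5 * np.trace(d @ d)

def f_gls(S, C):
    """Generalized least squares: residuals weighted by the inverse of S."""
    d = (S - C) @ np.linalg.inv(S)
    return 0.5 * np.trace(d @ d)

def f_ml(S, C):
    """Maximum likelihood: equals zero when the implied matrix C reproduces S."""
    m = S.shape[0]
    _, logdet_c = np.linalg.slogdet(C)
    _, logdet_s = np.linalg.slogdet(S)
    return logdet_c - logdet_s + np.trace(S @ np.linalg.inv(C)) - m

S = np.array([[2.0, 0.8],
              [0.8, 1.5]])
# A perfect fit (C = S) drives every criterion to zero:
print(f_ols(S, S), f_gls(S, S), round(f_ml(S, S), 12))
```

With C = S the residual matrix vanishes and tr(SC⁻¹) = m, so all three functions return zero, matching the perfect-fit case described for the saturated model.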
9
Model Fit Evaluation

9.1. Introduction
A vital element in the application of SEM is the assessment of the model's goodness-of-fit to the sample data. As a matter of fact, the overall model-fit evaluation should precede the interpretation of the parameter estimation results. However, there is an irony embedded in model-fit evaluation: under SEM modelling, the statistical significance of the parameters predominates over the model fit. The underlying rationale is that even if a model fits the data well, non-significant parameters would undermine the theory used at the outset when spelling out the model specification. Consequently, a good SEM simultaneously satisfies good model fit and statistical significance of the model parameters. The variables are normalized before analysis by transforming the data into normal scores so that the ML method¹ can be applied. Based on the normal scores, the covariance matrix is used as an input to estimate the parameters in LISREL.
1. ML method assumes normality to estimate the parameters.
The covariance matrix is preferred to the correlation matrix for the following reasons: (i) the correlation matrix modifies the specified model; (ii) the correlation matrix generates incorrect goodness-of-fit measures; (iii) the correlation matrix yields incorrect standard errors; and (iv) Boomsma (1983) found that the use of correlations in SEM leads to imprecise parameter estimates and standard errors of the parameters. The parameters are estimated by minimizing the difference between the observed covariance matrix and the model-implied covariance matrix. Model selection is based on the overall model-fit evaluation. The evaluation of the model fit ensures that the model-implied covariance does not deviate too much from the population covariance, so that the interpretation of parameter estimates is reliable and valid. Explanatory power is highly useful in regression analysis to gauge the extent to which the independent variables explain the dependent variable; the counterpart of the R² of regression analysis is the squared multiple correlation (SMC) in SEM. Figure 9.1 shows model evaluation in SEM.
Model evaluation comprises model-fit evaluation and parameter-fit evaluation.
Figure 9.1: Model Evaluation.
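The rescaling that turns a covariance matrix into a correlation matrix, discarding the scale information the preceding paragraph warns about, is a one-liner (illustrative NumPy sketch; the matrix values are ours):

```python
import numpy as np

def cov_to_corr(cov):
    """Standardize a covariance matrix: divide by the outer product of SDs."""
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)

cov = np.array([[4.0, 1.2],
                [1.2, 9.0]])
corr = cov_to_corr(cov)
print(corr[0, 1])  # 1.2 / (2 * 3) = 0.2
```

The standard deviations (2 and 3 here) cannot be recovered from the correlation matrix, which is exactly why a model fitted to correlations can misstate standard errors.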
Three criteria are applied to judge the statistical significance and substantive meaning of a theoretical model:
1. A non-statistically significant chi-square test and the root-mean-square error of approximation (RMSEA), which both constitute global fit measures. A non-significant chi-square value shows that the sample covariance matrix and the reproduced model-implied covariance matrix are similar; an RMSEA value less than or equal to 0.05 is deemed acceptable. Alternatively stated, when the χ² value is non-significant (close to zero), the values in the residual matrix are close to zero, meaning that the theoretical implied model fits the sample data.
2. The statistical significance of the individual parameters, as captured by the t-values.
3. The magnitude and direction of the parameter estimates, focusing on their theoretical significance.
Researchers have developed more than 30 fit indices in the area of structural equation modelling. Some examples of goodness-of-fit indices are the root-mean-square error of approximation, the exact-fit χ², the standardized root-mean-square residual, the non-normed fit index, the comparative fit index and the incremental fit index. Five benchmark values for goodness of fit are as follows:
RMSEA ≤ 0.05
Root-mean-square residual (RMSR) ≤ 0.05
Non-normed fit index (NNFI) ≥ 0.90
Comparative fit index (CFI) ≥ 0.90
Incremental fit index (IFI) ≥ 0.90
There are basically two kinds of parameter estimates under structural equation modelling: (1) completely standardized loadings, for both the observed and the latent constructs, and (2) unstandardized loadings.
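Two of the benchmark indices listed above can be computed from chi-square values alone. The sketch below uses the widely applied population-based formulas (RMSEA from the model χ² non-centrality, CFI against the independence model); the numeric inputs are invented for illustration:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA from the model chi-square, its df and the sample size n."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_null, df_null):
    """CFI: model non-centrality relative to the independence (null) model."""
    d_model = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, d_model)
    return 1.0 - d_model / d_null

print(round(rmsea(chi2=36.0, df=24, n=301), 4))      # 0.0408 <= 0.05: acceptable
print(round(cfi(36.0, 24, chi2_null=600.0, df_null=36), 4))  # 0.9787 >= 0.90
```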
9.2.
Types of Fit Evaluation

Fit can be assessed as absolute fit, relative (comparative) fit (including parsimonious fit) and incremental fit.
Figure 9.2: Types of Fit Evaluation.
Fit can be defined as the ability of the model to replicate the data in terms of the variance-covariance matrix (Figure 9.2). Ironically, even when a good fit prevails, this does not imply that the model is properly specified; similarly, a well-fitting model may produce coefficient signs that are theoretically invalid. Researchers use a plethora of more than 30 descriptive fit indices in the sphere of structural equation modelling, which can be classified under a three-pronged approach: namely, absolute fit, relative fit and incremental fit.
9.3.
Absolute Fit
Model selection is based on the overall model-fit evaluation. The evaluation of the model fit assumes that the model-implied covariance is not different from the population covariance, so that
the interpretation of parameter estimates can be reliable and valid. Model fit is evaluated by comparing S (the sample covariance matrix) and Σ̂ (the model-implied covariance matrix). Such an evaluation of fit is applied to the structural-equation part. The absolute form of fit is an index of the discrepancy between the model and the data in an absolute sense. A widely used criterion for assessing absolute fit is an RMSEA below 0.05 for a good model. A widely reported goodness-of-fit index in SEM analysis is the χ² test², which provides a test of the null hypothesis that the theoretical model fits the data. If the model fits the data well, the χ² value should be small and the p-value associated with the chi-square should be relatively large, that is, non-significant (p > 0.05). A further benchmark is a goodness-of-fit index (GFI) of at least 0.90. It is vital to note that the χ² is sensitive to sample size: a large sample, by default, tends to imply poor model fit by virtue of a statistically significant χ² as the sample size scales up, thereby necessitating a look at the RMSEA.
χ² = a measure of badness-of-fit
χ²null = chi-square for the independence model (no paths prevail at all)
• The χ² model-fit criterion is sensitive to the sample size, because as the
sample size increases (say, above 200), the χ² statistic has a tendency to show a statistically significant p-value. Similarly, as the sample size falls (