SpringerBriefs in Statistics
For further volumes: http://www.springer.com/series/8921
Ton J. Cleophas · Aeilko H. Zwinderman
SPSS for Starters, Part 2
Aeilko H. Zwinderman Rijnsburgerweg 54 2333 AC Leiden The Netherlands
Ton J. Cleophas Weresteijn 17 3363 BK Sliedrecht The Netherlands
Additional material to this book can be downloaded from http://extras.springer.com/
ISSN 2191-544X                 ISSN 2191-5458 (electronic)
ISBN 978-94-007-4803-3         ISBN 978-94-007-4804-0 (eBook)
DOI 10.1007/978-94-007-4804-0
Springer Dordrecht Heidelberg New York London

Library of Congress Control Number: 2012939200

© The Author(s) 2012

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)
Preface
The small book "SPSS for Starters" issued in 2010 presented 20 chapters of cookbook-like step-by-step data-analyses of clinical research, and was written to help clinical investigators and medical students analyze their data without the help of a statistician. The book served its purpose well enough, since 13,000 electronic reprints were ordered within 9 months of publication. The above book reviewed, e.g., methods for

1. continuous data, like t-tests, non-parametric tests, analysis of variance,
2. binary data, like crosstabs, McNemar tests, odds ratio tests,
3. regression data,
4. trend testing,
5. clustered data,
6. diagnostic test validation.

The current book is a logical continuation and adds further methods fundamental to clinical data analysis. It contains, e.g., methods for

1. multistage analyses,
2. multivariate analyses,
3. missing data,
4. imperfect and distribution free data,
5. comparing validities of different diagnostic tests,
6. more complex regression models.
Although a wealth of computationally intensive statistical methods is currently available, the authors have taken special care to stick to relatively simple methods, because they often provide the best power and fewest type I errors, and are adequate to answer most clinical research questions. It is time for clinicians not to get nervous anymore with statistics, and not to leave their data anymore to statisticians running them through SAS or SPSS to see if significances can be found. This is called data dredging. Statistics can do more for you than produce a host of irrelevant p-values. It is a discipline at the interface of biology and maths: maths is used to answer sound biological hypotheses. We do hope that "SPSS for Starters 1 and 2" will benefit this process. Two other titles from the same authors, "Statistical Analysis of Clinical Data on a Pocket Calculator 1 and 2", are complementary to the above books, and provide a more basic approach and a better understanding of the arithmetic.

Lyon, France, January 2012
Ton J. Cleophas Aeilko H. Zwinderman
Contents

1  Introduction

2  Multistage Regression (35 Patients)
   Example
   Path Statistics
   Two Stage Least Square Method
   Conclusion

3  Multivariate Analysis Using Path Statistics (35 Patients)
   Example
   Conclusion

4  Multivariate Analysis of Variance (35 and 30 Patients)
   First Example
   Second Example
   Conclusion

5  Categorical Data (60 Patients)
   Example
   Conclusion

6  Multinomial Logistic Regression (55 Patients)
   Example
   Conclusion

7  Missing Data Imputation (35 Patients)
   Example
   Regression Imputation
   Multiple Imputations
   Conclusion

8  Comparing the Performance of Diagnostic Tests (650 and 588 Patients)
   Example
   Conclusion

9  Meta-Regression (20 and 9 Studies)
   Example 1
   Example 2
   Conclusion

10 Poisson Regression (50 and 52 Patients)
   Example 1
   Example 2
   Conclusion

11 Confounding (40 Patients)
   Example
   Conclusion

12 Interaction, Random Effect Analysis of Variance (40 Patients)
   Example
   Conclusion

13 Log Rank Testing (60 Patients)
   Example
   Log Rank Test
   Conclusion

14 Segmented Cox Regression (60 Patients)
   Example
   Conclusion

15 Curvilinear Estimation (20 Patients)
   Example
   Conclusion

16 Loess and Spline Modeling (90 Patients)
   Example
   Spline Modeling
   Loess (Locally Weighted Scatter Plot Smoothing) Modeling
   Note
   Conclusions

17 Assessing Seasonality (24 Averages)
   Example
   Conclusions

18 Monte Carlo Tests and Bootstraps for Analysis of Complex Data (10, 20, 139, and 55 Patients)
   Paired Continuous Data
   Unpaired Continuous Data
   Paired Binary Data
   Unpaired Binary Data
   Conclusion

19 Artificial Intelligence (90 Patients)
   Example
   Conclusion

20 Robust Testing (33 Patients)
   Example
   Robust Testing
   Conclusion

Final Remarks

Index
Chapter 1
Introduction
The first part of this title contained all statistical tests that are relevant for starters on SPSS, and included standard parametric and non-parametric tests for continuous and binary variables, regression methods, trend tests, and reliability and validity assessments of diagnostic tests. Many more statistical methods can be carried out with the help of SPSS statistical software, and the current small e-book reviews the most important of them. We will start with multistep methods for better estimation of multistep relationships (Chap. 2) and multivariate models for assessing data files with multiple outcome variables (Chaps. 3 and 4). Exposure and outcome variables with a categorical rather than linear pattern (Chaps. 5 and 6), and the assessment of missing data (Chap. 7), are the next subjects. Chapter 8 compares the performance of diagnostic tests. Meta-regression and Poisson regression are reviewed in Chaps. 9 and 10. Dealing with confounding and interaction using SPSS is covered in Chaps. 11 and 12. Survival analyses using Cox methods were reviewed in the first part of this title (Chaps. 15 and 16); this e-book reviews log rank tests (Chap. 13) and segmented time-dependent Cox regression (Chap. 14). Various methods for assessing non-linear models are in Chaps. 15 and 16, while autocorrelations for the assessment of seasonality are addressed in Chap. 17. Finally, distribution free methods and robust tests are reviewed in the final three chapters.

Each method of testing is explained

1. using a data example from clinical practice,
2. including every step in SPSS (we used SPSS 18.0, available in many western hospitals and clinical research facilities),
3. and including the main tables of results with an accompanying text with interpretations of the results and hints convenient for data reporting, i.e., scientific clinical articles and poster presentations.

In order to facilitate the use of this cookbook, the data files of the examples given are made available by the editor on the web through extras.springer.com.
For investigators who wish to perform their own data analyses from the very start, the book can be used as a step-by-step guideline with the help of the data examples from the book. They can enter their own data, or enter entire data files, e.g., from Excel, simply by opening an Excel file in SPSS, or by the commands "cut and paste", just like with Windows' Word program, that everybody knows. This e-book, just like part one of this title, will be used by the masters' and doctorate classes of the European College of Pharmaceutical Medicine Lyon France (EC Socrates Project since 1999) as a base for their practical education in statistics, and will be offered together with a theoretical module entitled "Statistics applied to clinical trials". SPSS statistical software is user-friendly statistical software with many help and tutor pages. However, we as authors believe that for novices on SPSS an even more basic approach is welcome. The book is meant for this very purpose, and can be used without the help of a teacher, but the authors are willing to be of assistance for those in search of further help. The authors are well aware that this cookbook contains a minimal amount of text and maximal technical details, but we believe that this property will not refrain students from mastering the SPSS software systematics, and that, instead, it will even be a help to that aim. Yet, we recommend that, like with the students in Lyon, it be used together with the textbook "Statistics Applied to Clinical Trials" by Cleophas and Zwinderman (5th edition, 2012, Springer Dordrecht). Finally, two last and important points.

1. A data file has rows and columns: the columns (vertical) are the patient characteristics, otherwise called the variables; 1 row is 1 patient.
2. SPSS software uses commas instead of dots to indicate digits smaller than 1.000.
Chapter 2
Multistage Regression (35 Patients)
Primary question: is multistage regression better for analyzing outcome studies with multiple predictors than standard multiple linear regression?
Example

The effects of counseling and non-compliance (pills not used) on the efficacy of a novel laxative drug are studied in 35 patients. The data file is given below.
Var 1 = outcome: efficacy estimator of the new laxative (stools/month); Var 2 = instrumental variable: frequency of counseling; Var 3 = problematic predictor: pills not used (Pt = patient, var = variable).

Pt    Var 1   Var 2   Var 3
1.    24      8       25
2.    30      13      30
3.    25      15      25
4.    35      14      31
5.    39      9       36
6.    30      10      33
7.    27      8       22
8.    14      5       18
9.    39      13      14
10.   42      15      30
11.   41      11      36
12.   38      11      30
13.   39      12      27
14.   37      10      38
15.   47      15      40
16.   30      13      31
17.   36      12      25
18.   12      4       24
19.   26      10      27
20.   20      8       20
21.   43      16      35
22.   31      15      29
23.   40      14      32
24.   31      7       30
25.   36      12      40
26.   21      6       31
27.   44      19      41
28.   11      5       26
29.   27      8       24
30.   24      9       30
31.   40      15      20
32.   32      7       31
33.   10      6       29
34.   37      14      43
35.   19      7       30
Coefficients(a)
Model 1           Unstandardized coefficients   Standardized coefficients   t       Sig.
                  B        Std. error           Beta
(Constant)        2.270    4.823                                            0.471   0.641
Counseling        1.876    0.290                0.721                       6.469   0.000
Non-compliance    0.285    0.167                0.190                       1.705   0.098
a Dependent Variable: ther eff

Coefficients(a)
Model 1           Unstandardized coefficients   Standardized coefficients   t       Sig.
                  B        Std. error           Beta
(Constant)        4.228    2.800                                            1.510   0.141
Non-compliance    0.220    0.093                0.382                       2.373   0.024
a Dependent Variable: counseling
The above tables (commands as given in Chap. 5 of the first part of this title) show the results of two linear regressions assessing (1) the effects of counseling and non-compliance on therapeutic efficacy, and (2) the effect of non-compliance on counseling. With p = 0.10 as cut-off p-value for statistical significance all of the effects are statistically significant. Non-compliance is a significant predictor of counseling, and at the same time a significant predictor of therapeutic efficacy. This would mean that non-compliance works two ways: it predicts therapeutic efficacy directly and indirectly through counseling. However, the indirect way is not taken into account in the usual one step linear regression. An adequate approach for assessing both ways simultaneously is path statistics.
Path Statistics

Path analysis uses add-up sums of regression coefficients for better estimation of multiple step relationships. Because regression coefficients have the same unit as their variable, they cannot be added up unless they are standardized (rescaled in standard deviation units). SPSS routinely provides the standardized regression coefficients, otherwise called path statistics, in its regression tables as shown above. The figure underneath gives a path diagram of the data.

[Path diagram] Non-compliance → Counseling: 0.38 (p = 0.024); Counseling → Efficacy estimator: 0.72 (p = 0.0001); Non-compliance → Efficacy estimator: 0.19 (p = 0.098).
The standardized regression coefficients are added to the arrows. Single path analysis gives a standardized regression coefficient of 0.19. This underestimates the real effect of non-compliance. Two step path analysis is more realistic and shows that the add-up path statistic is larger and equals

0.19 + 0.38 × 0.72 = 0.46.

The two-path statistic of 0.46 is considerably larger than the single path statistic of 0.19; the indirect path accounts for roughly 60 % of the total effect.
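For readers who want to verify this arithmetic outside SPSS, the sketch below reproduces the standardized coefficients and the add-up path statistic with Python and statsmodels. It is a minimal illustration, not part of the original text; the file name and the column names (efficacy, counseling, noncompliance) are assumptions.

```python
# Sketch: two-step path statistic from standardized regression coefficients.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("laxative.csv")  # hypothetical file with the three variables above

# z-scoring all variables makes the OLS slopes standardized (beta) coefficients
z = (df - df.mean()) / df.std()

direct = smf.ols("efficacy ~ counseling + noncompliance", data=z).fit()
stage1 = smf.ols("counseling ~ noncompliance", data=z).fit()

beta_direct = direct.params["noncompliance"]                                  # ~0.19
beta_indirect = stage1.params["noncompliance"] * direct.params["counseling"]  # ~0.38 x 0.72
print("add-up path statistic:", beta_direct + beta_indirect)                  # ~0.46
```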
Two Stage Least Square Method

Instead of path analysis the two-stage least squares (2SLS) method is possible and is available in SPSS. It works as follows. First, a simple regression analysis with counseling as outcome and non-compliance as predictor is performed. Then the outcome values of the regression equation are used as predictor of therapeutic efficacy.

Command: Analyze….Regression….2 Stage Least Squares….Dependent: therapeutic efficacy….Explanatory: non-compliance….Instrumental: counseling….OK.

Model description (MOD_1)
Equation 1    VAR00001    dependent
              VAR00003    predictor
              VAR00002    instrumental

Coefficients
Equation 1    Unstandardized coefficients   Standardized coefficients   t        Sig.
              B          Std. error         Beta
(Constant)    -61.095    37.210                                         -1.642   0.110
VAR00003      3.113      1.256              2.078                       2.478    0.019
The above tables show the results of the 2SLS method. As expected, the final p-value is smaller than the simple linear regression p-value of the effect of non-compliance on therapeutic efficacy.
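As a cross-check outside SPSS, the two-stage procedure described above can be sketched with two ordinary regressions. This is an illustration under assumed file and column names, not the authors' code; note that plain OLS standard errors in the second stage differ slightly from the corrected ones a dedicated 2SLS routine reports.

```python
# Sketch of the two-stage idea as described in the text: stage 1 predicts
# counseling from non-compliance, stage 2 regresses therapeutic efficacy
# on the stage-1 fitted values.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("laxative.csv")  # hypothetical file

stage1 = smf.ols("counseling ~ noncompliance", data=df).fit()
df["counseling_hat"] = stage1.fittedvalues

stage2 = smf.ols("efficacy ~ counseling_hat", data=df).fit()
print(stage2.summary())
```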
Conclusion

Multistage regression methods often produce better estimations of multi-step relationships than standard linear regression methods do. Examples are given.
Chapter 3
Multivariate Analysis Using Path Statistics (35 Patients)
Primary question: does the inclusion of additional outcome variables enable us to make better use of the predictor variables? Multivariate analysis is a method that works with more than a single outcome variable. It can assess whether a factor predicts more than a single outcome. Path statistics can be used as an alternative approach to multivariate analysis of variance (MANOVA) (Chap. 4), with a result similar to that of the more complex mathematical approach used in MANOVA.
Example The effects of non-compliance and counseling is assessed on treatment efficacy of a new laxative. But quality of life scores are now added as additional outcome variable. The data file is given underneath. Var 1 (y1)
Var 2 (y2)
Improvement frequency stools
Improved quality life Compliance with drug score treatment
Var 4 (x2) (var = variable) Counseling frequency
24.00 30.00 25.00 35.00 39.00 30.00 27.00 14.00 39.00 42.00
92.00 110.00 78.00 103.00 103.00 102.00 76.00 75.00 99.00 107.00
8.00 13.00 15.00 14.00 9.00 10.00 8.00 5.00 13.00 15.00
Var 3 (x1)
25.00 30.00 25.00 31.00 36.00 33.00 22.00 18.00 14.00 30.00
(continued)
T. J. Cleophas and A. H. Zwinderman, SPSS for Starters, Part 2, SpringerBriefs in Statistics, DOI: 10.1007/978-94-007-4804-0_3, The Author(s) 2012
7
8
3 Multivariate Analysis Using Path Statistics (35 Patients)
(continued) Var 1 (y1)
Var 2 (y2)
Improvement frequency stools
Improved quality life Compliance with drug score treatment
Var 4 (x2) (var = variable) Counseling frequency
41.00 38.00 39.00 37.00 47.00 30.00 36.00 12.00 26.00 20.00 43.00 31.00 40.00 31.00 36.00 21.00 44.00 11.00 27.00 24.00 40.00 32.00 10.00 37.00 19.00
112.00 99.00 86.00 107.00 108.00 95.00 88.00 67.00 112.00 87.00 115.00 93.00 92.00 78.00 112.00 69.00 66.00 75.00 85.00 87.00 89.00 89.00 65.00 121.00 74.00
11.00 11.00 12.00 10.00 15.00 13.00 12.00 4.00 10.00 8.00 16.00 15.00 14.00 7.00 12.00 6.00 19.00 5.00 8.00 9.00 15.00 7.00 6.00 14.00 7.00
Var 3 (x1)
36.00 30.00 27.00 38.00 40.00 31.00 25.00 24.00 27.00 20.00 35.00 29.00 32.00 30.00 40.00 31.00 41.00 26.00 24.00 30.00 20.00 31.00 29.00 43.00 30.00
Underneath are the tables (commands as given in Chap. 5 of the first part of this title) of 5 simple linear regression analyses and 2 multiple linear regression analyses:

1. the effect of counseling on therapeutic efficacy,
2. the effect of counseling on quality of life (QOL),
3. the effect of non-compliance on QOL,
4. the effect of non-compliance on therapeutic efficacy,
5. the effect of non-compliance on counseling,

and

1. the effects of counseling and non-compliance on QOL,
2. the effects of counseling and non-compliance on treatment efficacy.

Coefficients(a)
Model 1           B        Std. error   Beta    t       Sig.
(Constant)        8.647    3.132                2.761   0.009
Counseling        2.065    0.276        0.794   7.491   0.000
a Dependent Variable: ther eff

Coefficients(a)
Model 1           B        Std. error   Beta    t       Sig.
(Constant)        69.457   7.286                9.533   0.000
Counseling        2.032    0.641        0.483   3.168   0.003
a Dependent Variable: qol

Coefficients(a)
Model 1           B        Std. error   Beta    t       Sig.
(Constant)        59.380   11.410               5.204   0.000
Non-compliance    1.079    0.377        0.446   2.859   0.007
a Dependent Variable: qol

Coefficients(a)
Model 1           B        Std. error   Beta    t       Sig.
(Constant)        10.202   6.978                1.462   0.153
Non-compliance    0.697    0.231        0.465   3.020   0.005
a Dependent Variable: ther eff

Coefficients(a)
Model 1           B        Std. error   Beta    t       Sig.
(Constant)        4.228    2.800                1.510   0.141
Non-compliance    0.220    0.093        0.382   2.373   0.024
a Dependent Variable: counseling

(B and Std. error are the unstandardized coefficients; Beta is the standardized coefficient.)
Model summary
Model   R        R square   Adjusted R square   Std. error of the estimate
1       0.560a   0.313      0.270               13.77210
a Predictors: (Constant), counseling, non-compliance

Model summary
Model   R        R square   Adjusted R square   Std. error of the estimate
1       0.813a   0.661      0.639               5.98832
a Predictors: (Constant), counseling, non-compliance
First, we have to check whether the relationship of either of the two predictors with the two outcome variables, treatment efficacy and quality of life, is significant in the usual simple linear regression: they were so with p-values of 0.0001, 0.005, 0.003 and 0.007. Then, a path diagram with standardized regression coefficients is constructed (Fig. 3.1). The standardized regression coefficients of the residual effects are obtained by taking the square root of (1-R Square). The standardized regression coefficient of one residual effect versus another can be assumed to equal 1.00.
1. Direct effect of counseling: 0.79 × 0.48 = 0.38
2. Direct effect of non-compliance: 0.45 × 0.47 = 0.21
3. Indirect effect of counseling and non-compliance: 0.79 × 0.38 × 0.45 + 0.47 × 0.38 × 0.48 = 0.22
4. Residual effects: 1.00 × 0.58 × 0.83 = 0.48
Total: 0.38 + 0.21 + 0.22 + 0.48 = 1.29
A path statistic of 1.29 is considerably larger than that of the single outcome model: 1.29 versus 0.46 (Chap. 2), 2.80 times larger. Obviously, two outcome variables make better use of the predictors in our data than does a single one. An advantage of this nonmathematical approach to multivariate regression is that it nicely summarizes all relationships in the model, and it does so in a quantitative way (Fig. 3.1).
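The decomposition can be checked with a few lines of arithmetic; the sketch below simply re-adds the rounded standardized coefficients reported above.

```python
# Components of the two-outcome path model (coefficients rounded as in Fig. 3.1)
direct_counseling    = 0.79 * 0.48                              # ~0.38
direct_noncompliance = 0.45 * 0.47                              # ~0.21
indirect             = 0.79 * 0.38 * 0.45 + 0.47 * 0.38 * 0.48  # ~0.22
residual             = 1.00 * 0.58 * 0.83                       # ~0.48
print(round(direct_counseling + direct_noncompliance + indirect + residual, 2))  # ~1.29
```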
Fig. 3.1 Decomposition of the correlation between treatment efficacy and QOL (standardized regression coefficients): counseling → frequency stools 0.79; counseling → quality of life score 0.48; compliance drug treatment → frequency stools 0.47; compliance drug treatment → quality of life score 0.45; counseling ↔ compliance 0.38; residual efficacy → frequency stools 0.58; residual quality of life → quality of life score 0.83; residual efficacy ↔ residual quality of life 1.00.
Conclusion

Multivariate analysis is a linear model that works with more than a single outcome variable. It can assess whether a factor predicts more than a single outcome. Path statistics is used in this chapter, but, traditionally, multivariate analysis of variance (MANOVA) is applied for the purpose (Chap. 4). Examples are given.
Chapter 4
Multivariate Analysis of Variance (35 and 30 Patients)
Primary question: does the inclusion of additional outcome variables enable us to make better use of the predictor variables? Multivariate analysis is a method that works with more than a single outcome variable. It can assess whether a factor predicts more than a single outcome. Path statistics can be used (Chap. 3), but, traditionally, multivariate analysis of variance (MANOVA) was developed for the purpose.
First Example

In a self-controlled study in 35 patients with constitutional constipation the outcome variables were improvements of frequency of stools and quality of life scores. The predictor variables were compliance with drug treatment and counseling frequency (var = variable). The data file is given underneath.
Var 1 (y1) = improvement frequency stools; Var 2 (y2) = improved quality of life score; Var 3 (x1) = compliance with drug treatment; Var 4 (x2) = counseling frequency.

Var 1 (y1)   Var 2 (y2)   Var 3 (x1)   Var 4 (x2)
24.00        69.00        25.00        8.00
30.00        110.00       30.00        13.00
25.00        78.00        25.00        15.00
35.00        103.00       31.00        14.00
39.00        103.00       36.00        9.00
30.00        102.00       33.00        10.00
27.00        76.00        22.00        8.00
14.00        75.00        18.00        5.00
39.00        99.00        14.00        13.00
42.00        107.00       30.00        15.00
41.00        112.00       36.00        11.00
38.00        99.00        30.00        11.00
39.00        86.00        27.00        12.00
37.00        107.00       38.00        10.00
47.00        108.00       40.00        15.00
30.00        95.00        31.00        13.00
36.00        88.00        25.00        12.00
12.00        67.00        24.00        4.00
26.00        112.00       27.00        10.00
20.00        87.00        20.00        8.00
43.00        115.00       35.00        16.00
31.00        93.00        29.00        15.00
40.00        92.00        32.00        14.00
31.00        78.00        30.00        7.00
36.00        112.00       40.00        12.00
21.00        69.00        31.00        6.00
44.00        66.00        41.00        19.00
11.00        75.00        26.00        5.00
27.00        85.00        24.00        8.00
24.00        87.00        30.00        9.00
40.00        89.00        20.00        15.00
32.00        89.00        31.00        7.00
10.00        65.00        29.00        6.00
37.00        121.00       43.00        14.00
19.00        74.00        30.00        7.00
We will first assess whether counseling frequency is a significant predictor of both frequency improvement of stools and improved quality of life.

Command: Analyze.…General Linear Model.…Multivariate….In dialog box Multivariate: transfer y1 and y2 to Dependent variables and x2 to Fixed factors.…OK.

Multivariate Tests(c)
Effect                             Value       F            Hypothesis df   Error df   Sig.
Intercept   Pillai's Trace         0.992       1185.131a    2.000           19.000     0.000
            Wilks' Lambda          0.008       1185.131a    2.000           19.000     0.000
            Hotelling's Trace      124.751     1185.131a    2.000           19.000     0.000
            Roy's Largest Root     124.751     1185.131a    2.000           19.000     0.000
Var00004    Pillai's Trace         1.426       3.547        28.000          40.000     0.000
            Wilks' Lambda          0.067       3.894a       28.000          38.000     0.000
            Hotelling's Trace      6.598       4.242        28.000          36.000     0.000
            Roy's Largest Root     5.172       7.389b       14.000          20.000     0.000
a Exact statistic
b The statistic is an upper bound on F that yields a lower bound on the significance level
c Design: Intercept + VAR00004
The above table shows that MANOVA can be considered as another regression model with intercepts and regression coefficients. Just like analysis of variance (ANOVA) it is based on normal distributions and homogeneity of the variables. SPSS has checked the assumptions, and the results as given indicate that the model is adequate for the data. Generally, Pillai's method gives the best robustness and Roy's the best p-values. We can conclude that counseling is a strong predictor of both improvement of stools and improved quality of life. In order to find out which of the two outcomes is most important, two ANOVAs, with each of the outcomes separately, must be performed.

Command: Analyze.…General Linear Model.…Univariate.…In dialog box Univariate transfer y1 to Dependent variables and x2 to Fixed factors.…OK. Do the same for variable y2.

Tests of Between-Subjects Effects
Dependent Variable: improv freq stool
Source            Type III Sum of Squares   df   Mean Square   F         Sig.
Corrected Model   2733.005a                 14   195.215       6.033     0.000
Intercept         26985.005                 1    26985.005     833.944   0.000
Var00004          2733.005                  14   195.215       6.033     0.000
Error             647.167                   20   32.358
Total             36521.000                 35
Corrected Total   3380.171                  34
a R Squared = 0.809 (Adjusted R Squared = 0.675)

Tests of Between-Subjects Effects
Dependent Variable: improv qol
Source            Type III Sum of Squares   df   Mean Square   F          Sig.
Corrected Model   6833.671a                 14   488.119       4.875      0.001
Intercept         223864.364                1    223864.364    2235.849   0.001
Var00004          6833.671                  14   488.119       4.875      0.001
Error             2002.500                  20   100.125
Total             300129.000                35
Corrected Total   8836.171                  34
a R Squared = 0.773 (Adjusted R Squared = 0.615)
The above tables show that, also in the ANOVAs, counseling frequency is a strong predictor of not only improvement of frequency of stools but also of improved quality of life (improv freq stool = improvement of frequency of stools, improv qol = improved quality of life scores). In order to find out whether compliance with drug treatment is a contributory predicting factor, a MANOVA with two predictors and two outcomes is performed: instead of x2 alone, both x1 and x2 are transferred to Fixed factors. The underneath table shows the results.

Multivariate Tests(b)
Effect                               Value       F            Hypothesis df   Error df   Sig.
Intercept     Pillai's Trace         1.000       29052.980a   1.000           1.000      0.004
              Wilks' Lambda          0.000       29052.980a   1.000           1.000      0.004
              Hotelling's Trace      29052.980   29052.980a   1.000           1.000      0.004
              Roy's Largest Root     29052.980   29052.980a   1.000           1.000      0.004
VAR00004      Pillai's Trace         0.996       27.121a      10.000          1.000      0.148
              Wilks' Lambda          0.004       27.121a      10.000          1.000      0.148
              Hotelling's Trace      271.209     27.121a      10.000          1.000      0.148
              Roy's Largest Root     271.209     27.121a      10.000          1.000      0.148
VAR00003      Pillai's Trace         0.995       13.514a      14.000          1.000      0.210
              Wilks' Lambda          0.005       13.514a      14.000          1.000      0.210
              Hotelling's Trace      189.198     13.514a      14.000          1.000      0.210
              Roy's Largest Root     189.198     13.514a      14.000          1.000      0.210
VAR00004 *    Pillai's Trace         0.985       12.884a      5.000           1.000      0.208
VAR00003      Wilks' Lambda          0.015       12.884a      5.000           1.000      0.208
              Hotelling's Trace      64.418      12.884a      5.000           1.000      0.208
              Roy's Largest Root     64.418      12.884a      5.000           1.000      0.208
a Exact statistic
b Design: Intercept + VAR00004 + VAR00003 + VAR00004 * VAR00003
After including the second predictor variable the MANOVA is not significant anymore. Probably, the second predictor is a confounder of the first one. The analysis of this model stops here.
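For comparison outside SPSS, the MANOVA of the first example can be sketched with statsmodels; the file and column names (stools, qol, counseling) are assumptions, and C(counseling) treats counseling frequency as a fixed factor, as in the SPSS analysis above.

```python
# Sketch: MANOVA with two outcome variables and one fixed factor.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("constipation.csv")  # hypothetical: stools, qol, compliance, counseling

fit = MANOVA.from_formula("stools + qol ~ C(counseling)", data=df)
print(fit.mv_test())  # reports Pillai's trace, Wilks' lambda, Hotelling's trace, Roy's root
```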
Second Example

As a second example we use the data from Field (Discovering Statistics Using SPSS, Sage, London, 2005, p. 571), assessing the effect of three treatment modalities on compulsive behavior disorder estimated by two scores, a thought-score and an action-score (Var 1 (y1) = action; Var 2 (x) = treatment; Var 3 (y2) = thought; var = variable).

Var 1 (y1) action   Var 2 (x) treatment   Var 3 (y2) thought
5.00                1.00                  14.00
5.00                1.00                  11.00
4.00                1.00                  16.00
4.00                1.00                  13.00
5.00                1.00                  12.00
3.00                1.00                  14.00
7.00                1.00                  12.00
6.00                1.00                  15.00
6.00                1.00                  16.00
4.00                1.00                  11.00
4.00                2.00                  14.00
4.00                2.00                  15.00
1.00                2.00                  13.00
1.00                2.00                  14.00
4.00                2.00                  15.00
6.00                2.00                  19.00
5.00                2.00                  13.00
5.00                2.00                  18.00
2.00                2.00                  14.00
5.00                2.00                  17.00
4.00                3.00                  13.00
5.00                3.00                  15.00
5.00                3.00                  14.00
4.00                3.00                  14.00
6.00                3.00                  13.00
4.00                3.00                  20.00
7.00                3.00                  13.00
4.00                3.00                  16.00
6.00                3.00                  14.00
5.00                3.00                  18.00
Command: Analyze….General Linear Model.…Multivariate.…In dialog box Multivariate transfer y1 and y2 to Dependent variables and x to Fixed factors.…OK.
Multivariate Tests(c)
Effect                             Value    F          Hypothesis df   Error df   Sig.
Intercept   Pillai's Trace         0.983    745.230a   2.000           26.000     0.000
            Wilks' Lambda          0.017    745.230a   2.000           26.000     0.000
            Hotelling's Trace      57.325   745.230a   2.000           26.000     0.000
            Roy's Largest Root     57.325   745.230a   2.000           26.000     0.000
VAR00002    Pillai's Trace         0.318    2.557      4.000           54.000     0.049
            Wilks' Lambda          0.699    2.555a     4.000           52.000     0.050
            Hotelling's Trace      0.407    2.546      4.000           50.000     0.051
            Roy's Largest Root     0.335    4.520b     2.000           27.000     0.020
a Exact statistic
b The statistic is an upper bound on F that yields a lower bound on the significance level
c Design: Intercept + VAR00002
The Pillai test shows that the predictor (treatment modality) has a significant effect on both thoughts and actions at p = 0.049. Roy's test, being less robust, gives an even better p-value of 0.020. We will again use ANOVAs to find out which of the two outcomes is more important.

Command: Analyze.…General Linear Model….Univariate.…In dialog box Univariate transfer y1 to Dependent variables and x to Fixed factors.…OK. Do the same for variable y2.

ANOVA(b)
Model 1      Sum of Squares   df   Mean Square   F       Sig.
Regression   0.050            1    0.050         0.023   0.881a
Residual     61.417           28   2.193
Total        61.467           29
a Predictors: (Constant), cog/beh/notreat
b Dependent Variable: actions

ANOVA(b)
Model 1      Sum of Squares   df   Mean Square   F       Sig.
Regression   12.800           1    12.800        2.785   0.106a
Residual     128.667          28   4.595
Total        141.467          29
a Predictors: (Constant), cog/beh/notreat
b Dependent Variable: thoughts
The above two tables show that in the ANOVAs neither thoughts nor actions are significant outcomes of treatment modality anymore. This would mean that treatment modality is a rather weak predictor of either of the outcomes: it is not able to significantly predict a single outcome, but it significantly predicts two outcomes pointing in a similar direction.
What advantages does MANOVA offer compared to multiple ANOVAs?

1. It prevents the type I error from being inflated.
2. It looks at interactions between dependent variables.
3. It can detect subgroup properties and includes them in the analysis.
4. It can demonstrate otherwise underpowered effects.
Multivariate analysis should not be used for explorative purposes and data dredging, but should be based on sound clinical arguments. A problem with multivariate analysis with binary outcome variables is that after iteration the data often do not converge. Instead, multivariate probit analysis, available in STATA statistical software, can be performed (see Chap. 25 in: Statistics Applied to Clinical Studies, Springer, New York, 5th edition, 2012, from the same authors).
Conclusion

Multivariate analysis is a linear model that works with more than a single outcome variable. It can assess whether a factor predicts more than a single outcome. Path statistics can be used (Chaps. 2 and 3), but, traditionally, multivariate analysis of variance (MANOVA) is used for the purpose. Examples are given.
Chapter 5
Categorical Data (60 patients)
Primary scientific question: does race have an effect on physical strength (the variable race has a categorical rather than linear pattern)? The effects on physical strength (scores 0–100) assessed in 60 subjects of different races (hispanics (1), blacks (2), asians (3), and whites (4)), ages (years), and genders (0 = female, 1 = male) are in the left four columns underneath.
Example

(Race 1 = hispanics, Race 2 = blacks, Race 3 = asians, Race 4 = whites.)

Patient   Physical   Race   Age     Gender   Race 1   Race 2   Race 3   Race 4
number    strength
1         70.00      1.00   35.00   1.00     1.00     0.00     0.00     0.00
2         77.00      1.00   55.00   0.00     1.00     0.00     0.00     0.00
3         66.00      1.00   70.00   1.00     1.00     0.00     0.00     0.00
4         59.00      1.00   55.00   0.00     1.00     0.00     0.00     0.00
5         71.00      1.00   45.00   1.00     1.00     0.00     0.00     0.00
6         72.00      1.00   47.00   1.00     1.00     0.00     0.00     0.00
7         45.00      1.00   75.00   0.00     1.00     0.00     0.00     0.00
8         85.00      1.00   83.00   1.00     1.00     0.00     0.00     0.00
9         70.00      1.00   35.00   1.00     1.00     0.00     0.00     0.00
10        77.00      1.00   49.00   1.00     1.00     0.00     0.00     0.00
11        63.00      1.00   74.00   0.00     1.00     0.00     0.00     0.00
12        72.00      1.00   49.00   1.00     1.00     0.00     0.00     0.00
13        78.00      1.00   54.00   1.00     1.00     0.00     0.00     0.00
14        62.00      1.00   46.00   0.00     1.00     0.00     0.00     0.00
15        69.00      1.00   34.00   1.00     1.00     0.00     0.00     0.00
16        90.00      2.00   25.00   1.00     0.00     1.00     0.00     0.00
17        98.00      2.00   46.00   1.00     0.00     1.00     0.00     0.00
18        82.00      2.00   35.00   1.00     0.00     1.00     0.00     0.00
19        83.00      2.00   50.00   1.00     0.00     1.00     0.00     0.00
20        90.00      2.00   52.00   1.00     0.00     1.00     0.00     0.00
21        86.00      2.00   46.00   1.00     0.00     1.00     0.00     0.00
22        59.00      2.00   53.00   0.00     0.00     1.00     0.00     0.00
23        99.00      2.00   44.00   1.00     0.00     1.00     0.00     0.00
24        87.00      2.00   30.00   0.00     0.00     1.00     0.00     0.00
25        78.00      2.00   80.00   1.00     0.00     1.00     0.00     0.00
26        96.00      2.00   56.00   1.00     0.00     1.00     0.00     0.00
27        97.00      2.00   55.00   0.00     0.00     1.00     0.00     0.00
28        89.00      2.00   35.00   1.00     0.00     1.00     0.00     0.00
29        90.00      2.00   58.00   1.00     0.00     1.00     0.00     0.00
30        91.00      2.00   57.00   0.00     0.00     1.00     0.00     0.00
31        60.00      3.00   65.00   1.00     0.00     0.00     1.00     0.00
32        61.00      3.00   45.00   1.00     0.00     0.00     1.00     0.00
33        66.00      3.00   51.00   0.00     0.00     0.00     1.00     0.00
34        54.00      3.00   55.00   0.00     0.00     0.00     1.00     0.00
35        53.00      3.00   82.00   0.00     0.00     0.00     1.00     0.00
36        57.00      3.00   64.00   0.00     0.00     0.00     1.00     0.00
37        63.00      3.00   40.00   0.00     0.00     0.00     1.00     0.00
38        70.00      3.00   36.00   1.00     0.00     0.00     1.00     0.00
39        59.00      3.00   64.00   0.00     0.00     0.00     1.00     0.00
40        62.00      3.00   55.00   0.00     0.00     0.00     1.00     0.00
41        65.00      3.00   50.00   1.00     0.00     0.00     1.00     0.00
42        67.00      3.00   53.00   0.00     0.00     0.00     1.00     0.00
43        53.00      3.00   73.00   0.00     0.00     0.00     1.00     0.00
44        69.00      3.00   34.00   1.00     0.00     0.00     1.00     0.00
45        51.00      3.00   55.00   0.00     0.00     0.00     1.00     0.00
46        54.00      4.00   59.00   0.00     0.00     0.00     0.00     1.00
47        68.00      4.00   64.00   1.00     0.00     0.00     0.00     1.00
48        69.00      4.00   45.00   0.00     0.00     0.00     0.00     1.00
49        70.00      4.00   36.00   1.00     0.00     0.00     0.00     1.00
50        90.00      4.00   43.00   0.00     0.00     0.00     0.00     1.00
51        90.00      4.00   23.00   1.00     0.00     0.00     0.00     1.00
52        89.00      4.00   44.00   1.00     0.00     0.00     0.00     1.00
53        82.00      4.00   83.00   0.00     0.00     0.00     0.00     1.00
54        85.00      4.00   40.00   1.00     0.00     0.00     0.00     1.00
55        87.00      4.00   42.00   1.00     0.00     0.00     0.00     1.00
56        86.00      4.00   32.00   0.00     0.00     0.00     0.00     1.00
57        83.00      4.00   43.00   1.00     0.00     0.00     0.00     1.00
58        80.00      4.00   35.00   1.00     0.00     0.00     0.00     1.00
59        81.00      4.00   34.00   0.00     0.00     0.00     0.00     1.00
60        82.00      4.00   33.00   1.00     0.00     0.00     0.00     1.00
For the analysis we use multiple linear regression.

Command: Analyze….Regression….Linear….Dependent: physical strength score….Independent: race, age, gender….OK.

The table shows that age and gender are significant predictors but race is not.

Coefficients(a)
Model 1      B        Std. error   Beta     t        Sig.
(Constant)   79.528   8.657                 9.186    0.000
Race         0.511    1.454        0.042    0.351    0.727
Age          -0.242   0.117        -0.260   -2.071   0.043
Gender       9.575    3.417        0.349    2.802    0.007
a Dependent Variable: strength score
However, the analysis is not adequate, because the variable race is analyzed as a stepwise rising function from 1 to 4, and the linear regression model assumes that the outcome variable will rise (or fall) simultaneously and linearly, but this need not necessarily be so. In the given situation it may be safer to recode the stepping variable into the form of a categorical variable. The above data overview shows in the right four columns how this is done manually. We subsequently use linear regression again, but now for a categorical analysis of race.

Command: Analyze….Regression….Linear….Dependent: physical strength score….Independent: race 2, race 3, race 4, age, gender….OK.
a
(Constant) Race2 Race3 Race4 Age Gender
Unstandardized coefficients
Standardized coefficients
B
Std. error
Beta
72.650 17.424 -6.286 9.661 -0.140 5.893
5.528 3.074 3.141 3.166 0.081 2.403
0.559 -0.202 0.310 -0.150 0.215
t
Sig.
13.143 5.668 -2.001 3.051 -1.716 2.452
0.000 0.000 0.050 0.004 0.092 0.017
Dependent Variable: strengths core
The above table shows that race 2–4 are significant predictors of physical strength. The results can be interpreted as follows. The underneath regression equation is used:

y = a + b1x1 + b2x2 + b3x3 + b4x4 + b5x5

a  = intercept
b1 = regression coefficient for blacks (0 = no, 1 = yes)
b2 = regression coefficient for asians
b3 = regression coefficient for whites
b4 = regression coefficient for age
b5 = regression coefficient for gender

If an individual is hispanic (race 1), then x1, x2, and x3 will turn into 0, and the regression equation becomes y = a + b4x4 + b5x5. If black, y = a + b1 + b4x4 + b5x5; if asian, y = a + b2 + b4x4 + b5x5; if white, y = a + b3 + b4x4 + b5x5. So, e.g., the best predicted physical strength score of a white male of 25 years of age would equal

y = 72.65 + 9.66 − 0.14 × 25 + 5.89 × 1 = 84.7

(on a linear scale from 0 to 100). Compared to the presence of the hispanic race, the black and white races are significant positive predictors of physical strength (p = 0.0001 and 0.004, respectively), and the asian race is a significant negative predictor (p = 0.050). All of these results are adjusted for age and gender, at least if we use p = 0.10 as criterion for statistical significance. Also with a binary outcome variable a categorical analysis of covariates is possible. Using logistic regression in SPSS is convenient for the purpose: we need not manually transform the quantitative estimator into a categorical one. For the analysis we apply the usual command.

Command: Analyze….Regression….Binary logistic….Dependent variable….Independent variables….then, open dialog box labeled Categorical Variables….select the categorical variable and transfer it to the box Categorical Variables….then click Continue….OK.
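Outside SPSS, the same categorical recoding can be obtained without constructing the dummy variables by hand. The sketch below is an illustration under assumed file and column names; the Treatment(1) term makes the hispanic race the reference category, mirroring the equations above.

```python
# Sketch: categorical (dummy) coding of race with hispanics as reference,
# then multiple linear regression on race, age, and gender.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("strength.csv")  # hypothetical: strength, race (1-4), age, gender

model = smf.ols("strength ~ C(race, Treatment(1)) + age + gender", data=df).fit()
print(model.params)  # the race dummies correspond to b1-b3 in the equation above
```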
Conclusion

Predictor variables with a categorical rather than linear character should be recoded into categorical variables before analysis in a regression model. An example is given.
Chapter 6
Multinomial Logistic Regression (55 Patients)
Primary question: the numbers of patients falling out of bed with and without injury are assessed in two hospital departments. It is expected that the department of internal medicine will have higher scores. Instead of binary outcomes, "yes or no falling out of bed", we have three possible outcomes (no falling, falling without or with injury). Because the outcome scores may indicate increasing severities of falling from score 0 to 2, a linear or ordinal regression may be adequate. However, the three possible outcomes may also relate to different types of patients and different types of morbidities, and may, therefore, represent categories rather than increasing severities. A multinomial logistic regression may, then, be an adequate choice.
Example

Var 1   Var 2   (var = variable)
0.00    0.00
0.00    0.00
0.00    0.00
0.00    0.00
0.00    0.00
0.00    0.00
0.00    0.00
1.00    1.00
1.00    1.00
1.00    1.00
1.00    1.00
1.00    0.00
1.00    1.00
1.00    1.00
1.00    1.00
1.00    1.00
1.00    1.00
1.00    0.00
1.00    1.00
1.00    1.00
1.00    1.00
1.00    1.00
1.00    0.00
1.00    0.00
1.00    0.00
1.00    0.00
1.00    0.00
0.00    1.00
0.00    1.00
0.00    1.00
0.00    1.00
0.00    1.00
0.00    1.00
0.00    1.00
0.00    1.00
0.00    1.00
0.00    1.00
0.00    2.00
0.00    2.00
0.00    2.00
0.00    2.00
0.00    2.00
0.00    2.00
0.00    2.00
1.00    2.00
0.00    2.00
0.00    2.00
0.00    2.00
1.00    2.00
0.00    0.00
0.00    0.00
0.00    0.00
0.00    0.00
0.00    0.00
0.00    0.00
Var 1 = department (0 = internal medicine, 1 = surgery); var 2 = fall out of bed (0 = no fall out of bed, 1 = fall out of bed without injury, 2 = fall out of bed with injury). We will first draw a graph of the data.

Command: Graphs….3-D Charts….X-Axis: groups of cases….Z-Axis: groups of cases….Define….X Category Axis: fall with/out injury….Z Category Axis: department….OK.
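A quick tabulation conveys the same information as the 3-D chart; the sketch below is an illustration with assumed file and column names.

```python
# Sketch: counts of patients per department and fall category.
import pandas as pd

df = pd.read_csv("falls.csv")  # hypothetical: department (0/1), fall (0/1/2)
print(pd.crosstab(df["department"], df["fall"]))
```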
The graph shows that at the department of surgery fewer no-falls and fewer falls with injury are observed. In order to test these data we will first perform a linear regression with fall as outcome and department as predictor variable.

Command: Analyze….Regression….Linear….Dependent: fall….Independent: department….OK.

Coefficients(a)
Model 1      B        Std. error   Beta     t        Sig.
(Constant)   0.909    0.132                 6.874    0.000
Department   -0.136   0.209        -0.089   -0.652   0.517
a Dependent variable: fall with/out injury
The above table shows that the difference between the departments is not statistically significant. However, the linear model applied assumes increasing severities of the outcome variable, while categories without increasing severities may be a better approach to this variable. For that purpose a multinomial logistic regression is performed.

Command: Analyze….Regression….Multinomial Logistic Regression….Dependent: fall….Factor: department….OK.

Parameter estimates
Fall with/out injury(a)         B        Std. error   Wald    df   Sig.    Exp(B)   95 % confidence interval for Exp(B)
0.00   Intercept                1.253    0.802        2.441   1    0.118
       [VAR00001 = 0.00]        -0.990   0.905        1.197   1    0.274   0.371    0.063–2.191
       [VAR00001 = 1.00]        0b       –            –       0    –       –        –
1.00   Intercept                1.872    0.760        6.073   1    0.014
       [VAR00001 = 0.00]        -1.872   0.881        4.510   1    0.034   0.154    0.027–0.866
       [VAR00001 = 1.00]        0b       –            –       0    –       –        –
a The reference category is: 2.00
b This parameter is set to zero because it is redundant
The above table shows that the odds of falling with injury versus no falling is smaller at surgery than at internal medicine, with an odds ratio of 0.371 (p = 0.274), and that the odds of falling with injury versus falling without injury is also smaller at surgery than at internal medicine, with an odds ratio of 0.154 (p = 0.034). And so, surgery seems to perform better when injuries are compared with no injuries. This effect was not observed with linear regression.
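The multinomial model itself can also be sketched outside SPSS; the example below uses statsmodels' mnlogit under assumed file and column names. Note that statsmodels takes the lowest outcome category as reference, whereas the SPSS table above uses category 2.00.

```python
# Sketch: multinomial logistic regression of fall category on department.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("falls.csv")  # hypothetical: department (0/1), fall (0/1/2)

model = smf.mnlogit("fall ~ department", data=df).fit()
print(model.summary())  # coefficients per outcome category versus the reference
```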
Conclusion

In research it is not uncommon that outcome variables are categorical, e.g., the choice of food, treatment modality, type of doctor, etc. If such outcome variables are binary, then binary logistic regression is appropriate. If, however, we have three or more alternatives, then multinomial logistic regression must be used. It works, essentially, similarly to the recoding procedure reviewed in Chap. 5 on categorical predictor variables. It can be considered a multivariate technique, because the dependent variable is recoded from a single categorical variable into multiple dummy variables (see Chap. 5 for explanation). More multivariate techniques are reviewed in Chaps. 3 and 4. Multinomial logistic regression should not be confused with ordered logistic regression, which is used in case the outcome variable consists of categories that can be ordered in a meaningful way, e.g., anginal class or quality of life class. Ordered logistic regression is also readily available in the regression module of SPSS.
Chapter 7
Missing Data Imputation (35 Patients)
Primary question: what is the effect of regression imputation and multiple imputations on the sensitivity of testing a study with missing data?
Example

The effects of an old laxative and of age on the efficacy of a novel laxative are studied. The data file with missing data is given underneath (empty cells are missing values).

Var 1    Var 2    Var 3    (var = variable)
24.00    8.00     25.00
30.00    13.00    30.00
25.00    15.00    25.00
35.00    10.00    31.00
39.00    9.00
30.00    10.00    33.00
27.00    8.00     22.00
14.00    5.00     18.00
39.00    13.00    14.00
42.00             30.00
41.00    11.00    36.00
38.00    11.00    30.00
39.00    12.00    27.00
37.00    10.00    38.00
47.00    18.00    40.00
         13.00    31.00
36.00    12.00    25.00
12.00    4.00     24.00
26.00    10.00    27.00
20.00    8.00     20.00
43.00    16.00    35.00
31.00    15.00    29.00
40.00    14.00    32.00
31.00             30.00
36.00    12.00    40.00
21.00    6.00     31.00
44.00    19.00    41.00
11.00    5.00     26.00
27.00    8.00     24.00
24.00    9.00     30.00
40.00    15.00
32.00    7.00     31.00
10.00    6.00     23.00
37.00    14.00    43.00
19.00    7.00     30.00

Var 1 = efficacy new laxative (stools per month); var 2 = efficacy old laxative (stools per month); var 3 = patients' age (years).
Regression Imputation

First we will perform a multiple linear regression analysis of the above data (commands as given in Chap. 5 of the first part of this title). The software program will exclude the patients with missing data from the analysis.

Coefficients(a)
Model 1      B       Std. error   Beta    t       Sig.
(Constant)   0.975   4.686                0.208   0.837
Bisacodyl    1.890   0.322        0.715   5.865   0.000
Age          0.305   0.180        0.207   1.698   0.101
a Dependent Variable: new lax
Using the cut-off level of p = 0.15 for statistical significance, both the efficacy of the old laxative and patients' age are significant predictors of the new laxative. The regression equation is as follows:

y = a + bx1 + cx2
y = 0.975 + 1.890x1 + 0.305x2
Using this equation, we use the y-value and x1-value to calculate the missing x2-value. Similarly, the missing y- and x1-values are calculated and imputed. The underneath data file has the imputed values.

Var 1    Var 2    Var 3
24.00    8.00     25.00
30.00    13.00    30.00
25.00    15.00    25.00
35.00    10.00    31.00
39.00    9.00     69.00
30.00    10.00    33.00
27.00    8.00     22.00
14.00    5.00     18.00
39.00    13.00    14.00
42.00    17.00    30.00
41.00    11.00    36.00
38.00    11.00    30.00
39.00    12.00    27.00
37.00    10.00    38.00
47.00    18.00    40.00
35.00    13.00    31.00
36.00    12.00    25.00
12.00    4.00     24.00
26.00    10.00    27.00
20.00    8.00     20.00
43.00    16.00    35.00
31.00    15.00    29.00
40.00    14.00    32.00
31.00    11.00    30.00
36.00    12.00    40.00
21.00    6.00     31.00
44.00    19.00    41.00
11.00    5.00     26.00
27.00    8.00     24.00
24.00    9.00     30.00
40.00    15.00    35.00
32.00    7.00     31.00
10.00    6.00     23.00
37.00    14.00    43.00
19.00    7.00     30.00

Var 1 = efficacy new laxative (stools per month); var 2 = efficacy old laxative (stools per month); var 3 = age (years).
A multiple linear regression of the above data file with the imputed data included produced b-values (regression coefficients) equal to those of the un-imputed data, but the standard errors fell, and, consequently, the sensitivity of testing increased, with a p-value falling from 0.101 to 0.005 (see the table below).
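The regression imputation step itself is easy to sketch outside SPSS: fit the regression on the complete cases, then replace each missing cell by its prediction. The example below handles one variable and uses assumed file and column names; it presumes the predictors are observed in the rows where age is missing, as in this data file.

```python
# Sketch: single regression imputation of missing age values.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("laxative_missing.csv")  # hypothetical: new_lax, old_lax, age

fit = smf.ols("age ~ new_lax + old_lax", data=df.dropna()).fit()
missing = df["age"].isna()
df.loc[missing, "age"] = fit.predict(df.loc[missing])  # fill with predicted ages
```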
Multiple Imputations

Multiple imputations is probably a better device for missing data imputation than regression imputation. In order to perform the multiple imputation method the SPSS add-on module "Missing Value Analysis" has to be used. First, the pattern of the missing data must be checked using the command "Analyze Pattern". If the missing data are equally distributed and no "islands" of missing data exist, the model will be appropriate. The following commands are needed.

Command: Analyze….Missing Value Analysis….Transform….Random Number Generators….Analyze.…Multiple Imputations….Impute Missing Data.…OK (the imputed data file must be given a new name, e.g., "study name imputed").

Five or more files are produced by the software program in which the missing values are replaced with simulated versions using the Monte Carlo method (see Chap. 18 for an explanation of the Monte Carlo method). In our example the variables are continuous, and, thus, need no transformation.

Command: Split File….OK.

If you subsequently run a usual linear regression on the summary of your "imputed" data files (commands as given in Chap. 5 of the first part of this title), then the software will automatically produce pooled regression coefficients instead of the usual regression coefficients. In our example the multiple imputation method produced a much larger p-value for the predictor age than the regression imputation did, as demonstrated in the underneath table (p = 0.097 versus p = 0.005). The underneath table also shows the results of testing after mean imputation and hot deck imputation (as reviewed in Chap. 3 of the e-book "Statistics on a Pocket Calculator Part 2", Springer New York, 2012, from the same authors) (B = regression coefficient, SE = standard error, t = t-value, Sig = p-value).
                        Bisacodyl                          Age
                        B1      SE1     t      Sig         B2      SE2     t      Sig
Full data               1.82    0.29    6.3    0.0001      0.34    0.16    2.0    0.048
5 % missing data        1.89    0.32    5.9    0.0001      0.31    0.19    1.7    0.101
Means imputation        1.82    0.33    5.6    0.0001      0.33    0.19    1.7    0.094
Hot deck imputation     1.77    0.31    5.7    0.0001      0.34    0.18    1.8    0.074
Regression imputation   1.89    0.25    7.6    0.0001      0.31    0.10    3.0    0.005
Multiple imputations    1.84    0.31    5.9    0.0001      0.32    0.19    1.7    0.097
The result of multiple imputations was, thus, less sensitive than that of regression imputation. Actually, the result was rather similar to that of mean and hot deck imputation. Why, then, do it anyway? The argument is that, with the multiple imputation method, the imputed values are not used as constructed real values, but rather as a device for representing missing data uncertainty. This approach is safe and, probably, scientifically a better alternative to the other methods.
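Outside the SPSS add-on module, the same Rubin-style idea — impute several times stochastically, analyze each completed file, and pool — can be sketched as follows; the imputer, file, and column names are assumptions, not the module's internals.

```python
# Sketch: multiple imputations with pooling of the age coefficient.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("laxative_missing.csv")  # hypothetical: new_lax, old_lax, age

estimates = []
for seed in range(5):  # five imputed data sets, as in the text
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
    fit = smf.ols("new_lax ~ old_lax + age", data=completed).fit()
    estimates.append(fit.params["age"])

print(np.mean(estimates))  # pooled regression coefficient for age
```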
Conclusion

Regression imputation tends to overstate the certainty of the data testing. Multiple imputations is, probably, a better alternative to regression imputation. However, it is not in the basic SPSS program and requires the add-on module "Missing Value Analysis".
Chapter 8
Comparing the Performance of Diagnostic Tests (650 and 588 Patients)
Primary scientific question: two vascular lab score tests for demonstrating the presence of peripheral vascular disease are compared. Does one test perform better than the other? Often c-statistics (concordance statistics) is used for the purpose. However, c-statistics has many limitations, and logistic regression may serve the purpose better (see Statistics Applied to Clinical Studies, 5th edition, Chap. 49, Springer, New York, 2012, from the same authors).
Example

The underneath figure shows the data of the evaluation studies of the two vascular lab score tests. On the x-axis we have the vascular lab scores, on the y-axis "how often". The scores in patients with (1) and without (0) the presence of disease according to the gold standard (angiography) are, respectively, in the lower and upper graph.
The first test (upper two graphs) seems to perform less well than the second test (lower two graphs), because there may be more risk of false positives (the 0-disease curve is more skewed to the right in the upper than in the lower graphs). However, c-statistics produced no significant difference between the two tests. Binary logistic regression with odds of disease as dependent variable and score as predictor is used instead.

Command: Analyze….Regression….Binary logistic….Dependent variable: disease….Covariate: score.…OK.

The table below is from the first test and shows that the best fit regression equation for the data is: log odds of having the disease = -8.003 + 0.398 × score.
Variables in the equation
                       B        S.E.    Wald       df   Sig.    Exp(B)
Step 1a   VAR00001     0.398    0.032   155.804    1    0.000   1.488
          Constant     -8.003   0.671   142.414    1    0.000   0.000
a Variable(s) entered on step 1: VAR00001
The table below is from the second test and shows that the best fit regression equation for the data is: log odds of having the disease = -10.297 + 0.581 times the score.

Variables in the equation
                       B         S.E.    Wald       df   Sig.    Exp(B)
Step 1a   VAR00001     0.581     0.051   130.715    1    0.000   1.789
          Constant     -10.297   0.915   126.604    1    0.000   0.000
a Variable(s) entered on step 1: VAR00001
Both regression equations produce highly significant regression coefficients, with standard errors of respectively 0.032 and 0.051, and p-values of <0.0001. The two regression coefficients are tested for significance of difference using the z-test (the z-test is in Chap. 2 of Statistics on a Pocket Calculator Part 2, Springer, New York, 2012, from the same authors):

z = (0.398 - 0.581)/√(0.032² + 0.051²) = -0.183/0.060 = -3.05,

which corresponds with a p-value of <0.01. Obviously, test 2 produces a significantly steeper regression model, which means that it is a better predictor of the risk of disease than test 1. We can, additionally, calculate the odds ratio of successfully testing with test 2 versus test 1. The odds of disease with test 1 equals e^0.398 = 1.488, and with test 2 it equals e^0.581 = 1.789. The odds ratio = 1.789/1.488 = 1.202, meaning that the second test produces a 1.202 times better chance of rightly predicting the disease than test 1 does.
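The z-test is easily verified outside SPSS; a minimal sketch in Python, using only the standard library:

import math
from statistics import NormalDist

b1, se1 = 0.398, 0.032   # test 1: regression coefficient and standard error
b2, se2 = 0.581, 0.051   # test 2
z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
print(round(z, 2), round(p, 4))          # about -3.05 and 0.002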
Conclusion Logistic regression with the presence of disease as outcome and test scores as predictor is adequate for comparing the performance of qualitative diagnostic tests. The method assumes that the two tests were performed in parallel groups. The method is explained.
Chapter 9
Meta-Regression (20 and 9 Studies)
Primary questions: a meta-analysis of studies assessing the incidence of emergency admissions due to adverse drug effects (ADEs) was very heterogeneous. A meta-analysis of the risk of infarction in patients with coronary artery disease and collateral coronary arteries was also heterogeneous. What were the causal factors of these heterogeneities? Heterogeneity in meta-analysis makes pooling of the overall data pretty meaningless. Instead, a careful examination of the potential causes has to be accomplished. Regression analysis is generally very helpful for that purpose.
Example 1
Twenty studies assessing the incidence of ADEs were meta-analyzed (Atiqi et al. (2009) Int J Clin Pharmacol Ther 47: 549-56). The studies were very heterogeneous. We observed that studies performed by pharmacists produced lower incidences than did the studies performed by internists. Also the study magnitude and the age of the study population were considered as possible causes of the heterogeneity. The data file is underneath.

Study no   %ADEs   Study magnitude   Clinicians' study (yes = 1)   Elderly study (yes = 1)
1          21.00     106.00          1.00                          1.00
2          14.40     578.00          1.00                          1.00
3          30.40     240.00          1.00                          1.00
4           6.10     671.00          0.00                          0.00
5          12.00     681.00          0.00                          0.00
6           3.40   28411.00          1.00                          0.00
7           6.60     347.00          0.00                          0.00
8           3.30    8601.00          0.00                          0.00
9           4.90     915.00          0.00                          0.00
10          9.60     156.00          0.00                          0.00
11          6.50    4093.00          0.00                          0.00
12          6.50   18820.00          0.00                          0.00
13          4.10    6383.00          0.00                          0.00
14          4.30    2933.00          0.00                          0.00
15          3.50     480.00          0.00                          0.00
16          4.30   19070.00          1.00                          0.00
17         12.60    2169.00          1.00                          0.00
18         33.20    2261.00          0.00                          1.00
19          5.60   12793.00          0.00                          0.00
20          5.10     355.00          0.00                          0.00
A multiple linear regression will be performed with percentage ADEs as outcome variable and the study magnitude, the type of investigators (pharmacist or internist), and the age of the study populations as predictors. Command: Analyze….Regression….Linear….Dependent: % ADEs….Independent: Study magnitude, Age, and type of investigators….OK.

Coefficientsa
Model 1             Unstandardized coefficients    Standardized coefficients
                    B            Std. error        Beta        t        Sig.
(Constant)          6.924        1.454                         4.762    0.000
Study-magnitude     -7.674E-5    0.000             -0.071      -0.500   0.624
Elderly = 1         -1.393       2.885             -0.075      -0.483   0.636
Clinicians = 1      18.932       3.359             0.887       5.636    0.000
a Dependent Variable: percentageADEs
The above table shows the results. After adjustment for the age of the study populations and study magnitude, the type of research group was the single and very significant predictor of the heterogeneity. Obviously, internists more often diagnose ADEs than pharmacists do.
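For readers working outside SPSS, the same multiple linear meta-regression can be sketched in Python with statsmodels (an added illustration; pandas and statsmodels are assumed installed, the column names are ours, and the data are transcribed from the table above, so the output should approximately reproduce the coefficients in the table):

import pandas as pd
import statsmodels.formula.api as smf

# the 20 studies from the data file above
df = pd.DataFrame({
    "ades":       [21.0, 14.4, 30.4, 6.1, 12.0, 3.4, 6.6, 3.3, 4.9, 9.6,
                   6.5, 6.5, 4.1, 4.3, 3.5, 4.3, 12.6, 33.2, 5.6, 5.1],
    "magnitude":  [106, 578, 240, 671, 681, 28411, 347, 8601, 915, 156,
                   4093, 18820, 6383, 2933, 480, 19070, 2169, 2261, 12793, 355],
    "clinicians": [1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
    "elderly":    [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
})
fit = smf.ols("ades ~ magnitude + elderly + clinicians", data=df).fit()
print(fit.summary())   # clinicians should be the only significant predictor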
Example 2
Nine studies of the risk of infarction in patients with coronary artery disease and collateral coronary arteries were meta-analyzed. The studies were heterogeneous. A meta-regression was performed with the odds ratios of infarction as dependent and the odds ratios of various cardiovascular risk factors as independent variables. The data file contained the following variables per study (two values were missing):

Var 1 = odds ratio of infarction in patients with collaterals versus patients without
Var 2 = odds ratio of diabetes
Var 3 = odds ratio of hypertension
Var 4 = odds ratio of cholesterol
Var 5 = odds ratio of smoking

Var 1: 0.44  0.62  0.59  0.30  0.62  1.17  0.30  0.70  0.26
Var 2: 1.61  0.62  1.13  0.76  1.69  0.13  1.52  0.65
Var 3: 1.12  1.10  0.69  0.85  0.83  1.02  0.17  0.79  0.74
Var 4: 2.56  1.35  1.33  1.34  1.11  0.21  0.85  1.04
Var 5: 0.93  0.93  1.85  0.78  1.09  1.28  0.27  1.25  0.83
Simple linear regressions with the odds ratios of infarction as dependent variable were performed. Command: Analyze….Regression….Linear….Dependent: odds ratio of infarction….Independent: var 2….OK. The underneath tables show that, with p = 0.15 as cut-off value for significance, only diabetes and smoking were significant covariates of the odds ratios of infarction in patients with coronary artery disease and collaterals. After mean imputation of the missing values (Statistics on a Pocket Calculator Part 2, Springer, New York, 2012, from the same authors) the results were unchanged. In the multiple linear regression none of the covariates remained significant. However, with no more than 9 studies, multiple linear regression is powerless. The conclusion was that the beneficial effect of collaterals on coronary artery disease was little influenced by the traditional risk factors of coronary artery disease. The heterogeneity of this meta-analysis remained unexplained.

Coefficientsa
Model 1          Unstandardized coefficients    Standardized coefficients
                 B        Std. error            Beta       t        Sig.
(Constant)       0.284    0.114                            2.489    0.047
ORdiabetes       0.192    0.100                 0.616      1.916    0.104
a Dependent Variable: ORinfarction
Coefficientsa
Model 1           Unstandardized coefficients    Standardized coefficients
                  B        Std. error            Beta       t        Sig.
(Constant)        0.208    0.288                            0.724    0.493
ORhypertension    0.427    0.336                 0.433      1.270    0.245
a Dependent Variable: ORinfarction

Coefficientsa
Model 1           Unstandardized coefficients    Standardized coefficients
                  B        Std. error            Beta       t        Sig.
(Constant)        0.447    0.148                            3.021    0.023
ORcholesterol     0.026    0.108                 0.099      0.243    0.816
a Dependent Variable: ORinfarction

Coefficientsa
Model 1           Unstandardized coefficients    Standardized coefficients
                  B        Std. error            Beta       t        Sig.
(Constant)        0.184    0.227                            0.810    0.445
ORsmoking         0.363    0.206                 0.554      1.760    0.122
a Dependent Variable: ORinfarction
Conclusion Meta-regression is increasingly used as an approach to subgroup analysis for assessing heterogeneity in meta-analyses. The advantage of meta-regression compared to simple subgroup analyses is that multiple factors can be assessed simultaneously, and that confounders and interacting factors can be adjusted for.
Chapter 10
Poisson Regression (50 and 52 Patients)
Primary questions: do psychological and social factors affect the rates of episodes of paroxysmal atrial fibrillation? Are certain treatments efficacious in preventing torsades de pointes? Poisson regression is different from linear and logistic regression, because it uses a log-transformed dependent variable. For rates, defined as numbers of events per person per time unit, Poisson regression is very sensitive and probably better than standard regression methods.
Example 1
Fifty patients were followed for numbers of episodes of paroxysmal atrial fibrillation (PAF), while being treated with two parallel treatment modalities. The data file is below.

Var 1   Var 2   Var 3   Var 4   Var 5
1       56.99   42.45   73       4
1       37.09   46.82   73       4
0       32.28   43.57   76       2
0       29.06   43.57   74       3
0        6.75   27.25   73       3
0       61.65   48.41   62      13
0       56.99   40.74   66      11
1       10.39   15.36   72       7
1       50.53   52.12   63      10
1       49.47   42.45   68       9
0       39.56   36.45   72       4
1       33.74   13.13   74       5
0       62.91   62.27   72       5
0       65.56   44.66   74       3
1       23.01   25.25   76       1
1       75.83   61.04   76       0
0       41.31   49.47   75       1
0       41.89   65.56   74       0
1       65.56   46.82   75       2
1       13.13    6.75   55      24
0       33.02   42.45   75       2
1       55.88   64.87   76       0
1       45.21   55.34   76       1
1       56.99   44.66   76       0
0       31.51   38.35   71       8
1       52.65   50.00   67       3
1       17.26    6.75   70       7
0       33.02   40.15   76       0
1       61.04   57.55   74       2
1       66.98   71.83   77       0
1        1.01   45.21   77       0
0       38.35   35.13   76       1
1       44.66   46.82   32       3
1       44.12   46.82   77       0
1       59.85   46.29   77       0
0       32.28   47.35   49      28
1       23.01   49.47   71       8
1       70.94   61.04   72       5
1        1.01    1.01   75       2
0       41.89   52.12   46      27
0       40.15   35.13   72       5
0       41.31   38.35   59      18
0       44.66   58.69   57      19
1       38.35   42.45   67       9
1       32.28    1.01   68       9
0       37.09   32.28   75       4
1       63.55   57.55   75       2
1       43.57   41.31   68       3
1       33.02   24.17   68       9
0       68.49   59.26   56      20
Var 1 treatment modality, Var 2 psychological score, Var 3 social score, Var 4 days of observation, Var 5 numbers of episodes of paroxysmal atrial fibrillation
First, we will perform a linear regression analysis with var 5 as outcome variable and the other 4 variables as predictors.
Command: Analyze….Regression….Linear….Dependent Variable: episodes of paroxysmal atrial fibrillation….Independent: treatment modality, psychological score, social score, days of observation….OK.

Coefficientsa
Model 1        Unstandardized coefficients    Standardized coefficients
               B         Std. error           Beta       t        Sig.
(Constant)     49.059    5.447                           9.006    0.000
Treat          -2.914    1.385                -0.204     -2.105   0.041
Psych          0.014     0.052                0.036      0.273    0.786
Soc            -0.073    0.058                -0.169     -1.266   0.212
Days           -0.557    0.074                -0.715     -7.535   0.000
a Dependent Variable: paf
The above table shows that treatment modality is weakly significant, and that the psychological and social scores are not. Furthermore, days of observation is very significant. However, it is not entirely appropriate to include this variable if your outcome is the number of events per person per time unit. Therefore, we will perform a linear regression, and adjust the outcome variable for the differences in days of observation using weighted least squares regression.

Coefficientsa,b
Model 1        Unstandardized coefficients    Standardized coefficients
               B         Std. error           Beta       t        Sig.
(Constant)     10.033    2.862                           3.506    0.001
Treat          -3.502    1.867                -0.269     -1.876   0.067
Psych          0.033     0.069                0.093      0.472    0.639
Soc            -0.093    0.078                -0.237     -1.194   0.238
a Dependent Variable: paf
b Weighted Least Squares Regression, weighted by days
Command: Analyze….Regression….Linear….Dependent: episodes of paroxysmal atrial fibrillation….Independent: treatment modality, psychological score, social score….WLS Weight: days of observation….OK. The above table shows the results. A largely similar pattern is observed, but treatment modality is no longer statistically significant. We will now perform a Poisson regression, which is probably more appropriate for rate data. Command: Generalized Linear Models….mark: Custom….Distribution: Poisson….Link function: Log….Response: Dependent Variable: numbers of episodes of PAF….Scale Weight Variable: days of observation….Predictors: Main Effect: treatment modality….Covariates: psychological score, social score….Model: main effects: treatment modality, psychological score, social score….Estimation: mark Model-based Estimation….OK.
Parameter estimates
Parameter      B        Std. error   95 % Wald confidence interval   Hypothesis test
                                     Lower       Upper               Wald Chi-Square   df   Sig.
(Intercept)    1.868    0.0206       1.828       1.909               8256.274          1    0.000
[Treat = 0]    0.667    0.0153       0.637       0.697               1897.429          1    0.000
[Treat = 1]    0a
Psych          0.006    0.0006       0.005       0.008               120.966           1    0.000
Soc            -0.019   0.0006       -0.020      -0.017              830.264           1    0.000
(Scale)        1b
Dependent Variable: paf; Model: (Intercept), treat, psych, soc
a Set to zero because this parameter is redundant
b Fixed at the displayed value
The above table gives the results. All of a sudden, all of the predictors, including treatment modality and the psychological and social scores, are very significant predictors of the PAF rate.
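Outside SPSS, rates are usually modeled by entering the observation time as an exposure (log-offset) term, which differs slightly from the SPSS scale-weight approach used above. A minimal sketch in Python (pandas and statsmodels assumed; the mini data set below consists of eight rows taken from the file above, for illustration only):

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# eight patients from the data file above (illustrative subset)
df = pd.DataFrame({
    "paf":   [4, 4, 2, 13, 11, 7, 10, 9],
    "treat": [1, 1, 0, 0, 0, 1, 1, 1],
    "psych": [56.99, 37.09, 32.28, 61.65, 56.99, 10.39, 50.53, 49.47],
    "soc":   [42.45, 46.82, 43.57, 48.41, 40.74, 15.36, 52.12, 42.45],
    "days":  [73, 73, 76, 62, 66, 72, 63, 68],   # observation time per patient
})
# Poisson regression for rates: log(days) enters as an exposure/offset
fit = smf.glm("paf ~ treat + psych + soc", data=df,
              family=sm.families.Poisson(), exposure=df["days"]).fit()
print(fit.summary())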
Example 2
Poisson regression can not only be used for counted rates but also for binary outcome variables. If each patient is measured within the same period of time, no weighting variable has to be added to the model. Rates of 0 or 1 do, after all, exist in practice. We will see how this approach performs compared to the logistic regression traditionally used for binary outcomes. The data file is below.

Var 1   Var 2       Var 1   Var 2
0.00    1.00        0.00    0.00
0.00    1.00        0.00    0.00
0.00    1.00        0.00    0.00
0.00    1.00        0.00    0.00
0.00    1.00        0.00    0.00
0.00    1.00        0.00    0.00
0.00    1.00        1.00    1.00
0.00    1.00        1.00    1.00
0.00    1.00        1.00    1.00
0.00    1.00        1.00    1.00
0.00    1.00        1.00    1.00
0.00    1.00        1.00    1.00
0.00    1.00        1.00    1.00
0.00    1.00        1.00    1.00
0.00    1.00        1.00    1.00
0.00    0.00        1.00    1.00
0.00    0.00        1.00    1.00
0.00    0.00        1.00    1.00
0.00    0.00        1.00    1.00
0.00    0.00        1.00    1.00
0.00    0.00        1.00    1.00
0.00    0.00        1.00    0.00
0.00    0.00        1.00    0.00
0.00    0.00        1.00    0.00
0.00    0.00        1.00    0.00
0.00    0.00        1.00    0.00
Var 1 treatment modality, Var 2 presence of torsade de pointes
First, we will perform a traditional binary logistic regression with torsade de pointes as outcome and treatment modality as predictor. Command: Analyze….Regression….Binary Logistic….Dependent: torsade….Covariates: treatment….OK.

Variables in the equation
                       B        S.E.    Wald    df   Sig.    Exp(B)
Step 1a   VAR00001     1.224    0.626   3.819   1    0.051   3.400
          Constant     -0.125   0.354   0.125   1    0.724   0.882
a Variable(s) entered on step 1: VAR00001
The above table shows that the treatment is not statistically significant. A Poisson regression is performed subsequently. Command: Generalized Linear Models….mark Custom….Distribution: Poisson….Link Function: Log….Response: Dependent Variable: torsade….Predictors: Main Effect: treatment….Estimation: mark Robust Tests….OK.

Parameter estimates
Parameter            B        Std. error   95 % Wald confidence interval   Hypothesis test
                                           Lower       Upper               Wald Chi-Square   df   Sig.
(Intercept)          -0.288   0.1291       -0.541      -0.035              4.966             1    0.026
[VAR00001 = 0.00]    -0.470   0.2282       -0.917      -0.023              4.241             1    0.039
[VAR00001 = 1.00]    0a
(Scale)              1b
Dependent Variable: torsade; Model: (Intercept), VAR00001
a Set to zero because this parameter is redundant
b Fixed at the displayed value
The above table shows the results of the Poisson regression. The predictor treatment modality is statistically significant at p = 0.039. We will check with a 3-dimensional graph of the data if this result is in agreement with the data as observed. Command: Graphs….Legacy Dialog….3-D Bar: X-Axis mark: Groups of Cases, Z-Axis mark: Groups of Cases…Define 3-D Bar: X Category Axis: treatment, Z Category Axis: torsade….OK.
The above graph shows that in the 0-treatment (placebo) group the number of patients with torsades de pointe is virtually equal to that of the patients without. However, in the 1-treatment group it is considerably smaller. The treatment seems to be efficacious.
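Outside SPSS, a common way to obtain valid standard errors when a binary outcome is modeled with Poisson regression is a robust (sandwich) variance, comparable in spirit to the "Robust Tests" option above, although the output need not be identical. A minimal sketch, assuming pandas and statsmodels, with the 52 patients entered as counts per group:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# the 52 patients from the data file above, expanded from group counts
df = pd.DataFrame({"treatment": [0] * 32 + [1] * 20,
                   "torsade":   [1] * 15 + [0] * 17 + [1] * 15 + [0] * 5})
fit = smf.glm("torsade ~ treatment", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC0")  # robust SEs
print(fit.summary())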
Conclusion Poisson regression is different from linear and logistic regression, because it uses a log-transformed dependent variable. For the analysis of rates, Poisson regression is very sensitive and probably better than standard regression methods. The methodology is explained.
Chapter 11
Confounding (40 patients)
Primary scientific question: is a sleeping pill more efficacious than placebo, in spite of confounding in the study?
Example
A 40-patient parallel-group study assesses the efficacy of a sleeping pill versus placebo. We suspect that confounding may be in the data: the females may have received the placebo more often than the males.

Var 1   Var 2   Var 3
0.00    3.49    0.00
0.00    3.51    0.00
0.00    3.50    0.00
0.00    3.51    0.00
0.00    3.49    0.00
0.00    3.50    0.00
0.00    3.51    0.00
0.00    3.49    0.00
0.00    3.50    0.00
0.00    3.49    0.00
0.00    3.51    0.00
0.00    3.50    0.00
0.00    3.49    0.00
0.00    3.51    0.00
0.00    3.50    0.00
0.00    3.45    1.00
0.00    3.45    1.00
0.00    3.50    1.00
0.00    3.50    1.00
0.00    3.49    1.00
1.00    3.51    1.00
1.00    3.50    1.00
1.00    3.49    1.00
1.00    3.51    1.00
1.00    3.50    1.00
1.00    3.51    1.00
1.00    3.49    1.00
1.00    3.50    1.00
1.00    3.49    1.00
1.00    3.51    1.00
1.00    3.50    1.00
1.00    3.51    1.00
1.00    3.49    1.00
1.00    3.50    1.00
1.00    3.49    1.00
1.00    3.55    0.00
1.00    3.55    0.00
1.00    3.50    0.00
1.00    3.50    0.00
1.00    3.50    0.00
Var 1 treatment modality (0 = placebo, 1 = sleeping pill), Var 2 treatment outcome (hours of sleep), Var 3 gender (0 = female, 1 = male)
We will start by drawing the mean results of the treatment modalities with their error bars. Command: Graphs….Legacy dialogs….Error Bars….mark Summaries for groups of cases….Define….Variable: outcome….Category Axis: treatment….Confidence Interval for Means: 95 %….OK.
The above graph shows that treatment 1 tended to perform a bit better than treatment 0, but, given the confidence intervals (95 % CIs), the difference is not statistically significant. Females tend to sleep better than males, and we suspect that confounding may be in the data: the females may have received the placebo more often than the males. We, therefore, draw a graph with the mean treatment results in the two genders. Command: Graphs….Legacy dialogs….Error Bars….mark Summaries for groups of cases….Define….Variable: outcome….Category Axis: gender….Confidence Interval for Means: 95 %….OK.
The graph shows that the females tend to perform better than the males. However, again, the confidence intervals are wider than compatible with a statistically significant difference. We subsequently perform simple linear regressions with, respectively, treatment modality and gender as predictors. Command: Analyze….Regression….Linear….Dependent: outcome….Independent: treatment modality….OK.

Coefficientsa
Model 1        Unstandardized coefficients    Standardized coefficients
               B        Std. error            Beta      t          Sig.
(Constant)     3.495    0.004                           918.743    0.000
treatment      0.010    0.005                 0.302     1.952      0.058
a Dependent Variable: outcome
The above table shows that treatment modality is not a significant predictor of the outcome. We also use linear regression with gender as predictor and the same outcome variable. Command: Analyze….Regression….Linear….Dependent: outcome….Independent: gender….OK.

Coefficientsa
Model 1        Unstandardized coefficients    Standardized coefficients
               B         Std. error           Beta       t          Sig.
(Constant)     3.505     0.004                           921.504    0.000
gender         -0.010    0.005                -0.302     -1.952     0.058
a Dependent Variable: outcome
Also, gender is not a significant predictor of the outcome, hours of sleep. Confounding between treatment modality and gender is suspected. We perform a multiple linear regression with both treatment modality and gender as independent variables. Command: Analyze….Regression….Linear….Dependent: outcome….Independent: treatment modality, gender….OK.

Coefficientsa
Model 1        Unstandardized coefficients    Standardized coefficients
               B         Std. error           Beta       t           Sig.
(Constant)     3.500     0.003                           1005.280    0.000
gender         -0.021    0.005                -0.604     -3.990      0.000
treatment      0.021     0.005                0.604      3.990       0.000
a Dependent Variable: outcome
The above table shows that, indeed, both gender and treatment are very significant predictors of the outcome after adjustment for one another.
The above figure tries to explain what is going on. If one gender receives few treatments 0 and the other gender receives few treatments 1, then an overall regression line will be close to horizontal, giving rise to the erroneous conclusion that no difference in efficacy exists between the treatment modalities. This phenomenon is called confounding, and it can be dealt with in several ways: (1) subclassification (Statistics on a Pocket Calculator, Part 1, Chap. 17, Springer, New York, 2011, from the same authors), (2) propensity scores and propensity score matching (Statistics on a Pocket Calculator, Part 2, Chap. 5, Springer, New York, 2012, from the same authors), and (3) multiple linear regression as performed in this chapter. If there are many confounders, like the traditional risk factors for cardiovascular disease taken together, then multiple linear regression becomes impractical, because with many confounders this method loses power. Instead, propensity scores of the confounders can be constructed, one propensity score per patient, and the individual propensity scores can be used as covariate in a multiple regression model (Statistics on a Pocket Calculator, Part 2, Chap. 5, Springer, New York, 2012, from the same authors).
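The confounding mechanism can be verified outside SPSS with the same two regressions; a minimal sketch in Python (pandas and statsmodels assumed; data transcribed from the file above, so the p-values should match the tables):

import pandas as pd
import statsmodels.formula.api as smf

# the 40-patient data file from this chapter
hours = [3.49, 3.51, 3.50, 3.51, 3.49, 3.50, 3.51, 3.49, 3.50, 3.49,
         3.51, 3.50, 3.49, 3.51, 3.50, 3.45, 3.45, 3.50, 3.50, 3.49,
         3.51, 3.50, 3.49, 3.51, 3.50, 3.51, 3.49, 3.50, 3.49, 3.51,
         3.50, 3.51, 3.49, 3.50, 3.49, 3.55, 3.55, 3.50, 3.50, 3.50]
df = pd.DataFrame({"outcome": hours,
                   "treatment": [0] * 20 + [1] * 20,
                   "gender": [0] * 15 + [1] * 20 + [0] * 5})
# unadjusted: treatment is not significant (p should be about 0.058)
print(smf.ols("outcome ~ treatment", data=df).fit().pvalues["treatment"])
# adjusted for the confounder gender: treatment becomes very significant
print(smf.ols("outcome ~ treatment + gender", data=df).fit().pvalues["treatment"])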
Conclusion If in a parallel-group trial the patient characteristics are equally distributed between the two treatment groups, then any difference in outcome can be attributed to the different effects of the treatments. If not, however, we have a problem: the difference between the treatment groups may be due not only to the treatments given but also to differences in characteristics between the two treatment groups. The latter differences are called confounders or confounding variables. Assessment for confounding is explained.
Chapter 12
Interaction, Random Effect Analysis of Variance (40 Patients)
Primary scientific question: is there interaction between the effects of gender and treatment on the treatment outcome? Interaction is different from confounding. In a trial with interaction effects the parallel groups have similar characteristics. However, there are subsets of patients that have an unusually high or low response.
The above figure shows the essence of interaction: the males perform better than the females with the new medicine, with the control treatment the opposite (or no difference between males and females) is true.
Example
A parallel-group study assesses verapamil versus metoprolol for the treatment of paroxysmal atrial tachycardias. The numbers of episodes of paroxysmal atrial tachycardias per patient are the outcome variable. An overview of the individual results is given underneath.

          Verapamil                                  Metoprolol                                 Total
Males     52 48 43 50 43 44 46 46 43 49 (sum 464)    28 35 34 32 34 27 31 27 29 25 (sum 302)    766
Females   38 42 42 35 33 38 39 34 33 34 (sum 368)    43 34 33 42 41 37 37 40 36 35 (sum 378)    746
Total     832                                        680
Overall, metoprolol seems to perform better. However, this is true only for one subgroup (males). The presence of interaction between gender and treatment modality can be assessed in several ways: (1) t-tests (see Chap. 18, Statistics on a Pocket Calculator, Springer, New York, 2011, from the same authors), (2) analysis of variance, and (3) regression analysis. The data file is given underneath.

Var 1    Var 2   Var 3   Var 4
52.00    0.00    0.00    0.00
48.00    0.00    0.00    0.00
43.00    0.00    0.00    0.00
50.00    0.00    0.00    0.00
43.00    0.00    0.00    0.00
44.00    0.00    0.00    0.00
46.00    0.00    0.00    0.00
46.00    0.00    0.00    0.00
43.00    0.00    0.00    0.00
49.00    0.00    0.00    0.00
28.00    1.00    0.00    0.00
35.00    1.00    0.00    0.00
34.00    1.00    0.00    0.00
32.00    1.00    0.00    0.00
34.00    1.00    0.00    0.00
27.00    1.00    0.00    0.00
31.00    1.00    0.00    0.00
27.00    1.00    0.00    0.00
29.00    1.00    0.00    0.00
25.00    1.00    0.00    0.00
38.00    0.00    1.00    0.00
42.00    0.00    1.00    0.00
42.00    0.00    1.00    0.00
35.00    0.00    1.00    0.00
33.00    0.00    1.00    0.00
38.00    0.00    1.00    0.00
39.00    0.00    1.00    0.00
34.00    0.00    1.00    0.00
33.00    0.00    1.00    0.00
34.00    0.00    1.00    0.00
43.00    1.00    1.00    1.00
34.00    1.00    1.00    1.00
33.00    1.00    1.00    1.00
42.00    1.00    1.00    1.00
41.00    1.00    1.00    1.00
37.00    1.00    1.00    1.00
37.00    1.00    1.00    1.00
40.00    1.00    1.00    1.00
36.00    1.00    1.00    1.00
35.00    1.00    1.00    1.00
Var 1 number of episodes of paroxysmal atrial fibrillation (PAF), Var 2 treatment modality (0 = verapamil, 1 = metoprolol), Var 3 gender (0 = male, 1 = female), Var 4 interaction variable = treatment modality * gender (* = sign of multiplication)
We will first use analysis of variance. Command: Analyze….General linear model….Univariate analysis of variance….Dependent variable: episodes PAF….Fixed factors: treatment, gender….OK.

Tests of between-subjects effects
Dependent variable: episodes PAF
Source               Type III sum of squares   df   Mean square   F          Sig.
Corrected model      1327.200a                 3    442.400       37.633     0.000
Intercept            57153.600                 1    57153.600     4861.837   0.000
VAR00002             577.600                   1    577.600       49.134     0.000
VAR00003             10.000                    1    10.000        0.851      0.363
VAR00002*VAR00003    739.600                   1    739.600       62.915     0.000
Error                423.200                   36   11.756
Total                58904.000                 40
Corrected total      1750.400                  39
a R Squared = 0.758 (Adjusted R Squared = 0.738)
The above table shows that there is a significant interaction between gender and treatment at p = 0.0001 (VAR00002*VAR00003, where * is the sign of multiplication). In spite of this, the treatment modality is a significant predictor of the outcome. In situations like this it is often better to use a so-called random effect model: the sum of squares treatment is then compared to the sum of squares interaction instead of the sum of squares error. This is a good idea, since the interaction was unexpected and is a major contributor to the error, otherwise called spread, in the data. This means that we have much more spread in the data than expected, and we will lose a lot of power to prove whether or not the treatment is a significant predictor of the outcome, episodes of PAF. Random effect analysis of variance requires the following commands: Command: Analyze….General linear model….Univariate analysis of variance….Dependent Variable: episodes of PAF….Fixed Factors: treatment….Random Factors: gender….OK. The underneath table shows the results. As expected, the interaction effect remained statistically significant, but the treatment effect has now lost its significance. This is realistic, since in a trial with major interactions an overall treatment effect analysis is not relevant anymore. A better approach is a separate analysis of the treatment effect in the subgroups that caused the interaction.
Tests of between-subjects effects
Dependent variable: VAR00001
Source                            Type III sum of squares   df   Mean square   F        Sig.
Intercept           Hypothesis    57153.600                 1    57153.600     98.950   0.064
                    Error         577.600                   1    577.600a
VAR00003            Hypothesis    10.000                    1    10.000        0.014    0.926
                    Error         739.600                   1    739.600b
VAR00002            Hypothesis    577.600                   1    577.600       0.781    0.539
                    Error         739.600                   1    739.600b
VAR00003*VAR00002   Hypothesis    739.600                   1    739.600       62.915   0.000
                    Error         423.200                   36   11.756c
a MS(VAR00002)
b MS(VAR00003*VAR00002)
c MS(Error)
As a contrast test, we will also use regression analysis for these data. For that purpose we first have to add an interaction variable: interaction variable = treatment modality * gender (* = sign of multiplication). The previously given data file shows the calculated interaction variable in the 4th column. The interaction variable is then used together with treatment modality and gender as independent variables in a multiple linear regression model. Command: Analyze….Regression….Linear….Dependent: episodes PAF….Independent: treatment modality, gender, interaction….OK.

Coefficientsa
Model 1         Unstandardized coefficients    Standardized coefficients
                B          Std. error          Beta       t          Sig.
(Constant)      46.400     1.084                          42.795     0.000
treatment       -16.200    1.533               -1.224     -10.565    0.000
gender          -9.600     1.533               -0.726     -6.261     0.000
interaction     17.200     2.168               1.126      7.932      0.000
a Dependent Variable: outcome
The above table shows the results of the multiple linear regression. As with the fixed effect analysis of variance, both treatment modality and interaction are statistically significant. The t-value of the interaction in the regression = 7.932. The F-value of the interaction in the fixed effect analysis of variance = 62.915, which equals 7.932². Obviously, the two approaches make use of a very similar arithmetic. Unfortunately, SPSS has limited possibilities for random effect regression.
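The fixed effect analysis can also be sketched outside SPSS (pandas and statsmodels assumed; because the design is balanced, with ten patients per cell, this ANOVA should reproduce the fixed effect table above, including the interaction F of about 62.9):

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# episodes of paroxysmal atrial tachycardia from the data file above
paf = [52, 48, 43, 50, 43, 44, 46, 46, 43, 49,   # verapamil, males
       28, 35, 34, 32, 34, 27, 31, 27, 29, 25,   # metoprolol, males
       38, 42, 42, 35, 33, 38, 39, 34, 33, 34,   # verapamil, females
       43, 34, 33, 42, 41, 37, 37, 40, 36, 35]   # metoprolol, females
df = pd.DataFrame({"paf": paf,
                   "treat": ([0] * 10 + [1] * 10) * 2,
                   "gender": [0] * 20 + [1] * 20})
fit = smf.ols("paf ~ treat * gender", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))   # two-way ANOVA with interaction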
Conclusion Interaction is different from confounding (Chap. 11). In a trial with interaction effects the parallel-group characteristics are equally distributed between the groups. However, there are subsets of patients that have an unusually high or low response to one of the treatments. The assessments are reviewed.
Chapter 13
Log Rank Testing (60 Patients)
Primary scientific question: does the log rank test provide a significant difference in survival between the two treatment groups in a parallel-group study? Log rank testing is more general than Cox regression for survival analysis, and does not require the Kaplan–Meier patterns to be exponential.
Example
A data file is given below.

Var 1    Var 2   Var 3   Var 4   Var 5
1.00     1       0       65.00   0.00
1.00     1       0       66.00   0.00
2.00     1       0       73.00   0.00
2.00     1       0       54.00   0.00
2.00     1       0       46.00   0.00
2.00     1       0       37.00   0.00
2.00     1       0       54.00   0.00
2.00     1       0       66.00   0.00
2.00     1       0       44.00   0.00
3.00     0       0       62.00   0.00
4.00     1       0       57.00   0.00
5.00     1       0       43.00   0.00
6.00     1       0       85.00   0.00
6.00     1       0       46.00   0.00
7.00     1       0       76.00   0.00
9.00     1       0       76.00   0.00
9.00     1       0       65.00   0.00
11.00    1       0       54.00   0.00
12.00    1       0       34.00   0.00
14.00    1       0       45.00   0.00
16.00    1       0       56.00   1.00
17.00    1       0       67.00   1.00
18.00    1       0       86.00   1.00
30.00    1       0       75.00   1.00
30.00    1       0       65.00   1.00
30.00    1       0       54.00   1.00
30.00    1       0       46.00   1.00
30.00    1       0       54.00   1.00
30.00    1       0       75.00   1.00
30.00    1       0       56.00   1.00
30.00    1       1       56.00   1.00
30.00    1       1       53.00   1.00
30.00    1       1       34.00   1.00
30.00    1       1       35.00   1.00
30.00    1       1       37.00   1.00
30.00    1       1       65.00   1.00
30.00    1       1       45.00   1.00
30.00    1       1       66.00   1.00
30.00    1       1       55.00   1.00
30.00    1       1       88.00   1.00
29.00    1       1       67.00   1.00
29.00    1       1       56.00   1.00
29.00    1       1       54.00   1.00
28.00    0       1       57.00   1.00
28.00    1       1       57.00   1.00
28.00    1       1       76.00   1.00
27.00    1       1       67.00   1.00
26.00    1       1       66.00   1.00
24.00    1       1       56.00   1.00
23.00    1       1       66.00   1.00
22.00    1       1       84.00   1.00
22.00    0       1       56.00   1.00
21.00    1       1       46.00   1.00
20.00    1       1       45.00   1.00
19.00    1       1       76.00   1.00
19.00    1       1       65.00   1.00
18.00    1       1       45.00   1.00
17.00    1       1       76.00   1.00
16.00    1       1       56.00   1.00
16.00    1       1       45.00   1.00
Var 1 months of follow-up, Var 2 event (lost to follow-up or completed the study = 0, death = event = 1), Var 3 treatment modality (0 = placebo, 1 = drug), Var 4 age, Var 5 gender
Log Rank Test
Command: Analyze….Survival….Kaplan–Meier….Time: follow months….Status: var 2….Define event (1)….Factor: treat….Compare factor levels….mark: Log rank….Continue….Plots….mark: Hazard….mark: Survival….Continue….OK.

Overall comparisons
                         Chi-Square   df   Sig.
Log rank (Mantel-Cox)    9.126        1    0.003
Test of equality of survival distributions for the different levels of treat
The log rank test is statistically significant at p = 0.003 (Fig. 12.1). In Chap. 15, first part of this title, Cox regression of the same data was performed and provided a p-value of 0.02. Obviously, the log rank test fits the data better than Cox regression does. The figure shows that with treatment 1 few patients died in the first months, while with treatment 2 the patients stopped dying after 18 months. These patterns are not very exponential, and, therefore, do not fit the exponential Cox model very well. A disadvantage of the log rank test is that it cannot easily be adjusted for relevant prognostic factors like age and gender; Cox regression has to be used for that purpose anyway.
Fig. 12.1 On the y-axis % of survivors, on the x-axis the time (months). Treatment 1 (indicated in the graph as 0) seems to cause fewer survivors than does treatment 2 (indicated in the graph as 1)
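Outside SPSS, the log rank test can be sketched with the Python lifelines package (an assumption: lifelines is installed; the durations and events are transcribed from the data file above, so the output should be close to the table):

from lifelines.statistics import logrank_test

# durations (months) and events (1 = death) per treatment group
t0 = [1,1,2,2,2,2,2,2,2,3,4,5,6,6,7,9,9,11,12,14,16,17,18,30,30,30,30,30,30,30]
e0 = [1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]
t1 = [30,30,30,30,30,30,30,30,30,30,29,29,29,28,28,28,27,26,24,23,22,22,21,20,19,19,18,17,16,16]
e1 = [1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1]

res = logrank_test(t0, t1, event_observed_A=e0, event_observed_B=e1)
print(res.test_statistic, res.p_value)   # should be close to 9.1 and 0.003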
Conclusion Log rank testing is generally more appropriate for testing survival data than Cox regression. The log rank test calculates a summary chi-square p-value and is more sensitive than Cox regression. The advantage of Cox regression is that it can adjust for relevant prognostic factors, while the log rank test cannot. Yet the log rank test is here the more appropriate method, because it does not require the Kaplan–Meier patterns to be exponential. The above curves are not exponential at all, and so the Cox model does not fit the data very well.
Chapter 14
Segmented Cox Regression (60 Patients)
Primary question: is frailty a time-dependently changing variable in patients admitted to hospital for exacerbation of chronic obstructive pulmonary disease (COPD)? Cox regression assesses time to events, like death or cure, and the effects of predictors like comorbidity and frailty. If a predictor is not significant, then time-dependent Cox regression may be a relevant approach: it assesses whether the predictor interacts with time. Time-dependent Cox regression has been explained in Chap. 16 of the first part of this title. The current chapter explains segmented time-dependent Cox regression. This method goes one step further and assesses whether the interaction with time is different at different periods of the study.
Example
A simulated data file of 60 patients admitted to hospital for exacerbation of COPD is given underneath. All of the patients are assessed for frailty scores once a week. The frailty scores run from 0 to 100 (not frail to very frail).

Var 1    Var 2   Var 3   Var 4   Var 5   Var 6
1.00     1.00    1.00    15.00
1.00     1.00    1.00    18.00
1.00     1.00    1.00    16.00
1.00     1.00    1.00    17.00
2.00     1.00    1.00    15.00
2.00     1.00    1.00    20.00
2.00     1.00    1.00    16.00
2.00     1.00    1.00    15.00
3.00     1.00    0.00    18.00
3.00     1.00    0.00    15.00
3.00     1.00    1.00    16.00
4.00     1.00    1.00    15.00
4.00     1.00    1.00    18.00
5.00     1.00    1.00    19.00
5.00     1.00    1.00    19.00
5.00     1.00    1.00    19.00
6.00     1.00    1.00    18.00
6.00     1.00    1.00    17.00
6.00     1.00    0.00    19.00
7.00     1.00    0.00    16.00
8.00     1.00    0.00    60.00   15.00
8.00     1.00    0.00    69.00   16.00
8.00     1.00    0.00    67.00   17.00
9.00     1.00    1.00    60.00   19.00
9.00     1.00    1.00    86.00   24.00
10.00    1.00    1.00    87.00   16.00
10.00    1.00    0.00    75.00   10.00
10.00    1.00    0.00    76.00   20.00
10.00    1.00    0.00    67.00   32.00
11.00    1.00    1.00    56.00   24.00
11.00    1.00    1.00    78.00   25.00
12.00    1.00    1.00    58.00   26.00
12.00    1.00    0.00    59.00   25.00
13.00    1.00    0.00    77.00   20.00
13.00    1.00    1.00    66.00   16.00
13.00    1.00    1.00    65.00   18.00
13.00    1.00    1.00    68.00   10.00
14.00    1.00    1.00    85.00   16.00
14.00    1.00    0.00    65.00   23.00
14.00    1.00    0.00    65.00   20.00
15.00    1.00    0.00    54.00   60.00   14.00
16.00    1.00    0.00    43.00   68.00   15.00
17.00    1.00    0.00    45.00   67.00   12.00
17.00    1.00    0.00    34.00   75.00   14.00
17.00    1.00    0.00    34.00   56.00   24.00
18.00    1.00    0.00    42.00   68.00   21.00
18.00    1.00    0.00    27.00   79.00   17.00
19.00    1.00    1.00    57.00   50.00   18.00
19.00    1.00    1.00    54.00   60.00   17.00
19.00    1.00    1.00    73.00   79.00   16.00
20.00    1.00    1.00    86.00   57.00   19.00
21.00    1.00    0.00    64.00   78.00   18.00
21.00    1.00    0.00    54.00   79.00   20.00
21.00    1.00    0.00    65.00   56.00   21.00
21.00    1.00    0.00    54.00   75.00   22.00
23.00    1.00    0.00    35.00   74.00   21.00
23.00    1.00    0.00    34.00   65.00   15.00
23.00    1.00    0.00    23.00   84.00   17.00
24.00    1.00    0.00    76.00   65.00   18.00
24.00    1.00    0.00    66.00   75.00   16.00
Var 1 day of discharge from hospital, Var 2 cured or lost from observation (1 = cured), Var 3 gender, Var 4 frailty index first week, Var 5 frailty index second week, Var 6 frailty index third week
The missing values in var 5 and var 6 are those from patients already discharged from hospital. We will first perform a simple time-dependent Cox regression. Command: Analyze….Survival….Cox w/Time-Dep Cov….Compute Time-Dep Cov….Time (T_): transfer to box Expression for T_Cov….add the sign *….add the frailty variable third week….Model….Time: day of discharge….Status: cured or lost….Define: cured = 1….Continue….T_Cov: transfer to Covariates….OK.

Variables in the equation
          B        SE       Wald     df   Sig.    Exp(B)
T_COV_    0.000    0.001    0.243    1    0.622   1.000
The above table shows the result: frailty is not a significant predictor of the day of discharge. However, patients are generally not discharged from hospital until they are non-frail at a reasonable level, and this level may be reached at different periods of time. Therefore, a segmented time-dependent Cox regression may be more adequate for these data. Command: Survival….Cox w/Time-Dep Cov….Compute Time-Dependent Covariate….Expression for T_COV_: enter (T_ >= 1 & T_ < 11) * VAR00004 + (T_ >= 11 & T_ < 21) * VAR00005 + (T_ >= 21 & T_ < 31) * VAR00006….Model….Time: enter Var 1….Status: enter Var 2 (Define events: enter 1)….Covariates: enter T_COV_….OK.

Variables in the equation
          B         SE       Wald      df   Sig.    Exp(B)
T_COV_    -0.056    0.009    38.317    1    0.000   0.945
The above table shows that the independent variable, the segmented frailty variable, is, indeed, a very significant predictor of the day of discharge. We will, subsequently, perform a multiple segmented time-dependent Cox regression with treatment modality as second predictor variable. Command: the same commands as above, except for Covariates: enter T_COV_ and treatment….OK.

Variables in the equation
            B         SE       Wald      df   Sig.    Exp(B)
T_COV_      -0.060    0.009    41.216    1    0.000   0.942
VAR00003    0.354     0.096    13.668    1    0.000   1.424
The above table shows that both frailty and treatment are very significant predictors of the day of discharge, with hazard ratios of 0.942 and 1.424: the new treatment is about 1.4 times better, and the patients do about 0.9 times worse per frailty score point. If treatment is used as a single predictor, unadjusted for frailty, then it is not a significant factor. Command: Analyze….Survival….Cox regression….Time: day of discharge….Status: cured or lost….Define: cured = 1….Covariates: treatment….OK.

Variables in the equation
            B        SE       Wald     df   Sig.    Exp(B)
VAR00003    0.131    0.072    3.281    1    0.070   1.140
The p-value of treatment has risen from p = 0.0001 to 0.070. Probably, frailty has a confounding effect on treatment efficacy: only after adjustment for it does the treatment effect, all of a sudden, become a very significant factor.
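Outside SPSS, segmented time-dependent predictors are usually handled by reshaping the data into a long format, with one row per patient-interval carrying the frailty score that applied during that interval. A sketch with the Python lifelines package (assumptions: lifelines is installed, and the data below are simulated toy values, not the file above, generated only to make the sketch runnable):

import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(2)
rows = []
for pid in range(1, 31):
    frailty = rng.uniform(10, 90, size=3)    # weekly frailty scores
    discharge = int(rng.integers(3, 25))     # hypothetical day of discharge
    for wk, (s, e) in enumerate([(0, 10), (10, 20), (20, 30)]):
        if discharge <= s:
            break
        stop = min(e, discharge)
        rows.append((pid, s, stop, frailty[wk], int(stop == discharge)))
long = pd.DataFrame(rows, columns=["id", "start", "stop", "frailty", "event"])

ctv = CoxTimeVaryingFitter()
ctv.fit(long, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()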
Conclusion Cox regression assesses time to events, like death or cure, and the effects on it of predictors like treatment efficacy, comorbidity, and frailty. If a predictor is not significant, then time-dependent Cox regression may be a relevant approach: it assesses whether the time-dependent predictor interacts with time. Time-dependent Cox regression has been explained in Chap. 16 of the first part of this title. The current chapter explains segmented time-dependent Cox regression, which goes one step further and assesses whether the interaction with time is different at different periods of the study. It is shown that a treatment variable may be confounded with time-dependent factors, and that after adjustment for them a significant treatment efficacy can be demonstrated.
Chapter 15
Curvilinear Estimation (20 Patients)
Primary question: If the relationship between quantity of care and quality of care is not linear, does curvilinear regression help find the best fit curve?
Example
The quantity of care, estimated as the number of daily interventions like endoscopies and small operations per doctor, is tested against quality of care scores. The data file is underneath.

Var 1    Var 2
19.00     2.00
20.00     3.00
23.00     4.00
24.00     5.00
26.00     6.00
27.00     7.00
28.00     8.00
29.00     9.00
29.00    10.00
29.00    11.00
28.00    12.00
27.00    13.00
27.00    14.00
26.00    15.00
25.00    16.00
24.00    17.00
23.00    18.00
22.00    19.00
22.00    20.00
21.00    21.00
21.00    22.00
Var 1 quality of care scores, Var 2 quantity of care (numbers of daily interventions per doctor)
First, we will make a graph of the data. Command: Analyze….Graphs….Chart builder….click: Scatter/Dot….Click quality of care and drag to the Y-Axis….Click Intervention per doctor and drag to the X-Axis….OK.
The above graph shows the scattergram of the data. A non-linear relationship is suggested. The curvilinear regression option in SPSS helps us identify the best fit model. Command: Analyze….Regression….Curve Estimation….mark: Linear, Logarithmic, Inverse, Quadratic, Cubic, Power, Exponential….mark: Display ANOVA Table….OK.
The above graph is produced by the software program. It looks as though the quadratic and cubic models produce the best fit. All of the curves are tested for goodness of fit using analysis of variance (ANOVA). The underneath tables show the calculated B-values (regression coefficients). The larger the absolute B-values, the better the fit provided by the model. The tables also test whether the absolute B-values are significantly larger than 0: 0 means no relationship at all, and significantly larger than 0 means that the data are closer to the curve than could happen by chance. The best fit linear, logarithmic, and inverse models are not statistically significant. The best fit quadratic and cubic models are very significant. The power and exponential models are, again, not statistically significant.

(1) Linear:
Coefficients
                        Unstandardized coefficients    Standardized coefficients
                        B         Std. error           Beta        t        Sig.
interventions/doctor    -0.069    0.116                -0.135      -0.594   0.559
(Constant)              25.588    1.556                            16.440   0.000
(2) Logarithmic:
Coefficients
                            Unstandardized coefficients    Standardized coefficients
                            B         Std. error           Beta        t        Sig.
ln(interventions/doctor)    0.726     1.061                0.155       0.684    0.502
(Constant)                  23.086    2.548                            9.061    0.000
(3) Inverse:
Coefficients
                          Unstandardized coefficients    Standardized coefficients
                          B          Std. error          Beta        t         Sig.
1/interventions/doctor    -11.448    5.850               -0.410      -1.957    0.065
(Constant)                26.229     0.989                           26.512    0.000

(4) Quadratic:
Coefficients
                             Unstandardized coefficients    Standardized coefficients
                             B         Std. error           Beta        t          Sig.
interventions/doctor         2.017     0.200                3.960       10.081     0.000
interventions/doctor ** 2    -0.087    0.008                -4.197      -10.686    0.000
(Constant)                   16.259    1.054                            15.430     0.000

(5) Cubic:
Coefficients
                             Unstandardized coefficients    Standardized coefficients
                             B         Std. error           Beta         t          Sig.
interventions/doctor         4.195     0.258                8.234        16.234     0.000
interventions/doctor ** 2    -0.301    0.024                -14.534      -12.437    0.000
interventions/doctor ** 3    0.006     0.001                6.247        8.940      0.000
(Constant)                   10.679    0.772                             13.836     0.000

(6) Power (the dependent variable is ln(qual care score)):
Coefficients
                            Unstandardized coefficients    Standardized coefficients
                            B         Std. error           Beta        t        Sig.
ln(interventions/doctor)    0.035     0.044                0.180       0.797    0.435
(Constant)                  22.667    2.379                            9.528    0.000

(7) Exponential (the dependent variable is ln(qual care score)):
Coefficients
                        Unstandardized coefficients    Standardized coefficients
                        B         Std. error           Beta        t        Sig.
interventions/doctor    -0.002    0.005                -0.114      -0.499   0.624
(Constant)              25.281    1.632                            15.489   0.000
The largest test statistics are given by (4), the quadratic model, and (5), the cubic model. Now we can construct regression equations for these two best fit curves using the data from the ANOVA tables:

(4)  y = a + bx + cx² = 16.259 + 2.017x - 0.087x², roughly 16.3 + 2.0x - 0.09x²
(5)  y = a + bx + cx² + dx³ = 10.679 + 4.195x - 0.301x² + 0.006x³, roughly 10.7 + 4.2x - 0.3x² + 0.006x³
The equations can be used to make a prediction about the best fit y-value from a given x-value. E.g., with x = 10 you might expect a y-value of
(4) y = 16.3 + 20 - 9 = 27.3 according to the quadratic model,
(5) y = 10.7 + 42 - 30 + 6 = 28.7 according to the cubic model.
Alternatively, predictions about the best fit y-values from given x-values can also be fairly accurately extrapolated from the curves as drawn.
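The quadratic and cubic fits can be sketched outside SPSS with ordinary least squares polynomial fitting (numpy assumed; using the data file of this chapter, the coefficients should come out close to the SPSS tables above):

import numpy as np

# x = interventions/doctor, y = quality of care score, from this chapter
x = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22])
y = np.array([19, 20, 23, 24, 26, 27, 28, 29, 29, 29, 28, 27, 27, 26, 25, 24, 23, 22, 22, 21, 21])

quad = np.poly1d(np.polyfit(x, y, 2))    # quadratic least squares fit
cubic = np.poly1d(np.polyfit(x, y, 3))   # cubic least squares fit
print(quad)             # approx -0.087 x^2 + 2.017 x + 16.26
print(quad(10))         # predicted quality of care at x = 10, about 27.3
print(cubic(10))        # about 28.7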
Conclusion The relationship between quantity of care and quality of care is curvilinear, and curvilinear regression has helped find the best fit curve. If the standard curvilinear regression models do not fit the data, there are other possibilities, like logit and probit transformations, Box-Cox transformations, ACE (alternating conditional expectations) and AVAS (additive and variance stabilization) packages, Loess (locally weighted scatter plot smoothing), and spline modeling (see also Chap. 16). These methods are increasingly complex and often computationally very intensive. However, for a computer this is no problem.
Chapter 16
Loess and Spline Modeling (90 Patients)
Primary question: do Loess and spline modeling produce a better fit model for the plasma concentration-time relationship of zoledronic acid than the standard exponential model does?
Example
The data file underneath shows the plasma concentration levels at different times after intravenous administration.

Var 1    Var 2
1.10      1.00
0.90      1.00
0.80      1.00
0.78      2.00
0.55      2.00
0.65      3.00
0.48      4.00
0.45      4.00
0.32      4.00
0.30      5.00
0.25      5.00
0.10      5.00
0.45      6.00
0.40      6.00
0.23      6.00
0.30      7.00
0.30      7.00
0.05      7.00
0.37      8.00
0.23      8.00
0.08      8.00
0.20      9.00
0.02      9.00
0.15     10.00
-0.05    10.00
-0.05    10.00
0.19     11.00
0.12     11.00
0.05     11.00
0.15     12.00
0.10     12.00
0.00     13.00
-0.10    13.00
-0.23    13.00
0.15     14.00
0.25     15.00
-0.10    15.00
-0.15    15.00
0.03     16.00
0.00     16.00
0.25     17.00
0.13     17.00
0.00     17.00
0.20     18.00
0.13     18.00
0.14     18.00
0.13     19.00
0.02     19.00
0.11     20.00
0.12     20.00
0.19     21.00
0.10     21.00
0.23     22.00
0.13     22.00
0.10     22.00
0.23     23.00
0.11     23.00
-0.13    23.00
-0.02    24.00
-0.06    24.00
0.25     25.00
-0.10    25.00
0.13     26.00
-0.05    26.00
-0.15    26.00
0.14     27.00
0.13     27.00
0.03     28.00
-0.05    28.00
-0.08    28.00
0.12     29.00
0.01     29.00
-0.08    29.00
0.10     30.00
0.06     30.00
0.01     30.00
0.15     31.00
0.02     31.00
-0.02    31.00
0.25     32.00
0.08     32.00
-0.14    32.00
0.15     33.00
0.10     33.00
-0.02    33.00
0.10     34.00
-0.05    34.00
-0.10    34.00
0.12     35.00
0.10     35.00
Var 1 plasma concentration of zoledronic acid (ng/ml), Var 2 time (hours)
Usually, the relationship between the plasma concentration and time of a drug is described in the form of an exponential model. This is convenient, because it enables the calculation of pharmacokinetic parameters like plasma half-life and equations for clearance. Using the NONMEM program of the University of California, San Francisco, a non-linear mixed effect model of the data is produced (a multi-exponential model). The underneath figure of the data shows the exponential model. There is a wide spread in the data, and, so, the pharmacokinetic parameters derived from the model do not mean very much.
Spline Modeling
If the traditional models do not fit your data very well, you may use a method called spline modeling. The term spline stems from the thin flexible wooden splines formerly used by shipbuilders and car designers to produce smooth shapes. A spline model consists of 4 or 5 intervals with different cubic curves (third order polynomials, like y = a + bx³, see also Chap. 15) that have the same y-value, slope, and curvature at the junctions. Command: Graphs….Chart Builder….click Scatter/Dot….click in Simple Scatter and drag to Chart Preview….click plasma concentration and drag to the Y-Axis….click time and drag to the X-Axis….OK….double-click in GGraph….Chart Editor comes up….click Elements….click Interpolation….dialog box Properties….mark Spline….click Apply….click Edit….click Copy Chart. The underneath figure shows the best fit spline model of the above data.
Loess (Locally Weighted Scatter Plot Smoothing) Modeling
Loess modeling also works with cubic curves (third order polynomials), but, unlike spline modeling, it does not work with junctions; instead, it chooses the best fit cubic curves for each value, with outlier data given less weight. Command: Graphs….Chart Builder….click Scatter/Dot….click in Simple Scatter and drag to Chart Preview….click plasma concentration and drag to the Y-Axis….click time and drag to the X-Axis….OK….double-click in GGraph….Chart Editor comes up….click Elements….click Fit Line at Total….in dialog box Properties….mark: Loess….click: Apply….click Edit….click Copy Chart. The underneath figure shows the best fit Loess model of the above data.
Note Both spline and Loess modeling are computationally very intensive methods that do not produce simple regression equations like the ones given in Chap. 15 on curvilinear regression. They also require fairly large, densely sampled data sets in order to produce good models. For making predictions from such models direct interpolations/extrapolations from the graphs can be made, and, given the mathematical refinement of these methods, these predictions should, generally, give excellent precision.
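Outside SPSS, both techniques can be sketched in Python (assumptions: scipy and statsmodels are installed; the data below are synthetic stand-ins for the zoledronic acid file, generated only to make the sketch self-contained):

import numpy as np
from scipy.interpolate import UnivariateSpline
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(1, 35, 90))                  # sampling times (hours)
conc = np.exp(-0.15 * t) + rng.normal(0, 0.08, 90)   # synthetic decay plus noise

smooth = lowess(conc, t, frac=0.4)                   # Loess: locally weighted fit
spline = UnivariateSpline(t, conc, k=3, s=0.5)       # cubic smoothing spline
print(smooth[:3])                                    # first smoothed (t, conc) pairs
print(spline(10.0))                                  # spline prediction at t = 10 h

The frac argument of lowess, like the smoothing factor s of the spline, controls how closely the curve follows the individual data points.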
Conclusions
1. Both spline and Loess modeling are computationally intensive models that are adequate if the data plot leaves you with no idea about the relationship between the y- and x-values.
2. They do not produce simple regression equations like the ones given in Chap. 15 on curvilinear regression.
3. For making predictions from such models, direct interpolations/extrapolations from the graphs can be made, and, given the mathematical refinement of these methods, these predictions generally give excellent precision.
4. Maybe the best fit for many types of non-linear data is offered by Loess.
Chapter 17
Assessing Seasonality (24 Averages)
Primary question: do repeatedly measured CRP values in a healthy subject follow a seasonal pattern? For a proper assessment of seasonality, information from a second year of observation is needed, as well as information not only on, e.g., the months of January and July, but also on the adjacent months. Autocorrelation provides all of this information in a single test, and can thus unequivocally demonstrate seasonality.
The above graph gives a simulated seasonal pattern of C-reactive protein levels in a healthy subject. Lagcurves (dotted) are partial copies of the datacurve moved to the left as indicated by the arrows. First-row graphs: the datacurve and the lagcurve have largely simultaneous positive and negative departures from the mean, and, thus, have a strong positive correlation with one another (correlation coefficient ≈ +0.6). Second-row graphs: this lagcurve has little correlation with the datacurve anymore (correlation coefficient ≈ 0.0). Third-row graphs: this lagcurve has a strong negative correlation with the datacurve (correlation coefficient ≈ -1.0). Fourth-row graphs: this lagcurve has a strong positive correlation with the datacurve (correlation coefficient ≈ +1.0).
Example
Instead of individual values, summary measures like proportions or mean values of larger populations can also be assessed for seasonality using autocorrelation. Of course, this method does not give evidence for seasonality in individual members of the populations, but it does give evidence for seasonality in the populations at large. E.g., 24 mean monthly CRP values of a healthy population are enough to tell you with some confidence something about the spread and the presence of seasonality in these mean values.

Month   Average C-reactive protein in a group of healthy subjects (mg/l)
1       1.98
2       1.97
3       1.83
4       1.75
5       1.59
6       1.54
7       1.48
8       1.54
9       1.59
10      1.87
11      1.71
12      1.97
13      1.98
14      1.97
15      1.71
16      1.87
17      1.68
18      1.54
19      1.54
20      1.54
21      1.59
22      1.75
23      1.83
24      1.97
We will first make a graph of the data. Command: Graphs….Chart Builder….click Scatter/Dot….click mean C-reactive protein level and drag to the Y-Axis….click time and drag to the X-Axis….OK….double-click in Chart Editor….click Interpolation Line….Properties: click Straight Line.
The above graph shows that the average monthly C-reactive protein levels look inconsistent. A graph of the bi-monthly averages is therefore drawn.

2 months   Mean C-reactive protein in a group of healthy subjects (mg/l)
2          1.90
4          1.87
6          1.56
8          1.67
10         1.73
12         1.84
14         1.89
16         1.84
18         1.61
20         1.67
22         1.67
24         1.90
The above bi-monthly graph shows a rather seasonal pattern. Autocorrelation is subsequently used to test the significance of the seasonality in these data. SPSS statistical software is used. Command: Analyze….Forecasting….Autocorrelations….move the mean monthly values into the Variable Box….mark Autocorrelations….mark Partial Autocorrelations….OK.
The above graph of monthly autocorrelation coefficients with their 95 % confidence intervals is given by SPSS, and it shows that the magnitude of the monthly autocorrelations changes sinusoidally. The significant positive autocorrelation at month 13 (correlation coefficient 0.42, SE 0.14, t-value 3.0, p < 0.01) further supports seasonality, and so does the pattern of the partial autocorrelation coefficients (not shown): it gradually falls, and a partial autocorrelation coefficient of zero is observed one month after month 13. The strength of the seasonality is assessed using the magnitude of r² = 0.42² = 0.18. This would mean that the lagcurve predicts the datacurve by only 18 %, and, thus, that 82 % is unexplained. And so, the seasonality may be statistically significant, but it is pretty weak, and a lot of unexplained variability, otherwise called noise, is in these data.
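The autocorrelation itself can be sketched outside SPSS (statsmodels assumed; the 24 monthly means are those of the table above, and the autocorrelations near lag 12-13 should come out large and positive, consistent with an annual cycle):

import numpy as np
from statsmodels.tsa.stattools import acf

crp = np.array([1.98, 1.97, 1.83, 1.75, 1.59, 1.54, 1.48, 1.54, 1.59, 1.87, 1.71, 1.97,
                1.98, 1.97, 1.71, 1.87, 1.68, 1.54, 1.54, 1.54, 1.59, 1.75, 1.83, 1.97])
r = acf(crp, nlags=13)       # autocorrelation coefficients up to lag 13
print(r[12], r[13])          # the lag-12/13 coefficients suggest seasonality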
Conclusions Autocorrelation is able to demonstrate statistically significant seasonality of disease, and it does so even with imperfect data.
Chapter 18
Monte Carlo Tests and Bootstraps for Analysis of Complex Data (10, 20, 139, and 55 Patients)
Monte Carlo methods allow you to examine complex data more easily than advanced mathematics like integrals and matrix algebra. They use random numbers from your own study rather than assumed Gaussian curves. For continuous data a special type of Monte Carlo method is used, called bootstrap, which is based on random sampling from your own data with replacement. SPSS supports Monte Carlo methods for the analysis of
1. paired continuous data
2. unpaired continuous data
3. paired binary data
4. unpaired binary data.
We will use the examples from Chaps. 3, 4, 10, and 13 of the first part of this title.

Paired Continuous Data
Bootstrap analysis of paired continuous data originally analyzed with the Wilcoxon's test (10 patients, Chap. 3 part 1 of this title): the bootstrap analysis of these data produced a p-value of 0.015, slightly better than that of the Wilcoxon's test (p = 0.019). Command: all of the commands are similar to those for the Wilcoxon's test from Chap. 3 part 1 of this title. However, for the bootstrap analysis you have to additionally click "Exact" in the main dialog box. Then click Monte Carlo method, set Confidence Intervals, e.g., 99 %, and set Numbers of Samples, e.g., 10,000, and click Continue and OK (Table 18.1).
Table 18.1 Bootstrap analysis of paired continuous data originally analyzed with the Wilcoxon's test (10 patients), Chap. 3 part 1 of this title

Test statistics (b, c)                         Effect treatment 2 - effect treatment 1
Z                                              -2.346 (a)
Asymp. Sig. (2-tailed)                         0.019
Monte Carlo Sig. (2-tailed)   Sig.             0.015
                              99 % CI lower    0.012
                              99 % CI upper    0.018
Monte Carlo Sig. (1-tailed)   Sig.             0.007
                              99 % CI lower    0.005
                              99 % CI upper    0.009

(a) Based on positive ranks
(b) Wilcoxon Signed Ranks Test
(c) Based on 10,000 sampled tables with starting seed 2,000,000
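The mechanics behind the Monte Carlo column of Table 18.1 can be illustrated with a minimal Python sketch (our assumption: a random sign-flipping version of the signed-rank test; SPSS's exact algorithm may differ in its details):

import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(2_000_000)   # starting seed shown in Table 18.1

def signed_rank_monte_carlo_p(effect1, effect2, n_samples=10_000):
    # two-tailed Monte Carlo p-value: randomly flip the signs of the paired
    # differences, which is how the data behave under H0 of 'no effect'
    d = np.asarray(effect2, float) - np.asarray(effect1, float)
    d = d[d != 0]                         # drop zero differences
    ranks = rankdata(np.abs(d))
    observed = abs(np.sum(np.sign(d) * ranks))
    signs = rng.choice([-1.0, 1.0], size=(n_samples, d.size))
    sims = np.abs((signs * ranks).sum(axis=1))
    return np.mean(sims >= observed)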
Unpaired Continuous Data

Bootstrap analysis of unpaired continuous data originally analyzed with the Mann–Whitney test (20 patients), Chap. 4 of the first part of this title: the bootstrap method produced a p-value of 0.002, while the Mann–Whitney test produced a p-value of 0.005.

Command: all of the commands are similar to those for the Mann–Whitney test from Chap. 4 part 1 of this title. However, for the bootstrap analysis you have to additionally click "Exact" in the main dialog box. Then click Monte Carlo method, set Confidence Intervals, e.g., 99 %, set Numbers of Samples, e.g., 10,000, and click Continue and OK (Table 18.2).
Paired Binary Data

Monte Carlo analysis of a paired binary dataset originally analyzed with McNemar's test (139 general practitioners), Chap. 13 part 1 of this title: the Monte Carlo test produced a p-value of 0.016, while the McNemar test produced a p-value of 0.018.

Command: all of the commands are similar to those for the McNemar's test from Chap. 13 part 1 of this title. However, for the Monte Carlo analysis you have to additionally click "Exact" in the main dialog box. Then click Monte Carlo method, set Confidence Intervals, e.g., 99 %, set Numbers of Samples, e.g., 10,000, and click Continue and OK (Table 18.3).
Table 18.2 Bootstrap analysis of unpaired continuous data originally analyzed with Mann-Whitney test (20 patients)

Test statistics (c)                            Effect treatment
Mann-Whitney U                                 12.500
Wilcoxon W                                     67.500
Z                                              -2.836
Asymp. Sig. (2-tailed)                         0.005
Exact Sig. [2*(1-tailed Sig.)]                 0.003 (a)
Monte Carlo Sig. (2-tailed)   Sig.             0.002 (b)
                              99 % CI lower    0.001
                              99 % CI upper    0.003
Monte Carlo Sig. (1-tailed)   Sig.             0.001 (b)
                              99 % CI lower    0.000
                              99 % CI upper    0.002

(a) Not corrected for ties
(b) Based on 10,000 sampled tables with starting seed 2,000,000
(c) Grouping variable: group
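A comparable sketch for the unpaired case of Table 18.2 (again an assumption of ours, using label permutation of the pooled ranks rather than SPSS's internal routine):

import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(2_000_000)   # starting seed shown in Table 18.2

def rank_sum_monte_carlo_p(a, b, n_samples=10_000):
    # two-tailed Monte Carlo p-value: permute the pooled ranks over the two
    # groups and compare the rank sum of group a with its H0 expectation
    a, b = np.asarray(a, float), np.asarray(b, float)
    ranks = rankdata(np.concatenate([a, b]))
    expected = a.size * (ranks.size + 1) / 2
    observed = abs(ranks[:a.size].sum() - expected)
    count = 0
    for _ in range(n_samples):
        perm = rng.permutation(ranks)
        count += abs(perm[:a.size].sum() - expected) >= observed
    return count / n_samples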
Table 18.3 Monte Carlo analysis of paired binary dataset originally analyzed with McNemar's test (139 general practitioners), Chap. 13 part 1 of this title

Test statistics (b, c)                         Lifestyle after 1 year - lifestyle
Z                                              -2.530 (a)
Asymp. Sig. (2-tailed)                         0.011
Monte Carlo Sig. (2-tailed)   Sig.             0.016
                              95 % CI lower    0.008
                              95 % CI upper    0.024
Monte Carlo Sig. (1-tailed)   Sig.             0.010
                              95 % CI lower    0.004
                              95 % CI upper    0.016

(a) Based on negative ranks
(b) Wilcoxon Signed Ranks Test
(c) Based on 1,000 sampled tables with starting seed 2,000,000
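For the paired binary case of Table 18.3, a Monte Carlo McNemar test needs only the two discordant counts. The sketch below is ours, with n01 and n10 as hypothetical placeholders for the discordant counts of the data file; it simulates them as a binomial under the null hypothesis:

import numpy as np

rng = np.random.default_rng(2_000_000)   # starting seed shown in Table 18.3

def mcnemar_monte_carlo_p(n01, n10, n_samples=10_000):
    # two-tailed Monte Carlo McNemar test: under H0 each discordant pair
    # falls in either cell with probability 0.5 (binomial simulation)
    n = n01 + n10
    observed = abs(n01 - n10)
    sims = rng.binomial(n, 0.5, size=n_samples)   # simulated n01 under H0
    return np.mean(np.abs(2 * sims - n) >= observed)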
Table 18.4 Monte Carlo analysis of unpaired binary data originally analyzed with the Chi-square test (55 patients), Chap. 10 of the first part of this title

Test statistics                    Department   Fall out of bed
Chi-square                         4.091 (a)    0.455 (a)
df                                 1            1
Asymp. Sig.                        0.043        0.500
Monte Carlo Sig.   Sig.            0.064 (b)    0.595 (b)
                   95 % CI lower   0.057        0.582
                   95 % CI upper   0.070        0.608

(a) 0 cells (0 %) have expected frequencies less than 5. The minimum expected cell frequency is 27.5
(b) Based on 10,000 sampled tables with starting seed 926,214,481
Unpaired Binary Data

Monte Carlo analysis of unpaired binary data originally analyzed with the Chi-square test (55 patients), Chap. 10 of the first part of this title: the Monte Carlo test did not produce a significant p-value (p = 0.064), whereas the Pearson Chi-square test did (p = 0.021).

Command: Analyze….Nonparametric Tests….Chi-square….Test Variable List: enter department and fall out of bed….click "Exact"….click Monte Carlo method….set Confidence Interval, e.g., 99 %, and set Numbers of Samples, e.g., 10,000….click Continue….OK (Table 18.4).
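The one-sample (goodness-of-fit) chi-square used here can also be given a Monte Carlo p-value by simulating multinomial samples under equal expected frequencies. The sketch below is ours; the 35/20 split is inferred, not taken from the data file, because it reproduces the chi-square of 4.091 and the expected frequency of 27.5 shown in Table 18.4:

import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(926_214_481)   # starting seed from Table 18.4

def gof_monte_carlo_p(observed, n_samples=10_000):
    # Monte Carlo p-value for the one-sample chi-square test against
    # equal expected frequencies, by simulating multinomial samples
    observed = np.asarray(observed)
    n, k = observed.sum(), observed.size
    stat = chisquare(observed)[0]
    sims = rng.multinomial(n, np.full(k, 1.0 / k), size=n_samples)
    sim_stats = ((sims - n / k) ** 2 / (n / k)).sum(axis=1)
    return float(np.mean(sim_stats >= stat))

print(gof_monte_carlo_p([35, 20]))   # hypothetical 35/20 split (see text)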
Conclusion

Monte Carlo methods allow you to examine complex data more easily and more rapidly than with advanced mathematics like integrals and matrix algebra. They use random numbers drawn from your own study. For continuous data a special type of Monte Carlo method, called the bootstrap, is used; it is based on random sampling from your own data with replacement. Examples are given.
Chapter 19
Artificial Intelligence (90 Patients)
Primary scientific question: Does artificial intelligence predict the body surface from the body weight and height better than the mathematical model of Haycock does?

Body surface is a better indicator for metabolic body mass than body weight, and is applied for drug dosage schedules. Artificial intelligence, otherwise called neural network, is a data-modeling methodology that simulates the structures and operating principles of the human brain.
Example

We will use a neural network instead of the Haycock equation for predicting the body surface from the body height and weight. The data file below consists of a row for each patient with different factors (left four columns) and one dependent variable, the photometrically measured body surface (variable 5). Using SPSS 18.0 with the Neural Network add-on module, we will assess whether a neural network with two hidden layers of neurons is able to adequately predict the measured body surfaces, and whether it outperforms the mathematical model of Haycock (* = sign of multiplication):

body surface = 0.024265 * height^0.3964 * weight^0.5378

Command: Neural Networks….Multilayer Perceptron….Select Dependent Variable: the measured body surface….Factors: body height and weight….Covariates: age and gender. Various further dialog boxes must be assessed from the main dialog box:
1. the dialog box Partitioning: set the Training Sample (70) and Test Sample (20)
2. the dialog box Architecture: set the Number of Hidden Layers (2)
3. the dialog box Activation Function: click Hyperbolic Tangent
4. the dialog box Output: click Diagrams, Descriptions, Synaptic Weights
5. the dialog box Training: Maximal Time for Calculations 15 min, Maximal Number of Iterations 2000.

Then press OK, and the synaptic weights and the body surfaces predicted by the neural network are displayed. The results are in the 7th column of the data file. Also, the values obtained from the Haycock equation are included in the data file (6th column). Both the predicted values from the neural network and those from the Haycock equation are close to the measured values. When performing a linear regression with the neural network predictions as predictor, the r-square value was 0.983, while the Haycock equation produced an r-square value of 0.995. Although the Haycock equation performed slightly better, the neural network produced adequate accuracy, defined as an r-square value larger than 0.95.
Gender  Age    Weight  Height  Body surface  Predicted from  Predicted from
                               measured      equation        neural network
Var 1   Var 2  Var 3   Var 4   Var 5         Var 6           Var 7
1.00    13.00  30.50   138.50  10072.90      10770.00        10129.64
0.00    5.00   15.00   101.00  6189.00       6490.00         6307.14
0.00    0.00   2.50    51.50   1906.20       1890.00         2565.16
1.00    11.00  30.00   141.00  10290.60      10750.00        10598.32
1.00    15.00  40.50   154.00  13221.60      13080.00        13688.06
0.00    11.00  27.00   136.00  9654.50       10001.00        9682.47
0.00    5.00   15.00   106.00  6768.20       6610.00         6758.45
1.00    5.00   15.00   103.00  6194.10       6540.00         6533.28
1.00    3.00   13.50   96.00   5830.20       6010.00         6096.53
0.00    13.00  36.00   150.00  11759.00      12150.00        11788.01
0.00    3.00   12.00   92.00   5299.40       5540.00         5350.63
1.00    0.00   2.50    51.00   2094.50       1890.00         2342.85
0.00    7.00   19.00   121.00  7490.80       7910.00         7815.05
1.00    13.00  28.00   130.50  9521.70       10040.00        9505.63
1.00    0.00   3.00    54.00   2446.20       2130.00         2696.17
0.00    0.00   3.00    51.00   1632.50       2080.00         2345.39
0.00    7.00   21.00   123.00  7958.80       8400.00         7207.74
1.00    11.00  31.00   139.00  10580.80      10880.00        8705.10
1.00    7.00   24.50   122.50  8756.10       9120.00         7978.52
1.00    11.00  26.00   133.00  9573.00       9720.00         9641.04
0.00    9.00   24.50   130.00  9028.00       9330.00         9003.97
1.00    9.00   25.00   124.00  8854.50       9260.00         8804.45
1.00    0.00   2.25    50.50   1928.40       1780.00         2655.69
0.00    11.00  27.00   129.00  9203.10       9800.00         9982.77
0.00    0.00   2.25    53.00   2200.20       1810.00         2582.61
0.00    5.00   16.00   105.00  6785.10       6820.00         7017.29
0.00    9.00   30.00   133.00  10120.80      10500.00        9762.62
0.00    13.00  34.00   148.00  11397.30      11720.00        12063.78
1.00    3.00   16.00   99.00   6410.60       6660.00         6370.21
1.00    3.00   11.00   92.00   5283.30       5290.00         5372.90
0.00    9.00   23.00   126.00  8693.50       8910.00         8450.32
1.00    13.00  30.00   138.00  9626.10       10660.00        11196.58
1.00    9.00   29.00   138.00  10178.70      10460.00        10445.87
1.00    1.00   8.00    76.00   4134.50       4130.00         3952.50
0.00    15.00  42.00   165.00  13019.50      13710.00        13056.80
1.00    15.00  40.00   151.00  12297.10      12890.00        12094.26
1.00    1.00   9.00    80.00   4078.40       4490.00         4520.18
1.00    7.00   22.00   123.00  8651.10       8620.00         8423.78
0.00    1.00   9.50    77.00   4246.10       4560.00         3750.54
1.00    7.00   25.00   125.00  8754.40       9290.00         8398.58
1.00    13.00  36.00   143.00  11282.40      11920.00        11104.75
1.00    3.00   15.00   94.00   6101.60       6300.00         6210.85
0.00    0.00   3.00    51.00   1850.30       2080.00         2345.39
0.00    1.00   9.00    74.00   3358.50       4360.00         3788.70
0.00    1.00   7.50    73.00   3809.70       3930.00         3800.02
0.00    15.00  43.00   152.00  12998.70      13440.00        13353.48
0.00    13.00  27.50   139.00  9569.10       10200.00        9395.76
0.00    3.00   12.00   91.00   5358.40       5520.00         6090.37
0.00    15.00  40.50   153.00  12627.40      13050.00        12622.94
1.00    5.00   15.00   100.00  6364.50       6460.00         6269.19
1.00    1.00   9.00    80.00   4380.80       4490.00         4520.18
1.00    5.00   16.50   112.00  7256.40       7110.00         7430.72
0.00    3.00   12.50   91.00   5291.50       5640.00         5487.65
1.00    0.00   3.50    56.50   2506.70       2360.00         3065.52
0.00    1.00   10.00   77.00   4180.40       4680.00         3914.55
1.00    9.00   25.00   126.00  8813.70       9320.00         8127.39
1.00    9.00   33.00   138.00  11055.40      11220.00        10561.80
1.00    5.00   16.00   108.00  6988.00       6900.00         6413.58
0.00    11.00  29.00   127.00  9969.80       10130.00        9471.79
0.00    7.00   20.00   114.00  7432.80       7940.00         7299.95
0.00    1.00   7.50    77.00   3934.00       4010.00         4042.95
1.00    11.00  29.50   134.50  9970.50       10450.00        10408.70
0.00    5.00   15.00   101.00  6225.70       6490.00         6307.14
0.00    3.00   13.00   91.00   5601.70       5760.00         5623.51
0.00    5.00   15.00   98.00   6163.70       6410.00         6296.79
1.00    15.00  45.00   157.00  13426.70      13950.00        13877.81
1.00    7.00   21.00   120.00  8249.20       8320.00         8445.74
0.00    9.00   23.00   127.00  8875.80       8940.00         9023.25
0.00    7.00   17.00   104.00  6873.50       7020.00         6935.27
1.00    15.00  43.50   150.00  13082.80      13450.00        13508.38
1.00    15.00  50.00   168.00  14832.00      15160.00        13541.31
0.00    7.00   18.00   114.00  7071.80       7510.00         7161.82
1.00    3.00   14.00   97.00   6013.60       6150.00         6200.79
1.00    7.00   20.00   119.00  7876.40       8080.00         7606.17
0.00    0.00   3.00    54.00   2117.30       2130.00         2559.28
1.00    1.00   9.50    74.00   4314.20       4490.00         4531.14
0.00    15.00  44.00   163.00  13480.90      13990.00        13612.74
0.00    11.00  32.00   140.00  10583.80      11100.00        10401.88
1.00    0.00   3.00    52.00   2121.00       2100.00         2337.69
0.00    11.00  29.00   141.00  10135.30      10550.00        10291.93
0.00    3.00   15.00   94.00   6074.90       6300.00         6440.60
0.00    13.00  44.00   140.00  13020.30      13170.00        12521.73
1.00    5.00   15.50   105.00  6406.50       6700.00         6532.15
1.00    9.00   22.00   126.00  8267.00       8700.00         8056.85
0.00    15.00  40.00   159.50  12769.70      13170.00        12994.08
1.00    1.00   9.50    76.00   3845.90       4530.00         4240.36
0.00    13.00  32.00   144.00  10822.10      11220.00        10964.35
1.00    13.00  40.00   151.00  12519.90      12890.00        12045.33
0.00    9.00   22.00   124.00  8586.10       8650.00         8411.62
1.00    11.00  31.00   135.00  10120.60      10750.00        9934.60
Var means variable
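As an illustration of the procedure, a multilayer perceptron with two hidden layers and hyperbolic tangent activation can also be fitted outside SPSS. The sketch below is our assumption (scikit-learn's MLPRegressor, not SPSS's Multilayer Perceptron module), shown on the first three rows of the data file; in practice all 90 rows would be used:

import numpy as np
from sklearn.neural_network import MLPRegressor

def haycock(weight_kg, height_cm):
    # Haycock equation; result converted from m^2 to cm^2 to match the table
    return 0.024265 * height_cm ** 0.3964 * weight_kg ** 0.5378 * 1e4

# columns: gender, age, weight (kg), height (cm); y: measured surface (cm^2)
X = np.array([[1, 13, 30.5, 138.5],
              [0, 5, 15.0, 101.0],
              [0, 0, 2.5, 51.5]])          # in practice: all 90 rows above
y = np.array([10072.9, 6189.0, 1906.2])

net = MLPRegressor(hidden_layer_sizes=(10, 10),   # two hidden layers
                   activation='tanh',             # hyperbolic tangent
                   max_iter=2000, random_state=1)
net.fit(X, y)
print(net.predict(X))
print(haycock(X[:, 2], X[:, 3]))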
We conclude that a neural network is a very sensitive data-modeling program, particularly suitable for making predictions from non-Gaussian data. Like Monte Carlo methods it is a distribution-free methodology, which is based on layers of artificial neurons that transduce the input information. It is available in the SPSS add-on module Neural Network.
Conclusion

Artificial intelligence, otherwise called neural network, is a data-modeling methodology that simulates the structures and operating principles of the human brain. It can be used for modeling purposes, and is particularly suitable for modeling distribution-free and non-normal data patterns.
Chapter 20
Robust Testing (33 Patients)
Primary question: Is robust testing more sensitive than standard testing of imperfect data?

Robust tests are tests that can handle the inclusion in a data file of some outliers without largely changing the overall test results. The following robust tests are available:

1. Z-test for medians and median absolute deviations (MADs).
2. Z-test for Winsorized variances.
3. Mood's test.
4. Z-test for M-estimators with bootstrap standard error.
The first three can be performed on a pocket calculator and are reviewed in Statistics on a Pocket Calculator Part 2, Chap. 8, Springer New York, 2011, from the same authors. The fourth robust test is reviewed in this chapter.
Example

The study below assesses whether physiotherapy reduces frailty. Frailty score improvements after physiotherapy are measured in 33 patients. The data file is underneath.

Frailty score improvements after physiotherapy
-8.00  -8.00  -8.00  -4.00  -4.00  -4.00  -4.00  -1.00  0.00  0.00  0.00
 1.00   1.00   2.00   2.00   2.00   3.00   3.00   3.00  3.00  4.00  4.00
 4.00   4.00   5.00   5.00   5.00   5.00   6.00   6.00  6.00  7.00  8.00
First, we will make a histogram of the data.
Command: Graph….Legacy Dialogs….Histogram….Variable: frailty score improvement….Mark: Display normal Curve….OK.
The above graph suggests the presence of some central tendency: the values between 3.00 and 5.00 are observed more frequently than the rest. However, the Gaussian curve calculated from the mean and standard deviation does not fit the data very well, with outliers on either side. Next, we will perform a one sample t-test to see whether the calculated mean is significantly different from 0.
Command: Analyze….Compare Means….One Sample T-Test….Test Variable: frailty score improvement….OK.

One-sample test (test value = 0)

             t      df   Sig. (2-tailed)   Mean difference   95 % CI of the difference
                                                             Lower     Upper
VAR00001     1.895  32   0.067             1.45455           -0.1090   3.0181
The above table shows that, based on Gaussian-like t-distributions, the mean is not significantly different from 0 (p = 0.067).
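The same one-sample t-test can be reproduced with a few lines of Python (our sketch; the numbers match the SPSS table above):

import numpy as np
from scipy.stats import ttest_1samp

frailty = np.array([-8, -8, -8, -4, -4, -4, -4, -1, 0, 0, 0,
                    1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4,
                    4, 4, 5, 5, 5, 5, 6, 6, 6, 7, 8.0])
t, p = ttest_1samp(frailty, popmean=0.0)
print(t, p)     # t = 1.895, df = 32, p = 0.067, as in the table above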
Robust Testing

M-estimators is a general term for maximum likelihood estimators (MLEs), which can be considered as central values for different types of sampling distributions. Huber described an approach to estimating MLEs with excellent performance, and this method is currently often applied. The Huber maximum likelihood estimator is calculated from the underneath standardized scores (MAD = median absolute deviation, * = sign of multiplication):

0.6745 * (x - median) / MAD

Command: Analyze….Descriptives….Explore: enter variable into box Dependent List….Statistics: mark M-estimators….OK.

Huber's M-estimator = 2.4011
Huber's standard error = not given.
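For orientation, Huber's M-estimator of location can also be approximated outside SPSS, e.g., with statsmodels (our assumption; its tuning constant differs from SPSS's, so the result will be near, not equal to, 2.4011):

import numpy as np
from statsmodels.robust.scale import huber

frailty = np.array([-8, -8, -8, -4, -4, -4, -4, -1, 0, 0, 0,
                    1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4,
                    4, 4, 5, 5, 5, 5, 6, 6, 6, 7, 8.0])
location, scale = huber(frailty)   # joint Huber estimates of location/scale
print(location)                    # should lie near SPSS's 2.4011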
Usually, the 2nd derivative of the M-estimator function is used to find the standard error. The problem with the second-derivative procedure in practice, however, is that it requires very large data files in order to be accurate. Instead of an inaccurate estimate of the standard error, a bootstrap standard error can be calculated. This is not provided in SPSS. Bootstrapping is a data-based simulation process for statistical inference. The basic idea is sampling with replacement in order to produce random samples from the original data. Standard errors are calculated from the 95 % confidence intervals of the random samples [95 % confidence interval = (central value ± 2 standard errors)]. We will use R bootstrap Plot—Central Tendency, available on the Internet as a free calculator tool.
Enter your data, then command: compute. The bootstrap standard error of the median is used:

bootstrap standard error = 0.8619

The z-test is used:

z-value = Huber's M-estimator / bootstrap standard error = 2.4011 / 0.8619 = 2.7858
p-value = 0.005

Unlike the one sample t-test, the M-estimator with bootstrap standard error produces a highly significant effect: frailty scores can be improved by physiotherapy.
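The bootstrap standard error of the median, used above, is also easily computed directly (our sketch, not the web tool itself; with a different random seed the standard error will differ slightly from 0.8619):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
frailty = np.array([-8, -8, -8, -4, -4, -4, -4, -1, 0, 0, 0,
                    1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4,
                    4, 4, 5, 5, 5, 5, 6, 6, 6, 7, 8.0])

# spread of the medians of 10,000 resamples = bootstrap standard error
medians = [np.median(rng.choice(frailty, frailty.size, replace=True))
           for _ in range(10_000)]
se = np.std(medians, ddof=1)

z = 2.4011 / se                 # Huber's M-estimator from the SPSS output
p = 2 * norm.sf(abs(z))         # two-tailed p-value
print(se, z, p)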
Conclusion

Robust tests are wonderful for imperfect data, because they often produce statistically significant results when the standard tests do not.
Final Remarks
Clinical studies often assess the efficacy of novel treatments/medical technologies. The first part of this title reviewed basic statistical analysis methods available in SPSS. There are, however, many questions in clinical studies that are not answered by the simple tests, and additional methodologies are required. Particularly, methods for dealing with multiple outcomes and multi-step relationships are important objectives of the current issue, SPSS for Starters Part 2. Also the assessment of categorical variables, both exposure and outcome, requiring recoding procedures, is covered, as are advanced regression models like Poisson regression, curvilinear regression, and non-linear models like Loess and spline regression. Distribution-free methods like Monte Carlo methods and artificial intelligence methods are reviewed. Except for the Monte Carlo based multiple imputations method and the artificial intelligence method, both available in the respective SPSS add-on modules, all of the 20 chapters require only the basic SPSS software program. In addition to methodologies for assessing imperfect and outlier data, like robust tests (Chap. 20) and random effect models (Chap. 12), attention is given to methodologies that can help you further improve your research. The tests described in this e-book were generally statistically significant when the standard tests were not, or, at least, they provided better power/sensitivity of testing and better fit of predictive models, like curvilinear models instead of linear ones, or medians instead of averages. The present time witnesses a multiplicity of novel medical treatments and technologies, and this is a blessing. However, when multiple technologies are combined into clinical strategies, this is accompanied by a Malthusian growth of uncertainty. For example, two different technologies can be used in two different sequences; take five, and the number of possible sequences is 120. This is no reason to stop using the scientific method, which is, in a nutshell: reformulate your uncertainty into a hypothesis and try to test this hypothesis
against control observations. The current small e-book is full of relatively simple modern methods helpful for that purpose. Like the first part of this title, the current e-book is very condensed, but this should be threshold-lowering to readers. As a consequence, however, the theoretical background of the methods described is not extensively explained in the text. Extensive information is given in the books Statistics on a Pocket Calculator Parts 1 and 2, Springer New York, 2011 and 2012, and Statistics Applied to Clinical Studies, 5th Edition, Springer New York, 2012, both from the same authors.
Index
A
ACE (alternating conditional expectations), 73; Additional outcome variable, 7; Adjustment of confounding and interaction; Adjustment of predictors; Analysis of variance (ANOVA), 71; Analysis of variance for interaction, 7, 11; Analyze patterns, 32; ANOVA, 70, 73; Arithmetic, 59; Artificial intelligence, 91, 94; Assessing seasonality, 81; Assessment of seasonality, 1; Autocorrelation coefficient, 1, 82, 85; Autocorrelations; AVAS (additive and variance stabilization), 73

B
Best fit curve, 69–71, 73; Best fit model, 71, 75; Binary data, 88, 90; Binary logistic regression, 28, 47; Bootstrap analysis, 87, 88; Bootstraps, 87, 98; Bootstrap standard error, 95, 97; Box Cox transformations, 73; b-Values, 31

C
Categorical analysis of covariates, 24; Categorical analysis of covariates with binary outcomes; Categorical data, 24; Categorical pattern, 21; Causal factors of heterogeneity, 41; Central tendency, 96; Chart Builder, 78, 79, 83; Clustered data, V; Columns, 2, 21, 23; Comparing performance of diagnostic tests, 35; Complex regression models, 73; Computationally intensive statistical methods, 73, 80; Concordance-statistics, 35; Confidence intervals, 51, 85, 97; Confounder, 16, 42, 53; Confounders, 42, 53; Confounding, 1, 49, 51, 53, 55, 60, 68; Confounding variables, 53; Continuous data, 87, 88, 90; Contrast test, 59; Copied datacurves, 82; Correlation coefficient, 82, 85; Counted rates, 46; Covariates, 41; Cox regression for survival analysis, 61; Crosstabs, V; c-statistics (concordance-statistics), 35; Cubic models, 71; Curvilinear estimation, 69, 70, 73, 80; Curvilinear regression, 69, 70, 73, 80

D
Data-based simulation process, 97; Data pooling, 39; Decomposition of correlation, 11; Dependent variable, 28, 41, 72; Diagnostic test evaluation, 35, 37; Differences in patient characteristics, 53; Direct way, 5; Distribution free data, 94; Distribution free tests, 1
E
e-book, 1, 2, 32; Efficacy estimator, 3; Error bars, 50; Exact tests, 88–90; Excel, 2; Excel file, 2; Excellent precision, 80; Explanatory variable; Exponential Kaplan-Meier patterns, 61, 64; Exponential models, 71; Exposure variables, 1; Extrapolations, 80
F
Factor, 7, 11, 63, 68; Fixed factors, 14–18, 58; F-value, 59
G
Gaussian like t-curves, 97; Generalized Linear Models, 45, 47; General Linear Model, 14, 15, 17, 18; Goodness of fit using ANOVAs, 71
H
Hazard, 68; Heterogeneity, 39, 40, 42; Hidden layers of neurons, 91; Homogeneity of variables, 15; Hot deck imputation, 32, 33; Huber, 97; Huber maximum likelihood estimator, 97; Huber's M-estimator, 97, 98; Huber's standard error, 97, 98; Hypothesis testing, 46, 47
I
Imperfect data, 85, 95, 98; Impute missing data, 32; Imputed data file, 32; Imputed values, 31, 33; Independent variable, 41, 52, 59, 68; Indirect way, 5; Individual propensity scores, 53; Instrumental variable, 31; Interaction, 55–60, 65, 68; Interaction variable, 57, 59; Interaction with time, 65, 68; Interpolations, 80; Inverse models, 71; Islands of missing data, 32; Iteration, 19, 92
J Junctions, 78, 79
K Kaplan-Meier patterns, 61, 63, 64
L
Lagcurves, 82; Linear pattern, 1, 21; Loess (locally weighted scatter plot smoothing), 79, 80; Loess modeling, 79, 80; Logarithmic models, 71; Logistic regression, 24, 28, 35, 36, 43, 46, 48; Logistic regression equation, 24; Logit transformations, 73; Log rank test, 1, 61, 63; Log transformed dependent variable, 43, 48
M
MADs, 95; Making predictions from model equations, 80; Malthusian growth of uncertainty, 99; Mann-Whitney test, 88; MANOVA, 7; Mantel-Cox test, 63; Manual recoding variables, 23; Master's and doctorate class European College Pharmaceutical Medicine, 2; Mathematical model, 91; Mathematical refinement, 80; Matrix algebra, 87, 90; Maximum likelihood estimators (MLEs), 97; McNemar's test, 88, 89; Mean, 5, 32, 33, 41, 50, 51, 58, 77, 82, 83, 85, 96; Mean imputation, 32, 41; Median, 95, 97; M-estimators, 95, 97; Meta-analysis, 39, 41; Meta-regression, 41, 42; Methodologies for improving your research, 99; Missing data, 1, 29, 30, 32, 33; Missing data imputation, 32; Missing data uncertainty, 33; Missing Values Analysis; MLEs, 97; Monte Carlo method, 32, 87, 90; Mood's test, 95; Multinomial logistic regression, 25, 28; Multiple confounders, 53; Multiple imputations, 29, 33; Multiple imputations method, 32; Multiple linear regression, 3, 8, 23, 30, 31, 40, 41, 52, 53, 59; Multiple outcome variables, 1; Multiple segmented time dependent Cox regression, 68; Multistage analyses, V; Multistep methods, 1; Multistep relationships, 1; Multivariate analyses, V, 1; Multivariate analysis of variance (MANOVA), 7, 11; Multivariate analysis using path statistics, 7; Multivariate analysis with binary outcomes, 19; Multivariate probit analysis, 19

N
Neural network, 91; Neural network add-on module, 91; Noise in the data, 85; Non-Gaussian data, 94; Non linear modeling, 77; Non linear models, 1; Non linear relationship, 70; Non-Mem, 77; Nonmathematical approach to multivariate regression, 10; Non normal data patterns, 94; Non-parametric test, 1; Normal distributions, 15
O
Odds, 28, 36, 37, 41; Odds of disease, 36, 37; Odds ratios, 37, 41; Odds ratio test, 28; One sample t-test, 96, 98; One step linear regression, 5, 28; Operating principles of the brain, 91; Ordered logistic regression, 28; Ordering in a meaningful way, 28; Outcome variables, 1, 10, 28, 46; Outlier data given less weight, 79; Outliers, 95, 96; Overstated certainty, 33
P
Paired binary data, 87, 88; Paired continuous data, 87; Parametric test; Partial autocorrelation, 85; Path diagram, 5, 10; Path statistics, 5, 7; Patient characteristics, 2, 53; Pharmacokinetic modeling; Pillai's method, 15; Pocket calculator, 95; Poisson regression, 43, 45–48; Poisson regression for binary outcomes, 45; Poisson regression for rates, 45; Pooled regression coefficients, 32; Power loss, 53; Power models, 71; Predicting variables, 7; Probit analysis, 19; Probit transformations, 73; Propensity score matching, 53; Propensity scores, 53; P-values, 10, 37
Q Quadratic models, 71
R
Random effect analysis of variance, 55; Random effect model, 58; Random effect regression, 59; Random Number Generators, 32; Random numbers, 87, 90; Rates, 43, 48; R bootstrap Plot, 98; Recode the stepping variable, 23; Recoding procedures, 28; Regression analysis for interaction; Regression coefficients, 5, 10, 31, 32, 37, 71; Regression data, 44; Regression equation, 6, 23, 24, 30, 36, 37, 73, 80, 82; Regression imputation, 29, 32, 33; Residual effects, 10; Robustness, 15; Robust tests, 1, 95; Rows, 2; Roy's largest root, 15
S
Sampled tables, 88; Sampling from the data with replacement, 87; SAS, V; Scattergram, 70; Scientific method, 33; Scientific pocket calculator, 95; Seasonality, 1, 81, 84, 85; Seasonal pattern, 81, 82, 84; Second derivative, 97; Segmented time-dependent Cox regression, 1, 65, 67, 68; Sensitivity of testing, 29, 31; Simple linear regression, 6, 8, 10, 52; Simultaneous assessment of multiple factors, 42; Single path analysis, 5; Sinusoidal changes, 85; Skewed data, 36; Sound clinical arguments, 19; Spline, 73, 75, 78–80; Spline modeling, 73, 75, 78, 79; Split file, 32; Spread in data, 58; SPSS, 1, 2, 24, 28, 32, 33, 84, 85, 87, 97; SPSS add-on modules, 32, 99; SPSS for Starters, V; Square root of (1 − R square), 10; Standard deviation, 96; Standard errors, 31, 37, 97; Standardized regression coefficients, 5, 10; Standard linear regression methods, 6; STATA, 19; Statistical Analysis of Clinical Data on a Pocket Calculator, V; Statistics Applied to Clinical Studies, 35; Statistics applied to clinical trials module, 2; Step by step data-analysis, 2; Stepwise rising function, 23; Subclassification, 53; Subgroup analysis in case of interaction, 42; Subgroup properties, 19; Subsets with unusually high or low response, 60; Summary chi-square value, 63; Sums of squares, 15, 58; Survival studies, 1; Synaptic weights, 92
T
Third order polynomials, 78, 79; Three dimensional graph; Time dependent Cox regression, 67; Time to events, 65, 68; Training sample, 91; Treatment modalities, 43, 50, 53; Trend testing, 1; T-test, 56; T-test for interaction, 96; Tutor pages, 2; T-value, 32, 59, 85, 97; Two path statistics, 5; Two stage least square method (2LS), 6; Two step path analysis, 5; Type I error, 19
U
Underpowered effects, 19; Unequivocal seasonality, 81; Unexplained variability, 85; Univariate, 15, 18, 58; Unpaired binary data, 87, 90; Unpaired continuous data, 87, 88; Unstandardized regression coefficients, 59
W
Weighted least square regression (WLS), 45; Wilcoxon's test, 87; Windows Word program, 1; WLS regression, 45
Z
Z-test, 37; Z-test for medians and median absolute deviations (MADs), 95; Z-test for M-estimators with bootstrap standard errors, 95; Z-test for Winsorized variances, 95