Summary
Careful data collection and analysis lie at the heart of good research, through which our understanding of psychology is enhanced. Yet the students who will become the next generation of researchers need more exposure to statistics and experimental design than a typical introductory course presents. Experimental Design and Analysis for Psychology provides a complete course in data collection and analysis for students who need to go beyond the basics.

Acting as a true course companion, the text's engaging writing style leads readers through a range of often challenging topics, blending examples and exercises with careful explanations and custom-drawn figures to ensure that even the most daunting concepts can be fully understood.

Opening with a review of key concepts, including probability, correlation, and regression, the book goes on to explore the analysis of variance and factorial designs, before moving on to consider a range of more specialised yet powerful statistical tools, including the General Linear Model and the concept of unbalanced designs.

Not just a printed book, Experimental Design and Analysis for Psychology is enhanced by a range of online materials, all of which add to its value as an ideal teaching and learning resource.

The Online Resource Centre features:

For registered adopters:
- Figures from the book, available to download.
- Answers to exercises featured in the book.
- Online-only Part III: bonus chapters featuring more advanced material, extending the coverage of the printed book.

For students:
- A downloadable workbook, featuring exercises for self-study.
- SAS, SPSS, and R companions, featuring program code and output for all major examples in the book, tailored to these three software packages.
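To give a flavour of the review material the book opens with (correlation, Z-scores, and squared correlation as common variance), here is a minimal illustrative sketch. It is not taken from the book or its SAS/SPSS/R companions; the function name and data are our own, and the Z-score formulation simply mirrors the standard definition of the Pearson coefficient.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation computed as the mean product of Z-scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sqrt(sum((v - mx) ** 2 for v in x) / n)
    sy = sqrt(sum((v - my) ** 2 for v in y) / n)
    zx = [(v - mx) / sx for v in x]
    zy = [(v - my) / sy for v in y]
    return sum(a * b for a, b in zip(zx, zy)) / n

# Toy data (hypothetical, for illustration only)
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
r = pearson_r(x, y)
print(r)       # the coefficient of correlation
print(r * r)   # squared correlation: proportion of common variance
```

The squared coefficient is the usual "proportion of variance shared by the two variables" interpretation reviewed in the correlation chapter.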
Author Biography
Hervé Abdi is currently a full professor in the School of Behavioral and Brain Sciences at the University of Texas at Dallas and an adjunct professor of radiology at the University of Texas Southwestern Medical Center
at Dallas. His research interests include face processing and computational models of face processing, neural networks, computational and statistical models of cognitive processes (especially memory and learning), experimental design, and multivariate statistical analysis. He has published several books and papers in these domains.
Betty Edelman teaches Statistics for Psychology and Research Design
and Analysis as a senior lecturer at the University of Texas at Dallas. Her interests include modeling of cognitive processes using neural networks. She is a co-author of several research articles and a book about neural networks.
Dominique Valentin is currently associate professor at the University of Bourgogne at Dijon, France. She has published a book and several papers dealing with neural networks and modeling.
W. Jay Dowling is a professor in the School of Behavioral and Brain Sciences at the University of Texas at Dallas. His research interests have centred on the psychological reality and relevance to perception and memory of patterns of musical organization.
Table of Contents
1 Introduction to Experimental Design
  1.1. General overview
  1.2. Independent and dependent variables
  1.3. Independent variables
  1.4. Dependent variables
  1.5. Common defective experimental designs
  1.6. The choice of subjects and the representative design of experiments
  1.7. Key notions of the chapter
2 Correlation
  2.1. Introduction
  2.2. Correlation: Overview and Example
  2.3. Rationale and computation of the coefficient of correlation
  2.4. Interpreting correlation and scatterplots
  2.5. The importance of scatterplots
  2.6. Correlation and similarity of distributions
  2.7. Correlation and Z-scores
  2.8. Correlation and causality
  2.9. Squared correlation as common variance
  2.10. Key notions of the chapter
  2.11. Key formulas of the chapter
  2.12. Key questions of the chapter
3 Statistical Test: The F test
  3.1. Introduction
  3.2. Statistical Test
  3.3. For experts: Not zero is not enough!
  3.4. Key notions of the chapter
  3.5. New notations
  3.6. Key formulas of the chapter
  3.7. Key questions of the chapter
4 Simple Linear Regression
  4.1. Generalities
  4.2. The regression line is the "best-fit" line
  4.3. Example: Reaction Time and Memory Set
  4.4. How to evaluate the quality of prediction
  4.5. Partitioning the total sum of squares
  4.6. Mathematical Digressions
  4.7. Key notions of the chapter
  4.8. New notations
  4.9. Key formulas of the chapter
  4.10. Key questions of the chapter
5 Orthogonal Multiple Regression
  5.1. Generalities
  5.2. The regression plane is the "best-fit" plane
  5.3. Back to the example: Retroactive interference
  5.4. How to evaluate the quality of the prediction
  5.6. F tests for the simple coefficients of correlation
  5.7. Partitioning the sums of squares
  5.8. Mathematical Digressions
  5.9. Key notions of the chapter
  5.10. New notations
  5.11. Key formulas of the chapter
  5.12. Key questions of the chapter
6 Non-Orthogonal Multiple Regression
  6.1. Generalities
  6.2. An example: Age, speech rate and memory span
  6.3. Computation of the regression plane
  6.4. How to evaluate the quality of the prediction
  6.5. Semi-partial correlation as increment in explanation
  6.6. F tests for the semi-partial correlation coefficients
  6.7. What to do with more than two independent variables
  6.8. Bonus: Partial correlation
  6.9. Key notions of the chapter
  6.10. New notations
  6.11. Key formulas of the chapter
  6.12. Key questions of the chapter
7 ANOVA One Factor: Intuitive Approach
  7.1. Introduction
  7.2. Intuitive approach
  7.3. Computation of the F ratio
  7.4. A bit of computation: Mental Imagery
  7.5. Key notions of the chapter
  7.6. New notations
  7.7. Key formulas of the chapter
  7.8. Key questions of the chapter
8 One Factor, S(A): Test, Computation, & Effect Size
  8.1. Statistical test: A refresher
  8.2. An example: back to mental imagery
  8.3. Another more general notation: A and S(A)
  8.4. Presentation of the results of the ANOVA
  8.5. ANOVA with two groups: F and t
  8.6. Another example: Romeo and Juliet
  8.7. How to estimate the effect size
  8.8. Computational formulas
  8.9. Key notions of the chapter
  8.10. New notations
  8.11. Key formulas of the chapter
  8.12. Key questions of the chapter
9 One Factor, S(A): Regression Point of View
  9.1. Introduction
  9.2. Example 1: Memory and Imagery
  9.3. Analysis of variance for Example 1
  9.4. Regression approach for Example 1: Mental Imagery
  9.5. Equivalence between regression and analysis of variance
  9.6. Example 2: Romeo and Juliet
  9.7. If regression and analysis of variance are one thing, why keep two different techniques?
  9.8. Digression
  9.9. Multiple regression and analysis of variance
  9.10. Key notions of the chapter
  9.11. Key formulas of the chapter
  9.12. Key questions of the chapter
10 Design: S(A): Score Model
  10.1. The score model
  10.2. ANOVA with one random factor (Model II)
  10.3. The Score Model: Model II
  10.4. F1 or The Strawberry Basket!
  10.5. Three exercises
  10.6. Key notions of the chapter
  10.7. New notations
  10.8. Key formulas of the chapter
  10.9. Key questions of the chapter
11 The Assumptions of Analysis of Variance
  11.1. Overview
  11.2. Validity assumptions
  11.3. Testing the homogeneity of variance assumption
  11.4. Example
  11.5. Testing Normality: Lilliefors
  11.6. Notation
  11.7. Numerical example
  11.8. Numerical approximation
  11.9. Transforming scores
  11.10. Key notions of the chapter
  11.11. New notations
  11.12. Key formulas of the chapter
  11.13. Key questions of the chapter
12 Planned Orthogonal Comparisons
  12.1. General overview
  12.2. What is a contrast?
  12.3. The different meanings of alpha
  12.4. An example: Context and Memory
  12.5. Checking the independence of two contrasts
  12.6. Computing the sum of squares for a contrast
  12.7. Another view: Contrast analysis as regression
  12.8. Critical values for the statistical index
  12.9. Back to the Context
  12.10. Significance of F vs. specific contrasts
  12.11. How to present the results of orthogonal comparisons?
  12.12. The omnibus F is a mean
  12.13. Sum of orthogonal contrasts: Subdesign analysis
  12.14. Key notions of the chapter
  12.15. New notations
  12.16. Key formulas of the chapter
  12.17. Key questions of the chapter
13 Planned Non-orthogonal Comparisons
  13.1. General Overview
  13.2. The classical approach
  13.3. Multiple regression: The return!
  13.4. Key notions of the chapter
  13.5. New notations
  13.6. Key formulas of the chapter
  13.7. Key questions of the chapter
14 Post hoc or a-posteriori analyses
  14.1. Introduction
  14.2. Scheffé's test: All possible contrasts
  14.3. Pairwise comparisons
  14.4. Key notions of the chapter
  14.5. New notations
  14.6. Key questions of the chapter
15 Two Factors, S(A × B)
  15.1. Introduction
  15.2. Organization of a two-factor design: A × B
  15.3. Main effects and interaction
  15.4. Partitioning the experimental sum of squares
  15.5. Degrees of freedom and mean squares
  15.6. The Score Model (Model I) and the sums of squares
  15.7. An example: Cute Cued Recall
  15.8. Score Model II: A and B random factors
  15.9. ANOVA A × B (Model III): one factor fixed, one factor random
  15.10. Index of effect size
  15.11. Statistical assumptions and conditions of validity
  15.12. Computational formulas
  15.13. Relationship between the sources
  15.14. Key notions of the chapter
  15.15. New notations
  15.16. Key formulas of the chapter
  15.17. Key questions of the chapter
16 Factorial designs and contrasts
  16.1. Introduction
  16.2. Fine grained partition of the standard decomposition
  16.3. Contrast and standard decomposition
  16.4. What error term should be used?
  16.5. Example: partitioning the standard decomposition
  16.6. Contrasts non-orthogonal to the canonical decomposition
  16.7. A posteriori Comparisons
17 One Factor Repeated Measures design, S × A
  17.1. Introduction
  17.2. Examination of the F Ratio
  17.3. Partitioning the SSwithin: S(A) = S + SA
  17.4. Computing F in an S × A design
  17.5. A numerical example: S × A design
  17.6. Score Model: Model I and II for repeated measures designs
  17.7. Estimating the size of the experimental effect
  17.8. Problems with repeated measures
  17.9. An example with computational formulas
  17.10. Another example: Proactive interference
  17.11. Score model (Model I) S × A design: A fixed
  17.12. Score model (Model II) S × A design: A random
  17.13. Key notions of the chapter
  17.14. New notations
  17.15. Key formulas of the chapter
  17.16. Key questions of the chapter
18 Two Factors Completely Repeated Measures: S × A × B
  18.1. Introduction
  18.2. An example: Plungin'!
  18.3. Sum of Squares, Mean Squares and F ratios
  18.4. Score model (Model I), S × A × B design: A and B fixed
  18.5. Results of the experiment: Plungin'
  18.6. Score Model (Model II): S × A × B design, A and B random
  18.7. Score Model (Model III): S × A × B design, A fixed, B random
  18.8. Quasi-F: F'
  18.9. A cousin F''
  18.10. Validity assumptions, measures of intensity, key notions, etc
  18.11. New notations
  18.12. Key formulas of the chapter
19 Two Factors Partially Repeated Measures: S(A) × B
  19.1. Introduction
  19.2. An Example: Bat and Hat
  19.3. Sums of Squares, Mean Squares, and F ratio
  19.4. The comprehension formula routine
  19.5. The 13 points computational routine
  19.6. Score model (Model I), S(A) × B design: A and B fixed
  19.7. Score model (Model II), S(A) × B design: A and B random
  19.8. Score model (Model III), S(A) × B design: A fixed and B random
  19.9. Coefficients of Intensity
  19.10. Validity of S(A) × B designs
  19.11. Prescription
  19.12. New notations
  19.13. Key formulas of the chapter
  19.14. Key questions of the chapter
20 Nested Factorial Designs: S × A(B)
  20.1. Introduction
  20.2. An Example: Faces in Space
  20.3. How to analyze an S × A(B) design?
  20.4. Back to the example: Faces in Space
  20.5. What to do with A fixed and B fixed
  20.6. When A and B are random factors
  20.7. When A is fixed and B is random
  20.8. New notations
  20.9. Key formulas of the chapter
  20.10. Key questions of the chapter
21 How to derive expected values for any design
  21.1. Crossing and nesting refresher
  21.2. Finding the sources of variation
  21.3. Writing the score model
  21.4. Degrees of freedom and sums of squares
  21.5. An example
  21.6. Expected values
  21.7. Two additional exercises
A Descriptive Statistics
B The sum sign: Σ
C Expected Values
D Elementary Probability: A Refresher
E Probability Distributions
F The Binomial Test
G Statistical tables