Discovering Statistics Using R

Course
- V-303-TOL1 Applied Statistics
Description:
Keeping the uniquely humorous and self-deprecating style that has made students across the world fall in love with Andy Field’s books, Discovering Statistics Using R takes students on a journey of statistical discovery using R, a free, flexible and dynamically changing software tool for data analysis that is becoming increasingly popular across the social and behavioural sciences throughout the world.
The journey begins by explaining basic statistical and research concepts before a guided tour of the R software environment. Next you discover the importance of exploring and graphing data, before moving on to the statistical tests that are the foundations of the rest of the book (for example, correlation and regression). You will then stride confidently into intermediate-level analyses such as ANOVA, before ending your journey with advanced techniques such as MANOVA and multilevel models.
Although there is enough theory to help you gain the necessary conceptual understanding of what you’re doing, the emphasis is on applying what you learn to playful and real-world examples that should make the experience more fun than you might expect. Like its sister textbooks, Discovering Statistics Using R is written in an irreverent style and follows the same ground-breaking structure and pedagogical approach.
The core material is augmented by a cast of characters to help the reader on their way, together with hundreds of examples, self-assessment tests to consolidate knowledge, and additional website material for those wanting to learn more. Given this book’s accessibility, fun spirit, and use of bizarre real-world research, it should be essential reading for anyone wanting to learn about statistics using the freely available R software.
Other
- Authors: Andy Field, Jeremy Miles, Zoë Field
- Edition: 1
- Publication date: 2012-03-07
- Printing allowed: 30 pages
- Copying allowed: 30 pages
- Format: ePub
- ISBN-13: 9781446289150
- Print ISBN: 9781446200469
- ISBN-10: 144628915X
Table of Contents
- Cover Page
- Title Page
- Copyright Page
- Contents
- Preface
- How to use this book
- Acknowledgements
- Dedication
- Symbols used in this book
- Some maths revision
- 1 Why is my evil lecturer forcing me to learn statistics?
- 1.1. What will this chapter tell me?
- 1.2. What the hell am I doing here? I don’t belong here
- 1.3. Initial observation: finding something that needs explaining
- 1.4. Generating theories and testing them
- 1.5. Data collection 1: what to measure
- 1.5.1. Variables
- 1.5.2. Measurement error
- 1.5.3. Validity and reliability
- 1.6. Data collection 2: how to measure
- 1.6.1. Correlational research methods
- 1.6.2. Experimental research methods
- 1.6.3. Randomization
- 1.7. Analysing data
- 1.7.1. Frequency distributions
- 1.7.2. The centre of a distribution
- 1.7.3. The dispersion in a distribution
- 1.7.4. Using a frequency distribution to go beyond the data
- 1.7.5. Fitting statistical models to the data
- What have I discovered about statistics?
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 2 Everything you ever wanted to know about statistics (well, sort of)
- 2.1. What will this chapter tell me?
- 2.2. Building statistical models
- 2.3. Populations and samples
- 2.4. Simple statistical models
- 2.4.1. The mean: a very simple statistical model
- 2.4.2. Assessing the fit of the mean: sums of squares, variance and standard deviations
- 2.4.3. Expressing the mean as a model
- 2.5. Going beyond the data
- 2.5.1. The standard error
- 2.5.2. Confidence intervals
- 2.6. Using statistical models to test research questions
- 2.6.1. Test statistics
- 2.6.2. One- and two-tailed tests
- 2.6.3. Type I and Type II errors
- 2.6.4. Effect sizes
- 2.6.5. Statistical power
- What have I discovered about statistics?
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 3 The R environment
- 3.1. What will this chapter tell me?
- 3.2. Before you start
- 3.2.1. The R-chitecture
- 3.2.2. Pros and cons of R
- 3.2.3. Downloading and installing R
- 3.2.4. Versions of R
- 3.3. Getting started
- 3.3.1. The main windows in R
- 3.3.2. Menus in R
- 3.4. Using R
- 3.4.1. Commands, objects and functions
- 3.4.2. Using scripts
- 3.4.3. The R workspace
- 3.4.4. Setting a working directory
- 3.4.5. Installing packages
- 3.4.6. Getting help
- 3.5. Getting data into R
- 3.5.1. Creating variables
- 3.5.2. Creating dataframes
- 3.5.3. Calculating new variables from existing ones
- 3.5.4. Organizing your data
- 3.5.5. Missing values
- 3.6. Entering data with R Commander
- 3.6.1. Creating variables and entering data with R Commander
- 3.6.2. Creating coding variables with R Commander
- 3.7. Using other software to enter and edit data
- 3.7.1. Importing data
- 3.7.2. Importing SPSS data files directly
- 3.7.3. Importing data with R Commander
- 3.7.4. Things that can go wrong
- 3.8. Saving data
- 3.9. Manipulating data
- 3.9.1. Selecting parts of a dataframe
- 3.9.2. Selecting data with the subset() function
- 3.9.3. Dataframes and matrices
- 3.9.4. Reshaping data
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- 4 Exploring data with graphs
- 4.1. What will this chapter tell me?
- 4.2. The art of presenting data
- 4.2.1. Why do we need graphs?
- 4.2.2. What makes a good graph?
- 4.2.3. Lies, damned lies, and … erm … graphs
- 4.3. Packages used in this chapter
- 4.4. Introducing ggplot2
- 4.4.1. The anatomy of a plot
- 4.4.2. Geometric objects (geoms)
- 4.4.3. Aesthetics
- 4.4.4. The anatomy of the ggplot() function
- 4.4.5. Stats and geoms
- 4.4.6. Avoiding overplotting
- 4.4.7. Saving graphs
- 4.4.8. Putting it all together: a quick tutorial
- 4.5. Graphing relationships: the scatterplot
- 4.5.1. Simple scatterplot
- 4.5.2. Adding a funky line
- 4.5.3. Grouped scatterplot
- 4.6. Histograms: a good way to spot obvious problems
- 4.7. Boxplots (box–whisker diagrams)
- 4.8. Density plots
- 4.9. Graphing means
- 4.9.1. Bar charts and error bars
- 4.9.2. Line graphs
- 4.10. Themes and options
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 5 Exploring assumptions
- 5.1. What will this chapter tell me?
- 5.2. What are assumptions?
- 5.3. Assumptions of parametric data
- 5.4. Packages used in this chapter
- 5.5. The assumption of normality
- 5.5.1. Oh no, it’s that pesky frequency distribution again: checking normality visually
- 5.5.2. Quantifying normality with numbers
- 5.5.3. Exploring groups of data
- 5.6. Testing whether a distribution is normal
- 5.6.1. Doing the Shapiro–Wilk test in R
- 5.6.2. Reporting the Shapiro–Wilk test
- 5.7. Testing for homogeneity of variance
- 5.7.1. Levene’s test
- 5.7.2. Reporting Levene’s test
- 5.7.3. Hartley’s Fmax: the variance ratio
- 5.8. Correcting problems in the data
- 5.8.1. Dealing with outliers
- 5.8.2. Dealing with non-normality and unequal variances
- 5.8.3. Transforming the data using R
- 5.8.4. When it all goes horribly wrong
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- 6 Correlation
- 6.1. What will this chapter tell me?
- 6.2. Looking at relationships
- 6.3. How do we measure relationships?
- 6.3.1. A detour into the murky world of covariance
- 6.3.2. Standardization and the correlation coefficient
- 6.3.3. The significance of the correlation coefficient
- 6.3.4. Confidence intervals for r
- 6.3.5. A word of warning about interpretation: causality
- 6.4. Data entry for correlation analysis
- 6.5. Bivariate correlation
- 6.5.1. Packages for correlation analysis in R
- 6.5.2. General procedure for correlations using R Commander
- 6.5.3. General procedure for correlations using R
- 6.5.4. Pearson’s correlation coefficient
- 6.5.5. Spearman’s correlation coefficient
- 6.5.6. Kendall’s tau (non-parametric)
- 6.5.7. Bootstrapping correlations
- 6.5.8. Biserial and point-biserial correlations
- 6.6. Partial correlation
- 6.6.1. The theory behind part and partial correlation
- 6.6.2. Partial correlation using R
- 6.6.3. Semi-partial (or part) correlations
- 6.7. Comparing correlations
- 6.7.1. Comparing independent rs
- 6.7.2. Comparing dependent rs
- 6.8. Calculating the effect size
- 6.9. How to report correlation coefficients
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 7 Regression
- 7.1. What will this chapter tell me?
- 7.2. An introduction to regression
- 7.2.1. Some important information about straight lines
- 7.2.2. The method of least squares
- 7.2.3. Assessing the goodness of fit: sums of squares, R and R²
- 7.2.4. Assessing individual predictors
- 7.3. Packages used in this chapter
- 7.4. General procedure for regression in R
- 7.4.1. Doing simple regression using R Commander
- 7.4.2. Regression in R
- 7.5. Interpreting a simple regression
- 7.5.1. Overall fit of the object model
- 7.5.2. Model parameters
- 7.5.3. Using the model
- 7.6. Multiple regression: the basics
- 7.6.1. An example of a multiple regression model
- 7.6.2. Sums of squares, R and R²
- 7.6.3. Parsimony-adjusted measures of fit
- 7.6.4. Methods of regression
- 7.7. How accurate is my regression model?
- 7.7.1. Assessing the regression model I: diagnostics
- 7.7.2. Assessing the regression model II: generalization
- 7.8. How to do multiple regression using R Commander and R
- 7.8.1. Some things to think about before the analysis
- 7.8.2. Multiple regression: running the basic model
- 7.8.3. Interpreting the basic multiple regression
- 7.8.4. Comparing models
- 7.9. Testing the accuracy of your regression model
- 7.9.1. Diagnostic tests using R Commander
- 7.9.2. Outliers and influential cases
- 7.9.3. Assessing the assumption of independence
- 7.9.4. Assessing the assumption of no multicollinearity
- 7.9.5. Checking assumptions about the residuals
- 7.9.6. What if I violate an assumption?
- 7.10. Robust regression: bootstrapping
- 7.11. How to report multiple regression
- 7.12. Categorical predictors and multiple regression
- 7.12.1. Dummy coding
- 7.12.2. Regression with dummy variables
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 8 Logistic regression
- 8.1. What will this chapter tell me?
- 8.2. Background to logistic regression
- 8.3. What are the principles behind logistic regression?
- 8.3.1. Assessing the model: the log-likelihood statistic
- 8.3.2. Assessing the model: the deviance statistic
- 8.3.3. Assessing the model: R and R²
- 8.3.4. Assessing the model: information criteria
- 8.3.5. Assessing the contribution of predictors: the z-statistic
- 8.3.6. The odds ratio
- 8.3.7. Methods of logistic regression
- 8.4. Assumptions and things that can go wrong
- 8.4.1. Assumptions
- 8.4.2. Incomplete information from the predictors
- 8.4.3. Complete separation
- 8.5. Packages used in this chapter
- 8.6. Binary logistic regression: an example that will make you feel eel
- 8.6.1. Preparing the data
- 8.6.2. The main logistic regression analysis
- 8.6.3. Basic logistic regression analysis using R
- 8.6.4. Interpreting a basic logistic regression
- 8.6.5. Model 1: Intervention only
- 8.6.6. Model 2: Intervention and Duration as predictors
- 8.6.7. Casewise diagnostics in logistic regression
- 8.6.8. Calculating the effect size
- 8.7. How to report logistic regression
- 8.8. Testing assumptions: another example
- 8.8.1. Testing for multicollinearity
- 8.8.2. Testing for linearity of the logit
- 8.9. Predicting several categories: multinomial logistic regression
- 8.9.1. Running multinomial logistic regression in R
- 8.9.2. Interpreting the multinomial logistic regression output
- 8.9.3. Reporting the results
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 9 Comparing two means
- 9.1. What will this chapter tell me?
- 9.2. Packages used in this chapter
- 9.3. Looking at differences
- 9.3.1. A problem with error bar graphs of repeated-measures designs
- 9.3.2. Step 1: calculate the mean for each participant
- 9.3.3. Step 2: calculate the grand mean
- 9.3.4. Step 3: calculate the adjustment factor
- 9.3.5. Step 4: create adjusted values for each variable
- 9.4. The t-test
- 9.4.1. Rationale for the t-test
- 9.4.2. The t-test as a general linear model
- 9.4.3. Assumptions of the t-test
- 9.5. The independent t-test
- 9.5.1. The independent t-test equation explained
- 9.5.2. Doing the independent t-test
- 9.6. The dependent t-test
- 9.6.1. Sampling distributions and the standard error
- 9.6.2. The dependent t-test equation explained
- 9.6.3. Dependent t-tests using R
- 9.7. Between groups or repeated measures?
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 10 Comparing several means: ANOVA (GLM 1)
- 10.1. What will this chapter tell me?
- 10.2. The theory behind ANOVA
- 10.2.1. Inflated error rates
- 10.2.2. Interpreting F
- 10.2.3. ANOVA as regression
- 10.2.4. Logic of the F-ratio
- 10.2.5. Total sum of squares (SST)
- 10.2.6. Model sum of squares (SSM)
- 10.2.7. Residual sum of squares (SSR)
- 10.2.8. Mean squares
- 10.2.9. The F-ratio
- 10.3. Assumptions of ANOVA
- 10.3.1. Homogeneity of variance
- 10.3.2. Is ANOVA robust?
- 10.4. Planned contrasts
- 10.4.1. Choosing which contrasts to do
- 10.4.2. Defining contrasts using weights
- 10.4.3. Non-orthogonal comparisons
- 10.4.4. Standard contrasts
- 10.4.5. Polynomial contrasts: trend analysis
- 10.5. Post hoc procedures
- 10.5.1. Post hoc procedures and Type I (α) and Type II error rates
- 10.5.2. Post hoc procedures and violations of test assumptions
- 10.5.3. Summary of post hoc procedures
- 10.6. One-way ANOVA using R
- 10.6.1. Packages for one-way ANOVA in R
- 10.6.2. General procedure for one-way ANOVA
- 10.6.3. Entering data
- 10.6.4. One-way ANOVA using R Commander
- 10.6.5. Exploring the data
- 10.6.6. The main analysis
- 10.6.7. Planned contrasts using R
- 10.6.8. Post hoc tests using R
- 10.7. Calculating the effect size
- 10.8. Reporting results from one-way independent ANOVA
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 11 Analysis of covariance, ANCOVA (GLM 2)
- 11.1. What will this chapter tell me?
- 11.2. What is ANCOVA?
- 11.3. Assumptions and issues in ANCOVA
- 11.3.1. Independence of the covariate and treatment effect
- 11.3.2. Homogeneity of regression slopes
- 11.4. ANCOVA using R
- 11.4.1. Packages for ANCOVA in R
- 11.4.2. General procedure for ANCOVA
- 11.4.3. Entering data
- 11.4.4. ANCOVA using R Commander
- 11.4.5. Exploring the data
- 11.4.6. Are the predictor variable and covariate independent?
- 11.4.7. Fitting an ANCOVA model
- 11.4.8. Interpreting the main ANCOVA model
- 11.4.9. Planned contrasts in ANCOVA
- 11.4.10. Interpreting the covariate
- 11.4.11. Post hoc tests in ANCOVA
- 11.4.12. Plots in ANCOVA
- 11.4.13. Some final remarks
- 11.4.14. Testing for homogeneity of regression slopes
- 11.5. Robust ANCOVA
- 11.6. Calculating the effect size
- 11.7. Reporting results
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 12 Factorial ANOVA (GLM 3)
- 12.1. What will this chapter tell me?
- 12.2. Theory of factorial ANOVA (independent design)
- 12.2.1. Factorial designs
- 12.3. Factorial ANOVA as regression
- 12.3.1. An example with two independent variables
- 12.3.2. Extending the regression model
- 12.4. Two-way ANOVA: behind the scenes
- 12.4.1. Total sums of squares (SST)
- 12.4.2. The model sum of squares (SSM)
- 12.4.3. The residual sum of squares (SSR)
- 12.4.4. The F-ratios
- 12.5. Factorial ANOVA using R
- 12.5.1. Packages for factorial ANOVA in R
- 12.5.2. General procedure for factorial ANOVA
- 12.5.3. Factorial ANOVA using R Commander
- 12.5.4. Entering the data
- 12.5.5. Exploring the data
- 12.5.6. Choosing contrasts
- 12.5.7. Fitting a factorial ANOVA model
- 12.5.8. Interpreting factorial ANOVA
- 12.5.9. Interpreting contrasts
- 12.5.10. Simple effects analysis
- 12.5.11. Post hoc analysis
- 12.5.12. Overall conclusions
- 12.5.13. Plots in factorial ANOVA
- 12.6. Interpreting interaction graphs
- 12.7. Robust factorial ANOVA
- 12.8. Calculating effect sizes
- 12.9. Reporting the results of two-way ANOVA
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 13 Repeated-measures designs (GLM 4)
- 13.1. What will this chapter tell me?
- 13.2. Introduction to repeated-measures designs
- 13.2.1. The assumption of sphericity
- 13.2.2. How is sphericity measured?
- 13.2.3. Assessing the severity of departures from sphericity
- 13.2.4. What is the effect of violating the assumption of sphericity?
- 13.2.5. What do you do if you violate sphericity?
- 13.3. Theory of one-way repeated-measures ANOVA
- 13.3.1. The total sum of squares (SST)
- 13.3.2. The within-participant sum of squares (SSW)
- 13.3.3. The model sum of squares (SSM)
- 13.3.4. The residual sum of squares (SSR)
- 13.3.5. The mean squares
- 13.3.6. The F-ratio
- 13.3.7. The between-participant sum of squares
- 13.4. One-way repeated-measures designs using R
- 13.4.1. Packages for repeated-measures designs in R
- 13.4.2. General procedure for repeated-measures designs
- 13.4.3. Repeated-measures ANOVA using R Commander
- 13.4.4. Entering the data
- 13.4.5. Exploring the data
- 13.4.6. Choosing contrasts
- 13.4.7. Analysing repeated measures: two ways to skin a .dat
- 13.4.8. Robust one-way repeated-measures ANOVA
- 13.5. Effect sizes for repeated-measures designs
- 13.6. Reporting one-way repeated-measures designs
- 13.7. Factorial repeated-measures designs
- 13.7.1. Entering the data
- 13.7.2. Exploring the data
- 13.7.3. Setting contrasts
- 13.7.4. Factorial repeated-measures ANOVA
- 13.7.5. Factorial repeated-measures designs as a GLM
- 13.7.6. Robust factorial repeated-measures ANOVA
- 13.8. Effect sizes for factorial repeated-measures designs
- 13.9. Reporting the results from factorial repeated-measures designs
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 14 Mixed designs (GLM 5)
- 14.1. What will this chapter tell me?
- 14.2. Mixed designs
- 14.3. What do men and women look for in a partner?
- 14.4. Entering and exploring your data
- 14.4.1. Packages for mixed designs in R
- 14.4.2. General procedure for mixed designs
- 14.4.3. Entering the data
- 14.4.4. Exploring the data
- 14.5. Mixed ANOVA
- 14.6. Mixed designs as a GLM
- 14.6.1. Setting contrasts
- 14.6.2. Building the model
- 14.6.3. The main effect of gender
- 14.6.4. The main effect of looks
- 14.6.5. The main effect of personality
- 14.6.6. The interaction between gender and looks
- 14.6.7. The interaction between gender and personality
- 14.6.8. The interaction between looks and personality
- 14.6.9. The interaction between looks, personality and gender
- 14.6.10. Conclusions
- 14.7. Calculating effect sizes
- 14.8. Reporting the results of mixed ANOVA
- 14.9. Robust analysis for mixed designs
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 15 Non-parametric tests
- 15.1. What will this chapter tell me?
- 15.2. When to use non-parametric tests
- 15.3. Packages used in this chapter
- 15.4. Comparing two independent conditions: the Wilcoxon rank-sum test
- 15.4.1. Theory of the Wilcoxon rank-sum test
- 15.4.2. Inputting data and provisional analysis
- 15.4.3. Running the analysis using R Commander
- 15.4.4. Running the analysis using R
- 15.4.5. Output from the Wilcoxon rank-sum test
- 15.4.6. Calculating an effect size
- 15.4.7. Writing the results
- 15.5. Comparing two related conditions: the Wilcoxon signed-rank test
- 15.5.1. Theory of the Wilcoxon signed-rank test
- 15.5.2. Running the analysis with R Commander
- 15.5.3. Running the analysis using R
- 15.5.4. Wilcoxon signed-rank test output
- 15.5.5. Calculating an effect size
- 15.5.6. Writing the results
- 15.6. Differences between several independent groups: the Kruskal–Wallis test
- 15.6.1. Theory of the Kruskal–Wallis test
- 15.6.2. Inputting data and provisional analysis
- 15.6.3. Doing the Kruskal–Wallis test using R Commander
- 15.6.4. Doing the Kruskal–Wallis test using R
- 15.6.5. Output from the Kruskal–Wallis test
- 15.6.6. Post hoc tests for the Kruskal–Wallis test
- 15.6.7. Testing for trends: the Jonckheere–Terpstra test
- 15.6.8. Calculating an effect size
- 15.6.9. Writing and interpreting the results
- 15.7. Differences between several related groups: Friedman’s ANOVA
- 15.7.1. Theory of Friedman’s ANOVA
- 15.7.2. Inputting data and provisional analysis
- 15.7.3. Doing Friedman’s ANOVA in R Commander
- 15.7.4. Friedman’s ANOVA using R
- 15.7.5. Output from Friedman’s ANOVA
- 15.7.6. Post hoc tests for Friedman’s ANOVA
- 15.7.7. Calculating an effect size
- 15.7.8. Writing and interpreting the results
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 16 Multivariate analysis of variance (MANOVA)
- 16.1. What will this chapter tell me?
- 16.2. When to use MANOVA
- 16.3. Introduction: similarities to and differences from ANOVA
- 16.3.1. Words of warning
- 16.3.2. The example for this chapter
- 16.4. Theory of MANOVA
- 16.4.1. Introduction to matrices
- 16.4.2. Some important matrices and their functions
- 16.4.3. Calculating MANOVA by hand: a worked example
- 16.4.4. Principle of the MANOVA test statistic
- 16.5. Practical issues when conducting MANOVA
- 16.5.1. Assumptions and how to check them
- 16.5.2. Choosing a test statistic
- 16.5.3. Follow-up analysis
- 16.6. MANOVA using R
- 16.6.1. Packages for MANOVA in R
- 16.6.2. General procedure for MANOVA
- 16.6.3. MANOVA using R Commander
- 16.6.4. Entering the data
- 16.6.5. Exploring the data
- 16.6.6. Setting contrasts
- 16.6.7. The MANOVA model
- 16.6.8. Follow-up analysis: univariate test statistics
- 16.6.9. Contrasts
- 16.7. Robust MANOVA
- 16.8. Reporting results from MANOVA
- 16.9. Following up MANOVA with discriminant analysis
- 16.10. Reporting results from discriminant analysis
- 16.11. Some final remarks
- 16.11.1. The final interpretation
- 16.11.2. Univariate ANOVA or discriminant analysis?
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 17 Exploratory factor analysis
- 17.1. What will this chapter tell me?
- 17.2. When to use factor analysis
- 17.3. Factors
- 17.3.1. Graphical representation of factors
- 17.3.2. Mathematical representation of factors
- 17.3.3. Factor scores
- 17.3.4. Choosing a method
- 17.3.5. Communality
- 17.3.6. Factor analysis vs. principal components analysis
- 17.3.7. Theory behind principal components analysis
- 17.3.8. Factor extraction: eigenvalues and the scree plot
- 17.3.9. Improving interpretation: factor rotation
- 17.4. Research example
- 17.4.1. Sample size
- 17.4.2. Correlations between variables
- 17.4.3. The distribution of data
- 17.5. Running the analysis with R Commander
- 17.6. Running the analysis with R
- 17.6.1. Packages used in this chapter
- 17.6.2. Initial preparation and analysis
- 17.6.3. Factor extraction using R
- 17.6.4. Rotation
- 17.6.5. Factor scores
- 17.6.6. Summary
- 17.7. How to report factor analysis
- 17.8. Reliability analysis
- 17.8.1. Measures of reliability
- 17.8.2. Interpreting Cronbach’s α (some cautionary tales …)
- 17.8.3. Reliability analysis with R Commander
- 17.8.4. Reliability analysis using R
- 17.8.5. Interpreting the output
- 17.9. Reporting reliability analysis
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 18 Categorical data
- 18.1. What will this chapter tell me?
- 18.2. Packages used in this chapter
- 18.3. Analysing categorical data
- 18.4. Theory of analysing categorical data
- 18.4.1. Pearson’s chi-square test
- 18.4.2. Fisher’s exact test
- 18.4.3. The likelihood ratio
- 18.4.4. Yates’s correction
- 18.5. Assumptions of the chi-square test
- 18.6. Doing the chi-square test using R
- 18.6.1. Entering data: raw scores
- 18.6.2. Entering data: the contingency table
- 18.6.3. Running the analysis with R Commander
- 18.6.4. Running the analysis using R
- 18.6.5. Output from the CrossTable() function
- 18.6.6. Breaking down a significant chi-square test with standardized residuals
- 18.6.7. Calculating an effect size
- 18.6.8. Reporting the results of chi-square
- 18.7. Several categorical variables: loglinear analysis
- 18.7.1. Chi-square as regression
- 18.7.2. Loglinear analysis
- 18.8. Assumptions in loglinear analysis
- 18.9. Loglinear analysis using R
- 18.9.1. Initial considerations
- 18.9.2. Loglinear analysis as a chi-square test
- 18.9.3. Output from loglinear analysis as a chi-square test
- 18.9.4. Loglinear analysis
- 18.10. Following up loglinear analysis
- 18.11. Effect sizes in loglinear analysis
- 18.12. Reporting the results of loglinear analysis
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- 19 Multilevel linear models
- 19.1. What will this chapter tell me?
- 19.2. Hierarchical data
- 19.2.1. The intraclass correlation
- 19.2.2. Benefits of multilevel models
- 19.3. Theory of multilevel linear models
- 19.3.1. An example
- 19.3.2. Fixed and random coefficients
- 19.4. The multilevel model
- 19.4.1. Assessing the fit and comparing multilevel models
- 19.4.2. Types of covariance structures
- 19.5. Some practical issues
- 19.5.1. Assumptions
- 19.5.2. Sample size and power
- 19.5.3. Centring variables
- 19.6. Multilevel modelling in R
- 19.6.1. Packages for multilevel modelling in R
- 19.6.2. Entering the data
- 19.6.3. Picturing the data
- 19.6.4. Ignoring the data structure: ANOVA
- 19.6.5. Ignoring the data structure: ANCOVA
- 19.6.6. Assessing the need for a multilevel model
- 19.6.7. Adding in fixed effects
- 19.6.8. Introducing random slopes
- 19.6.9. Adding an interaction term to the model
- 19.7. Growth models
- 19.7.1. Growth curves (polynomials)
- 19.7.2. An example: the honeymoon period
- 19.7.3. Restructuring the data
- 19.7.4. Setting up the basic model
- 19.7.5. Adding in time as a fixed effect
- 19.7.6. Introducing random slopes
- 19.7.7. Modelling the covariance structure
- 19.7.8. Comparing models
- 19.7.9. Adding higher-order polynomials
- 19.7.10. Further analysis
- 19.8. How to report a multilevel model
- What have I discovered about statistics?
- R packages used in this chapter
- R functions used in this chapter
- Key terms that I’ve discovered
- Smart Alex’s tasks
- Further reading
- Interesting real research
- Epilogue: life after discovering statistics
- Troubleshooting R
- Glossary
- Appendix
- A.1. Table of the standard normal distribution
- A.2. Critical values of the t-distribution
- A.3. Critical values of the F-distribution
- A.4. Critical values of the chi-square distribution
- References
- Index
- Functions in R
- Packages in R