Introductory Statistics for the Behavioral Sciences
Description:
A comprehensive and user-friendly introduction to statistics for behavioral science students, revised and updated. Refined over seven editions by master teachers, this book gives instructors and students alike clear examples and carefully crafted exercises to support the teaching and learning of statistics for both manipulating and consuming data. One of the most popular and respected statistics texts in the behavioral sciences, Introductory Statistics for the Behavioral Sciences has been fully revised for its Seventh Edition.
The new edition presents all the topics students in the behavioral sciences need in a uniquely accessible and easy-to-understand format, aiding in the comprehension and implementation of the statistical analyses most commonly used in the behavioral sciences. The Seventh Edition features:
- A continuous narrative that clearly explains statistics while tracking a common data set throughout, making the concepts unintimidating and memorable, and providing a framework that connects all of the topics and allows for easy comparison of different statistical analyses
- Coverage of important aspects of research design throughout the text, such as the "correlation is not causality" principle
- Updated and annotated SPSS output at the end of each chapter, with step-by-step instructions
- Updated examples and exercises
- An expanded website, at www.
Other Information
- Authors: Joan Welkowitz, Barry H. Cohen, R. Brooke Lea
- Edition: 7
- Publication date: 2011-12-12
- Printing allowed: 10 pages
- Copying allowed: 2 pages
- Format: ePub
- ISBN 13: 9781118367094
- Print ISBN: 9780470907764
- ISBN 10: 111836709X
Table of Contents
- Front Matter
- Dedication
- Preface
- 1. Numbering of Key Formulas
- 2. Consolidation of the Early Chapters
- 3. Moving Ordinal Tests From the Last Chapter to the Middle of the Text
- 4. Creation of Separate Correlation and Regression Chapters
- 5. Updating the Computer Exercises and SPSS Sections
- 6. Advanced Chapter (18) on the Web
- 7. New Ancillaries
- Acknowledgments
- Postscript
- Glossary of Symbols
- Part I Descriptive Statistics
- Chapter 1 Introduction
- Why Study Statistics?
- Descriptive and Inferential Statistics
- Populations, Samples, Parameters, and Statistics
- Measurement Scales
- Nominal Scales
- Ordinal Scales
- Interval/Ratio Scales
- Independent and Dependent Variables
- Summation Notation
- Summation Rules
- Ihno's Study
- Summary
- Exercises
- Table 1.1 Hypothetical Scores on a 20-Point Friendliness Measure for Students From Four Different Dormitories at One Midwestern College
- Thought Questions
- Computer Exercises
- Bridge to SPSS
- Chapter 2 Frequency Distributions and Graphs
- The Purpose of Descriptive Statistics
- Regular Frequency Distributions
- Table 2.1 Number of Friend Requests Within 24 Hours of Opening a Facebook Account by 80 Eighth Graders
- Table 2.2 Regular and Cumulative Frequency Distributions for Data in Table 2.1
- Cumulative Frequency Distributions
- Grouped Frequency Distributions
- Table 2.3 Scores of 85 Students on a 50-Point Math Background Quiz
- Table 2.4 Grouped and Cumulative Frequency Distributions for Data in Table 2.3
- Real and Apparent Limits
- Interpreting a Raw Score
- Definition of Percentile Rank and Percentile
- Computational Procedures
- Case 1: Given a Raw Score, Compute the Corresponding Percentile Rank
- Case 2: Given a Percentile, Compute the Corresponding Raw Score
- Case 3: Finding Percentile Ranks and Percentiles for Grouped Frequency Distributions
- Deciles, Quartiles, and the Median
- Graphic Representations
- Bar Charts
- Histograms
- Figure 2.1 Bar Chart Expressing Number of Prior Math Courses Taken by 25 Pre-Med Students
- Frequency Polygons
- Figure 2.2 Histogram Representing the Grouped Data in Table 2.4
- Regular Frequency Polygons
- Figure 2.3 Regular Frequency Polygon for Data in Table 2.4
- Stem-and-Leaf Displays
- Table 2.5 Stem-and-Leaf Display for Data in Table 2.3
- Shapes of Frequency Distributions
- Symmetry Versus Skewness
- Figure 2.4 Shapes of Frequency Distributions. (A) Normal curve (symmetric, unimodal). (B) Symmetric, unimodal. (C) Symmetric, bimodal. (D) Asymmetric, bimodal. (E) Unimodal, skewed to the left. (F) Unimodal, skewed to the right. (G) Rectangular. (H) J-curve.
- Modality
- Special Distributions
- Summary
- Regular Frequency Distributions
- Grouped Frequency Distributions
- Cumulative Frequency Distributions
- Finding Percentiles and Percentile Ranks
- Case 1: Given a Score, Find the Corresponding Percentile Rank (PR)
- Case 2: Given a Percentile, Find the Corresponding Raw Score
- Case 3: Finding Percentile Ranks and Percentiles for Grouped Frequency Distributions
- Graphic Representations
- Exercises
- Thought Questions
- Computer Exercises
- Bridge to SPSS
- Introduction
- Table 3.1 Grade Distribution for 20 Courses Taken by One Hypothetical Student
- The Mode
- The Median
- Properties
- Usage
- The Mean
- Computation
- The Weighted Mean
- Properties of the Mean
- Usage
- The Concept of Variability
- Testing
- Table 3.2 Six Distributions
- Sports
- Figure 3.1 Frequency Polygons of Two Distributions With the Same Mean but Different Variability
- Psychology
- Statistical Inference
- Testing
- The Range
- The Semi-Interquartile Range
- The Standard Deviation and Variance
- Computing Formulas
- Table 3.3 Computation of σ² and σ, and of s² and s, for Two Small Samples Using the Definition and Computing Formulas
- Properties of the Standard Deviation
- Computing Formulas
- Summary
- Central Tendency
- 1. The Mode
- 2. The Median
- 3. The Population Mean
- Variability
- 1. The Range
- 2. The Interquartile and Semi-Interquartile (SIQ) Ranges
- 3. The Mean Deviation
- 4. The Biased Variance and Standard Deviation
- 5. The (Unbiased) Population Variance Estimate and the Unbiased Standard Deviation
- Central Tendency
- Figure 3.2 The Frequencies: Statistics Dialog Box
- Figure 3.3 The Explore Dialog Box
- Interpreting a Raw Score Revisited
- Rules for Changing μ and σ
- Standard Scores (z Scores)
- Table 4.1 Distribution of 20 Baseline Heart Rate Scores Expressed in Beats per Minute (bpm), z Scores, and Beats per Second (bps)
- Figure 4.1 Frequency Distribution of Raw HR Scores and Their z Scores From Table 4.1
- T Scores, SAT Scores, and IQ Scores
- T Scores
- SAT Scores
- IQ Scores
- The Normal Distribution
- Figure 4.2 The Normal Curve: A Theoretical Distribution
- Parameters of the Normal Distribution
- Table of the Standard Normal Distribution
- Characteristics of the Normal Curve
- Figure 4.3 Normal Curve: Percent Areas From the Mean to Specified z Distances
- Figure 4.4 Percent of Population With Math SAT Scores Between 500 and 670
- Figure 4.5 Percent of Population With Math SAT Scores Between 450 and 500
- Figure 4.6 Percent of Population With Math SAT Scores Between 360 and 540
- Figure 4.7 Percent of Population With Math SAT Scores Between 630 and 700
- Figure 4.8 Percent of Population With Math SAT Scores Above 720
- Figure 4.9 Math SAT Score That Demarcates the Top 10% of the Population
- Figure 4.10 Math SAT Scores Enclosing the Middle 60% of the Population
- General Transformations
- z Scores (Standard Scores)
- T Scores
- SAT Scores
- IQ Scores
- Normal Distributions
- Type 1 Problem: Given a Raw Score, Find the Corresponding Proportion or Percentage of the Normal Curve
- Type 2 Problem: Given a Proportion or Percentage of a Normal Curve, Find the Corresponding Raw Score
- Chapter 5 Introduction to Statistical Inference
- Introduction
- The Goals of Inferential Statistics
- Sampling Distributions
- Table 5.1 Hypothetical Frequency Distribution of Mean Height for 1,000 Samples (Sample N = 100)
- Figure 5.1 Hypothetical Distribution of Mean Height for 1,000 Samples (Sample N = 100)
- The Standard Error of the Mean
- Figure 5.2 Distribution of Observations of Heights (in Inches) for a Population of 5,000,000 Scandinavian Women
- Figure 5.3 Distribution of Mean Heights (in Inches) for 1,000,000 Randomly Selected Samples of N = 100 Height Observations in Each Sample
- The z Score for Sample Means
- Null Hypothesis Testing
- The Null and Alternative Hypotheses
- The Criterion of Significance
- Critical Values
- Figure 5.4 Areas for Rejecting H0 in Both Tails of the Normal Sampling Distribution Using the .05 Criterion of Significance When H0 Actually Is True
- Figure 5.5 Areas for Rejecting H0 in Both Tails of the Normal Sampling Distribution Using the .01 Criterion of Significance When H0 Is True
- Consequences of Possible Decisions
- Table 5.2 Model for Error Risks in Hypothesis Testing
- One- Versus Two-Tailed Tests of Significance
- Figure 5.6 One-Tailed Test of the Null Hypothesis That μ ≥ 100 Against the Alternative That μ < 100
- Assumptions Required by the Statistical Test for the Mean of a Single Population
- Summary
- The Standard Error of the Mean
- The z Score for Sample Means
- Null Hypothesis Testing: General Considerations
- One- Versus Two-Tailed Tests of Significance
- Exercises
- Thought Questions
- Computer Exercises
- Bridge to SPSS
- Appendix: The Null Hypothesis Testing Controversy
- Introduction
- The Statistical Test for the Mean of a Single Population When σ Is Not Known: The t Distributions
- Degrees of Freedom
- Using the t Distributions to Test Null Hypotheses
- The t and z Distributions Compared
- Figure 6.1 A Comparison Between the Normal Curve and the t Distribution for df = 9
- Computation
- Confidence Intervals and Null Hypothesis Tests
- Figure 6.2 Sampling Distribution of P When π = .50 and N = 400
- Confidence Intervals for π
- Testing the Mean of a Single Population With the t Distributions
- Confidence Intervals
- Test for a Single Proportion
- The Standard Error of the Difference
- Figure 7.1 Illustration of Procedure for Obtaining the Empirical Sampling Distribution of Differences Between Two Means (N = 30)
- Figure 7.2 Frequency Polygon of the Data in Table 7.1
- Table 7.1 Empirical Sampling Distribution of 1,000 Differences Between Pairs of Sample Means (N = 30) Drawn From Two Populations Where μ1 = μ2 = 80 (Hypothetical Data)
- Estimating the Standard Error of the Difference
- The t Test for Two Sample Means
- Step 1: State the Null and Alternative Hypotheses, and the Significance Criterion, α
- Step 2: Calculate the Appropriate Test Statistic
- Step 3: Find the Critical Value and Make a Statistical Decision With Respect to Your Null Hypothesis
- Reporting the Results of a t Test
- Figure 7.3 Acceptance and Rejection Regions for the Caffeine/Control Comparison in the t Distribution for df = 40
- Interpreting the Results of a t Test...
- When Retaining H0
- When Rejecting H0
- Table 7.2 Heart Rate in bpm Before and After the Announcement of a Pop Statistics Quiz
- Step 1: State the Null and Alternative Hypotheses, and the Significance Criterion, α
- Step 2: Calculate the Appropriate Test Statistic
- Step 3: Find the Critical Value and Make a Statistical Decision With Respect to Your Null Hypothesis
- Confidence Interval for the Population Mean of Difference Scores
- Comparing the t Test for Matched Pairs With the t Test for Independent Samples
- The Different Types of Matched-Pairs Designs
- The Repeated-Measures (RM) Design
- The Matched-Pairs (MP) Design
- For Independent Samples
- For Matched Samples
- Independent-Samples t Test
- Matched-Pairs t Test
- Introduction
- Parametric Versus Nonparametric Tests
- Ordinal Tests
- The Power Efficiency of Statistical Tests
- The Basics of Dealing With Data in the Form of Ranks
- An Example of Ranking Data With Ties
- The Difference Between the Locations of Two Independent Samples: The Rank-Sum Test
- Step 1: State the Null and Alternative Hypotheses, and the Significance Criterion, α
- Step 2: Calculate the Appropriate Test Statistic
- Table 8.1 The Rank-Sum Test for Two Independent Samples
- Step 3: Find the Critical Value and Make a Statistical Decision With Respect to Your Null Hypothesis
- Interpreting the Results of a Rank-Sum Test
- Measure of Strength of Relationship: The Glass Rank Biserial Correlation
- The Difference Between the Locations of Two Matched Samples: The Wilcoxon Test
- Rationale and Computational Procedures
- Table 8.2 The Wilcoxon Test for Two Matched Samples
- Step 2: Calculate the Appropriate Test Statistic
- Step 3: Find the Critical Value and Make a Statistical Decision With Respect to Your Null Hypothesis
- Interpreting the Results of a Wilcoxon Matched-Pairs Signed-Ranks Test
- The Sign Test
- Measure of Strength of Relationship: The Matched-Pairs Rank Biserial Correlation
- Rationale and Computational Procedures
- Summary
- Basic Considerations
- The Difference Between the Locations of Two Independent Samples: The Rank-Sum Test
- The Difference Between the Locations of Two Matched Samples: The Wilcoxon Test
- Exercises
- Thought Questions
- Computer Exercises
- Bridge to SPSS
- The Rank-Sum Test
- The Wilcoxon Test
- Introduction
- Figure 9.1 Possible Relationship Between Income of a Family and Their Child's IQ
- Figure 9.2 Possible Relationship Between Years of Play and Average Golf Score
- Figure 9.3 Possible Relationship Between the Length of Big Toe and Male IQ Scores
- Describing the Linear Relationship Between Two Variables
- The z Score Product Formula for r
- Table 9.1 Raw Scores and z Scores on SAT (X) and GPA (Y) for 25 Students in an Upper Midwestern U.S. College
- Figure 9.4 Perfect Linear Relationship Between Two Variables: (A) Perfect Positive Linear Relationship; (B) Perfect Negative Linear Relationship
- Computing Formulas for r
- Table 9.2 Calculation of Pearson Correlation Coefficient Between SAT (X) and GPA (Y) by the Raw Score and z Product Methods
- Figure 9.5 Scatter Plot for Data in Table 9.1 (r = + .65)
- The Before–After Heart Rate Example
- The z Score Product Formula for r
- Interpreting the Magnitude of a Pearson r
- Correlation and Causation
- Correlation and Restriction of Range
- Figure 9.6 Effect of Restriction of Range on the Correlation Coefficient
- Figure 9.7 Example of Perfect Relationship Between Two Variables Where r = 0
- Correlation and Nonlinear Relationships
- Correlation and Bivariate Outliers
- Figure 9.8 Scatter Plot for Data in Table 9.1 (r = + .324) With One Data Point Moved to Create a Bivariate Outlier
- Reliability
- Validity
- Implications of Rejecting H0
- Assumptions Underlying the Use of r
- Rationale and Computational Procedures
- Table 9.3 Ten Entrepreneurs Both Measured and Ranked for IQ and for Annual Income (1 = Least, 10 = Most)
- Dealing With Tied Ranks
- Testing the Significance of the Rank-Order Correlation Coefficient
- When to Use the Spearman Correlation Coefficient
- The Pearson Correlation Coefficient
- Points to Remember
- The Spearman Rank-Order Correlation Coefficient
- Linear Correlation
- The Spearman Rank-Order Correlation Coefficient
- Using the Syntax Window
- Introduction
- Using Linear Regression to Make Predictions
- Computational Procedures
- Figure 10.1 Use of Regression Line to Obtain Predicted Scores on Y
- Figure 10.2 Regression Line of Y on X Showing Extent of Error (Difference Between Actual Weight Score and Predicted Weight Score)
- Properties of Linear Regression
- A Technical Note: Formulas for Predicting X From Y
- Computational Procedures
- Measuring Prediction Error: The Standard Error of Estimate
- Figure 10.3 Relationship Between the Standard Error of Estimate (σest), the Standard Deviation of Y (σY), and the Accuracy of Prediction
- The Proportion of Variance Accounted for by a Correlation
- Estimated Standard Error From a Sample
- The Connection Between Correlation and the t Test
- Figure 10.4 A Scatter Plot in Which One of the Variables Has Only Two Values
- Table 10.1 Height and Weight Data for Figure 10.1
- The Point-Biserial Correlation Coefficient
- The Relationship Between rpb and the t Test
- The Relationship Between rpb and g (Effect Size)
- The Proportion of Variance Accounted for by a Grouping Variable
- Estimating the Proportion of Variance Accounted for in the Population
- The Relationship Between ω² and d²
- Publishing Effect-Size Estimates
- Summary
- Linear Regression
- Points to Remember
- The Connection Between Correlation and the t Test
- Testing the Point-Biserial Correlation Coefficient for Significance
- Converting Significant Values of t to rpb
- Comparing rpb to Another Measure of Effect Size, g
- Estimating the Proportion of Variance Accounted for in the Population
- Linear Regression
- Exercises
- Thought Questions
- Computer Exercises
- Bridge to SPSS
- Linear Regression
- Point-Biserial Correlation
- Introduction
- Concepts of Power Analysis
- The Significance Test of the Mean of a Single Population
- Power Determination
- Figure 11.1 Sampling Distribution of Means (N = 164) Assuming μ0 = 500 and μ1 = 525
- Sample Size Determination
- Power Determination
- The Significance Test of the Proportion of a Single Population
- Power Determination
- Sample Size Determination
- The Significance Test of a Pearson r
- Power Determination
- Sample Size Determination
- Testing the Difference Between Independent Means
- Figure 11.2 Overlap of Populations as a Function of Effect Size
- Power Determination
- Sample Size Determination
- Testing the Difference Between the Means of Two Matched Populations
- Power Determination
- Sample Size Determination
- Choosing a Value for d for a Power Analysis Involving Independent Means
- Estimating d From Previous Research
- Trying Extreme Values for d
- Using Power Analysis Concepts to Interpret the Results of Null Hypothesis Tests
- Lack of Statistical Significance Does Not Imply That the Null Hypothesis Is Likely to Be True
- Statistical Significance Is More Impressive When the Samples Are Smaller
- Large Samples Tend to Produce More Accurate Results But Can Lead to Misleading Conclusions
- The Null Hypothesis Testing Controversy Revisited
- Summary
- The Importance of Power
- The Four Major Parameters of Power Analysis
- The General Procedures for the Two Most Common Forms of Power Analysis
- The Specific Formulas for d, δ, and N
- Exercises
- Thought Questions
- Computer Exercises
- Bridge to SPSS
- Chapter 12 One-Way Analysis of Variance
- Introduction
- The General Logic of ANOVA
- Table 12.1 Two Versions of the Music Experiment (Hypothetical Data)
- Computational Procedures
- Sums of Squares
- Mean Squares
- Testing the F Ratio for Statistical Significance
- Figure 12.1 F Distributions for Selected Pairs of Degrees of Freedom
- Table 12.2 Summary of the One-Way ANOVA of the Second Music Experiment (Table 12.1B)
- The ANOVA Summary Table
- Calculating the One-Way ANOVA From Means and Standard Deviations
- Table 12.3 Data From the Caffeine Experiment of Chapter 7
- Comparing the One-Way ANOVA With the t Test
- A Simplified ANOVA Formula for Equal Sample Sizes
- Effect Size for the One-Way ANOVA
- Interpreting the Size of Eta Squared
- Unbiased Estimate of the Amount of Variance Accounted for in the Population (Omega Squared)
- Some Comments on the Use of ANOVA
- Underlying Assumptions
- Publishing Your Results
- Following Up on a Significant ANOVA
- ANOVA When the Independent Variable Has Quantitative Levels
- Computational Practice for the One-Way ANOVA: Three Unequal-Sized Groups
- Table 12.4 ANOVA of Attitudes of Students From Three Areas of Study at a Small College to Student Participation in Determining College Curricula
- A Nonparametric Alternative to the One-Way ANOVA: The Kruskal-Wallis H Test
- Table 12.5 The Kruskal-Wallis H Test for k = 3 Independent Samples
- A Measure of the Strength of the Relationship: Rl
- Summary
- Meaning of Symbols
- Definition and Computing Formulas
- Steps in One-Way Analysis of Variance
- The Kruskal-Wallis H Test
- Exercises
- Thought Questions
- Computer Exercises
- Bridge to SPSS
- The Kruskal-Wallis H Test
- Appendix: Proof That the Total Sum of Squares Is Equal to the Sum of the Between-Group and the Within-Group Sum of Squares
- Introduction
- Table 13.1 ANOVA of Attitudes of Students From Three Areas of Study at a Small College to Student Participation in Determining College Curricula
- Fisher's Protected t Tests and the Least Significant Difference (LSD)
- Table 13.2 Protected t Tests Among the Three Mean Attitude Scores Following a Significant ANOVA F
- Confidence Intervals for the Protected t Test
- Tukey's Honestly Significant Difference (HSD)
- The Studentized Range Statistic
- Using Tukey's HSD Formula
- Table 13.3 Selected Critical Values of Tukey's Studentized Range Statistic
- Confidence Intervals for Tukey's HSD Test
- Tukey's HSD for Unequal Sample Sizes: The Harmonic Mean
- Comparing HSD to LSD
- Follow-Up Tests for Published Results
- Other Multiple Comparison Procedures
- Table 13.4 Means for the Music Experiment Data in Table 12.1B
- The Fisher-Hayter (Modified LSD) Test
- Table 13.5 Difference Between Each Pair of Means in Table 13.4
- Which Multiple Comparison Test Should I Use?
- Planned and Complex Comparisons
- The Bonferroni Correction
- Complex Comparisons
- Trend Components
- Nonparametric Multiple Comparisons: The Protected Rank-Sum Test
- Summary
- Fisher's Protected t Tests
- Tukey's HSD Test
- The Fisher-Hayter (or Modified LSD) Test
- The Bonferroni Correction
- Exercises
- Table 13.6 Means for the Music Experiment Data in Table 12.1B
- Thought Questions
- Computer Exercises
- Bridge to SPSS
- Introduction
- Figure 14.1 Partitioning of Variation in a Two-Way Factorial Design
- Computational Procedures
- Table 14.1 Scores on a 20-Item Spatial Ability Test as a Function of Classical Composer and Musical Background (4 × 2 Factorial Design)
- Sums of Squares
- Mean Squares
- Figure 14.2 Partitioning of Degrees of Freedom in the Music Experiment
- F Ratios and Tests of Significance
- ANOVA Summary Table
- Multiple Comparisons Following a Factorial ANOVA
- Table 14.2 Summary of Two-Way ANOVA of Music Experiment
- Familywise Alpha
- Table 14.3 Cell Means Illustrating Some Interaction and Zero Interaction
- Comparing Zero to Some Interaction
- Figure 14.3 Graphic Representation of the Data in Table 14.3
- Different Types of Interactions
- Figure 14.4 Two Kinds of Interaction Patterns for Caffeine and Sex on Test Scores (2 × 2 Factorial Design)
- Significant Interactions in a 2 × 2 ANOVA
- Significant Interactions Involving Multilevel Factors
- Table 14.4 Cell Means for the Music Experiment, With One Cell Modified to Produce a Significant Interaction
- Sums of Squares for a Balanced (i.e., Equal-n) Design
- Degrees of Freedom
- Mean Squares
- F Ratios and Tests of Significance
- Types of Interactions
- Follow-Up Tests (Multiple Comparisons) for a Two-Way ANOVA
- Measuring Effect Size in a Factorial ANOVA
- Table 14.5 Tests of Between-Subjects Effects
- Introduction
- Calculating the One-Way RM ANOVA
- Table 15.1 Music Experiment Data From Table 12.1A Presented as Repeated Measures
- The Critical F for the RM ANOVA
- Comparing the Matched-Pairs t Test With the One-Way RM ANOVA for Two Conditions
- Figure 15.1 Graph of Data in Table 15.1
- Table 15.2 Data From Table 15.1 for Schubert and Beethoven Only
- The Sphericity Assumption
- Follow-Up Tests
- Power and Effect Size
- Problems With the RM Design
- Advantages and Disadvantages of Alternatives to the RM Design
- Adding a Grouping Factor to a One-Way Independent-Samples ANOVA
- Figure 15.2 Reduction in Error Variance as a Result of Adding a Relevant Grouping Factor to an ANOVA
- The Treatment-by-Block Design
- The Randomized-Blocks (RB) Design
- Adding a Grouping Factor to a One-Way Independent-Samples ANOVA
- Calculating the Mixed-Design ANOVA
- Table 15.3 Music Experiment Data From Table 12.1A Presented as Repeated Measures
- Table 15.4 Cell Means by Composer and Subject Group for the Data in Table 15.3
- Assumptions
- Table 15.5 Summary of Two-Way Mixed-Design ANOVA of Music Experiment
- Follow-Up Tests
- The Before–After Case
- The One-Way RM ANOVA With Counterbalancing
- One-Way RM ANOVA
- 1. Sums of Squares
- 2. Degrees of Freedom
- 3. Mean Squares
- 4. F Ratio
- 5. The Sphericity Assumption
- 6. Alternative Experimental Designs
- Table 15.6 Recommended Experimental Designs That Reduce the Error Term of a One-Way ANOVA in Various Experimental Situations
- 1. Sums of Squares
- 2. Degrees of Freedom
- 3. Mean Squares
- 4. F Ratios
- 5. Additional Assumption
- One-Way RM ANOVA Output
- Mixed-Design ANOVA
- Chapter 16 Probability of Discrete Events and the Binomial Distribution
- Introduction
- Dichotomous Data
- Probability
- Definition of Probability
- Odds
- The Probability of A or B
- The Probability of A and Then B
- Conditional Probability
- The Binomial Distribution
- Constructing the Binomial Distribution for N = 2
- Constructing the Binomial Distribution for N = 6 (P = .5)
- Testing Null Hypotheses With the Binomial Distribution
- Figure 16.1 Binomial Distribution for N = 12, P = .5
- Using the Normal Distribution as an Approximation
- The Formula for Proportions
- Applications of the Binomial Test
- Simplified Formula for P = .5
- The Sign Test for Matched Samples
- Summary
- Probability
- Using the Normal Distribution as an Approximation to the Binomial Distribution
- The Sign Test for Matched Samples
- Exercises
- Thought Questions
- Computer Exercises
- Bridge to SPSS
- The Binomial Test
- The Sign Test
- Introduction
- Chi Square and the Goodness of Fit: One-Variable Problems
- Table 17.1 Chi-Square Test for Literary Title Choices
- Figure 17.1 Chi-Square Distribution for df = 1 and df = 6
- Expected Frequencies From a Preexisting Population
- The Relationship Between the Binomial Test and the Chi-Square Test With Two Categories
- Table 17.2 Chi-Square Test for Attitude Toward Cumulative Final Examinations
- Some Precautions Involving the Use of χ2
- Chi Square as a Test of Independence: Two-Variable Problems
- Table 17.3 Hypothetical Illustration of Perfectly Independent Relationship Between Major and Career Preference, Using Frequency Data
- Table 17.4 Table of Frequencies Relating Undergraduate Major to Career Preference (Hypothetical Data)*
- Figure 17.2 Illustration of Degrees of Freedom for a 4 × 3 Table
- Computing Formula for 2 × 2 Tables
- Two-Way Chi-Square Example With Multiple Levels for Both Variables
- Table 17.5 Table of Frequencies Relating Treatment Response to Psychiatric Diagnosis
- The Phi Coefficient (φ) for 2 × 2 Tables
- Table 17.6 Table of Frequencies Relating Undergrad Major to Career Preference
- Cramér's Phi Coefficient (φC)
- One-Variable Problems
- Two-Variable Problems: Test of Association
- Measures of Strength of Association in Two-Variable Tables
- Some Precautions on the Use of χ2
- One-Way Chi-Square Tests
- Two-Way Chi-Square Tests
- Appendix
- Statistical Tables
- Table A Percent area under the normal curve between the mean and z
- Table B Critical values of t
- Table C Critical values of the Pearson r
- Table D Power as a function of δ and significance criterion (α)
- Table E δ as a function of significance criterion (α) and power
- Table F Critical values of F (α = .05 in standard type, α = .01 in boldface)
- Table G Critical values of the studentized range statistic (q) for α = .05
- Table H Critical values of chi square
- Table I Critical values of rs (Spearman rank-order correlation coefficient)
- Answers to Odd-Numbered Exercises
- Chapter 1
- Chapter 2
- Chapter 3
- Chapter 4
- Chapter 5
- Chapter 6
- Chapter 7
- Chapter 8
- Chapter 9
- Chapter 10
- Chapter 11
- Chapter 12
- Chapter 13
- Chapter 14
- Chapter 15
- Chapter 16
- Chapter 17
- Data from Ihno's Experiment
- Statistical Tables
- Glossary of Terms
- References
- Index
ABOUT E-BOOKS ON HEIMKAUP.IS
Your bookshelf is your personal space where your books are stored. You can access your bookshelf anywhere, anytime, on a computer or smart device. Simple and convenient!
Owned e-book
An owned e-book must be downloaded to the devices you want to use within one year of purchase.
Access your books anywhere
You can access all your e-(text)books in an instant, anywhere and anytime, in your bookshelf. No bag, no e-reader, and no hassle (let alone excess baggage).
Easy to browse and search
You can move between pages and chapters however suits you best, and jump straight to specific chapters from the table of contents. The search finds words, chapters, or pages in a single click.
Notes and highlights
You can highlight passages in different colors and write notes freely in the e-book. You can even see classmates' and teachers' notes and highlights, if they allow it. Everything in one place.
You control how the page looks
You can adjust the page to your needs. Enlarge or shrink images and text with multi-level zoom to view the page in the way that suits your studies best.
More benefits
- You can print pages from the book (within the limits set by the publisher)
- Possible links to other digital and interactive content, such as videos or questions about the material
- Easy to copy and paste content/text, e.g. for homework or essays
- Supports technology that helps students with visual or hearing impairments