Description:
Introduction to Econometrics provides students with clear and simple mathematical notation and step-by-step explanations of mathematical proofs, to give them a thorough understanding of the subject. Extensive exercises throughout build confidence by encouraging students to apply econometric techniques. Retaining its student-friendly approach, Introduction to Econometrics offers a comprehensive revision guide to all the essential statistical concepts needed to study econometrics, additional Monte Carlo simulations, new summaries, and non-technical introductions to more advanced topics at the end of chapters.
This book is supported by an Online Resource Centre, which includes:
For lecturers:
- Instructor's manual for the text and data sets, detailing the exercises and their solutions
- Customizable PowerPoint slides
For students:
- Data sets referred to in the book
- A comprehensive study guide giving students the opportunity to gain experience with econometrics through practice exercises
- Software manual
Other
- Author: Christopher Dougherty
- Edition: 5
- Publication date: 2016-04-21
- No restrictions on printing
- No restrictions on copying
- Format: Page Fidelity
- ISBN-13: 9780192655783
- Print ISBN: 9780199676828
- ISBN-10: 0192655787
Table of contents
- Preface
- Contents
- Introduction
- Why study econometrics?
- Aim of this text
- Mathematics and statistics prerequisites for studying econometrics
- Additional resources
- Econometrics software
- Review: Random Variables, Sampling, Estimation, and Inference
- R.1 The need for a solid understanding of statistical theory
- R.2 Discrete random variables and expectations
- Discrete random variables
- Expected values of discrete random variables
- Expected values of functions of discrete random variables
- Expected value rules
- Population variance of a discrete random variable
- Fixed and random components of a random variable
- R.3 Continuous random variables
- Probability density
- R.4 Population covariance, covariance and variance rules, and correlation
- Covariance
- Independence of random variables
- Covariance rules
- Variance rules
- Correlation
- R.5 Samples, the double structure of a sampled random variable, and estimators
- Sampling
- Estimators
- R.6 Unbiasedness and efficiency
- Unbiasedness
- Efficiency
- Conflicts between unbiasedness and minimum variance
- R.7 Estimators of variance, covariance, and correlation
- R.8 The normal distribution
- R.9 Hypothesis testing
- Formulation of a null hypothesis and development of its implications
- Compatibility, freakiness, and the significance level
- R.10 Type II error and the power of a test
- R.11 t tests
- The reject/fail-to-reject terminology
- R.12 Confidence intervals
- R.13 One-sided tests
- H0: μ = μ0, H1: μ = μ1
- Generalizing from H0: μ = μ0, H1: μ = μ1 to H0: μ = μ0, H1: μ > μ0
- H0: μ = μ0, H1: μ < μ0
- One-sided t tests
- Important special case: H0:μ=0
- Anomalous results
- Justification of the use of a one-sided test
- R.14 Probability limits and consistency
- Probability limits
- Why is consistency of interest?
- Simulations
- R.15 Convergence in distribution and central limit theorems
- Limiting distributions
- Key terms
- Appendix R.1 Unbiased estimators of the population covariance and variance
- Appendix R.2 Density functions of transformed random variables
- 1 Simple Regression Analysis
- 1.1 The simple linear model
- 1.2 Least squares regression with one explanatory variable
- 1.3 Derivation of the regression coefficients
- Least squares regression with one explanatory variable: the general case
- Two decompositions of the dependent variable
- Regression model without an intercept
- 1.4 Interpretation of a regression equation
- Changes in the units of measurement
- 1.5 Two important results relating to OLS regressions
- The mean value of the residuals is zero
- The sample correlation between the observations on X and the residuals is zero
- 1.6 Goodness of fit: R2
- Example of how R2 is calculated
- Alternative interpretation of R2
- Key terms
- 2 Properties of the Regression Coefficients and Hypothesis Testing
- 2.1 Types of data and regression model
- 2.2 Assumptions for regression models with nonstochastic regressors
- 2.3 The random components and unbiasedness of the OLS regression coefficients
- The random components of the OLS regression coefficients
- The unbiasedness of the OLS regression coefficients
- Normal distribution of the regression coefficients
- 2.4 A Monte Carlo experiment
- 2.5 Precision of the regression coefficients
- Variances of the regression coefficients
- Standard errors of the regression coefficients
- The Gauss–Markov theorem
- 2.6 Testing hypotheses relating to the regression coefficients
- 0.1 percent tests
- One-sided tests
- Confidence intervals
- 2.7 The F test of goodness of fit
- Relationship between the F test of goodness of fit and the t test on the slope coefficient in simple regression analysis
- Key terms
- Appendix 2.1 The Gauss–Markov theorem
- 3 Multiple Regression Analysis
- 3.1 Illustration: a model with two explanatory variables
- 3.2 Derivation of the multiple regression coefficients
- The general model
- Interpretation of the multiple regression coefficients
- 3.3 Properties of the multiple regression coefficients
- Unbiasedness
- Efficiency
- Precision of the multiple regression coefficients
- t tests and confidence intervals
- 3.4 Multicollinearity
- Multicollinearity in models with more than two explanatory variables
- Examples of multicollinearity
- What can you do about multicollinearity?
- 3.5 Goodness of fit: R2
- F tests
- Further analysis of variance
- Relationship between F statistic and t statistic
- 3.6 Prediction
- Properties of least squares predictors
- Key terms
- 4 Nonlinear Models and Transformations of Variables
- 4.1 Linearity and nonlinearity
- 4.2 Logarithmic transformations
- Logarithmic models
- Semilogarithmic models
- The disturbance term
- Comparing linear and logarithmic specifications
- 4.3 Models with quadratic and interactive variables
- Quadratic variables
- Higher-order polynomials
- Interactive explanatory variables
- Ramsey’s RESET test of functional misspecification
- 4.4 Nonlinear regression
- Key terms
- 5 Dummy Variables
- 5.1 Illustration of the use of a dummy variable
- Standard errors and hypothesis testing
- 5.2 Extension to more than two categories and to multiple sets of dummy variables
- Joint explanatory power of a group of dummy variables
- Change of reference category
- The dummy variable trap
- Multiple sets of dummy variables
- 5.3 Slope dummy variables
- Joint explanatory power of the intercept and slope dummy variables
- 5.4 The Chow test
- Relationship between the Chow test and the F test of the explanatory power of a set of dummy variables
- Key terms
- 6 Specification of Regression Variables
- 6.1 Model specification
- 6.2 The effect of omitting a variable that ought to be included
- The problem of bias
- Invalidation of the statistical tests
- R2 in the presence of omitted variable bias
- 6.3 The effect of including a variable that ought not to be included
- 6.4 Proxy variables
- Unintentional proxies
- 6.5 Testing a linear restriction
- F test of a linear restriction
- The reparameterization of a regression model
- t test of a linear restriction
- Multiple restrictions
- Zero restrictions
- Key terms
- 7 Heteroskedasticity
- 7.1 Heteroskedasticity and its implications
- Possible causes of heteroskedasticity
- 7.2 Detection of heteroskedasticity
- The Goldfeld–Quandt test
- The White test
- 7.3 Remedies for heteroskedasticity
- Weighted least squares
- Mathematical misspecification
- Robust standard errors
- How serious are the consequences of heteroskedasticity?
- Key terms
- 8 Stochastic Regressors and Measurement Errors
- 8.1 Assumptions for models with stochastic regressors
- 8.2 Finite sample properties of the OLS regression estimators
- Unbiasedness of the OLS regression estimators
- Precision and efficiency
- 8.3 Asymptotic properties of the OLS regression estimators
- Consistency
- Asymptotic normality of the OLS regression estimators
- 8.4 The consequences of measurement errors
- Measurement errors in the explanatory variable(s)
- Measurement errors in the dependent variable
- Imperfect proxy variables
- Example: Friedman’s permanent income hypothesis
- 8.5 Instrumental variables
- Asymptotic distribution of the IV estimator
- Multiple instruments
- The Durbin–Wu–Hausman specification test
- Key terms
- 9 Simultaneous Equations Estimation
- 9.1 Simultaneous equations models: structural and reduced form equations
- 9.2 Simultaneous equations bias
- A Monte Carlo experiment
- 9.3 Instrumental variables estimation
- Underidentification
- Exact identification
- Overidentification
- Two-stage least squares
- The order condition for identification
- Unobserved heterogeneity
- Durbin–Wu–Hausman test
- Key terms
- 10 Binary Choice and Limited Dependent Variable Models, and Maximum Likelihood Estimation
- 10.1 The linear probability model
- 10.2 Logit analysis
- Generalization to more than one explanatory variable
- Goodness of fit and statistical tests
- 10.3 Probit analysis
- 10.4 Censored regressions: tobit analysis
- 10.5 Sample selection bias
- 10.6 An introduction to maximum likelihood estimation
- Generalization to a sample of n observations
- Generalization to the case where σ is unknown
- Application to the simple regression model
- Goodness of fit and statistical tests
- Key terms
- Appendix 10.1 Comparing linear and logarithmic specifications
- 11 Models Using Time Series Data
- 11.1 Assumptions for regressions with time series data
- 11.2 Static models
- 11.3 Models with lagged explanatory variables
- Estimating long-run effects
- 11.4 Models with a lagged dependent variable
- The partial adjustment model
- The error correction model
- The adaptive expectations model
- More general autoregressive models
- 11.5 Assumption C.7 and the properties of estimators in autoregressive models
- Consistency
- Limiting distributions
- t tests in an autoregressive model
- 11.6 Simultaneous equations models
- 11.7 Alternative dynamic representations of time series processes
- Time series analysis
- Vector autoregressions
- Key terms
- 12 Autocorrelation
- 12.1 Definition and consequences of autocorrelation
- Consequences of autocorrelation
- Autocorrelation with a lagged dependent variable
- 12.2 Detection of autocorrelation
- The Breusch–Godfrey test
- The Durbin–Watson test
- 12.3 Fitting a model subject to AR(1) autocorrelation
- Issues
- Inference
- The common factor test
- 12.4 Apparent autocorrelation
- 12.5 Model specification: specific-to-general versus general-to-specific
- Comparison of alternative models
- The general-to-specific approach to model specification
- Key terms
- Appendix 12.1 Demonstration that the Durbin–Watson d statistic approximates 2 − 2r in large samples
- 13 Introduction to Nonstationary Time Series
- 13.1 Stationarity and nonstationarity
- Stationary time series
- Nonstationary time series
- Deterministic trend
- Difference-stationarity and trend-stationarity
- 13.2 Spurious regressions
- Spurious regressions with variables possessing deterministic trends
- Spurious regressions with variables that are random walks
- 13.3 Graphical techniques for detecting nonstationarity
- 13.4 Tests of nonstationarity: the augmented Dickey–Fuller t test
- Untrended process
- Trended process
- 13.5 Tests of nonstationarity: other tests
- The Dickey–Fuller test using the scaled estimator of the slope coefficient
- The Dickey–Fuller F test
- Power of the tests
- Further tests
- Tests of deterministic trends
- Further complications
- 13.6 Cointegration
- 13.7 Fitting models with nonstationary time series
- Detrending
- Differencing
- Error correction models
- Key terms
- 14 Introduction to Panel Data Models
- 14.1 Reasons for interest in panel data sets
- 14.2 Fixed effects regressions
- Within-groups fixed effects
- First differences fixed effects
- Least squares dummy variable fixed effects
- 14.3 Random effects regressions
- Assessing the appropriateness of fixed effects and random effects estimation
- Random effects or OLS?
- A note on the random effects and fixed effects terminology
- 14.4 Differences in differences
- Key terms
- APPENDIX A Statistical tables
- APPENDIX B Data sets
- Bibliography
- Author Index
- Subject Index
ABOUT EBOOKS ON HEIMKAUP.IS
Your bookshelf is your own space, and your books are stored there. You can access your bookshelf anytime, anywhere, from a computer or smart device. Simple and convenient!
Ebook to own
An ebook to own must be downloaded to the devices you want to use it on within one year of purchase.
Access your books anywhere
You can open all your e-textbooks in an instant, anytime and anywhere, from your bookshelf. No bag, no e-reader, no hassle (let alone excess baggage).
Easy browsing and searching
You can move between pages and chapters however suits you best and jump straight to specific chapters from the table of contents. The search finds words, chapters, or pages in a single click.
Notes and highlights
You can highlight passages in different colours and write notes in the ebook as you wish. You can even see classmates' and teachers' notes and highlights if they allow it. All in one place.
You decide how the page looks
You adapt the page to your needs. Enlarge or shrink images and text with multi-level zoom to view the page however suits your studies best.
More benefits
- You can print pages from the book (within the limits set by the publisher)
- Option to link to other digital and interactive content, such as videos or questions on the material
- Easy to copy and paste content/text for e.g. homework or essays
- Supports technology that assists students with visual or hearing impairments