Applied Statistics and Econometrics: advice on
doing your project
Ron Smith: r.smith@bbk.ac.uk
2018-19
Department of Economics, Mathematics and Statistics
Birkbeck, University of London
Economics: Graduate Diplomas and BScs
1. BASIC RULES
• To do your project you need to choose a topic; collect some data; do some
statistical analysis of the data (e.g. graphs, summary statistics); draw some
conclusions; and write up your results clearly in a standard academic style,
in less than 3,000 words.
• It often helps if the topic is posed as a question, which can be answered in
the project, as is done in section 4 of the notes.
• The project is designed to test your ability to collect and interpret data, not
a test of the material covered in this course, so you do not need to repeat
text-book material in the project or use every technique covered.
• Choose something that interests you. It does not have to be on an economic
or financial topic; it could, for example, be on a sporting topic.
• Most projects use standard data. But you should consult Ron Smith if your
project:
—uses confidential work data. We can provide a letter to your employers
about how it will be handled.
—needs ethical approval. This will be the case if it involves collecting
data from human participants, e.g. a survey. See Guidelines on Research
with Ethical Implications at http://www.bbk.ac.uk/committees/research-
integrity/.
• It counts for 30% of the marks for the module.
1.1. YOU MUST
• Choose a topic and develop the research project. You can do your project
on anything that involves interpreting data; it does not have to be narrowly
economic or financial.
• Get the data before you decide on the topic: you cannot do the project
without data. Often choice of topic is prompted by the data available.
Check Birkbeck eLibrary, statistical databases; Bloomberg and Datastream
are available in the library. Try Google or other search engines: just type the
topic you are interested in and then "data", e.g. "Road Traffic Deaths Data"
turns up various sites with international data on road traffic deaths. Gapminder
has a vast amount of country data, very well presented.
• Submit a short proposal to Ron Smith by Monday 25 February 2019.
You can submit it earlier. It just needs to indicate a topic and where you
will get the data.
• Conduct an appropriate statistical analysis of the data and draw conclusions.
The type of analysis appropriate will depend on the question and data.
• Write up the results clearly in an academic style in less than 3,000 words
(excluding graphs, tables, appendices, title page and abstract, but including
everything else). Do not exceed this upper limit: part of the exercise is to
write up your results briefly.
• Submit the final version, with the data, by Monday 13 May 2019.
• The data must be in a form that allows us to replicate what you did, e.g. in
a file from a standard program. If you need to use confidential work-related
data, we can provide a letter to your employer explaining that it will be
kept confidential. We do not show projects to anyone but the examiners, so
there are no past projects to look at.
• Follow the mitigating circumstances procedure if you think you will miss the
deadline.
• Make a copy for your own use. Showing the project to potential employers
is often useful. We will give you feedback on the project but we will not
return your project, which we keep on file for writing references, etc.
• Keep safe backup copies of your data and drafts of your text as you
work (college computers are a safe place). We are very unsympa-
thetic if you lose work because it was not backed up properly.
1.2. TITLE PAGE MUST CONTAIN
• Programme and year (e.g. GDE ASE Project 2019)
• Title of project
• Your name
• An abstract: maximum length 100 words
• The number of words in the project
• The programs you used.
1.3. THE PROJECT MUST HAVE
• Numbered pages.
• Graphs of the data (line graphs, histograms or scatter diagrams)
• Numbered graphs and tables: give them titles and specify units and sources.
• A short literature survey and bibliography in a standard academic style.
• Detailed sources of the data, with the data provided electronically.
• Been your own work. You can discuss it with friends or colleagues, and it is a
good idea for you to read and comment on each other's work, but it must be
your own work which you submit. Plagiarism is a serious offence (see the section
in the programme handbook).
2. ASSESSMENT CRITERIA
The criteria are listed below. We will give you feedback under these headings.
There are no fixed weights attached to them and projects differ very much in the
balance between them.
2.1. Writing and Scholarly conventions
Is there: a clear structure overall; clarity in individual paragraphs and sentences;
logical arguments; and careful use of evidence? Are spelling and grammar correct?
Are any technical terms or abbreviations explained? The word limit is short so
make every word count. Are sources of ideas and quotations properly acknowl-
edged? Is there a list of references? Are data sources properly documented? Is
the project written in an academic (as opposed to, say, journalistic) style? Copy
the styles of articles in economics/finance journals.
2.2. Originality/interest.
Most topics can be made interesting if presented sufficiently well, but it is harder
to find something interesting to say about a standard topic than about a slightly
more unusual one.
2.3. Analysis.
Does the work indicate a good understanding of the relevant context and liter-
ature? Does it use the appropriate concepts from relevant economic or finance
theory? Is there a logical argument and effective use of evidence to support the
argument? Did it answer the question posed?
2.4. Data collection/presentation
Has the appropriate data been collected (given time limitations)? Have the data
been checked? Does the work show understanding of what the data actually
measure and the limitations of the data? If students indicate that they put an
unusual amount of work into collecting data, they will get some credit for it. Does
the work demonstrate the ability to summarize and present data in a clear and
effective way?
2.5. Statistical Methods.
Are the appropriate statistical methods used? Have any conclusions been suitably
qualified? Does the work show understanding of the methods?
2.6. Interpretation.
How does the report answer the question it posed?
3. WHAT YOUR REPORT SHOULD LOOK LIKE
Your project report should tell a story, with a beginning, a middle and an end.
It is a story about your investigation and how you answered the question, not part
of your autobiography, nor an account of the problems you had doing the project. The following
structure is a suggestion, adapt it to suit your question. Look at the structure
used in section 4 of the notes, which describes UK growth and inflation.
3.1. ABSTRACT
Here you must summarize your project in 100 words or less. Many journals print
abstracts at the start of each paper; copy their form.
3.2. INTRODUCTION.
Explain what you are going to investigate, the question you are going to answer,
and why it is interesting. Say briefly what sort of data you will be using (e.g.
quarterly UK time-series in section 4). Finish this section with a paragraph which
explains the organization of the rest of your project.
3.3. BACKGROUND
Provide context for the analysis to follow, discuss any relevant literature, theory
or other background, explain specialist terms. Do not give standard textbook
material; you have to tell us about what we do not know, not what we do know.
On some topics there is a large literature; on others there will be very little. The
library and a search engine, like Google Scholar, can help you to find literature.
In many cases, this section will describe features of the market or industry you are
analyzing. In particular, if you are writing about the industry in which you work,
you should make sure you explain features of the industry, or technical terms used
in it, which may be very well known to everyone in it, but not to outsiders.
3.4. DATA
Here you should aim to provide the reader with precise enough information about
definitions and sources to follow the rest of the report. Further details can be
provided in an appendix. You should discuss any peculiarities of the data, or
measurement difficulties. You may need to discuss changes in the definition of a
variable over time. Check your data, no matter where it comes from. Check for
units, discontinuities and changes in definitions of series, such as from the unifica-
tion of Germany. Check derived variables as well as the raw data. Calculating the
minimum, maximum and mean can help to spot errors. Carry out checks again if
you move data from one type of file to another.
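The checks described above are quick to automate. As an illustrative sketch (hypothetical numbers; any package or spreadsheet would do, here Python with numpy), calculating the minimum, maximum and mean immediately flags a decimal-point slip and a units mix-up:

```python
import numpy as np

# Hypothetical annual growth rates (per cent) with two planted errors:
# 34.0 is a decimal-point slip for 3.4, and 250.0 is a level entered
# where a growth rate belongs.
growth = np.array([2.5, 3.4, 1.6, 34.0, 2.1, -0.8, 250.0, 4.2, 1.9])

print("min :", growth.min())   # an implausible minimum or maximum is the
print("max :", growth.max())   # quickest sign that something is wrong
print("mean:", growth.mean())

# Simple screen: growth rates outside +/-15 per cent deserve a second look.
suspect = growth[np.abs(growth) > 15]
print("suspect values:", suspect)
```

Repeating the same summary after every file conversion is cheap insurance: if the minimum, maximum or mean changes, something was lost in translation.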
3.5. ANALYSIS
The background should guide you in suggesting features of the data to look at,
hypotheses to test, questions to ask. You must have tables and graphs describing
the broad features of the data. In the case of time series data these features might
include trends, cycles, seasonal patterns and shifts in the mean or variance of
the series. In the case of cross-section data they might include tables of means
and standard deviations, histograms or cross-tabulations. When interpreting the
data, do not to draw conclusions beyond those that are warranted by it. Often
the conclusions you can draw will be more tentative than you would like. Do not
allow your emotional or ethical responses to cloud your interpretation of what you
find in the data.
If you run regressions, report: the names of variables (including the dependent
variable); the number of observations and definition of the sample; coefficients
and either t-ratios, standard errors or p-values; R-squared (or R-bar-squared);
the standard error of the regression; and any other appropriate test statistics,
such as Durbin-Watson for time series.
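As a sketch of what that reporting covers in practice (simulated data, plain Python/numpy rather than any particular package), the following computes each of the statistics listed above for a bivariate regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sample: y depends linearly on x plus noise (purely illustrative).
n = 100
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), x])           # regressors, constant included
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS coefficients
resid = y - X @ beta
k = X.shape[1]

ser = np.sqrt(resid @ resid / (n - k))                 # std error of regression
se = ser * np.sqrt(np.diag(np.linalg.inv(X.T @ X)))    # coefficient std errors
t = beta / se                                          # t-ratios
tss = (y - y.mean()) @ (y - y.mean())
r2 = 1 - (resid @ resid) / tss                         # R-squared
dw = np.sum(np.diff(resid) ** 2) / (resid @ resid)     # Durbin-Watson statistic

print(f"n = {n}, dependent variable: y")
for name, b, s, tt in zip(["const", "x"], beta, se, t):
    print(f"  {name:5s} coef {b:8.4f}  se {s:6.4f}  t {tt:6.2f}")
print(f"R-squared {r2:.4f}  SER {ser:.4f}  DW {dw:.2f}")
```

A table laid out like this, with the sample definition in the caption, contains everything a reader needs to assess the regression.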
3.6. SUMMARY AND CONCLUSIONS.
What are the main findings of your work: the answers to the questions you posed
in the introduction? How must your findings be qualified because of the limitations
of the data or the methods of analysis you have employed? Do they have policy
implications (public or private)? Do you have suggestions for further work?
3.7. BIBLIOGRAPHY
You must give a bibliographic citation for any work referred to in the text. If in
doubt, follow the Harvard system, used in most economics articles.
3.8. APPENDICES
Extra material, e.g. program output or detailed data definitions, can be put in
appendices. These do not count towards the word total, but the mark will be based
on the main text.
3.9. Good luck
You can learn a lot by doing your project. The skills you can develop in data
analysis, interpretation and presentation are valuable in the labour market; and
having a project to show a potential employer can be useful in getting a job.
Doing your project can also be a very enjoyable experience. The more care you
take with it, the more you will learn, and the more fun you will have.
Economics Letters 171 (2018) 79–82
Shocks to military support and subsequent assassinations in Ancient Rome

Highlights
• Rainfall predicts assassinations of Ancient Roman emperors, from 27 BC to 476 AD.

Abstract
A dictator relies on his military's support; shocks to this support can threaten his rule. Motivated by this, …
An army marches on its stomach.
– Napoleon Bonaparte
1. Introduction
Dictators rely upon a military to retain power. Therefore, shocks …
We therefore ask the question, what were the shocks that …
… of the Roman Empire involved an emperor's murder. In our def- …
Our research adds to recent quantitative work on Ancient po- …

The following historical facts are relevant:
(1) The Roman economy was largely agricultural, depending on …
(2) The bulk of the Roman army was stationed along the Western …
(3) Food transport, in Ancient Rome, was very slow (Terpstra, 2013).

Using time series analysis, we find that lower rainfall in north- …
A starving military is probably not the sole determinant of a …
We proceed as follows. Section 2 discusses our data and em- …

∗ Corresponding author.
1 The Empire formally started with Augustus in 27 BC. While the entire Empire's …
2 We consider both Eastern and Western emperors. Over the period we consider, … our period of study.

2. Data and empirical strategy

2.1. Empirical strategy

To test for the effects of rainfall and drought on Roman emperor assassinations, we estimate:

Assassination_t = β0 + β1 D_{t−1} + γ′X + ε_t. (1)

Here, Assassination_t is either a dummy for whether or not an … D_{t−1} is Precipitation_{t−1}, a rainfall shock, lagged by one year. We use Newey–West standard errors to account for serial corre- … Our identification strategy is based on the fact that rainfall is …

2.2. Data

We acquire data on Roman assassinations from Scarre's (2012) … Precipitation data for this period are from Buengten et al. … Roman Germania grew grain, which requires favourable rainfall … Summary statistics are shown in Appendix Table A.1. …

3. Results

3.1. Main results

In Table 1, we report our main results for Roman assassinations … Our identifying assumption is that rainfall is unrelated to un- …

3.2. Mechanisms

In Table 2, we test for whether rainfall predicts mutinies …

4. Conclusion

We suggest a mechanism that facilitated a Roman emperor's …

Acknowledgments

Michael Carter, Andrew Dickens, Kyle Harper, Matthias Lalisse, …

Appendix A. Supplementary data
Supplementary material related to this article can be found at https://doi.org/10.1016/j.econlet.2018.06.030.
Fig. 1. Assassinations and Precipitation, 27 BC – 476 AD. The figure shows the number of assassinations of Roman Emperors (red), against reconstructed April–May–June precipitation.

Table 1: Assassinations and precipitation
                     (1)       (2)       (3)      (4)      (5)       (6)
Precipitation(t−1)   −.061***  −.013***                    −.095***
Precipitation(t)                         −.044**  −.010**            −.074***
Estimation           OLS       Logit     OLS      Logit    OLS       OLS
Columns (1)–(4) report results using an assassinations dummy, while columns (5) and (6) report results using the total number of assassinations.

Table 2: Mutinies and precipitation
                     (1)       (2)       (3)      (4)      (5)       (6)
Precipitation(t−1)   −.054**   −.020***                    −.055**
Precipitation(t)                         −.051**  −.019***           −.059**
Estimation           OLS       Logit     OLS      Logit    OLS       OLS
Columns (1)–(4) report mutiny dummy results. Columns (5) and (6) report results using the total number of mutinies. Newey–West standard errors.

References
Anderson, Robert Warren, Johnson, Noel D., Koyama, Mark, 2017. Jewish persecutions and weather shocks: 1100–1800. Econ. J. 127 (602).
Brouwer, C., Heibloem, M., 1986. Irrigation Water Management: Irrigation Water Needs. FAO, Rome.
Buckley, Emma, Dinter, Martin, 2013. A Companion to the Neronian Age. Wiley-Blackwell.
Buengten, Ulf, Tegel, Willy, Nicolussi, Kurt, McCormick, Michael, Frank, David, et al., 2011. 2500 years of European climate variability and human susceptibility. Science 331, 578–582.
Chaney, Eric, 2013. Revolt on the Nile: economic shocks, religion, and political power. Econometrica 81 (5), 2033–2053.
Collier, Paul, Hoeffler, Anke, 2004. Greed and grievance in civil war. Oxf. Econ. Pap. 56 (4), 563–595.
Cook, Edward R., 2013. Megadroughts, ENSO, and the invasion of Late-Roman Europe by the Huns and Avars. In: The Ancient Mediterranean Environment between Science and History. Brill, Leiden.
Derpanopoulos, George, Frantz, Erica, Geddes, Barbara, Wright, Joseph, 2016. Are coups good for democracy? Res. Polit. 3 (1).
Elton, Hugh, 1996. Frontiers of the Roman Empire. Butler and Tanner, London.
Harper, Kyle, 2017. The Fate of Rome: Climate, Disease, and the End of an Empire. Princeton University Press, Princeton, NJ.
Jones, Benjamin F., Olken, Benjamin A., 2009. Hit or miss? The effect of assassinations on institutions and war. Am. Econ. J.: Macroecon. 1 (2), 55–87.
Manning, Joseph G., Ludlow, Francis, Stine, Alexander R., Boos, William R., Sigl, Michael, Marlon, Jennifer R., 2017. Volcanic suppression of Nile summer flooding triggers revolt and constrains interstate conflict in Egypt. Nature Commun. 8.
Miguel, Edward, Satyanath, Shanker, Sergenti, Ernest, 2004. Economic shocks and civil conflict: an instrumental variables approach. J. Polit. Econ. 112 (4), 725–753.
Miller, Michael K., 2012. Economic development, violent leader removal, and democratization. Am. J. Polit. Sci. 56 (4), 1002–1020.
Newey, Whitney K., West, Kenneth D., 1987. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55 (3), 703–708.
Roth, Jonathan P., 1998. The Logistics of the Roman Army at War (264 BC – AD 235). Brill, Leiden.
Scarre, Chris, 2012. Chronicle of the Roman Emperors. Thames & Hudson, London.
Scheidel, Walter, Friesen, Steven J., 2009. The size of the economy and the distribution of income in the Roman Empire. J. Roman Stud. 99, 61–91.
Temin, Peter, 2012. The Roman Market Economy. Princeton University Press, Princeton, NJ.
Terpstra, Taco, 2013. Trading Communities in the Roman World: A Micro-Economic and Institutional Perspective. Brill, Leiden.

Using EViews on Shiller16 Data
Ron Smith: r.smith@bbk.ac.uk
Autumn 2018
Department of Economics, Mathematics and Statistics

1. Data

This exercise is designed to teach you to use a variety of different estimators discussed in these notes. In general you should get the same answers from different programs. The data are in the file Shiller16.xls, on Moodle. The data is updated from Robert J. Shiller, 'Market Volatility', MIT Press 1989, and downloaded from his website: annual data, 1871 to 2016, on CPI, the Consumers Price Index, and other series.

2. Initial Data Handling

2.1. Loading the data
2.1. Loading the data
Click on the EViews icon. Open the file Shiller16; you will get a dialogue box with boxes for the beginning and end dates. Notice that it will start reading data at B2. This is correct: column A has years. Notice that we renamed the variable C in the original Shiller file, because EViews reserves the name C for the regression constant (and RESID for the residuals). You will get a third menu bar when you do operations like graph or regress.

2.2. Transforming and graphing the data
Using the top menu click Quick (six from the left), then Graph, and enter ND and NE. In looking at data, it is often useful to form ratios: construct the pay-out ratio, the ratio of dividends to earnings. Double-click on a series and you will see the data; click on View, top left of the workfile box, and notice the View options, including descriptive statistics, histogram and stats. This will give the minimum, maximum, mean, standard deviation, skewness and kurtosis (3 for a normal distribution). Look at such statistics before starting any empirical work. Make sure series are in comparable units. To create a new series, click Quick, Generate Series, and type the formula into the box.

3. Regression: a static model of dividends and earnings
Run a static regression of LD on LE. Always look at the dialogue window and note the options and the defaults.

Dependent Variable: LD
Variable  Coefficient  Std. Error  t-Statistic  Prob.
C         −0.428854    0.021586    −19.86709    0.0000
LE        0.8756       0.010850    80.70330     0.0000
R-squared 0.978663; Mean dependent var 0.352726; S.D. dependent var 1.579280; Durbin-Watson stat 1.148272

To copy this equation so that you can paste it into a word-processing file, use Copy on the equation box. Both coefficients have t-statistics bigger than 2 in absolute value and p-values (Prob) of zero. The p-value gives the probability of observing so large a t-statistic if the true coefficient were zero. To get standard errors that are robust to serial correlation, click Estimate on the equation box and choose the Newey-West covariance option. Click View, Actual Fitted Residual, Actual Fitted Residual Graph, and you will get a graph of the residuals in blue and the actual and fitted values.

4. Dynamic model of earnings and dividends
Given that the serial correlation in the original regression suggested dynamic misspecification, we estimate a dynamic model, adding lagged log earnings, the lagged dependent variable and a trend.

Dependent Variable: LD
Variable  Coefficient  Std. Error  t-Statistic  Prob.
C         −0.194563    0.042665    −4.560248    0.0000
LE(-1)    0.119784     0.032892    3.641803     0.0004
@TREND    0.000987     0.000643    1.534663     0.1272
R-squared 0.997148; Mean dependent var 0.364613

The standard error of the regression is much smaller, at 0.0855 rather than 0.2315.

4.1. Misspecification/Diagnostic tests
Click View on the equation box and choose Residual Diagnostics. The Serial Correlation LM test is the Breusch-Godfrey test. The histogram of the residuals gives, in the bottom right, the Jarque-Bera test of 100.48 with a p-value of 0.0000, so there is strong evidence of non-normality. For the heteroskedasticity test the p-value is 0.0037, so there is an indication of heteroskedasticity, in that the squared residuals are predictable. The Ramsey RESET test, leaving the number of fitted terms at 1, tests the null of linearity by adding powers of the fitted values. If you knew that the parameters changed at a particular date, you would use the Chow breakpoint test, specifying that date; if not, use the CUSUM and CUSUM of squares graphs, where the test statistics should not cross the 5% significance lines. Diagnostic tests for the same null hypothesis (that the model is well specified) include adding or deleting variables, e.g. the trend, which is insignificant here, and omitted-variables tests. A Wald test of (C(2)+C(3))/(1-C(4))-1=0 tests that the long-run coefficient on log earnings equals unity.
We re-estimate it without a trend, first in the ARDL form above and then in the ECM form.

Dependent Variable: LD
Variable  Coefficient  Std. Error  t-Statistic  Prob.
C         −0.133988    0.016276    −8.232040    0.0000
LE(-1)    0.115419     0.032928    3.505239     0.0006
R-squared 0.997099; Mean dependent var 0.364613

Dependent Variable: D(LD)
Variable  Coefficient  Std. Error  t-Statistic  Prob.
C         −0.133988    0.016276    −8.232040    0.0000
R-squared 0.526412; Mean dependent var 0.035118

5. Theoretical background.
Lintner suggested that there was a target or long-run dividend pay-out ratio, say Θ, so that the target dividend is D*_t = Θ E_t. Taking logs we get d*_t = log(Θ) + e_t, which can be written in an unrestricted form as d*_t = θ0 + θ1 e_t. The partial adjustment model, PAM, is

∆d_t = λ(d*_t − d_{t−1}) + u_t,

which gives

d_t = λθ0 + λθ1 e_t + (1 − λ) d_{t−1} + u_t.

The PAM can be justified if, for instance, firms smooth dividends, not adjusting them fully to changes in earnings. More general dynamics can be allowed for using the error correction model, ECM, with a long-run relation

d*_t = θ0 + θ1 e_t,
∆d_t = λ1 ∆d*_t + λ2 (d*_{t−1} − d_{t−1}) + u_t,

so that

∆d_t = λ1 θ1 ∆e_t + λ2 θ0 + λ2 θ1 e_{t−1} − λ2 d_{t−1} + u_t,

which is statistically identical to the ARDL

d_t = α0 + β0 e_t + β1 e_{t−1} + α1 d_{t−1} + u_t,

with α0 = λ2 θ0; β0 = λ1 θ1; β1 = λ2 θ1 − λ1 θ1; α1 = 1 − λ2. The long-run coefficient is then

θ1 = (β0 + β1)/(1 − α1) = 0.914374 (0.01228),

where the standard error of the long-run coefficient is got using the Wald command.
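The Wald computation for the long-run coefficient can be mimicked by the delta method. A sketch with made-up ARDL(1,1) estimates and a made-up (diagonal) coefficient covariance matrix; the numbers are placeholders, not the Shiller results:

```python
import numpy as np

# Hypothetical ARDL(1,1) estimates for d_t = a0 + b0*e_t + b1*e_{t-1} + a1*d_{t-1}.
# Parameter order: [a0, b0, b1, a1]; values and covariance are illustrative only.
beta = np.array([-0.134, 0.80, -0.60, 0.78])
V = np.diag([0.016, 0.03, 0.04, 0.02]) ** 2    # pretend covariance (diagonal)

a0, b0, b1, a1 = beta
theta1 = (b0 + b1) / (1 - a1)                  # long-run coefficient

# Delta method: se(theta1) = sqrt(g' V g), g = gradient of theta1 w.r.t. beta.
g = np.array([
    0.0,                        # d(theta1)/d(a0)
    1 / (1 - a1),               # d(theta1)/d(b0)
    1 / (1 - a1),               # d(theta1)/d(b1)
    (b0 + b1) / (1 - a1) ** 2,  # d(theta1)/d(a1)
])
se_theta1 = np.sqrt(g @ V @ g)
print(f"theta1 = {theta1:.4f}, se = {se_theta1:.4f}")
```

With a full (non-diagonal) covariance matrix from the regression output, the same two lines give the standard error a Wald test would report.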
6. Estimate the ECM by Non-linear Least Squares

Close the equation (you could name it and save it), click Quick, Estimate Equation, and enter the ECM in its nonlinear form. This gives estimates of the long-run parameters and speed of adjustment directly: C(1) = λ1, etc.

Dependent Variable: D(LD)
Variable  Coefficient  Std. Error  t-Statistic  Prob.
C(1)      0.222157     0.028335    7.840289     0.0000
R-squared 0.526412; Mean dependent var 0.035118

Save this by giving it a name and then close it. Nonlinear least squares minimises the sum of squared residuals iteratively. Most programs ask you to provide starting values; EViews takes them from the C vector on the toolbar. If you set the starting values for all parameters at 0.0, then open the nonlinear regression again and run it, you may get different results. Use the economic interpretation, scale, or preliminary OLS regressions to give sensible starting values, and check that you have not got a local minimum.
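Outside EViews, the same kind of nonlinear estimation can be sketched with a hand-rolled Gauss-Newton loop (simulated data; the variable names and the ECM-style form below are illustrative, not the Shiller series). It also shows why starting values matter: the iteration only finds a minimum near where it starts.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1, z1, z2 = rng.normal(size=(3, n))
true = np.array([0.5, -0.2, 0.9, -0.1])

def f(p):
    # ECM-style nonlinear form: c1 * short-run term + c2 * (lagged disequilibrium)
    c1, c2, c3, c4 = p
    return c1 * x1 + c2 * (z1 - c3 * z2 - c4)

y = f(true) + 0.05 * rng.normal(size=n)

p = np.array([0.1, -0.1, 0.5, 0.0])   # starting values (avoid c2 = 0 exactly)
for _ in range(100):
    c1, c2, c3, c4 = p
    r = y - f(p)                      # current residuals
    # Jacobian of f with respect to each parameter:
    J = np.column_stack([x1, z1 - c3 * z2 - c4, -c2 * z2, np.full(n, -c2)])
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    p = p + step                      # Gauss-Newton update
    if np.max(np.abs(step)) < 1e-10:
        break

print("estimates:", p.round(3), "true:", true)
```

Restarting from several different starting values and checking that you reach the same minimum is cheap insurance against a local minimum.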
6.1. Estimate the ECM allowing for non-normality and ARCH.
Above we estimated the model on the assumption that ut ∼ IN(0, σ2). But the errors may be non-normal and their variance may change over time, following an ARCH process, ht = c0 + c1û2t−1. Close or save any equations. Click Quick, Estimate Equation, enter D(LD) C and the other regressors as before, and choose ARCH as the estimation method, with a t-distributed error.

Dependent Variable: D(LD)
Variable  Coefficient  Std. Error  z-Statistic  Prob.
C         −0.089663    0.010518    −8.524973    0.0000

Variance Equation
C            0.000542   0.000538    1.008014    0.3134
T-DIST. DOF  2.859034   0.785262    3.640866    0.0003

R-squared 0.494314; Mean dependent var 0.035118

It is very non-normal: the estimated degrees of freedom of the t distribution is v = 2.859, and only moments below v exist, so the variance exists but the third and fourth moments do not.
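A quick way to see whether an ARCH specification is needed is Engle's LM test: regress the squared residuals on their own lag and compare n·R² with a χ²(1) critical value (3.84 at 5%). A self-contained sketch on simulated ARCH(1) errors, in Python rather than EViews:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Simulate ARCH(1) errors: h_t = 0.2 + 0.5*u_{t-1}^2, u_t = sqrt(h_t)*z_t.
u = np.zeros(n)
for t in range(1, n):
    h = 0.2 + 0.5 * u[t - 1] ** 2
    u[t] = np.sqrt(h) * rng.normal()

# Engle's ARCH-LM test with one lag: regress u_t^2 on a constant and u_{t-1}^2.
y = u[1:] ** 2
X = np.column_stack([np.ones(n - 1), u[:-1] ** 2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b
r2 = 1 - (e @ e) / ((y - y.mean()) @ (y - y.mean()))
lm = (n - 1) * r2
print(f"ARCH-LM = {lm:.1f} (5% chi-squared(1) critical value 3.84)")
```

A large LM statistic indicates that the squared residuals are predictable, so the constant-variance assumption fails and an ARCH or GARCH model is worth estimating.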
7. ARIMA and unit roots

Estimate a random walk model for log stock prices, up till 2006; then an ARIMA model with AR and MA terms. You should get an estimate of the drift (C) of 0.041586, MLL=43.96852, s=0.174939. The AR (t=-2.56) and MA (t=4.30) terms are significant, but they do not reduce the standard error of the regression very much relative to a random walk. Click Forecast on the equation toolbar to compare forecasts.

7.1. Testing for Unit Roots

Click on LSP, then View, then Unit Root Tests. You will get a dialogue box; run the test on the level with just an intercept, leaving the lag length at the default, then set it for the first difference rather than the level. The test fails to reject a unit root in the level of LSP but clearly rejects one in the first difference; therefore LSP is clearly I(1). In practice, unit root tests are not always as clear-cut as this.
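The logic of the test can be sketched outside EViews. The code below (simulated series, not the Shiller data) runs the Dickey-Fuller regression Δy_t = α + ρ·y_{t−1} + ε_t on a random walk and on its first difference; under the unit-root null the t-ratio on y_{t−1} is not t-distributed, and with a constant the approximate 5% critical value is −2.86:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
# Random walk with drift, qualitatively like log stock prices.
lsp = np.cumsum(0.04 + 0.17 * rng.normal(size=n))

def df_tstat(y):
    """t-ratio on y(t-1) in the Dickey-Fuller regression dy = a + rho*y(t-1) + e."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ b
    s2 = e @ e / (len(dy) - 2)
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return b[1] / se[1]

t_level = df_tstat(lsp)            # expect: no rejection of a unit root
t_diff = df_tstat(np.diff(lsp))    # expect: clear rejection, so the series is I(1)
print(f"level t = {t_level:.2f}, first-difference t = {t_diff:.2f} (5% cv about -2.86)")
```

EViews' ADF test adds lagged differences to mop up serial correlation and uses proper (MacKinnon) critical values, but the comparison of level versus first difference is exactly this.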
8. VAR, cointegration and VECM

Use Quick, Estimate VAR and you will get a dialogue box; use the default lag. You will get a table which shows that everything except the LR criterion agrees, and Granger causality tests indicate causality in both directions, though the p-value for LD on LE, at 0.0319, is the largest of them. The inverse-roots graph shows four inverse roots: one close to the unit circle, three within it, two of them complex. Keep C and @trend in the exogenous variables box and look at the new estimates; note that lagged dividends still do not quite significantly influence earnings. The contemporaneous correlation between the residuals is 0.538, quite high. For impulse responses, choose generalised impulses; these graphs of the impulse response functions show the dynamic responses of each variable to shocks. Reverse the order of the variables to LD LE and remove the @trend from the VAR; the information criteria are given below, and from the stars you can see what both Akaike and Schwarz choose. For the cointegration test choose option 3; note it is set at one cointegrating vector, which is what we want, and click OK. You can impose restrictions on the cointegrating vectors and adjustment coefficients.
9. Endogeneity

Above we estimated an ARDL(1,1) model by regressing log dividends on current and lagged log earnings and lagged log dividends. If you estimate by LS over the period, it will use 1873-2014, since one observation is lost for the lag. Choose TSLS and you will get a new dialogue box with two windows: leave the upper window as it is and put the list of instruments in the lower one.

Dependent Variable: LD
Variable  Coefficient  Std. Error  t-Statistic  Prob.
C         −0.153071    0.025172    −6.080924    0.0000
LE(-1)    0.043038     0.078348    0.549316     0.5837
R-squared 0.996763; Mean dependent var 0.375659

We can test this formally with a Wu-Hausman test. Estimate by OLS the regression of LE on the instruments, save the residuals, and add them to the original equation: a significant coefficient on those residuals indicates endogeneity, as would arise in a model in which dividends are determined by expected earnings in the next period.
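The Wu-Hausman logic can be sketched on simulated data (illustrative names, not the Shiller series): regress the suspect variable on the instruments, then add the first-stage residual to the structural equation and look at its t-ratio.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300

# Simulated system: e is endogenous because its error v also enters u.
z = rng.normal(size=n)                        # instrument, e.g. a lagged variable
v = rng.normal(size=n)
u = 0.6 * v + 0.5 * rng.normal(size=n)        # structural error, correlated with v
e = 0.8 * z + v                               # "log earnings"
y = 0.9 * e + u                               # "log dividends"

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    s2 = r @ r / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return b, se, r

# Wu-Hausman via the control-function route:
# 1) regress the suspect regressor on the instruments, keep the residual;
_, _, vhat = ols(np.column_stack([np.ones(n), z]), e)
# 2) add that residual to the structural equation; a significant t-ratio on it
#    signals endogeneity (OLS and IV estimates differ systematically).
b2, se2, _ = ols(np.column_stack([np.ones(n), e, vhat]), y)
t_vhat = b2[2] / se2[2]
print(f"coef on e: {b2[1]:.3f}; t-ratio on first-stage residual: {t_vhat:.1f}")
```

A convenient by-product of this route is that the coefficient on e in the augmented regression is the IV (2SLS) estimate, purged of the endogeneity bias.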
Sheet1 (ProjectdataUK): annual UK data, 1949-2018. Columns: Year; GDP growth of UK; Unemployment; Population; Government spending; Real households disposable income (CVM annual growth rate, NSA).
Sheet2
Sheet3
Rome
Cornelius Christian a,∗, Liam Elbourne b
a Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON L2S 3A1, Canada
b St. Francis Xavier University, Antigonish, NS, Canada
• When rainfall is low, Roman troops starve, and are more likely to mutiny.
• Lower rainfall predicts more troop mutinies.
• More mutinies predict more assassinations of Roman emperors.
• These results suggest that an emperor relied on his military for support.
Article history: Received 16 June 2018; Received in revised form 26 June 2018; Accepted 28 June 2018; Available online 19 July 2018.
Keywords: Assassinations; Rainfall; Mutiny; Military.
we find that lower rainfall, along the north-eastern Roman Empire, predicts more assassinations of Roman
emperors. Our proposed mechanism is as follows: lower precipitation increases the probability that
Roman troops, who relied on local food supplies, starve. This pushes soldiers to mutiny, hence weakening
the emperor’s support, and increasing the probability he is assassinated.
to a dictator’s military support can affect his tenure in office,
yet little work has explored this from a quantitative and causal
perspective (Derpanopoulos et al., 2016; Miller, 2012).
caused assassinations of Roman emperors. The Roman Empire,
which lasted from 27 BC to 476 AD, had a total of eighty-two
emperors.1 It therefore provides a rich historical laboratory from
which to draw inferences. Moreover, assassinations were not rare:
roughly 20% of emperors were assassinated, and 5% of the years
E-mail address: cchristian2@brocku.ca (C. Christian).
end date is contestable, the Western Empire fell in 476 AD, when Emperor Romulus
was deposed.
initions, we do not consider speculated murders, or attempted
killings.2
litical economy (Manning et al., 2017; Harper, 2017; Cook, 2013).
Also, no causal econometric work has examined Ancient Rome's
political and military intrigues, though some work has focused
on Roman economics (Temin, 2012; Scheidel and Friesen, 2009).
Our analysis uncovers how vital a dictator’s military support is, in
cementing his power.
rainfed agriculture (Harper, 2017).
frontier, and relied heavily on local food sources (Roth, 1998;
Elton, 1996).3
only one Eastern emperor, Numerian, was assassinated.
3 Despite an Eastern military presence, Eastern climate data is not available for
0165-1765/© 2018 Elsevier B.V. All rights reserved.
Lower rainfall in frontier provinces, like Germania (present-day Germany), increases the likelihood of Roman emperor assassination, in a given year. Such provinces had heavy troop concentrations. A standard
deviation reduction in rainfall (mm) causes an 11% standard devi-
ation rise in assassination probability. We hypothesize that when
rainfall is low, Roman soldiers stationed along the frontier become
agitated, due to lack of food, hence weakening the emperor’s hold
on power; we provide evidence for this mechanism.
We cannot explain every Roman emperor’s violent demise. However, we explain one potential forcing variable, which can heighten political instability
within the Roman Empire. Other factors might also have played a
role. Our study informs the literature on the economic causes and
consequences of violence (Anderson et al., 2017; Chaney, 2013;
Jones and Olken, 2009; Collier and Hoeffler, 2004; Miguel et al.,
2004).
Section 2 describes our data and empirical strategy. We show empirical results in Section 3. Section 4
concludes.
To relate rainfall shocks to assassinations in year t, we estimate the following time-series specification:

Assassinationst = β0 + β1Shockt−1 + γ′X + ϵt (1)

The dependent variable is a dummy for whether an emperor was killed, or the total number of emperors killed in a given year.
We use a lagged shock because Roman armies had sufficient grain
storage capacity for one year, and were able to temporarily smooth
negative shocks. However, we show that this is robust to a simul-
taneous shock.
Standard errors are robust to autocorrelation and heteroskedasticity (Newey and West, 1987). We assume
that the error structure is autocorrelated up to 10 lags.
Rainfall shocks are plausibly exogenous. A negative coefficient on β1 implies that the shock
negatively predicts assassinations.
Our assassination data come from Scarre’s Chronicle of the Roman Emperors. Scarre indicates when a Roman
emperor had been murdered. We exclude speculated murders,
attempted killings, and suicides, since in these cases there is often
historical ambiguity, and it is difficult to ascertain a counterfactual.
For instance, the emperor Nero committed suicide, mistakenly
thinking that armed men were on their way to kill him (Buckley
and Dinter, 2013).
Our precipitation data come from Büntgen et al. (2011). These authors collect data from 7284 precipitation-
sensitive oak tree rings from France, southeastern Germany, and
northeastern Germany, corresponding to the Ancient Roman fron-
tier. They supplement this with 104 historical accounts to recon-
struct AMJ (April–May–June) precipitation, for the region, from 398
BC to 2008 AD. Precipitation is measured in millimetres.
(Roth, 1998). The AMJ precipitation reconstructions, spanning 91
days, coincide with the planting, initial, crop development, and
mid-season stages of spring wheat’s 120–150 day growth period
(Brouwer and Heibloem, 1986). Moreover, the Roman army had
grain storage technology, usually for up to a year (Roth, pp. 176).
We provide a time series graph of the total number of assassinations,
against rainfall, over this period in Fig. 1.
Table 1 reports our estimates for the period from 27 BC to 476 AD. Negative rainfall shocks predict significantly
more assassinations. In Column (1), for example, a standard devia-
tion decline in rainfall causes an 11.6% standard deviation increase
in assassination probability. In Column (5), a standard deviation
drop in rainfall causes a 13.4% standard deviation increase in total
assassinations.
One concern is unobservables that could bias our results. To test this, we perform a
placebo test (in Appendix B), regressing assassinations on future
rainfall, one year forward. We find no significant effects from this
exercise, and the coefficients are smaller than those for our main
results. This supports our identification strategy.
Table 2 examines the mechanism, regressing frontier mutinies on rainfall, using data from Venning (2011). We find an effect; for instance, in column (1), a standard deviation drop in rainfall causes a 13.3% standard deviation rise in mutiny occurrence. In the appendix Table
A2, we test whether mutinies predict assassinations. In appendix,
section C, we offer a historical argument.
Our results suggest a chain running from rainfall to murder: troops along the Western frontier, incited by starvation,
weakened the empire’s political stability, in turn increasing the
probability of assassinating an emperor. We show that rainfall in
the northern empire predicts conditions that make assassinations
more likely, and that low rainfall irks troops. Negative shocks to a
dictator’s military support, in the case of Ancient Rome, predict his
demise.
Taco Terpstra, and an anonymous referee provided helpful com-
ments. We are also grateful to participants at the Canadian Eco-
nomics Association’s 2018 Meeting, Northwestern University’s
seminar series, the Canadian Network in Economic History’s 2017
Meeting, and Brock University’s economics speaker series. Any
errors are our own.
Supplementary material related to this article can be found online at https://doi.org/10.1016/j.econlet.2018.06.030.
Fig. 1. Total number of assassinations and April–May–June (AMJ) precipitation (blue) over time. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
[Table 1. Effect of rainfall on assassinations. Dependent variable: assassination dummy in columns (1)-(4), assassination count in columns (5)-(6); 503 observations per column. Newey–West standard errors are reported in parentheses in columns (1), (3), (5), and (6); robust standard errors in columns (2) and (4). All columns except (2) and (4) report the coefficient multiplied by 100. The AR(10) row reports the p-value from the Breusch–Godfrey test, with null hypothesis of no autocorrelation up to 10 lags. Significance levels are ***< .01, **< .05, and *< .1.]
[Table 2. Mechanism: effect of rainfall on frontier mutinies. Dependent variable: mutiny dummy in columns (1)-(4), mutiny count in columns (5)-(6); 503 observations per column. Newey–West standard errors are in parentheses in columns (1), (3), (5), and (6); robust standard errors in columns (2) and (4). The coefficient is multiplied by 100, except in columns (2) and (4). The AR(10) row reports the p-value from the Breusch–Godfrey test, with null hypothesis of no autocorrelation up to 10 lags. Significance levels are ***< .01, **< .05, and *< .1.]
References

Anderson, R.W., Johnson, N.D., Koyama, M., 2017. Jewish persecutions and weather shocks: 1100-1800. Econ. J. 127 (602), 924–958.
Brouwer, C., Heibloem, M., 1986. Irrigation Water Management: Irrigation Water Needs. Food and Agriculture Organization of the United Nations.
Buckley, E., Dinter, M.T. (Eds.), 2013. A Companion to the Neronian Age. Wiley-Blackwell.
Büntgen, U., Tegel, W., Nicolussi, K., McCormick, M., Frank, D., Trouet, V., Kaplan, J.O., Herzig, F., Heussner, K.-U., Wanner, H., Luterbacher, J., Esper, J., 2011. 2500 years of European climate variability and human susceptibility. Science 331 (6017), 578–582.
Chaney, E., 2013. Revolt on the Nile: economic shocks, religion, and political power. Econometrica 81 (5), 2033–2053.
Collier, P., Hoeffler, A., 2004. Greed and grievance in civil war. Oxf. Econ. Pap. 56 (4), 563–595.
Cook, E., 2013. Megadroughts, ENSO, and the invasion of Late-Roman Europe by the Huns and Avars. In: Harris, W.V. (Ed.), The Ancient Mediterranean Environment Between Science and History. Brill Press.
Derpanopoulos, G., Frantz, E., Geddes, B., Wright, J., 2016. Are coups good for democracy? Res. Polit. 3 (1), 1–7.
Harper, K., 2017. The Fate of Rome: Climate, Disease, and the End of an Empire. Princeton University Press.
Jones, B.F., Olken, B.A., 2009. Hit or miss? The effect of assassination on institutions and war. Amer. Econ. J.: Macroecon. 1 (2), 55–87.
Manning, J.G., Ludlow, F., Stine, A.R., Boos, W.R., Sigl, M., Marlon, J.R., 2017. Volcanic suppression of Nile summer flooding triggers revolt and constrains interstate conflict in ancient Egypt. Nature Commun. 8, 900.
Miguel, E., Satyanath, S., Sergenti, E., 2004. Economic shocks and civil conflict: an instrumental variables approach. J. Polit. Econ. 112 (4), 725–753.
Miller, M.K., 2012. Economic development, violent leader removal, and democratization. Amer. J. Polit. Sci. 56 (4), 1002–1020.
Newey, W.K., West, K.D., 1987. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55 (3), 703–708.
Roth, J.P., 1998. The Logistics of the Roman Army at War. Brill Press, Leiden.
Scheidel, W., Friesen, S.J., 2009. The size of the economy and the distribution of income in the Roman Empire. J. Roman Stud. 99, 61–91.
Temin, P., 2012. The Roman Market Economy. Princeton University Press.
Terpstra, T., 2013. Trading Communities in the Roman World: A Micro-Economic and Institutional Perspective. Brill Press.
Venning, T., 2011. A Chronology of the Roman Empire. Continuum Publishing.
These notes show you how to use EViews and interpret the output. There are notes for other programs, Stata and gretl, but more detail is given in these EViews notes. You may get different results on some tests, e.g. for heteroskedasticity, in different versions of EViews and in different programs, because the alternative hypothesis, the assumed form of heteroskedasticity, is different. In non-linear routines you may also get different answers because the optimisation routines are different or they converge to a different maximum.
The data is in an Excel file, Shiller16, taken from Robert Shiller's webpages; it is a subset of the full data he provides. We will use Shiller16 to re-examine the hypotheses in a famous paper, J. Lintner, 'Distribution of Incomes of Corporations among Dividends, Retained Earnings and Taxes', American Economic Review, May 1956. The file contains annual US data from 1871 to 2016 on:
NSP: S&P composite stock price index January value.
ND: nominal dividends for the year
NE : nominal earnings (profits) for the year
R : average short interest rate for the year
RL: average long interest rate for the year
RR: real interest rate
RC: real consumption in 2005 dollars
Some data are not provided for the whole period.
Click on File, Create a new EViews workfile.
In the dialog box specify annual data (the default), and put 1871 2016 in the start and end date boxes.
You will then get a box with two variables, C for the constant and Resid for the residuals. Real consumption has been renamed to RC because EViews uses C for the constant; similarly D is a reserved name, so you cannot use it for a variable.
Choose File, Import, Import from file; you will get a browser: find the file. A dialog box will appear; note what it is doing and click next, next, finish. EViews detects the data types, which it already knows, and that row 1 has names, which it will read as names. You will see the variables in the workfile box.
You can name and save this file and any changes to it.
Notice that you have two menu bars, one at the top and one in the workfile window. There is a white box where you can type commands. To graph the data, choose Quick, Graph and type ND NE in the box. OK. See the various types of graph you can do, but accept the default line graph. Notice that the trends and the change in scale dominate the data: earnings are usually above dividends, but it is difficult to see. You can see the effect of the great recession on earnings in 2008. Close graph. It will ask 'Delete untitled graph?' Choose Yes. There is a box to name it if you wanted to save it.
A more informative series is the payout ratio, the proportion of earnings paid out in dividends, PO = ND/NE, which removes the common trend in the two variables and is more stationary.
Generate transformations of the data to create new series
Type Quick, Generate Series and type into box
PO=ND/NE
Press OK.
You will see PO has been added to the list of variables in the workfile. Click on PO and choose View, Graph; accept the defaults to get a line graph. Notice the years when the ratio was above one: firms were paying out more in dividends than they earned. Click on the arrow on the workfile menu next to where it says default; this gives you various transformations you can do. There is a slider at the bottom which allows you to change the sample. Close the graph.
Get summary statistics on the ratio. Click on the series name, PO, and choose View, Descriptive Statistics & Tests, Histogram and Stats. You will get the minimum and maximum values (check these are sensible), mean, median, skewness (which is zero for a normal distribution) and kurtosis (which is three for a normal distribution), and the JB (Jarque-Bera) test of the null hypothesis that the distribution is normal. If the p value is less than 0.05, you reject the hypothesis of normality. It is clearly not normal.
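If you want to see what the JB statistic is doing, it can be computed by hand from the estimated skewness and kurtosis using the standard formula JB = (n/6)[S^2 + (K-3)^2/4]. A minimal Python sketch on made-up data (not the Shiller series):

```python
def jarque_bera(x):
    """Jarque-Bera statistic: JB = n/6 * (S^2 + (K-3)^2/4),
    where S is skewness and K is kurtosis (3 for a normal)."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    S = m3 / m2 ** 1.5          # skewness
    K = m4 / m2 ** 2            # kurtosis
    return n / 6.0 * (S ** 2 + (K - 3.0) ** 2 / 4.0)

# A symmetric, light-tailed toy sample: S = 0, K < 3, so JB is small
print(jarque_bera([-2.0, -1.0, 0.0, 1.0, 2.0]))
```

A large JB (compared with a chi-squared with 2 degrees of freedom) gives a small p value and rejects normality.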
Always graph the data and transformations of it, and look at the descriptive statistics. Check that series are in comparable units before putting them on the same graph.
We are going to work with the logarithms of the data. Type Quick, Generate
LD=LOG(ND)
And OK. You will get a new series in the box LD. Similarly generate
LE=LOG(NE)
LSP=LOG(NSP)
Click, Quick, Estimate an Equation, Type in
LD C LE
You should always include C for the constant.
The default estimation method is LS - Least Squares (NLS and ARMA). NLS is non-linear least squares; ARMA, autoregressive moving average: we use these below. If you click the arrow on the right of LS, you will see that there are other methods you could choose, including Two-stage Least Squares and GARCH, which we will use below. There is an options tab at the top, which you can use to get Heteroskedasticity and Autocorrelation Consistent (HAC) standard errors. You could also have entered LOG(ND) C LOG(NE) directly in the equation box rather than generating them.
Click OK and you will get the following output
Method: Least Squares
Date: 08/0
Sample (adjusted): 1871 2014
Included observations: 144 after adjustments
LE 0.8756
Adjusted R-squared 0.9785
S.E. of regression 0.231501 Akaike info criterion -0.074676
Sum squared resid 7.610149 Schwarz criterion -0.033428
Log likelihood 7.376650 Hannan-Quinn criter. -0.057915
F-statistic 65
Prob(F-statistic) 0.000000
To copy output into your project, highlight what you want to copy, use the edit button on the top menu and click copy. You can save the equation in EViews by using the name box on the equation toolbar menu.
Both the constant and the coefficient of LE are very significant, with t ratios much greater than two. The p value gives you what you can loosely think of as the probability of getting that value of the test statistic if the null hypothesis (in this case that the coefficient is zero) were true. It is conventional to reject the null hypothesis if the p value is less than 0.05. However, the Durbin-Watson statistic (which should be close to 2) of 1.15 indicates severe serial correlation, which suggests dynamic misspecification. The serial correlation will also bias the standard errors, typically downwards. To get standard errors robust to this, click Estimate, choose the options tab beside the specification tab at the top, change covariance method from ordinary to HAC (Newey-West), OK. You will get the equation again with the same coefficients, but different standard errors, t stats and p-values. The standard error on LE increases from 0.010850 to 0.012746.
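The idea behind the HAC correction can be sketched in the simplest possible case, the variance of a sample mean, where the Newey-West estimator adds Bartlett-weighted autocovariances to the usual variance. A toy Python sketch (illustrative only, not EViews' implementation):

```python
def nw_variance_of_mean(u, L):
    """HAC (Newey-West) estimate of Var(mean(u)): the long-run
    variance with Bartlett weights w_j = 1 - j/(L+1), divided by n.
    With L = 0 it collapses to the usual variance estimate."""
    n = len(u)
    m = sum(u) / n
    e = [v - m for v in u]
    gamma0 = sum(v * v for v in e) / n          # variance
    lrv = gamma0
    for j in range(1, L + 1):
        gj = sum(e[t] * e[t - j] for t in range(j, n)) / n  # autocovariance
        lrv += 2.0 * (1.0 - j / (L + 1.0)) * gj
    return lrv / n

# A trending (positively autocorrelated) toy series
u = [1.0, 2.0, 1.5, 2.5, 2.0, 3.0, 2.5, 3.5]
print(nw_variance_of_mean(u, 0), nw_variance_of_mean(u, 2))
```

With positively autocorrelated data the HAC variance exceeds the naive one, which is why the standard error on LE rose when we switched to Newey-West.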
From the equation box menu choose View; Actual Fitted Residual; Actual Fitted Residual Graph. You will see the actual in red and the fitted in green. The graph shows that the residuals are not random: there are quite long runs where the actual is above or below the fitted, and there are some big spikes, larger positive residuals than one would expect, where the actual is much higher than the fitted. These were cases where earnings dropped sharply, but dividends did not respond, because dividends were smoothed relative to earnings. Always plot the residuals.
To deal with this dynamic misspecification, we add lagged values, denoted by (-1) in EViews.
Click, Quick, estimate equation and type in
LD C LE LE(-1) LD(-1) @TREND
@trend, is a variable that goes 1,2,3, etc.
Method: Least Squares
Date: 08/04/16 Time: 14:29
Sample (adjusted): 1872 2014
Included observations: 143 after adjustments
LE 0.195017 0.025996 7.501697 0.0000
LD(-1) 0.630665 0.034941 18.04949 0.0000
Adjusted R-squared 0.997065 S.D. dependent var 1.578354
S.E. of regression 0.085509 Akaike info criterion -2.046045
Sum squared resid 1.009033 Schwarz criterion -1.942449
Log likelihood 151.2922 Hannan-Quinn criter. -2.003949
F-statistic 12060.64 Durbin-Watson stat 1.795495
Prob(F-statistic) 0.000000
Notice that the Durbin-Watson is much better at 1.795, and all the variables except the trend
are significant. Click View on the equation box; then Actual Fitted Residual;
then Actual Fitted Residual Graph. The estimates of the residuals still show
some outliers, big errors.
Click View, Residual Diagnostics, Serial Correlation LM tests, and accept the default number of lags to include, 2. You will get the LM serial correlation test. It just fails to reject the hypothesis of no serial correlation up to second order at the 5% level, p=0.0571, and the second lag of the residual is just significant. On diagnostic tests the null hypothesis is that the model is well specified; p values below 0.05 indicate that there is a problem.
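The LM test works by regressing the residuals on their own lags (the full Breusch-Godfrey test also includes the original regressors) and computing n times the R-squared of that auxiliary regression. A simplified one-lag Python sketch on made-up residuals:

```python
def bg_lm_one_lag(e):
    """Simplified Breusch-Godfrey LM test with one lag: regress the
    residual e_t on a constant and e_{t-1}, and compute LM = n * R^2,
    compared with chi-squared(1).  (The full test also includes the
    original regressors in the auxiliary regression.)"""
    x = e[:-1]                    # lagged residuals
    y = e[1:]                     # current residuals
    n = len(y)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r2 = sxy * sxy / (sxx * syy)  # R^2 of simple regression
    return n * r2

# Perfectly alternating residuals: extreme serial correlation
print(bg_lm_one_lag([1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]))
```

Here LM = 7, far above the 5% chi-squared(1) critical value of 3.84, so the no-serial-correlation null would be rejected.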
Click View, Residual Diagnostics, Histogram - Normality Test. You will get the histogram and associated statistics; there is clearly a failure of normality, caused by the outliers.
Click View, Residual Diagnostics, Heteroskedasticity Tests. Choose the first, Breusch-Pagan-Godfrey. The variances of the residuals seem to be related to the values of the regressors, so at the 5% level we would reject the null hypothesis of constant variance. Notice that there is a long list of heteroskedasticity tests which differ in what they make the squared residuals a function of, e.g. ARCH. They all have the same null, constant variance, but different alternatives.
Click View, Stability Diagnostics, Ramsey RESET test, and accept the default number of fitted terms. The RESET test checks functional form by adding powers of the fitted values. Look at the regression below.
There are other stability diagnostics. If you wanted to test for a change in the relationship at a known date, you could use the Chow tests, specifying the date at which you thought the relationship changed. The Breakpoint test is for equality of the regression coefficients before and after the break, assuming the variances in the two periods are constant. The Chow Forecast test asks whether the estimates for the first period forecast the second period. If you do not know the breakpoint, choose recursive estimates and look at the plots of the recursive coefficients and their confidence intervals to see whether the model is stable. Alternatively the Quandt-Andrews test will determine the most likely breakpoint. It identifies a significant break in 1972.
Diagnostic tests share the null hypothesis that the model is well specified (e.g. homoskedasticity or structural stability) and can give conflicting results because they are testing against different alternative hypotheses.
View, Coefficient Diagnostics, Redundant Variables allows us to test for deleting variables; Omitted Variables allows us to test for adding variables. To test the hypothesis that the long-run elasticity is unity, click View, Coefficient Diagnostics, Wald Test and type into the box: (C(2)+C(3))/(1-C(4))=1. Click OK. The hypothesis is clearly rejected with a Chi-squared p value of 0.0003.
Wald tests are not invariant to how you write non-linear restrictions. We could
have written the same restriction: C(2)+C(3)+C(4)-1=0. This gives a Chi-
squared p value of 0.0017, so we still reject. But there are cases where writing the
restriction one way leads to rejection and another way to acceptance.
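The non-invariance comes from the delta method: the Wald statistic depends on the gradient of the restriction function, which differs between algebraically equivalent forms of the same null. A toy Python sketch with hypothetical coefficients and a made-up diagonal covariance matrix (not the estimates above):

```python
def wald_scalar(g, grad, V):
    """Delta-method Wald statistic for a single restriction g(b)=0:
    W = g^2 / (grad' V grad), chi-squared(1) under the null."""
    k = len(grad)
    var = sum(grad[i] * V[i][j] * grad[j] for i in range(k) for j in range(k))
    return g * g / var

# Hypothetical estimates b2, b3, b4 and covariance matrix
b2, b3, b4 = 0.2, 0.1, 0.6
V = [[0.01, 0.0, 0.0], [0.0, 0.01, 0.0], [0.0, 0.0, 0.01]]

# Form 1: (b2 + b3)/(1 - b4) - 1 = 0
g1 = (b2 + b3) / (1 - b4) - 1
grad1 = [1/(1-b4), 1/(1-b4), (b2+b3)/(1-b4)**2]

# Form 2: b2 + b3 + b4 - 1 = 0 (algebraically the same null)
g2 = b2 + b3 + b4 - 1
grad2 = [1.0, 1.0, 1.0]

print(wald_scalar(g1, grad1, V), wald_scalar(g2, grad2, V))
```

The two statistics differ even though the restrictions are equivalent, which is exactly the non-invariance described above.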
Below we estimate the model without the trend and the statistically identical ECM reparameterisation. Note that D(..) takes the first difference of the variable. Although the trend was not significant even at the 10% level, the AIC would choose the model with trend over the model without trend, −2.046 < −2.043, whereas the Schwarz criterion chooses the model without trend. Although the ECM and ARDL are statistically identical, with the same standard error of regression, the R2 of the ARDL at 0.997 is much larger than the R2 of the ECM at 0.526. This is because the dependent variable in the ECM is the change in log dividends (the growth of dividends), not the level of log dividends. The model explains a smaller proportion of the change than of the level. This does not mean that the ARDL is better; it means that R2 can be misleading.
Method: Least Squares
Date: 08/04/16 Time: 15:34
Sample (adjusted): 1872 2014
Included observations: 143 after adjustments
LE 0.203134 0.025576 7.942294 0.0000
LD(-1) 0.651616 0.032320 20.16136 0.0000
Adjusted R-squared 0.997036 S.D. dependent var 1.578354
S.E. of regression 0.085925 Akaike info criterion -2.043109
Sum squared resid 1.026254 Schwarz criterion -1.960232
Log likelihood 150.0823 Hannan-Quinn criter. -2.009431
F-statistic 15924.80 Durbin-Watson stat 1.805160
Prob(F-statistic) 0.000000
Method: Least Squares
Date: 08/04/16 Time: 15:35
Sample (adjusted): 1872 2014
Included observations: 143 after adjustments
D(LE) 0.203134 0.025576 7.942294 0.0000
LE(-1) 0.318554 0.028591 11.14183 0.0000
LD(-1) -0.348384 0.032320 -10.77922 0.0000
Adjusted R-squared 0.516191 S.D. dependent var 0.123533
S.E. of regression 0.085925 Akaike info criterion -2.043109
Sum squared resid 1.026254 Schwarz criterion -1.960232
Log likelihood 150.0823 Hannan-Quinn criter. -2.009431
F-statistic 51.50140 Durbin-Watson stat 1.805160
Prob(F-statistic) 0.000000
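The ARDL and ECM outputs above can be reconciled by hand, since the ECM coefficients are exact functions of the ARDL ones. A short Python check using the numbers reported in the two outputs:

```python
# ARDL:  LD = a0 + b0*LE + b1*LE(-1) + a1*LD(-1)
# ECM:   D(LD) = a0 + b0*D(LE) + (b0+b1)*LE(-1) - (1-a1)*LD(-1)
# Numbers copied from the two outputs above.
a1_ardl = 0.651616      # LD(-1) coefficient in the ARDL (no trend)
le1_ecm = 0.318554      # LE(-1) coefficient in the ECM
ld1_ecm = -0.348384     # LD(-1) coefficient in the ECM

# The ECM LD(-1) coefficient should equal -(1 - a1) from the ARDL
print(round(-(1.0 - a1_ardl), 6))   # -0.348384

# Long-run elasticity recovered from the ECM: -coef(LE(-1))/coef(LD(-1))
theta1 = -le1_ecm / ld1_ecm
print(round(theta1, 4))             # 0.9144
```

The implied long-run elasticity of about 0.914 matches the estimate of θ1 from the nonlinear ECM estimation below to four decimal places.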
Lintner assumed firms had a target payout ratio, Θ, such that D∗t = ΘEt. We will take logs of this relationship, using lower case letters for logs, e.g. dt = log(Dt), etc. Notice natural logs are almost universally used in economics. We can write the target in log-linear form as d∗t = θ0 + θ1et, where his theory suggests that θ1 = 1 and θ0 = log(Θ). To this he added a 'Partial Adjustment Model' (PAM) and a random error:

∆dt = λ(d∗t − dt−1) + ut

Substituting for d∗t gives the estimating equation

dt = a0 + b0et + a1dt−1 + ut

with a0 = λθ0, b0 = λθ1 and a1 = 1 − λ. Firms smooth dividends, adjusting them gradually towards the target rather than adjusting them completely to short term variations in earnings.

Our estimates above suggest that lagged earnings are also significant, and this can be interpreted through an error correction model (ECM), with a long-run equilibrium determining d∗t and an adjustment process towards it:

∆dt = λ1∆d∗t + λ2(d∗t−1 − dt−1) + ut

Substituting d∗t = θ0 + θ1et gives

∆dt = a0 + b0∆et + b1et−1 + a1dt−1 + ut

with a0 = λ2θ0, b0 = λ1θ1, b1 = λ2θ1 and a1 = −λ2, so the long-run elasticity can be recovered as θ1 = −b1/a1. Equivalently, writing the ARDL as dt = α0 + β0et + β1et−1 + α1dt−1 + ut, the long-run elasticity is θ1 = (β0 + β1)/(1 − α1).
To estimate the long-run and adjustment parameters directly, click Quick, Estimate Equation again and type in

D(LD)=C(1)*C(4)*D(LE)+C(2)*(C(3)+C(4)*LE(-1)-LD(-1))

The D(...) first differences the data on LD. This estimates the ECM giving C(1) = λ1, C(2) = λ2, C(3) = θ0, C(4) = θ1. We get the same estimate of θ1 and its standard error as we got above.
Method: Least Squares (Gauss-Newton / Marquardt steps)
Date: 11/09/16 Time: 10:28
Sample (adjusted): 1872 2014
Included observations: 143 after adjustments
Convergence achieved after 5 iterations
Coefficient covariance computed using outer product of gradients
D(LD)=C(1)*C(4)*D(LE)+C(2)*(C(3)+C(4)*LE(-1)-LD(-1))
C(4) 0.914374 0.012283 74.43963 0.0000
C(2) 0.348384 0.032320 10.77922 0.0000
C(3) -0.384599 0.023683 -16.23973 0.0000
Adjusted R-squared 0.516191 S.D. dependent var 0.123533
S.E. of regression 0.085925 Akaike info criterion -2.043109
Sum squared resid 1.026254 Schwarz criterion -1.960232
Log likelihood 150.0823 Hannan-Quinn criter. -2.009431
Durbin-Watson stat 1.805160
This is a non-linear procedure; it took 5 iterations to get to the minimum sum of squared residuals. Sometimes such a procedure converges, sometimes it does not, and this can lead to problems. To provide some good starting values, if you did not get the results above, type

param c(1) 0.3 c(2) 0.3 c(3) 1 c(4) 1

in the command window at the top under the toolbar. To provide some bad starting values type

param c(1) 0.0 c(2) 0.0 c(3) 0.0 c(4) 0

in the command window at the top under the toolbar. This sets the starting values used by the optimiser. Estimate it again with these starting values and see what happens. It should not converge and will give silly values for the parameters.
When doing non-linear estimation, try to start with sensible starting values; estimates from a linear version of the model may give you sensible values. Also experiment with different starting values to make sure you have converged to the global optimum rather than a local one.
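The danger of bad starting values can be seen with a one-parameter toy problem that has two minima; which one a gradient-based optimiser finds depends entirely on where it starts. A Python sketch (a stylised illustration, not EViews' Gauss-Newton routine):

```python
def gradient_descent(start, lr=0.05, steps=200):
    """Minimise f(c) = (c^2 - 1)^2, which has two minima (c = 1 and
    c = -1) and a stationary point at c = 0.  The answer depends
    entirely on the starting value -- the same issue arises in
    nonlinear least squares."""
    c = start
    for _ in range(steps):
        grad = 4.0 * c * (c * c - 1.0)   # f'(c)
        c -= lr * grad
    return c

print(round(gradient_descent(0.5), 6))    # converges to 1.0
print(round(gradient_descent(-0.5), 6))   # converges to -1.0
print(gradient_descent(0.0))              # stuck at 0.0: zero gradient
```

Starting at exactly zero the gradient vanishes and the routine never moves, an analogue of the 'silly values' you get from bad starting values.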
In the earlier regressions there was evidence that the errors were heteroskedastic and non-normal, and this was mainly caused by excess kurtosis. Now we are going to assume that ut ∼ It(0, ht, ν): the errors are independent with a Student t distribution, expected value zero, a time-varying variance E(u2t) = ht, and degrees of freedom ν. The degrees of freedom determine how thick the tails of the distribution are. If ν is small, the tails are much fatter than the normal distribution; if ν is around 30,
it is very similar to the normal. The form of time varying variance we will use is GARCH(1,1):

ht = c0 + c1u2t−1 + c2ht−1

where u2t−1 is the lagged squared residual.
Click Quick, Estimate Equation, type in D(LD) C D(LE) LE(-1) LD(-1), and then change the method from LS to ARCH using the arrow on the right of the method box. You will now get a GARCH box. Change the error distribution from Normal to Student's t. Accept the other defaults and click OK.
Method: ML ARCH – Student’s t distribution (BFGS / Marquardt steps)
Date: 08/04/16 Time: 16:22
Sample (adjusted): 1872 2014
Included observations: 143 after adjustments
Convergence achieved after 60 iterations
Coefficient covariance computed using outer product of gradients
Presample variance: backcast (parameter = 0.7)
GARCH = C(5) + C(6)*RESID(-1)^2 + C(7)*GARCH(-1)
D(LE) 0.161176 0.017711 9.100466 0.0000
LE(-1) 0.234743 0.017747 13.22749 0.0000
LD(-1) -0.254972 0.019814 -12.86813 0.0000
RESID(-1)^2 0.255953 0.212146 1.206496 0.2276
GARCH(-1) 0.749364 0.123303 6.077413 0.0000
Adjusted R-squared 0.483399 S.D. dependent var 0.123533
S.E. of regression 0.088789 Akaike info criterion -2.383006
Sum squared resid 1.095811 Schwarz criterion -2.217252
Log likelihood 178.3849 Hannan-Quinn criter. -2.315652
Durbin-Watson stat 1.860517
The output also reports the estimated degrees of freedom of the t distribution, ν. Since ν > 2, a variance exists, but third and fourth moments do not. There is a very strong and significant GARCH effect, c2 = 0.749, though the lagged squared residual is not significant. Both the short-run adjustment, λ1, and the long-run adjustment, λ2, are slightly slower than the previous estimates, and the long-run coefficient 0.234743/0.254972 = 0.92 is similar to before.
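The GARCH(1,1) variance recursion is easy to iterate by hand, which helps in interpreting the output: c1 + c2 measures persistence and c0/(1 − c1 − c2) is the unconditional variance. A Python sketch with hypothetical parameters (chosen for illustration, not the estimates above):

```python
def garch11_path(c0, c1, c2, shocks, h0):
    """Iterate the GARCH(1,1) variance recursion
    h_t = c0 + c1*u_{t-1}^2 + c2*h_{t-1}."""
    h = [h0]
    for u in shocks:
        h.append(c0 + c1 * u * u + c2 * h[-1])
    return h

# Hypothetical parameters
c0, c1, c2 = 0.001, 0.25, 0.70
print("persistence:", c1 + c2)                    # < 1: stationary
print("unconditional variance:", c0 / (1 - c1 - c2))

# A single large shock, then quiet periods: variance jumps then decays
path = garch11_path(c0, c1, c2, [0.3, 0.0, 0.0, 0.0], 0.02)
print(path)
```

A big squared shock raises the conditional variance, which then dies away at rate c2; the closer c1 + c2 is to one, the more persistent volatility is.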
We now estimate a univariate model of stock prices and use it to forecast. Use Quick, Estimate an Equation, set the sample to 1873 2006 and type in D(LSP) C. This is a random walk with drift for the log of stock prices. Next estimate an ARIMA(1,1,1) model for log stock prices: click Estimate on the equation box, check the sample is 1873 2006, and type in D(LSP) C AR(1) MA(1).
You will get estimates with MLL=47.46484, s=0.172309. Notice that both the AR and MA terms are significant. Click Forecast on the equation box. Set the forecast period to 2006 2016 and look at the graph. It will save the forecast as LSPF. Close the equation and graph LSP and LSPF. This is clearly not a great forecast.
Although the AR and MA terms are individually significant, they do not reduce the standard error of the regression by much. On a likelihood ratio test they are just jointly significant: LR = 2(47.46 − 43.97) = 6.98, above the 5% chi-squared(2) critical value of 5.99 but not the 1% value of 9.21. There may be a common factor which cancels out. It would probably be better to use real stock prices rather than nominal ones.
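The likelihood ratio arithmetic can be checked directly from the maximised log-likelihoods quoted above (47.46 for the ARIMA(1,1,1), 43.97 for the drift-only model):

```python
# Likelihood ratio test for the joint significance of the AR and MA
# terms: LR = 2*(unrestricted MLL - restricted MLL), chi-squared with
# 2 degrees of freedom (two restrictions).
mll_arma = 47.46     # ARIMA(1,1,1), from the output above
mll_drift = 43.97    # drift-only model, as quoted in the text
LR = 2.0 * (mll_arma - mll_drift)
print(round(LR, 2))  # 6.98: between the 5% (5.99) and 1% (9.21) values
```

So the AR and MA terms are jointly significant at the 5% level but not at the 1% level.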
Click on LSP and choose View, Unit Root Test. Choose the test as Augmented Dickey Fuller (there are lots of other alternatives), choose level, choose trend and intercept, choose Akaike, set maximum lags at 12. Choose OK. You will get the ADF test results. The ADF statistic is -1.413932, much greater than the 5% critical value of -3.446 (given on the program output). The p-value is 0.8529, so we do not reject a unit root. Below is given the regression that was run to get the results. Notice that the lag length chosen is 5 and that the test statistic is just the t ratio on LSP(-1) in the regression.
Close the equation box and repeat the process (choose View, Unit Root Test) for the first difference of LSP, with intercept but no trend. The lag length chosen is 3. The ADF is -6.839723, which is much smaller than the 5% critical value of -2.88. Note that the critical values are different depending on whether or not you have a trend.
We cannot reject a unit root for LSP, but we can for its first difference, so we treat LSP as I(1). The results are not always as clear-cut as this and can be sensitive to lag length, treatment of the deterministic elements and choice of test.
Now consider dividends and earnings as a system. Click Quick, Estimate VAR and accept the default, an unrestricted VAR. Enter LE LD as endogenous variables. In the list of exogenous variables add @trend to C. Accept the defaults for everything else. This will give you a second order unrestricted VAR with intercept and trend. Notice: the trend is significant in both equations; the second lags of both variables are insignificant; LD(-1) is insignificant in the LE equation.
Choose View, Lag Structure, Lag Length Criteria and accept the default maximum lag. Most of the criteria indicate that one lag is optimal. The optimal value has a star beside it.
Choose View, Lag Structure, Granger Causality test. At the 5% level there is clear Granger causality from LE to LD: the p value for LD on LE is larger than that for LE on LD, which is 0.0000.
Choose View, Lag Structure, AR Roots, Graph. You will see the inverse roots of the AR characteristic polynomial, which include a pair of matching complex roots. From the AR Roots table option the largest root is 0.92, which may not be significantly different from one. In the complex pair, i is the square root of minus one. This suggests that there may be one stochastic trend (root on the unit circle) and one cointegrating vector.
Choose Estimate from the equation box and replace 1 2 by 1 1 in the lag specification box, to get a first-order VAR in dividends and earnings.
Choose View, Residuals, Correlation Matrix, and note the contemporaneous correlation between the residuals of the two equations.
Choose View, Impulse Responses, and click the impulse definition tab at the top to choose how the shocks are defined. The IRFs show the effect of a shock to each variable on itself and on the other variable.
LD shows a humped shaped response to a shock to LE, which remains significantly
positive for 10 years. LE shows an immediate response to a shock to LD, through
the contemporaneous covariance matrix, but it declines to zero. The generalised
IRFs assume that the shocks have the estimated correlation in the sample. The
Choleski IRFs assume a recursive causal structure for the shocks specified by the
ordering of the variables. In this case it is plausible that LE influences LD, but
not vice versa. So the ordering of the variables entered is correct and we can
calculate the Cholesky IRFs. Notice that here there is no significant effect of LD
on LE, in period zero by construction, and subsequently because lagged dividends
do not influence earnings. There is a hump shaped response of LD to LE.
Choose Estimate, and click Vector Error Correction rather than Unrestricted VAR; remove @trend from the exogenous variable box; then click the cointegration box at the top and choose option 4 rather than the default option 3. Notice that what EViews calls a VECM(1 1) corresponds to a VAR(1 2), and the lagged change terms are insignificant, which is what we would expect given that the second lag terms were insignificant. Choose View, Cointegration Tests, and click the bottom button, option 6, to summarise all 5 sets of assumptions.
All the tests except quadratic intercept trend say 1 cointegrating vector. Then choose one cointegrating vector (equation) and option 3 (no trend in the cointegrating equation).
Go back to Estimate, set the lag length at 0 0, and under cointegration choose option 3 with one cointegrating vector. There is significant feedback from the cointegrating equation on both variables.
The long-run coefficient is 0.91, very similar to what we got with the ECM. Note that EViews reports the cointegrating vector with the signs reversed, so you have to change the sign. You would clearly reject the hypothesis that the long-run elasticity was unity, t = (0.911845 − 1)/0.01120 = −7.87. View, Cointegration Graph will give you a plot of the cointegrating relation, a measure of the deviation from equilibrium: dt − θ0 − θ1et.
You can impose restrictions on the cointegrating coefficients using the tab at the top marked VEC restrictions. Click impose restrictions and then type B(1,1)=1, B(1,2)=-1. This imposes a long-run coefficient of unity on earnings. Click OK. You will get the restricted estimates and a likelihood ratio test that indicates that the restriction is rejected, as we determined above.
The ARDL explained log dividends by current and lagged log earnings and lagged log dividends. The evidence of the VAR suggests that earnings may be treated as exogenous, since there was little feedback from lagged dividends to earnings. However, if E(utεt) ≠ 0, this may cause et to be correlated with ut. We now investigate this.
First re-estimate the ARDL by OLS, i.e. run LD C LE LE(-1) LD(-1), using the same sample as the instrumental variable estimates below, 1873 2014. The coefficient on LE is 0.202599 with a standard error of 0.025582. We are now going to estimate it by IV/2SLS using the second lags and a trend as instruments.
Click estimate on the equation-box toolbar, change the method from LS to TSLS, leave the upper equation box the same, and in the lower one for the instrument list enter: C LE(-1) LD(-1) LE(-2) LD(-2) @TREND. Click OK and you will get the TSLS estimates.
The coefficient of LE is now larger, with a larger standard error. It gives a J statistic (Sargan test) with a p value of 0.16, so the over-identifying restrictions are not rejected. We might have anticipated this because the trend and lagged earnings were not significant in explaining dividends. The OLS and TSLS estimates do not look significantly different: the TSLS estimate ±2 standard errors covers the LS estimate.
Method: Two-Stage Least Squares
Date: 09/06/16 Time: 15:39
Sample (adjusted): 1873 2014
Included observations: 142 after adjustments
Instrument specification: C LE(-1) LD(-1) LE(-2) LD(-2) @TREND
LE 0.304909 0.103416 2.948389 0.0038
LD(-1) 0.618579 0.047998 12.88767 0.0000
Adjusted R-squared 0.996693 S.D. dependent var 1.578384
S.E. of regression 0.090770 Sum squared resid 1.136999
F-statistic 14154.51 Durbin-Watson stat 1.829364
Prob(F-statistic) 0.000000 Second-Stage SSR 1.410362
J-statistic 3.680687 Instrument rank 6
Prob(J-statistic) 0.158763
Now estimate the first stage by OLS: run LE on C LE(-1) LE(-2) LD(-1) LD(-2) @TREND. This gives us the same estimates as we got for the LE equation from the second-order VAR with intercept and trend: the VAR is the reduced form. This is the first stage of two stage least squares. You should always check this first stage, to see whether the instruments explain the endogenous variable; in this case LE(-1) and @TREND are very significant and the F statistic is much bigger than 10, the rule of thumb. Close the equation, use Quick, Generate and define ULE=RESID. This saves the residuals from the first stage (the reduced form equation for LE) as ULE. Then use OLS to estimate LD C LE LE(-1) LD(-1) ULE. The coefficient on ULE has a t statistic of -1.083368, so we do not reject the hypothesis that we can treat LE as exogenous.
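The logic of two stage least squares can be verified in a toy example: regressing y on the first-stage fitted values reproduces the direct IV estimate. A Python sketch with made-up data (z the instrument, x the endogenous regressor):

```python
def cov(a, b):
    """Sample covariance (divisor n, which cancels in the ratios)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / n

# Made-up data
z = [0.0, 1.0, 2.0, 3.0, 4.0]   # instrument
x = [1.0, 1.5, 3.0, 3.5, 5.0]   # endogenous regressor
y = [2.0, 3.2, 5.9, 7.1, 9.8]   # dependent variable

# Direct IV estimate of the slope
b_iv = cov(z, y) / cov(z, x)

# Two stage least squares: first stage x on z, then y on fitted x
g = cov(z, x) / cov(z, z)                       # first-stage slope
a = sum(x) / len(x) - g * sum(z) / len(z)       # first-stage intercept
xhat = [a + g * v for v in z]                   # fitted values
b_2sls = cov(xhat, y) / cov(xhat, xhat)

print(b_iv, b_2sls)   # identical by construction
```

Because the fitted values are an exact linear function of the instrument, the two estimates coincide; the control-function approach above (adding ULE to the OLS regression) exploits the same first stage.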
We could also use two stage least squares to estimate a rational expectations model, in which dividends respond to expected future earnings, instrumenting the expectations with information available in the current period. Click estimate, choose TSLS, and type into the equation box: LD C LE(1) LD(-1), and into the instrument box: C LE LE(-1) LE(-2) LD(-1) LD(-2) @trend. You will get a coefficient on future earnings of 0.29. Notice we have lost one observation at the end of the period, because of the future variable on the right hand side.