Journal of Scholastic Inquiry: Business

Business Edition, Volume 4, Issue 1, Spring 2015

Published by: Center for Scholastic Inquiry, LLC
ISSN: 2330-6815 (online)  ISSN: 2330-6807 (print)
www.csiresearch.com
The Center for Scholastic Inquiry (CSI) publishes the Journal of Scholastic Inquiry: Business (JOSI: B) to recognize, celebrate, and highlight scholarly research, discovery, and evidence-based practice in the field of business. Academic and action research emphasizing leading-edge inquiry, distinguishing and fostering best practice, and validating promising methods will be considered for publication. Qualitative, quantitative, and mixed-method study designs representing diverse philosophical frameworks and perspectives are welcome.

The JOSI: B publishes papers that perpetuate thought leadership and represent critical enrichment in the field of business. The JOSI: B is a rigorously juried journal. Relevant research may include topics in business, economics, business information systems, international business, business management, accounting, business law, business ethics, management information systems, finance, foreign trade, international politics, and related fields.

If you are interested in publishing in the JOSI: B, feel free to contact our office or visit our website.

Sincerely,
Dr. Tanya McCoss-Yerigan
Executive Director & Managing Editor
Center for Scholastic Inquiry
4857 Hwy 67, Suite #2
Granite Falls, MN 56241
Web: www.csiresearch.com
Phone: 855-855-8764
Email: editor@csiresearch.com
JOURNAL OF SCHOLASTIC INQUIRY: BUSINESS
Spring 2015, Volume 4, Issue 1

Managing Editor: Dr. Tanya McCoss-Yerigan
Editor-in-Chief: Dr. Jamal Cooks
General Editor & APA Editor: Jay Meiners
Editorial Advisory Board

Shirley Barnes, Alabama State University
Joan Berry, University of Mary Hardin-Baylor
Brooke Burks, Auburn University at Montgomery
Timothy Harrington, Chicago State University
Michelle Beach, Southwest Minnesota State University
Kenneth Goldberg, National University
Linda Rae Markert, State University of New York at Oswego
Lucinda Woodward, Indiana University Southeast

Peer Reviewers

Hugh Sales
Richard Colfax
Judy Lawrence
Howard Lawrence
Wendy Bodwell
William Holmes
Steven Lamb
Tom DeBerry
Paul Stock
Brien Smith
Stephanie Hurt
Scott Brooks
Charles Enis
Alice Etim
Patty Castelli
Not all reviewers are utilized in each publication cycle.
CALL FOR PAPERS
JOIN US

Center for Scholastic Inquiry's International Academic Research Conference
October 28-30, 2015
Charleston, South Carolina
Would you like to elevate your status as a scholar-practitioner and develop your professional reputation and credentials through presentation and publication?

Would you enjoy stimulating professional rejuvenation and tranquil personal relaxation at the same time by combining meaningful professional development with a luxury getaway?

Would you enjoy tailored continuing education experiences by choosing the conference sessions that best suit your professional interests and vocational pursuits?

Would you appreciate collaborative collegiality with luminaries and pioneers conducting the most academically and scientifically meritorious research?

Are you interested in developing your thought leadership and contributing to the body of validated knowledge in your academic or professional field?

Are you interested in diverse scholarship by learning with and from education, business, and behavioral science practitioners and professionals from around the world on a wide range of contemporary topics?
No matter your role in business, education, or behavioral science, there is something for everyone. Professionals from the public and private sectors will learn about emerging trends, best practices, and innovative strategies. For more information about attending, presenting, and/or publishing, check out CSI's website:
www.csiresearch.com
TABLE OF CONTENTS
Publication Agreement and Assurance of Integrity; Ethical Standards in Publishing; Disclaimer of Liability ....... 7

Research Manuscripts ....... 8-82

Markov Models and the Dissemination of Tax-Incentive Choices: An Illustration
Charles R. Enis, The Pennsylvania State University ....... 8

Auction Theory for Multi-unit Reverse Auctions: Testing Predictions in the Lab
William B. Holmes, Georgia Gwinnett College ....... 26

The Use of Computer-Aided Audit Tools in Fraud Detection
Judy Ramage Lawrence, Christian Brothers University; Denny L. Weaver, American National Diversified, Inc.; Howard Lawrence, University of Mississippi ....... 39

U.S. Versus European Voluntary Earnings Forecasts… How Different Are They and Do They Vary by Economic Cycle?
Ronald A. Stunda, Valdosta State University ....... 57

Manuscript Submission Guide ....... 83

Why Read Our Journals ....... 85
PUBLICATION AGREEMENT AND ASSURANCE OF INTEGRITY

By submitting a manuscript for publication, authors confirm that the research and writing is their exclusive, original, and unpublished work. Upon acceptance of the manuscript for publication, authors grant the Center for Scholastic Inquiry, LLC (CSI) the sole and permanent right to publish the manuscript, at its option, in one of its academic research journals, on the CSI website, in other germane academic publications, and/or on an alternate hosting site or database. Authors retain copyright ownership of their research and writing for all other purposes.
ETHICAL STANDARDS IN PUBLISHING

The CSI insists on and meets the most distinguished benchmarks for publication of academic journals to foster the advancement of accurate scientific knowledge and to defend intellectual property rights. The CSI stipulates and expects that all practitioners and professionals submit original, unpublished manuscripts in accordance with its code of ethics and ethical principles of academic research and writing.
DISCLAIMER OF LIABILITY

The CSI does not endorse any of the ideas, concepts, and theories published within the JOSI: B. Furthermore, we accept no responsibility or liability for outcomes based upon implementation of the individual author's ideas, concepts, or theories. Each manuscript is the copyrighted property of the author.
Markov Models and the Dissemination of Tax-Incentive Choices: An Illustration

Charles R. Enis
The Pennsylvania State University

Abstract

This paper reports on the use of Markov models to predict, describe, and evaluate the dissemination of behaviors that are intended to result from policy initiatives. A Markov model is a numerical exercise that uses frequencies of past behaviors to provide reasonable estimates of subsequent behaviors. The present study used a Markov model to estimate the extent to which taxpayers participated in Individual Retirement Accounts (IRAs) for specified years based upon their participation in prior years. The setting for the analysis involved the extent to which a panel of 506 eligible households participated in IRAs from 1982 to 1985, the period having the least restrictions on taxpayers making deductible IRA contributions. IRAs are a tax policy initiative designed to encourage retirement savings behavior among households, and prior research has shown that contemporaneous IRA participation is related to past IRA participation. Results suggest that the dissemination of IRA participation reasonably followed a Markov process. Such findings could have provided the Treasury Department with an additional benchmark related to the acceptance of policy initiatives. In short, the Treasury Department and other policy makers may be able to judge the degree to which the actual implementation of initiatives follows that estimated by Markov processes during each of the years such policies were available.

Keywords: Markov process, Markov chains, Individual Retirement Accounts, tax policy initiatives
Introduction

The Internal Revenue Code has offered many incentives for individual taxpayers to engage in worthwhile activities such as investing in education, health care, home ownership, and retirement savings. Before one can determine whether such incentives were effective in achieving their normative objectives, it is important to determine the extent to which individual taxpayers embraced these opportunities. This paper addresses deductible individual retirement accounts for married two-income couples filing joint returns from 1982 through 1985. During these years there were no income restrictions on participation as long as contributions did not exceed the "earned income" of the respective participants. Here each spouse could contribute up to $2,000 of "before-tax" dollars to his/her IRA, for a combined $4,000 deduction for adjusted gross income (AGI) on a joint return. Income on these contributions is tax deferred until distribution. Typically, penalty-free distributions from IRAs are allowed after contributors turn age 59½. In short, the tax incentives associated with deductible IRAs are "immediate" tax deductions and tax deferrals on accrued income (Burman, Gale, & Weiner, 2001).

This study focused on modeling the extent to which choices to participate in deductible IRAs were disseminated across two-income married couples whose only sources of income were employment and savings. In other words, the Markov process was applied to how these taxpayers adopted IRAs to save for retirement from 1982 through 1985. The Treasury Department has been concerned about the lack of financial resources that will be available for retirees to supplement Social Security benefits. Hence, IRAs were created as a tax subsidy to encourage retirement savings. Such a policy initiative would have failed if taxpayers did not participate. On the other hand, far greater participation than expected could have indicated that IRAs were too costly in terms of tax dollars. Hence, the extent to which taxpayers adopted IRAs over time was important to the Treasury Department. Although Treasury has many sophisticated models to track such participation, the robustness and simplicity of Markov processes offer findings that can be used to compare with the results of other forecasts.
Review of Literature

Choices regarding retirement savings programs are important policy issues given the low level of retirement assets available to many households (Poterba, Venti, & Wise, 1998). The intent of this research was not to assess the overall effectiveness of deductible IRAs as a means to increase retirement savings, but to model the extent to which choices to participate in such plans were diffused across a targeted group. The period 1982-1985 was examined as this was a window of opportunity to study retirement savings choices when taxpayers faced the least restrictions regarding the deductions that they could have realized from making IRA contributions.

Most IRA studies of this period focused on economic and psychological factors in attempting to explain what motivates people to contribute to IRAs. Among the economic variables offered as explanatory factors were wealth effects, life-cycle stages, demographic factors, income elasticity, interest rates, inflation, employment, fiscal policies, tax laws, etc. (Diamond & Hausman, 1984; Hubbard, 1984; Long, 1990). Many studies were based on models that incorporated contemporaneous versus expected future tax rates, expected before-tax rates of return on IRA contributions, and the lengths of time until retirement (Hulse, 2003; Seida & Stern, 1998). Constructing models that incorporate all such variables would have been an onerous task. However, a feature of Markov processes is the ability to approximate the predictions of complex multivariate models with relatively simple mathematical formulations, as done in the sciences (Rosales & Varanda, 2009).

Although the formulation of Markov processes is attributed to the mathematician Andrey Markov about 100 years ago, they are still very relevant today. Markov processes, also referred to as Markov chains, are topics still covered in many statistical texts as well as in those that specifically focus on their many variations (see Puterman, 2005). Markov processes resemble "memoryless" systems that apply to data transitioning between observations. Markov chains are used in a wide variety of statistical applications because they are simple representations that closely replicate predictions of more complex and sophisticated models (Rosales & Varanda, 2009). Markov chains are used in the biological sciences to model the flow of calcium ions across receptors (Gin, Falcke, Wagner, Yule, & Sneyd, 2009; Siekmann, Wagner, Yule, Fox, Bryant, Crampin, & Sneyd, 2011). Furthermore, Markov chains are also used in modeling
complex systems in other diverse disciplines, for example: engineering (Chen & Trivedi, 2005), pricing policy (Aviv & Pazgal, 2005), brand switching in consumer behavior (Alderson & Green, 1964; Howard, 1960), artificial intelligence (Spaan & Vlassis, 2005), and geriatric health care (Hoey, von Bertoldi, Poupart, & Mihailidis, 2007; Pineau, Montemerlo, Pollack, Roy, & Thrun, 2003).

Prospect theory (Kahneman & Tversky, 1979) was the major psychological theory associated with IRA participation studies. One of the tenets of prospect theory is that losses loom larger than gains. Taxpayers who receive refunds frame these situations as gains, while those who have to pay frame these situations as losses. Studies have shown that those who have to pay are more likely to contribute to IRAs than those who receive refunds (Feenberg & Skinner, 1989). IRA contributions that are deductible for year t can be made up to April 15th of year t+1. Thus, taxpayers know whether their withholding positions will be refunds or payments up to that point before deciding to make IRA contributions. Approximately two-thirds of all IRA deductions claimed in year t were made in the first quarter of year t+1 (Boynton, 1984; Summers, 1986). Furthermore, taxpayers dissatisfied with their withholding positions (i.e., their refunds were smaller than expected, or their payments were greater than expected) were shown to be more likely to contribute to IRAs (Carroll, 1992; Copeland & Cuccia, 2002). Thaler (1994) indicates that IRA contributions can be framed as positive reinforcements for retirement savings, as the tax benefits from deductions are realized shortly after contributions are made. This association reduces the psychological stress of withholding positions framed as losses.

This study applied a Markov process to model the frequencies of choices to participate in IRAs. It should be pointed out that the economic and psychological factors that induce IRA participation were not examined; instead, the focus was on the extent to which IRA contributions were embraced from year to year. In other words, the extent to which levels of IRA participation for year t were related to participation in year t-1 was the major observation of this study. Markov processes cover a sequence of events that are presumed to be generated by a stochastic mechanism. The underlying premise of Markov chains is that events such as making IRA contributions for any given year were related to contributions chosen in prior years. Thus, the probability of IRA contributions in year t depends upon the contributions in year t-1. Prior research has documented that past IRA participation is one of the most important factors
explaining choices to contribute to IRAs (Burman, Cordes, & Ozanne, 1990; Frischmann, Gupta, & Weber, 1998; Gale & Scholz, 1994; O'Neil & Thompson, 1987; Venti & Wise, 1988).
Method

The data set used in this exercise was derived from Statistics of Income (SOI) data extracted from the Panel of Individual Returns for years 1982-1985. For a technical description of this panel, see Crum (1991). The targeted panel for this illustration consisted of married joint returns that qualified to deduct the maximum $4,000 IRA contribution and whose only sources of income were those reported on forms W-2, 1099-DIV, and 1099-INT, as well as other capital gains and losses (Enis, 1991). In other words, the individuals represented in the targeted panel were those that received paychecks and either consumed or saved their money. Furthermore, to be included in the targeted panel both spouses had to be under age 65 and combined AGI had to be no more than $200,000. These latter requirements were imposed to assure that the IRA contributions were for future retirement savings, and that AGIs were not distorted by "blurring" techniques employed in the data for privacy purposes. On average, households would have had to contribute more than 10% of their AGIs to max out on their allowable IRA contributions. In short, lower to middle income wage earners represented a group that in particular needed to have some form of retirement savings plan to augment their Social Security benefits.

Often consumer panels are chosen using a random process. However, if one wanted to test the effects of an anti-hypertensive drug, subjects would not be selected on a random basis. The chosen panel for the test would be those at risk for developing high blood pressure, who would thus benefit from the marketing of a safe and effective product. In like manner, panels chosen for tax-incentive studies should be those that will benefit the most from implementing the desired behavior. The targeted panel for this illustration consisted of 506 tax returns that met all criteria for years 1982-1985.

The research question is whether IRA participation followed a Markov process. Testing for a Markov process typically begins by organizing discrete data from two periods into a contingency format. The variables that follow (i.e., C82…C85) are dummy variables equal to one if an IRA contribution was made in the indicated year, and zero otherwise. For any of these
dummy variables to be equal to one, it was not necessary that the maximum contribution have been made; any non-zero contribution was coded one. It should also be pointed out that values of zero did not necessarily mean that such households did not have an IRA, just that no contributions to IRAs were made for the indicated year. The products (N × mean) associated with the C82…C85 variables represent the number of households in the targeted panel that made IRA contributions for the indicated years; see Table 1. For example, the mean for C82 of .1936759 indicated that approximately 19.4% of the 506 households participated in IRAs in 1982. In other words, 98 of 506 households contributed to IRAs in 1982. This .1936759 figure can be interpreted as the probability of IRA participation in 1982; i.e., p(C82) = .194. If 98 of 506 made contributions in 1982, then 408 did not (506 - 98 = 408); thus, ~C82 = 408. The numbers of households in the panel that participated in IRAs in 1983, 1984, and 1985 are C83 = 114, C84 = 144, and C85 = 151, respectively, as shown in Table 1. These results were used to estimate the extent to which the choices to participate in IRAs by the panel could have been described as a Markov process during the 1982 to 1985 period. A first-order Markov chain is one in which the probability of participation in IRAs in 1984 (1985) was solely a function of the frequencies of participation in 1983 (1984).
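As an illustration of how the participation frequencies reported in Table 1 can be derived, consider the following sketch in Python; the file and column names are hypothetical stand-ins for the SOI panel.

```python
import pandas as pd

# Hypothetical panel layout: one row per household, with 0/1 dummies
# c82..c85 indicating a non-zero IRA contribution in the given year.
panel = pd.read_csv("soi_panel_1982_1985.csv")

for year in ["c82", "c83", "c84", "c85"]:
    p = panel[year].mean()       # proportion participating, e.g., p(C82) = .194
    n = int(panel[year].sum())   # number of participating households (N * mean)
    print(f"{year}: mean = {p:.7f}, households = {n}")
```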
Results

The parameters of a first-order Markov chain were estimated from the marginal proportions shown in the contingency chart in Table 2, Panel A. A matrix of transitional probabilities (T82,83) was derived from this contingency format, as shown in Table 2, Panel B. The elements of the T82,83 matrix were conditional probabilities; for example, p(C83 | ~C82) refers to the probability that IRA contributions were made in 1983 given that no contributions were made in 1982 by the respective households (this conditional probability was 8.1%, 33/408). Thus, the estimated transitional probabilities were cell frequencies converted to row proportions, as shown in Table 2, Panel B. A second matrix (R83) was derived from the frequencies of IRA and no IRA participation for 1983, as shown in Table 2, Panel C.
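A minimal sketch of deriving T82,83 and R83 from the coded panel follows; it assumes the hypothetical panel DataFrame from the previous sketch.

```python
import numpy as np
import pandas as pd

# Cross-tabulate 1982 against 1983 participation (Table 2, Panel A),
# reordered so that "Yes" (1) comes first, matching the paper's layout.
counts = (pd.crosstab(panel["c82"], panel["c83"])
            .reindex(index=[1, 0], columns=[1, 0]))

# Transitional probabilities: cell counts converted to row proportions,
# e.g., p(C83 | C82) = 81/98 = .827 and p(C83 | ~C82) = 33/408 = .081.
T_8283 = counts.div(counts.sum(axis=1), axis=0).to_numpy()

# R83: unconditional probabilities of participation / no participation in 1983.
R_83 = np.array([panel["c83"].mean(), 1 - panel["c83"].mean()])
```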
The matrix of predicted probabilities of IRA participation and no participation for 1984 (R̂84) is the product of the R83 and T82,83 matrices, as shown in Equations (1) and (2). All figures in Equations (1) and (2) are from Table 2.

$$\hat{R}_{84} = R_{83}\,T_{82,83} \qquad (1)$$

$$\hat{R}_{84} = \begin{bmatrix} .225 & .775 \end{bmatrix} \begin{bmatrix} .827 & .173 \\ .081 & .919 \end{bmatrix} = \begin{bmatrix} .249 & .751 \end{bmatrix} \qquad (2)$$

Multiplying R̂84 by N = 506 shows that the Markov model predicted 126 households in the panel to have participated in IRAs in 1984 (.249 × 506 = 126). However, 144 households in the panel actually participated in IRAs in 1984. The z statistic computed using Equation (3), a one-proportion test of the actual participation rate against the predicted rate, was used to evaluate the predictive accuracy of the Markov process for 1984:

$$z = \frac{\hat{p} - p}{\sqrt{p(1-p)/N}} = \frac{.285 - .249}{\sqrt{(.249)(.751)/506}} \approx 1.86 \qquad (3)$$

Thus, the actual frequency of IRA participation in 1984 was significantly greater than that predicted by the Markov process based upon 1983 participation. In other words, the
rejection of the null hypothesis that the dissemination of IRA contributions for 1984 followed a first-order Markov chain, in the positive direction, suggests that the targeted panel's IRA participation exceeded expectations. All hypothesis tests in this paper were two-tail.

In a similar manner, one can calculate the expected frequency of 1985 IRA participation based upon 1984 participation using a Markov chain. As was the case in predicting 1984 participation, the process for predicting 1985 participation began with a contingency chart showing actual frequencies of participation and no participation for 1983 and 1984, respectively (see Table 3). The format of Table 3 is the same as that of Table 2, Panel A. The matrix of transitional probabilities and the matrix of 1984 IRA and no IRA contribution proportions were derived in the same manner as in the previous example, as shown in Table 2, Panels B & C.
The product of R84 and T83,84 equals the matrix containing the predicted proportions for 1985 as follows:

$$\hat{R}_{85} = R_{84}\,T_{83,84} = \begin{bmatrix} .285 & .715 \end{bmatrix} \begin{bmatrix} .886 & .114 \\ .110 & .890 \end{bmatrix} = \begin{bmatrix} .331 & .669 \end{bmatrix}$$

The Markov process predicted 167 IRA participants for 1985 (.331 × 506 = 167), as compared to 151 actual participants for 1985. The null hypothesis that the predicted 167 participants for 1985 followed a Markov process was tested by the z statistic computed by Equation (4):

$$z = \frac{.298 - .331}{\sqrt{(.331)(.669)/506}} \approx -1.54 \qquad (4)$$
In short, the Markov process underestimated the frequency of IRA participation among households in the panel for 1984 and overestimated it for 1985; the latter difference was not statistically significant at the .05 level.
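The two predictions and z tests (Equations (1) through (4)) can be checked numerically from the published cell counts alone; the one-proportion z formula below is the assumed form of Equations (3) and (4), and scipy is used only for the normal distribution.

```python
import numpy as np
from scipy import stats

N = 506

def predict_and_test(T, R_prev, actual_count):
    """One-step Markov prediction and a one-proportion z test against it."""
    p_hat = (R_prev @ T)[0]                       # predicted participation rate
    p_act = actual_count / N
    z = (p_act - p_hat) / np.sqrt(p_hat * (1 - p_hat) / N)
    return p_hat * N, z, 2 * (1 - stats.norm.cdf(abs(z)))  # two-tail p-value

T_8283 = np.array([[81/98, 17/98], [33/408, 375/408]])
T_8384 = np.array([[101/114, 13/114], [43/392, 349/392]])
R_83 = np.array([114/506, 392/506])
R_84 = np.array([144/506, 362/506])

print(predict_and_test(T_8283, R_83, 144))  # 1984: ~126 predicted vs. 144 actual
print(predict_and_test(T_8384, R_84, 151))  # 1985: ~167 predicted vs. 151 actual
```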
A prediction for 1985 could have been estimated in the absence of 1984 data by using the 1982-1983 contingency figures (see Table 2, Panel A) to start a Markov chain, arriving at the following predictions:

Year    No. of IRAs Predicted
1984    126
1985    135

The Markov chain predicted IRA participation for 1985 at 135, which is less than the actual participation of 151; however, this difference is not statistically significant at the .05 level (z = 1.58, α = .057). The Markov chain computations can be continued indefinitely, as shown in Table 4. As the Markov chain continued in Table 4, changes in the R̂ matrix got smaller and smaller until the changes became negligible; thus, a steady state was achieved. In the present example, a steady state would have been reached when approximately 160 households participated in IRAs. The difference between this steady state of 160 participants and the actual 151 participants for 1985 was not statistically significant at the .05 level (z = -0.803, α = .21). In short, convergence to a steady state was forecasted to occur in 1995, when 160 households in the panel (160/506 = 31.6%) were predicted to have participated in IRAs. Although the data do not extend beyond 1985, the observation that the 151 actual participants in 1985 did not differ statistically from 160 suggested that the predicted level of participation at convergence was likely to be realized.

The panel of 506 households contained data for each year from 1982 to 1985: a total of four years, or four "waves," of data. Testing the predictive accuracy of a first-order Markov chain required at least three waves of data. The results of the predictive accuracy tests were mixed, with significance levels close to traditional thresholds. The extent to which the frequency of IRA participation fits a Markov process may be evaluated more precisely by testing the hypothesis that the T82,83 (T83,84) matrix was equal to the matrix based on the 1983 and 1984 (1984 and 1985) data. A chi-square (χ²) statistic was used to test this hypothesis. This goodness-of-fit test also required at least three waves of data. Because the present demonstration involved four waves of data, this test could have been conducted twice. This discussion will detail only the first of these tests; i.e., the null hypothesis that the
T82,83 matrix was equal to the relevant matrix based on the 1983 and 1984 data. In other words, could one have rejected the null hypothesis of a constant transition matrix? If not, then a Markov process was descriptive of the observed pattern of IRA participation.

The test involved computing and totaling two χ² statistics. The first (second) χ² was computed using observed and expected frequencies compiled from data regarding households that did (did not) participate in an IRA in either 1982 or 1983. Illustrated in Table 5, Panel A, are the observed frequencies of the households that actually participated in IRAs in either 1982 or 1983. Because 98 (114) households participated in 1982 (1983), N = 212 (98 + 114). The derivation of the expected cell frequencies is shown in Table 5, Panel B. The first of these two χ² statistics was computed from the observed and expected cell frequencies using Equation (5) and data from Table 5, Panels A & B:

$$\chi^2_{IRA} = \frac{(81-84)^2}{84} + \frac{(17-14)^2}{14} + \frac{(101-98)^2}{98} + \frac{(13-16)^2}{16} = 1.404 \qquad (5)$$

The procedure was repeated with the exception that the row totals pertained to the frequency of households that did not participate in IRAs in either 1982 or 1983. Because 408 (392) households did not participate in 1982 (1983), N = 800 (408 + 392). The observed cell frequencies are shown in Table 5, Panel C, and the expected cell frequencies were derived as reported in Table 5, Panel D. The second χ² statistic was computed from the observed and expected cell frequencies using Equation (6) and data from Table 5, Panels C & D:

$$\chi^2_{No\,IRA} = \frac{(33-39)^2}{39} + \frac{(375-369)^2}{369} + \frac{(43-37)^2}{37} + \frac{(349-355)^2}{355} = 2.095 \qquad (6)$$
The IRA and No IRA components of the analysis were each a two-by-two contingency chart. Each two-by-two component had one degree of freedom (df); for the combined χ² statistic, df = 2. The χ² figure for the IRA (No IRA) component was 1.404 (2.095); see Equations 5 & 6, respectively. The combined χ² statistic was 3.499. The critical χ² value (α = .10, df = 2) was 4.60. Thus, one could not reject the null hypothesis of a constant transition matrix. If this χ² test was also conducted using the T83,84 matrix and 1983 (1984) as year t = 1 (t = 2), the combined χ² statistic would have equaled 2.907. In other words, the notion that the pattern of IRA participation among the 506 households in the panel could have been described by a first-order
Markov chain for the period 1982-1984 appeared reasonable in spite of the 1984 participation exceeding expectations.
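Both the steady-state iteration behind Table 4 and the constant-transition-matrix test can be reproduced from the published counts. In this sketch the convergence tolerance (roughly half a household) is an assumption, and the χ² components come out slightly different from the reported 1.404 and 2.095 because the paper rounds the expected cell counts before computing Equations (5) and (6).

```python
import numpy as np
from scipy import stats

N = 506
T = np.array([[81/98, 17/98], [33/408, 375/408]])   # T82,83 (Table 2, Panel B)
R = np.array([114/506, 392/506])                    # R83

# Iterate the chain until the state vector stops changing: the steady state.
year = 1983
while True:
    R_next, year = R @ T, year + 1
    if abs(R_next[0] - R[0]) * N < 0.5:   # assumed tolerance: half a household
        break
    R = R_next
print(f"steady state of ~{R_next[0] * N:.0f} households reached by {year}")

def component_chi2(obs):
    """Chi-square of a 2x2 table against frequencies implied by its margins."""
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()
    return ((obs - expected) ** 2 / expected).sum()

obs_ira = np.array([[81, 17], [101, 13]])    # Table 5, Panel A
obs_no = np.array([[33, 375], [43, 349]])    # Table 5, Panel C

chi2 = component_chi2(obs_ira) + component_chi2(obs_no)
print(f"combined chi2 = {chi2:.3f}, "
      f"critical value (alpha = .10, df = 2) = {stats.chi2.ppf(0.90, df=2):.2f}")
```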
Discussion

This paper reports on the use of the Markov process as a means of describing and evaluating the dissemination of behaviors that are encouraged through public policy initiatives. Specifically, the focus was on IRA participation, a tax policy initiative that provides tax incentives aimed at encouraging households to save for retirement. Rather than looking at economic and psychological motivations for IRA participation, the diffusion of this behavior across time was examined regardless of the economic and psychological factors that may have influenced participation. The study was conducted using the period 1982 to 1985, a time when deductible IRA contributions had the least restrictions. The panel used was 506 households where both spouses were eligible to make maximum IRA contributions for all years from 1982 to 1985. Although far from perfect, the assertion that the dissemination of IRA participation through the households in the panel followed a Markov process appeared reasonable and within conventional significance levels (i.e., p < .10).

Predictions of the dissemination of behaviors encouraged through policy initiatives using Markov processes appear to have the potential to assist policy makers in evaluating benchmarks derived from more sophisticated models. In short, if the results from Markov processes are consistent with forecasts from other models, then policy makers can have greater confidence in evaluating acceptance levels of behaviors suggested by other means. In the present exercise, the IRA participation of the panel households exceeded expectations predicted by the Markov process in 1984, but fell short in 1985. Using Markov predictions to support other forecasts can direct the attention of policy makers to situations where their initiatives are not gaining the levels of acceptance that were desired, or where acceptance levels are unreasonably high. In either case policy makers would be in better positions to make timely and appropriate responses. Within the context of the present analysis, a too-low level of IRA participation might have indicated that IRA tax incentives were inadequate in motivating retirement savings. On the other hand, a too-high level of IRA
participation might have proven too costly to Treasury. Nevertheless, caution should be exercised in extending the results reported in this paper beyond the model, time period, panel, and tax setting used in the analyses. More research is needed in evaluating the acceptance levels of policy initiatives.

Author Biography

Charles R. Enis is an associate professor of accounting at the Pennsylvania State University. He received his doctorate in business from the University of Maryland and is a certified public accountant. His research areas are taxation, accounting, and supply chain. He has published in The Accounting Review, Journal of Accounting Research, Journal of the American Taxation Association, Accounting, Organizations, and Society, Decision Sciences, Logistics and Transportation Review, Transportation Law Journal, Transportation Journal, Real Estate Review, Policy Sciences, Advances in Marketing, Journal of Economic Psychology, Journal of Business and Economics, Tax Notes, and Public Finance Review, among others.

References

Alderson, W., & Green, P. E. (1964). Planning and problem solving in marketing. Homewood, IL: Richard D. Irwin.

Aviv, Y., & Pazgal, A. (2005). A partially observed Markov decision process for dynamic pricing. Management Science, 51(8), 1400-1416.

Boynton, N. D. (1984). The IRA sweepstakes: A consumer study. Hartford, CT: Life Insurance Marketing Research Association.

Burman, L., Cordes, J., & Ozanne, L. (1990). IRAs and national savings. National Tax Journal, 43(3), 259-283.

Burman, L. E., Gale, W. G., & Weiner, D. (2001). The taxation of retirement saving: Choosing between front-loaded and back-loaded options. National Tax Journal, 54(3), 689-702.
Carroll, J. S. (1992). How taxpayers think about their taxes: Frames and values. In J. Slemrod (Ed.), Why people pay taxes: Tax compliance and enforcement (pp. 43-63). Ann Arbor, MI: University of Michigan.

Chen, D., & Trivedi, K. S. (2005). Optimization for condition-based maintenance with semi-Markov decision process. Reliability Engineering and System Safety, 90(1), 25-29.

Copeland, P. V., & Cuccia, A. D. (2002). Multiple determinants of framing referents in tax reporting and compliance. Organizational Behavior and Human Decision Processes, 88(1), 499-526.

Crum, R. P. (1991). Statistics of income panel of individual returns: An overview. In C. R. Enis (Ed.), A guide to tax research methodologies (pp. 96-114). Sarasota, FL: American Taxation Association of the American Accounting Association.

Diamond, P. A., & Hausman, J. A. (1984). Individual retirement and savings behavior. Journal of Public Economics, 23(1/2), 81-114.

Enis, C. R. (1991). The use of individual tax model files to obtain data for empirical research in taxation. In C. R. Enis (Ed.), A guide to tax research methodologies (pp. 81-95). Sarasota, FL: American Taxation Association of the American Accounting Association.

Feenberg, D. R., & Skinner, J. (1989). Sources of IRA saving. In L. Summers (Ed.), Tax policy and the economy (pp. 25-46). Cambridge, MA: MIT Press.

Frischmann, P. J., Gupta, S., & Weber, G. J. (1998). New evidence on participation in individual retirement accounts. Journal of the American Taxation Association, 20(2), 57-82.

Gale, W. G., & Scholz, J. K. (1994). IRAs and household saving. American Economic Review, 84(5), 1233-1260.

Gin, E., Falcke, M., Wagner, L. E., Yule, D. I., & Sneyd, J. (2009). Markov chain Monte Carlo fitting of single-channel data from inositol trisphosphate receptors. Journal of Theoretical Biology, 257(3), 460-474.

Hoey, J., von Bertoldi, A., Poupart, P., & Mihailidis, A. (2007). Assisting persons with dementia during handwashing using a partially observable Markov decision process. Proceedings of the 5th International Conference on Computer Vision Systems (ICVS 2007) (pp. 21-29). Bielefeld, Germany: Applied Computer Science Group.
Howard, R. A. (1960). Dynamic programming and Markov processes. New York, NY: John Wiley & Sons.

Hubbard, G. R. (1984). Do IRAs and KEOGHs increase saving? National Tax Journal, 37(1), 43-54.

Hulse, D. S. (2003). Embedded options and tax decisions: A reconsideration of the traditional vs. Roth IRA decision. Journal of the American Taxation Association, 25(1), 39-52.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decisions under risk. Econometrica, 47(2), 263-291.

Long, J. E. (1990). Marginal tax rates and IRA contributions. National Tax Journal, 43(2), 143-154.

O'Neil, C., & Thompson, G. R. (1987). Participation in individual retirement accounts: An empirical investigation. National Tax Journal, 40(4), 617-624.

Pineau, J., Montemerlo, M., Pollack, M., Roy, N., & Thrun, S. (2003). Towards robotic assistants in nursing homes: Challenges and results. Robotics and Autonomous Systems, 42(3-4), 271-281.

Poterba, J. M., Venti, S. F., & Wise, D. A. (1998). Personal retirement savings programs and asset accumulations: Reconciling the evidence. In D. A. Wise (Ed.), Frontiers of the economics of aging (pp. 23-106). Chicago, IL: University of Chicago Press.

Puterman, M. L. (2005). Markov decision processes: Discrete stochastic dynamic programming. Hoboken, NJ: John Wiley & Sons.

Rosales, R. A., & Varanda, W. A. (2009). Allosteric control of gating mechanisms revisited: The large conductance Ca2+-activated K+ channel. Biophysical Journal, 96(10), 3987-3996.

Seida, J. A., & Stern, J. J. (1998). Extending Scholes/Wolfson for post-1997 pension investments: Application to the Roth IRA contribution and rollover decisions. Journal of the American Taxation Association, 20(2), 100-110.

Siekmann, I., Wagner, L. E., II, Yule, D., Fox, C., Bryant, D., Crampin, E. J., & Sneyd, J. (2011). MCMC estimation of Markov models for ion channels. Biophysical Journal, 100(8), 1919-1929.

Spaan, M. T. J., & Vlassis, N. (2005). Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24, 195-220.

Summers, L. (1986). Reply to Galper & Byce. Tax Notes, 31, 1014-1016.
Thaler, R. H. (1994). Psychology and savings policies. American Economic Review, 84(2), 186-192.

Venti, S. F., & Wise, D. A. (1988). The determinants of IRA contributions and the effects of limit changes. In Z. Bodie, J. B. Shoven, & D. A. Wise (Eds.), Pensions in the U.S. economy (pp. 9-52). Chicago, IL: University of Chicago Press.
Table 1
IRA Participation Frequencies from 1982 through 1985

Variable    N      Mean        N × Mean
C82         506    .1936759     98
C83         506    .2252964    114
C84         506    .2845850    144
C85         506    .2984190    151

Note: C82 through C85 show the frequencies of IRA participation by the 506 households in the panel from 1982 through 1985, respectively.
Table 2
Parameters of a Markov Chain

Panel A: Contingency chart showing the marginal proportions of the frequencies of IRA participation in 1982 and 1983 among the 506 households in the panel.

                       1983 IRA?
1982 IRA?         Yes          No           Total
Yes                81           17           98 = C82
No                 33          375          408 = ~C82
Total             114 = C83    392 = ~C83   506 = N

Note: C82 = 98 of 506 panel households participated in IRAs in 1982. ~C82 = 408 of 506 panel households did not participate in IRAs in 1982. C83 = 114 of 506 panel households participated in IRAs in 1983. ~C83 = 392 of 506 panel households did not participate in IRAs in 1983.

Panel B: A matrix of transitional probabilities (T82,83), with cell counts converted to row proportions.

T82,83 = | p(C83|C82)   p(~C83|C82)  |  =  | .827   .173 |
         | p(C83|~C82)  p(~C83|~C82) |     | .081   .919 |

Note: p(C83|C82) = probability of IRA participation in 1983 given participation in 1982. p(~C83|C82) = probability of no IRA participation in 1983 given participation in 1982. p(C83|~C82) = probability of 1983 participation given no participation in 1982. p(~C83|~C82) = probability of no 1983 participation given no participation in 1982.

Panel C: Matrix (R83) showing the probabilities of participation and no participation in 1983.

R83 = [ p(C83)   p(~C83) ] = [ .225   .775 ]

Note: p(C83) = probability of IRA participation in 1983. p(~C83) = probability of no IRA participation in 1983.
Table 3
Contingency Chart Showing Actual Frequencies of IRA Participation and No Participation for 1983 and 1984, Respectively

                       1984 IRA?
1983 IRA?         Yes          No           Total
Yes               101           13          114 = C83
No                 43          349          392 = ~C83
Total             144 = C84    362 = ~C84   506 = N

Note: C83 = 114 IRA participants in 1983. ~C83 = 392 no IRA participants in 1983. C84 = 144 IRA participants in 1984. ~C84 = 362 no IRA participants in 1984.

Table 4
The Frequencies of IRA Participation Predictions for the 506 Panel Households from 1986 to 1996 that were Generated by the 1982-1983 Transitional Probabilities (T82,83)

Year    No. of IRAs Predicted
1986    141
1987    146
...
1994    159
1995    160
1996    160

Note: T82,83: a matrix of transitional probabilities whose elements were derived from the matrix of conditional probabilities shown in Table 2, Panel B. R̂: the predicted IRA frequencies from the Markov chain generated by the T82,83 transitional matrix for years 1986 through 1996, respectively.
Table 5
Matrices Used in Testing the Null Hypothesis that the T82,83 and T83,84 Were Constant Transitional Matrices

Panel A: The observed cell frequencies of IRA participation in 1982 (C82) and 1983 (C83)

                      t + 1 IRA    t + 1 No IRA    Total
1982 IRA, t = 1          81            17           98 = C82
1983 IRA, t = 2         101            13          114 = C83
Total                   182            30          212 = N

Panel B: The expected cell frequencies of IRA participation in 1982 and 1983

                      t + 1 IRA                t + 1 No IRA
1982 IRA, t = 1       (182 × 98) / 212 = 84    (30 × 98) / 212 = 14
1983 IRA, t = 2       (182 × 114) / 212 = 98   (30 × 114) / 212 = 16

Panel C: The observed cell frequencies of no IRA participation in 1982 (~C82) and 1983 (~C83)

                         t + 1 IRA    t + 1 No IRA    Total
1982 No IRA, t = 1          33           375          408 = ~C82
1983 No IRA, t = 2          43           349          392 = ~C83
Total                       76           724          800 = N

Panel D: The expected cell frequencies of no IRA participation in 1982 and 1983

                         t + 1 IRA                t + 1 No IRA
1982 No IRA, t = 1       (76 × 408) / 800 = 39    (724 × 408) / 800 = 369
1983 No IRA, t = 2       (76 × 392) / 800 = 37    (724 × 392) / 800 = 355
Auction Theory for Multi-unit Reverse Auctions: Testing Predictions in the Lab

William B. Holmes
Georgia Gwinnett College

Abstract

This study adapts existing theory of bidding behavior in traditional multi-unit discriminatory price auctions to form predictions of sellers' behavior in multi-unit sealed-offer reverse auctions. Laboratory experiments are used to test theoretical predictions when sellers' costs are induced so that their strategic behavior is observable. Experiment results indicate that low cost sellers usually make offers that are less aggressive than those predicted by theory, whereas high cost sellers tend to be more aggressive than theory predicts. Although there are some limitations, the results of this study may provide a starting point for exploring how theory could be altered to improve prediction.

Keywords: reverse auction, discriminatory price, multi-unit auction, sealed-bid

Acknowledgements: The author wishes to thank the Georgia State University Research Services & Administration for supporting this research through a Dissertation Grant Award.
Introduction

Multiple-unit discriminatory price auctions in which buyers bid to purchase multiple units have been used to allocate many goods and services, including broadband spectrum licenses, electricity, and government debt. In these auctions, bidders with the highest bids are selected to buy units from a single seller at prices equal to their bids. The reverse form of multi-unit auctions occurs when sellers offer to sell multiple units to a single buyer. In this case, sellers with the lowest offers are selected to sell units at prices equal to their offers. Reverse
auctions are commonly used in government and business procurement situations in which the goods or services being purchased may be provided by a large number of prospective suppliers.

The established theory of bidding behavior in a multi-unit discriminatory price auction was originally constructed by Vickrey (1962). Vickrey's work was later extended by Harris and Raviv (1981), with a subsequent revision by Cox, Smith, and Walker (1984). In general, this theory predicts that a bidder will weigh the amount by which they reduce their bid below their value of the object (the surplus from winning an object in the auction) against the probability that their bid will be accepted. A corollary to this theory is that in reverse auctions, a seller will weigh the amount by which they raise their offer above their cost (the surplus from winning a sale) against the probability that their offer will be accepted.

This study uses controlled laboratory experiments to compare the theoretical prediction of sellers' behavior in multi-unit discriminatory price reverse auctions to their actual behavior. In the experiments, the sellers' costs are induced and therefore observable. Experiment results indicate that sellers' behavior is significantly different from the prediction of theory in the environment and parameterization studied here.

Understanding the way in which sellers form offers in multi-unit reverse auctions could translate into significant cost savings for the many businesses and governments that contract for goods and services using these mechanisms. The findings of this study may also help sellers in these auctions form more competitive offers. Additionally, this paper contributes to the business and economics literature by providing a detailed description of how to adapt a theory of bidding behavior in standard multi-unit auctions to make predictions of behavior in multi-unit reverse auctions. The next sections of the paper present discussions of the relevant literature, how to adapt theory to reverse auctions, experimental design, results, and conclusions.
A Review of the Theory of Multi-unit Auctions

A large body of empirical work has attempted to determine how varying auction characteristics, design features, and environments result in different allocations of surplus between buyers and sellers. Paul Klemperer has provided an excellent guide to the auction literature, as well as a survey of design characteristics that are critically important to auction
outcomes (1999, 2002). Several empirical studies of auctions for oil licenses, treasury bills, and timber have been reported by Laffont (1997). Other auctions for broadband spectrum licenses and electricity have been analyzed by researchers to determine how varying auction designs alter outcomes (Green & Newbery, 1992; McAfee & McMillan, 1996; McMillan, 1994). A common theme among these empirical studies is that the choice of auction rules, design, and environment can result in large cost (or revenue) differences for the buyer (or seller).

In many cases, the stakes are high. For example, the relatively recent (since 1994) allocation of broadband spectrum rights by auction has raised hundreds of billions of dollars worldwide (Bichler, Goeree, Mayer, & Shabalin, 2014). A similar example is found in the U.S. Conservation Reserve Program, which uses a reverse auction-like mechanism to purchase conservation services from private landowners. Kirwan, Lubowski, and Roberts (2005) estimate that participants in that program made offers from 10 to 40% higher than their costs. Because the conservation program has paid out tens of billions of dollars since its inception in 1985, any improvement in auction design that reduced the inflation of offers could be of large economic significance to the government and taxpayers.

While empirical studies can analyze the outcomes of auctions conducted in the field, it is also important to consider and evaluate the existing economic theory of bidding behavior in these auctions in order to better understand auction results and inform participants' strategies. Theoretical predictions for multi-unit auctions began with Vickrey's formulations of Nash equilibrium bidding behavior in both single unit and multiple unit auctions for risk neutral economic agents (Vickrey, 1961, 1962). Harris and Raviv extended Vickrey's model to explain behavior in multiple unit auctions where all bidders have the same individual concave utility function and values are drawn from a general distribution function (Harris & Raviv, 1981). The equilibrium bid function from Harris and Raviv for risk neutral agents was later converted into the form of computable finite polynomials by Cox et al. (1984). Their formulation of a Nash equilibrium bid function for a risk-neutral bidder in a standard buyers' auction is as follows:
$$b_n(v) \;=\; \frac{\displaystyle\sum_{k=0}^{Q-1}\frac{(-1)^k\,(v/\bar{v})^{\,N-Q+k+1}\,(Q-1)!\;\bar{v}}{(N-Q+k+1)\;k!\;(Q-1-k)!}}{\displaystyle\sum_{k=0}^{Q-1}\frac{(-1)^k\,(v/\bar{v})^{\,N-Q+k}\,(Q-1)!}{(N-Q+k)\;k!\;(Q-1-k)!}} \qquad (1)$$
Here, each bidder i = {1, …, N} submits a bid (b) to purchase a single unit, and the Q highest bidders win (note how this is similar to the case of a reverse auction in which each seller submits an offer to sell a single unit, and the Q lowest offers are accepted). The parameter v is the bidder's underlying value, while v̄ reflects the bidder's belief of the highest underlying value held among all auction participants. In the above formula, participants construct their bids by evaluating the probability that at least N-Q other bids will be lower than their own bids. The analog for a reverse auction is that sellers construct their offers by evaluating the probability that at least N-Q of the other offers will be higher than their own offer.

The only known test of an approximation of this theory for reverse auctions was conducted by Cason and Gangadharan (2005). Cason and Gangadharan found that sellers' costs were a significant determinant of their offers, that there was an increase in offers over time, and that offers made by low cost sellers were below what was predicted by theory. In their paper, the authors referenced formula (1) from Cox et al. (1984). However, they did not describe how the bid function for a traditional multi-unit auction is transformed into an offer function for a reverse multi-unit discriminatory price auction.

An illustration of the relationship between traditional and reverse multi-unit auction theory appears in Figure 1. The figure shows the theoretical prediction of bids (and offers) for the auctions conducted in this study. The auctions have N = 11 bidders for Q = 6 units with values (or costs) randomly drawn from the uniform distribution [3, 10]. Double arrows are drawn to indicate how the differences between bids and offers are related for traditional and reverse auction predictions. In the traditional auction, buyers at the lower end of the value distribution are predicted to shave their bids by the same amount that sellers at the higher end of the value distribution are predicted to inflate their offers in a reverse auction.
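The following sketch evaluates Equation (1) numerically for the experiment's parameters. The bid function follows directly from the formula above; the offer function's mirror-image mapping from costs to values is an assumption based on the symmetry described for Figure 1, not a formula stated in the text.

```python
import math

def bid(v, v_bar, N, Q):
    """Risk-neutral Nash equilibrium bid from Equation (1) (Cox et al., 1984)."""
    num = sum((-1)**k * (v / v_bar)**(N - Q + k + 1) * math.factorial(Q - 1) * v_bar
              / ((N - Q + k + 1) * math.factorial(k) * math.factorial(Q - 1 - k))
              for k in range(Q))
    den = sum((-1)**k * (v / v_bar)**(N - Q + k) * math.factorial(Q - 1)
              / ((N - Q + k) * math.factorial(k) * math.factorial(Q - 1 - k))
              for k in range(Q))
    return num / den

def offer(c, c_low, c_bar, N, Q):
    """Assumed reverse-auction analog: a seller with cost c inflates her offer
    by the amount a buyer holding the mirrored value would shave his bid."""
    v = c_bar + c_low - c              # mirror the cost into value space
    return c + (v - bid(v, c_bar, N, Q))

# Parameters from this study: N = 11 sellers, Q = 6 accepted offers, [3, 10].
print(bid(7.0, 10.0, 11, 6), offer(7.0, 3.0, 10.0, 11, 6))
```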
Experiment Design

The experiments in this study allow for a comparison between the actual behavior of sellers in reverse multi-unit discriminatory price auctions and the theoretical predictions adapted from the theory derived by Cox et al. (1984). The experiments were programmed and conducted using the z-Tree software (Fischbacher, 2007).

All 44 subjects were recruited from the undergraduate student population at Georgia
State University. Male and female subjects ranging in age from 19 to 29 participated in sessions held in the Experimental Economics Laboratory located within the Experimental Economics Center of the Andrew Young School of Policy Studies at Georgia State University. A group size of N = 11 individuals was chosen to provide subjects with a complex environment in which to make their offers. In each of four sessions, a group of subjects participated in 14 rounds of a reverse auction. The 14 rounds of the auctions for this study were not continuous: there were two groups of 7 rounds that were separated by 14 rounds of auctions in which the rules were different from those described in this study. A statistical comparison of sellers' behavior in the first and last 7 rounds was conducted to determine whether or not to include the last 7 rounds.

In each auction, the subjects were told that they were acting as sellers of a fictitious commodity in an auction in which the lowest 6 of 11 offers would be accepted. They were given a computer-generated, randomly drawn commodity cost between 3 and 10 inclusive. This generated an independent private values environment in which each seller knew their own cost with certainty, but had only a general idea of other auction participants' costs. Next, the subjects were asked to enter offers between 1 and 20 into a computer to sell their commodities. The computer program ordered the offers from all sellers and determined which of the 11 offers were accepted. A review screen then appeared to provide feedback on the outcome of the auction. The sellers with the lowest 6 offers earned points equal to the amount of their offer, while the others earned points equal to their randomly drawn commodity costs. Each point earned in an auction period was exchanged for $1 at the end of the experiment if that period was chosen for final payoff using a bingo cage lottery.

Data generated by these treatments consist of all offers and payoffs in each auction. Econometric analysis of the data will reveal sellers' behavior in the auctions and allow for a comparison to theory. Learning effects may also be analyzed by comparing sellers' behavior over time. One potential weakness of this design stems from the use of $1 offer increments, which may be too large to capture all of the subtlety in sellers' behavior. Furthermore, in Cox et al. (1984), the theory to predict bidding behavior in these auctions is constructed in continuous space. In spite of these weaknesses, the $1.00 offer increment is a convenient
simplification that may help to reduce subject confusion in the auctions, and the results obtained here may be viewed as comparisons against a reasonable approximation of the theoretical predictions.
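As a concrete illustration of the allocation rule just described, the sketch below simulates a single auction round under the stated parameters; the offer rule assigned to the simulated sellers is a placeholder, not the equilibrium strategy.

```python
import random

N, Q = 11, 6                                     # sellers and accepted offers

costs = [random.randint(3, 10) for _ in range(N)]            # induced costs
offers = [min(c + random.randint(0, 3), 20) for c in costs]  # placeholder rule

# The Q lowest offers are accepted; ties are broken at random (an assumption).
order = sorted(range(N), key=lambda i: (offers[i], random.random()))
accepted = set(order[:Q])

# Accepted sellers earn points equal to their own offer (discriminatory
# pricing); rejected sellers earn points equal to their commodity cost.
payoffs = [offers[i] if i in accepted else costs[i] for i in range(N)]
print(list(zip(costs, offers, payoffs)))
```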
Results

Statistical comparisons of the early and late auctions are reported in Table 1. This was necessary to determine whether or not to include the last 7 rounds of data. It is clear from the p-values (all of which are greater than .10) that the early and late auction rounds were not significantly different from one another. Regression tests reported later in Table 3 confirm that subject behavior did not vary significantly over time. Therefore, pooling the data from both groups is warranted. Although the analysis reported here includes all 14 rounds of auction data for completeness, the findings of this study are not altered when only the first 7 rounds are used as a robustness check.

Actual and predicted offers are plotted against costs in Figure 2. Offers have been slightly dispersed with spherical noise so that they do not overlap; however, each point in a cluster corresponds to the nearest integer value on the y-axis. In Figure 2, there are many more offers below (above) the prediction when the corresponding cost for the seller is low (high). Table 2 presents a statistical comparison of sellers' offers with the predictions from theory and reveals that there is a significant difference between offers and predictions. The mean offer is larger than the prediction (9.35 versus 8.77) when all sellers are considered as one group. Separating the sellers into three types (low, medium, and high) based on their commodity costs confirms the observation from Figure 2 that sellers' strategic behavior departs from the theoretical prediction as the sellers' opportunity costs change. Low cost sellers make offers that are significantly lower than theory predicts (α = .01), whereas high and medium cost sellers make offers that are significantly greater (α = .01).

Since actual behavior differs from the theoretical prediction, it may be useful to examine how subjects form their offers in these auctions. Equation 2 depicts a fixed effects panel regression model that tests whether subjects form their offers as linear or quadratic functions of their cost and whether or not their strategic behavior changes through repetition. The subject
fixed effects are included in order to control for any unique characteristics of an individual participant that could bias his or her strategic behavior in the auctions. Additional random effects panel regressions were conducted as robustness checks, and they confirmed the fixed effects results reported in this study.
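A plausible rendering of Equation 2, consistent with the description above (linear and quadratic cost terms, a repetition term, and subject fixed effects), is the following; the notation is a reconstruction rather than the author's own:

$$\text{offer}_{it} = \beta_0 + \beta_1\,\text{cost}_{it} + \beta_2\,\text{cost}_{it}^2 + \beta_3\,\text{period}_{t} + \alpha_i + \varepsilon_{it} \qquad (2)$$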
Regression results are presented in Table 3. The offer function does not appear to be affected by repetition. A constant term of 6.68 is statistically significant at the α = .01 level; sellers expect at least this much in return for their units. As their opportunity costs rise, sellers make higher offers. In particular, the square of the commodity cost is positively statistically significant at the α = .01 level. This means that sellers increase their offers at an increasing rate when their costs rise. Interestingly, the offer function predicted by theory also expresses this curvature, but it is much more pronounced in the laboratory data.

From the perspective of the auctioneer, the revelation that sellers increase their offers at an increasing rate as costs rise implies that payments for units purchased from higher cost sellers may be significantly greater than the sellers' costs relative to the payments made to lower cost sellers. This implies that buyers should pay close attention to the number of units (Q) purchased in each auction. If it is possible to conduct more frequent auctions which accept a smaller number of units from relatively low cost sellers, then the auctioneer can reduce instances in which she "overpays" the seller. However, this may not be possible in cases in which the low cost sellers are constrained in their capacity to provide a sufficient number of units to satisfy the buyer's requirements in a given time period.

From the perspective of the sellers, these results imply that there may be a wide range of lower offers that a high cost seller could profitably issue in order to gain a meaningful increase in the likelihood of having her offer accepted relative to other high cost sellers. This is because the high cost sellers have a tendency to make significantly higher offers than those predicted by theory. Because low cost sellers have a tendency to make offers that are closer to their actual cost than those predicted by theory, lower cost sellers should be wary of issuing offers that are much higher than their costs, as doing so may substantially reduce the likelihood of their offers gaining acceptance.
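A fixed-effects estimation along these lines could be sketched with statsmodels, assuming a long-format table with columns offer, cost, period, and subject (hypothetical names); the C(subject) term absorbs the subject fixed effects as dummies.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject-round.
data = pd.read_csv("auction_offers.csv")  # columns: offer, cost, period, subject

# OLS with subject dummies is equivalent to the within (fixed effects) estimator.
model = smf.ols("offer ~ cost + I(cost ** 2) + period + C(subject)", data=data)
result = model.fit()
print(result.params[["Intercept", "cost", "I(cost ** 2)", "period"]])
```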
Discussion

In reverse multi-unit discriminatory price auctions, sellers weigh the amount by which they increase their offers above their opportunity costs against the likelihood that their units will be accepted for sale. Cox et al. (1984) refined the theory of behavior in these auctions into computable Nash equilibrium bid functions for risk neutral bidders. One contribution of this study is to show how these bid functions can form predictions of sellers' offers in reverse auctions.

In this study, sellers are more responsive to their underlying costs than predicted. Lower cost sellers made offers that were lower than predicted, and as costs rose, medium and high cost sellers made offers that were higher than predicted. Although lower than predicted offers could be a result of risk aversion, risk aversion would not explain offers that were higher than predicted.

Recommended modifications to the strategic behavior of sellers in these auctions differ based on whether the seller has a relatively high or low cost of providing a unit. Higher cost sellers may be able to profitably reduce their offers to increase the likelihood their offers will be accepted, whereas lower cost sellers may not. Furthermore, the auctioneer should consider increasing the auction frequency while reducing the total number of units purchased per auction in order to reduce contract prices.

Comparing these findings to those of the discriminatory price reverse auctions reported in Cason and Gangadharan (2005), it appears that subjects pursue a different strategy under each parameterization. The participants in Cason and Gangadharan's auctions made offers that were a linear function of their cost, whereas the participants in this study behaved in accordance with a quadratic offer function. However, in both studies, the offers made by low cost sellers were lower than predicted, which reinforces the strategic implications for those bidders discussed above.

The results here show a systematic and significant difference between actual behavior and that which is predicted by theory, thereby providing a useful starting point for exploring how theory might be modified to improve prediction. However, it is important to recognize that subject behavior may be sensitive to auction parameterization, information conditions, and experiment protocol. Future studies that vary these aspects would further improve our understanding of sellers' behavior.

Author Biography

William Holmes has been an Assistant Professor of Economics at Georgia Gwinnett College for more than four years. Dr. Holmes received his PhD in economics from Georgia State University. His research, which focuses on areas of applied microeconomics including experimental and environmental economics, has been accepted for publication in the Journal of Applied Economics and Policy as well as Libertarian Papers.

References

Bichler, M., Goeree, J., Mayer, S., & Shabalin, P. (2014). Spectrum auction design: Simple auctions for complex sales. Telecommunications Policy, 38(7), 613-622.

Cason, T. N., & Gangadharan, L. (2005). A laboratory comparison of uniform and discriminative price auctions for reducing non-point source pollution. Land Economics, 81(1), 51-70.

Cox, J. C., Smith, V. L., & Walker, J. M. (1984). Theory and behavior of multiple unit discriminative auctions. The Journal of Finance, 39(4), 983-1010.

Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171-178.

Green, R. J., & Newbery, D. M. (1992). Competition in the British electricity spot market. Journal of Political Economy, 100, 929-953.

Harris, M., & Raviv, A. (1981). Allocation mechanisms and the design of auctions. Econometrica, 49(6), 1477-1499.

Kirwan, B., Lubowski, R. N., & Roberts, M. J. (2005). How cost-effective are land retirement auctions? Estimating the difference between payments and willingness to accept in the Conservation Reserve Program. American Journal of Agricultural Economics, 87(5), 1239-1247.

Klemperer, P. (1999). Auction theory: A guide to the literature. Journal of Economic Surveys, 13(3), 227.

Klemperer, P. (2002). What really matters in auction design. The Journal of Economic Perspectives, 16(1), 169.

Laffont, J. J. (1997). Game theory and empirical economics: The case of auction data. European Economic Review, 41(1), 1.

McAfee, R. P., & McMillan, J. (1996). Analyzing the airwaves auction. Journal of Economic Perspectives, 10, 159-175.

McMillan, J. (1994). Selling spectrum rights. The Journal of Economic Perspectives, 8(3), 145-162.

Vickrey, W. (1961). Counterspeculation, auctions, and competitive sealed tenders. The Journal of Finance, 16(1), 8.

Vickrey, W. (1962). Auction and bidding games. Paper presented at Recent Advances in Game Theory, Princeton.
Table 1
Summary Statistics (616 Observations)

Variable          Mean (Periods 1-7)   Mean (Periods 8-14)   t-test of mean differences   p
Cost              6.43                 6.53                  t(614) = -0.58               .56
Offer             9.29                 9.41                  t(614) = -0.57               .57
Offer-Cost        2.86                 2.87                  t(614) = -0.09               .93
Predicted Offer   8.80                 8.74                  t(614) =  1.30               .20
Table 2
Comparison of Actual Offers with Predictions of Theory

Seller type         Observations   Mean predicted offer   Mean actual offer   t-test of mean differences   p
All sellers         616            8.77                   9.35                t(615) =  6.21               .0001
Low cost (3-5)      228            8.30                   7.74                t(226) = -3.82               .0002
Medium cost (6-7)   168            8.43                   9.08                t(167) =  4.31               .0001
High cost (8-10)    220            9.51                   11.21               t(219) = 12.07               .0001
Table 3
Fixed Effects Panel Regression Results
Dependent variable: Offer

Variable        β (t-ratio)      95% CI
Cost             0.038  (0.18)   [-0.374, 0.449]
Cost-squared     0.052* (3.28)   [ 0.021, 0.083]
Period          -0.004 (-0.25)   [-0.038, 0.029]
Constant         6.682* (10.49)  [ 5.431, 7.932]
R-squared        0.49

Note. t-ratios appear in parentheses. * Statistically significant at the α=.01 level. CI = confidence interval.
Figures
Figure 1. Relationship of predictions in traditional versus reverse multi-unit auctions.
Figure 2. Actual versus predicted offer.
The Use of Computer-Aided Audit Tools in Fraud Detection

Judy Ramage Lawrence, Christian Brothers University
Denny L. Weaver, American National Diversified, Inc.
Howard Lawrence, University of Mississippi

Abstract
This study examines the five most popular types of data analysis software (Microsoft Access, Microsoft Excel, Interactive Data Extraction and Analysis (IDEA), Audit Command Language (ACL), and Picalo) in an attempt to determine how they are being used to combat intentional and unintentional misstatements in the area of internal control and fraud. The study finds that these tools have extensive use in auditing, data mining, and data manipulation, and that a number of statistical methods of analyzing the data are readily available within this software. The study then examines the types of data analysis that can be used for fraud identification, detection, and prevention. An application of Benford's Law is used to demonstrate the power of these products.

Keywords: Fraud, Benford's Law, Computer-aided tools and techniques
Introduction

For over four decades, the digit-frequency properties of naturally occurring data have been used to check for anomalies in data sets (Goldacre, 2011). As an example, the Wall Street Journal recently reported that a major fraud had been uncovered through the use of enterprise software applications. What did the software find? There were too many 4's in the data (McGinty, 2014). To combat such fraud, accountants, businesses, and fraud examiners are increasingly finding that software applications allow them to store, process, manipulate, and display vast amounts of transactional data so that these anomalies can be uncovered. This software is employed by businesses of all sizes, ranging from small private companies to Fortune 500 companies, and by all industry types. Organizations utilize enterprise software for planning and strategic goals such as marketplace differentiation. The sheer volume of data and the amount of information consumed by these enterprises pose unique problems for audit professionals and fraud examiners, and they are continually looking for ways to handle this data.
Review of the Literature

A review of the literature indicates that computer-aided audit tools and techniques (CAATTs) comprise four classifications: data analysis software (DAS), network security evaluation software and utilities, operating system and database management system security evaluation software and utilities, and software and code testing tools (Sayana, 2003). Enterprise software servers maintain a database management system (DBMS) in order to organize, store, manage, and retrieve processed data. DAS exploits this structure. The exploitation of processed data by DAS allows the audit professional to automate and streamline the testing of data, thereby increasing the effectiveness and productivity of the audit engagement. However, in order to engage DAS, the auditor must possess connectivity and access to data, audit skills and the ability to identify the concerns, and knowledge of the business application and data (Sayana, 2003).

The literature also indicates that business applications consist of two categories: transactional applications and support applications. Each application relates to a specific function of the business operation. Transactional applications involve the processing and recording of business transactions by application software, while support applications support business activities but neither process nor record business transactions (Juergens, 2006). The audit function should be primarily focused on the transactional applications. Organizations process transactional data by recording the value of business transactions in terms of debits and credits, serving as repositories for financial, operational, and regulatory data, and enabling various forms of financial and managerial reporting (Bellino & Hunt, 2007).

By utilizing CAATTs, an auditor can implement data analysis by extraction, querying, manipulation, and summarization of business activities. DAS, by analyzing transactional data, provides a platform to perform many audit and regulatory functions. This software can identify two major fraud categories, asset misappropriation and financial statement fraud, as well as human errors and anomalies, owing to the limitless number of analytical database relationships. In addition, DAS can (and should) be employed to satisfy the requirements of the Sarbanes-Oxley Act of 2002. Section 404 of this Act specifically requires that management effectively test and report on the adequacy and effectiveness of the issuer's control structure and procedures. This report is then part of the audit of financial reports in accordance with standards for attestation engagements (Kranacher, Riley, Jr., & Wells, 2011). Furthermore, the audit of internal controls could not be conducted without the benefit of CAATT support to facilitate the tests required by the Public Company Accounting Oversight Board's (PCAOB) Auditing Standard No. 2 (Alali & Pan, 2011). The data analytics provided by the DAS described below ensure that all transactions can be inspected to meet these requirements.
Methodology

The methodology used in this paper was to examine a variety of papers and studies to determine the most commonly used types of analysis software, and then to discover how they are being used in the overall area of internal audit and fraud detection. This general literature review was conducted using the eight databases described in Table 1. The search was conducted using the terms fraud, Benford's Law, computer-aided audit tools and techniques, audit command language, Picalo, and IDEA. These searches were gradually refined until a compendium of studies was attained. The abstracts of these studies were reviewed carefully to determine whether they appeared suitable for the study and, if so, were examined further. For those studies examined further, the bibliographies and references were also examined to determine if any additional sources were suitable. RefWorks was used to keep track of the research citations.

This methodology found that the five most popular types of DAS are Microsoft Office Access, Microsoft Excel, Interactive Data Extraction and Analysis (IDEA) by CaseWare International, Audit Command Language (ACL) by ACL Services Limited, and Picalo, a partly open-source program. Two of these tools, Microsoft Access and Microsoft Excel, can be utilized for non-auditing functions, while IDEA, ACL, and Picalo are primarily marketed toward the audit community. Whereas IDEA and ACL are generalized audit software (GAS) packages that contain specialized fraud detection programs, Picalo is a dedicated fraud detection program. In order for information to be utilized by these software applications, the data must be converted into a file structure. The characteristics of each program, such as capacity, limitations, ease of use, and analytical capabilities, were gathered during the data collection period and are described below.
Microsoft Office Access

Microsoft Office Access is a relational database management system developed by Microsoft Corporation with an initial release date of November 1992. Since Access was not written specifically for data analysis and extraction, the database capacity is limited to two gigabytes with two hundred and fifty-five fields (Harding, 2006). However, Access exceeds Excel's capabilities in input validation, data sharing, reporting, and security (Love, 2004). While training is required to utilize Access, Microsoft provides prebuilt templates and online training for its Office suite applications. Like many relational database management systems, Access offers many built-in functions, such as creating and joining tables. Another Microsoft Office product used to perform analytical procedures is Excel.
Microsoft Excel

Excel is a proprietary commercial spreadsheet developed by Microsoft Corporation in November 1987 for Windows. Like Access, Excel has capacity limitations (Harding, 2006). Excel 2003 and earlier versions are limited to 65,536 rows and 256 columns, while Excel 2007 and later versions support 1,048,576 rows and 16,384 columns. While Excel is a popular software package among accountants and auditors, this software is not purpose-built solely for database functionality (Bizarro & Garcia, 2011a). However, data analysis tools are inherent within Excel, which offers nineteen data analysis tools and eighty built-in statistical functions.
Besides these built-in features, outside organizations have developed additional add-in programs. Whereas Access and Excel have many functions outside of fraud detection, IDEA and ACL are specific-function software programs (Bizarro & Garcia, 2011a).
Interactive Data Extraction and Analysis

CaseWare International acquired the rights to IDEA from The Canadian Institute of Chartered Accountants (CICA) in April 2000 to market it globally. The Office of the Auditor General of Canada created and developed IDEA; however, CICA thoroughly overhauled the software program. The IDEA software package can be operated on a personal computer for small companies or on an enterprise server, IDEA Server, for large companies. The application permits unlimited transactions to be analyzed and imported from multiple sources. IDEA supplies a comprehensive and easy-to-use 270-page tutorial which includes step-by-step instructions, solutions, and screen shots (Kranacher, Riley, Jr., & Wells, 2011). IDEA maintains Microsoft Windows compatibility and functionality standards, creating a user-friendly environment that is the strength of this software. The analytical abilities of the DAS provide the auditor with tools by which to test samples, extract specific transactions, and identify unusual data gaps and duplicate entries (McCutchen, 2009). Another DAS program that can quickly retrieve specific transactional data is ACL.
Audit Command Language

ACL is available in a desktop/network edition as well as a server edition. This software can analyze unlimited information from multiple sources such as Adobe's portable document format (PDF), Microsoft Excel files, and Microsoft Access database files. ACL's documentation contains a 76-page PowerPoint presentation which provides a comprehensive tutorial for all users (Kranacher, Riley, Jr., & Wells, 2011). ACL and IDEA offer similar menu-based functional controls. To assist the auditor with simple to complex analytical tests, ACL includes pre-programmed commands for data analysis, providing a full range of analytical power. Unlike IDEA and ACL, Picalo is a dedicated fraud detection program that differentiates itself by the utilization of detectlets.
Picalo

Picalo, created by Conan Albrecht, is a restricted open source program that allows users to add to its functionality. Picalo's architecture contains three routine levels by which it operates. Level one source routines allow for base analysis such as sort, search, select, stratify, and summarize. Level two source routines incorporate detectlets, which a general user creates by way of a wizard; the user processes these routines to search for specific red flags. Level three source routines are specific rule-based ontologies written in Python, a multi-paradigm programming language, so in order to write detectlets at this level the user must have a functional understanding of Python. Picalo's data analysis capacity offers a framework by which it can analyze virtually unlimited data. As previously described, this program requires some training when a user enters level three. Through its unique framework, Picalo offers some advanced data analysis tools, such as grouping records together by a number of days and automatically grouping records to achieve smoothness in the data (Kranacher, Riley, Jr., & Wells, 2011).
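Picalo's actual detectlet interface is not reproduced here. The following generic Python sketch only conveys the flavor of a rule-based red-flag routine of the kind detectlets implement; the function name, record layout, and red-flag rule are all illustrative assumptions:

```python
from datetime import datetime
from typing import Iterable

def weekend_payment_red_flags(records: Iterable[dict]) -> list:
    """Flag payment records posted on weekends, a simple illustrative red flag."""
    flagged = []
    for rec in records:
        # Each record is assumed to carry a datetime in its 'posted' field.
        if rec["posted"].weekday() >= 5:  # 5 = Saturday, 6 = Sunday
            flagged.append(rec)
    return flagged

# Illustrative usage with made-up records:
sample = [
    {"check_no": 1001, "amount": 250.0, "posted": datetime(2015, 3, 7)},  # Saturday
    {"check_no": 1002, "amount": 900.0, "posted": datetime(2015, 3, 9)},  # Monday
]
print(weekend_payment_red_flags(sample))  # flags check 1001 only
```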
Results

The results of the study confirm that DAS permits users to create a targeted risk assessment, which is consistent with PCAOB's Auditing Standard No. 5 (Alali & Pan, 2011). This targeted risk assessment must be completed so as to prevent, deter, and detect fraud in a digital environment. The study finds that the detection of fraud comprises a combination of techniques such as auditing tests, statistical analysis, and data mining and manipulation. In order for a targeted approach to be effective in fraud investigation and detection, auditors and fraud investigators must have knowledge of all available techniques, how to apply them, and when to apply them.
Auditing Techniques

The aging of accounts payable, accounts receivable, and inventory is a common auditing tool and is useful for detecting fraud and control inefficiencies. The number of days is calculated by determining the difference between the creation of the data entry for the financial transaction and the subsequent payment of the payable or receipt of the receivable (Coderre, 2009). For example, the aging may indicate stale vendor purchase invoices or customer sales invoices due to the days outstanding. Also, during the submission phase of a sealed bid process, an employee may be committing fraud by accepting bribes in exchange for the acceptance of a late bid on a contract. The aging of accounts receivable is also a useful tool when determining the allowance for doubtful accounts.

The verification of financial information by confirmation letters is a required audit procedure and can be automated by the use of DAS. The data extracted from the application system is utilized to determine the account balances of vendors and customers. By automating this process, the auditor gains efficiencies by reducing the time and effort needed to extract information, generate the letters, and create address labels. A program script can be created to further automate the confirmation letter process (Coderre, 2009).

Digital analysis empowers the auditor by identifying transactional data in order to assess the operational effectiveness of internal control, to verify data reliability and comprehensiveness, and to identify fraud risk factors and activities. The fraud examiner must understand the transactional data in order to assess the symptoms of fraud, waste, and abuse in a digital environment. By identifying suspect data, the set of observable fraud symptoms is expanded. Data profiling, ratio analysis, and Benford's Law are three techniques applied to unidentified symptoms (Coderre, 2009). Digital analysis utilizing Benford's Law is discussed in a later section.

The cross-analysis of data may also indicate substantial trends, frequencies, or patterns in a contingency table. Cross tabulation is a frequency distribution consisting of two or more variables. While pivot table is the standard term for cross tabulation, Microsoft Access and Excel use the trademarked form, PivotTable (Cox, 2009). With Microsoft Excel 2010, a free add-in, PowerPivot, enables the user to load thirty gigabytes of data for relational analysis. By the use of cross tabulation, the resulting data table creates an easier view in which to detect irregularities by highlighting invalid or uncommon combinations (Coderre, 2009).

Parallel simulation is a system-oriented approach that is often utilized in an audit. System-oriented approaches test the application system controls in order to assess and assure that the application system is performing within specifications. This CAATT will evaluate any weakness of the application system and allow for inferences to be drawn about the data. The auditor runs enterprise transactions through the simulation program and compares the outputs to those of the application system. The feasibility of this data analysis technique depends on the complexity of the targeted application system.

Expressions are equations that allow an auditor to validate and assess the accuracy and reasonableness of the application system's internal calculations (Coderre, 2009). DAS can easily confirm information and computations. Excel 2010's PowerPivot contains the Data Analysis Expressions language, in which the user can define custom calculations in PowerPivot tables so as to execute dynamic aggregation relational analysis. The recalculation of significant values and balances confirms that the application system's stored data values are valid; manually recalculating sampled transactions is an inefficient use of time and manpower.

Another valuable feature of DAS is that the auditor can display only selected information that meets specified criteria by using filters (Bizarro & Garcia, 2011a). Accordingly, the conditions used limit the amount of data, reduce the review time, and bring attention to the records of interest. Once the criteria are filtered, the investigator can copy, edit, format, and print this output, and criteria can be combined in order to drill down further. DAS thus provides an efficient means by which to view selected information.
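As a brief illustration of cross tabulation and criteria-based filtering, the following pandas sketch assumes a hypothetical payments extract; the file name, column names, and the threshold used are assumptions for demonstration only:

```python
import pandas as pd

# Hypothetical payments extract with assumed columns: vendor, approver, amount.
payments = pd.read_csv("payments.csv")

# Cross tabulation: a payment count by vendor and approver can highlight
# invalid or uncommon vendor/approver combinations.
crosstab = pd.crosstab(payments["vendor"], payments["approver"])
print(crosstab)

# Filtering: display only records meeting specified criteria, e.g., payments
# falling just under an assumed $5,000 approval threshold.
suspect = payments.query("amount >= 4900 and amount < 5000")
print(suspect)
```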
Statistical Analysis

Statistical analysis furnishes a quick overview of the data set in order to promptly detect any anomalies (Coderre, 2009). The information provided by statistical analysis suggests a course of action for additional audit tests, given the materiality involved, and can be used to test the controls of the application system. Statistical analysis calculates various statistics on selected numeric fields, and DAS provides a platform for performing robust statistical analysis on large data sets. A number of statistical analysis tools are available to the fraud examiner using DAS.

Regression analysis compares actual values to predicted values in order to detect relationships between data sets (Coderre, 2009). The predicted values of a regression analysis can be utilized as the basis of an audit investigation. This technique focuses on a dependent variable and its relationship to one or more independent variables; historical data provides a basis by which to predict future data values. Regression analysis brings a numerical methodology to analytical review procedures.

Sampling is an effective tool for limiting the amount of data to be examined. Classical sampling models consist of three types of sampling plans: attributes sampling, discovery sampling, and variables sampling (Georgiades, 2007). Attributes sampling estimates the frequency of a specific occurrence (how many) and is primarily applied to test controls. Discovery sampling, also referred to as exploratory sampling, is a sampling plan employed when a single error causes the population to be rejected as error free. Variables sampling is used to predict values, measuring a quantifiable conclusion for a given population (how much). Discovery and variables sampling are utilized for substantive testing. While ACL and IDEA are often relied upon by internal auditors to plan stratified sampling, Excel Solver, an add-in program developed by Frontline Systems, Inc., can also be utilized for this functionality (Hall, Pierce & Tsay, 2011).

Trend analysis is the process by which information is compared over time. Information such as account balances from the same operational area can be utilized to detect anomalies that require further attention (Coderre, 2009). Trend analysis can also be employed to predict future or past events to justify these same account balances. Combining files in order to establish relationships permits an auditor to examine trends and find anomalies for fraud analysis. GAS significantly reduces subjectivity and bias in the financial analysis, as well as the complexity of the statistical calculations involved in trend analysis (Alali & Pan, 2011).
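The following is a minimal sketch of the regression-based analytical review described above, using synthetic monthly balances rather than real audit data; the two-standard-deviation cutoff is an illustrative assumption:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: 24 months of balances with a linear trend plus one anomaly.
rng = np.random.default_rng(0)
months = np.arange(1, 25)
balances = 1000 + 50 * months + rng.normal(0, 40, size=24)
balances[17] += 600  # planted anomaly

# Regress balances on time to obtain predicted values.
X = sm.add_constant(months)
fit = sm.OLS(balances, X).fit()

# Flag months whose actual balance deviates from the prediction by more
# than two standard deviations of the residuals.
residuals = balances - fit.predict(X)
flagged = months[np.abs(residuals) > 2 * residuals.std()]
print("Months flagged for follow-up:", flagged)
```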
Data Mining and Manipulation

Data mining using DAS has many applications not normally thought of by the fraud examiner. For example, Chen et al. (2011) describe how mining of large global positioning system (GPS) traces can be used to detect the anomalous activities of taxi drivers. One kind of anomaly described by Chen et al. was that of greedy taxi drivers deliberately taking unnecessary detours in order to overcharge customers. Data mining of these traces allows the fraud examiner to compare anomalous routes against historically normal routes in real time.

Joining, relating, and defining relationships among different databases is a technique that can pinpoint unusual and fraudulent transactions (Coderre, 2009). The joining of different tables via one or more common primary fields creates a third table of selected data from the original files; by combining matching data values from these different data files, the auditor can identify the differences between them. The relating of data files combines different data files on a primary field into a single data file. Defining allows an auditor to identify a relationship of importance or concern. Joining and relating data files serve distinct and separate purposes; therefore, auditors must completely understand these differences.

Sorting and indexing records are vital to data analysis. The examiner arranges the data in ascending or descending order to provide meaning to the selected data, which may be sorted in alphabetic or numeric order. Sorting allows the auditor to view the specified data records in a quick and easy manner, while indexing refers to the creation of pointers to the original data in a specified order (Coderre, 2009).

Stratification is a GAS technique that distributes the data set into specified homogeneous layers to detect and identify anomalies. Once the population of the data set is divided into strata of a numeric field or expression and counted, a random sample is selected. The auditor's attention should then be focused on anomalies such as unusually large transactions or account balances, which can identify possible fraud symptoms (Coderre, 2009). Stratification provides an auditor with a high-level view of the data. Microsoft Access creates stratification by use of a nested select query (Bizarro & Garcia, 2011b).
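A compact pandas sketch of the joining and stratification steps described above follows; the file names, column names, and strata boundaries are assumptions for illustration:

```python
import pandas as pd

# Hypothetical extracts with assumed columns.
invoices = pd.read_csv("invoices.csv")  # vendor_id, invoice_no, amount
vendors = pd.read_csv("vendors.csv")    # vendor_id, vendor_name

# Joining: combine the two files on the common key field; rows present in
# only one file (the _merge indicator) flag differences worth investigating.
joined = invoices.merge(vendors, on="vendor_id", how="outer", indicator=True)
unmatched = joined[joined["_merge"] != "both"]
print(unmatched)

# Stratification: divide invoice amounts into homogeneous layers, then count
# and subtotal each stratum; unusually large strata merit closer attention.
strata = pd.cut(invoices["amount"], bins=[0, 1000, 5000, 25000, float("inf")])
print(invoices.groupby(strata, observed=True)["amount"].agg(["count", "sum"]))
```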
Summarization involves counting the records within each unique category of the selected key fields and then subtotaling the numeric fields of these unique value ranges (Bizarro & Garcia, 2011b). This technique permits the auditor to recognize and quantify all of the values for each selected key field. Some examples of this technique are the sum, count, average, maximum, and minimum of data values. Summarization allows an auditor to quickly identify incorrect data for comparative purposes and to ensure the completeness of the data set.

The fraud examiner determines the completeness and accuracy of the data by highlighting duplicate values (Bizarro & Garcia, 2011a). Duplicate records point to a weak internal control structure, because the computer system should not allow records such as check numbers and invoice numbers to be duplicated in any manner. Once a duplicated item is discovered, the auditor must investigate the underlying causes of the occurrence. DAS contains numerous commands to identify duplicates within a system so as to detect fraudulent transactions.

Conversely, when searching a specific order or sequence, gaps in the sequential records indicate data that is missing. Inconsistent sequential transactions help the auditor focus on erroneous information. The entire data file must be searched in order to ensure that all transactions are properly recorded and accounted for. Missing data can point to an internal control weakness that needs to be addressed, so auditors should focus on gaps to determine the completeness of data for fraud analysis (Coderre, 2009).
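The summarization, duplicate, and gap tests just described can be sketched in a few lines of pandas; the check-register layout here is a hypothetical example, not a prescribed format:

```python
import pandas as pd

# Hypothetical check register with assumed columns: check_no, payee, amount.
checks = pd.read_csv("check_register.csv")

# Summarization: count and subtotal by payee to quantify each key field value.
summary = checks.groupby("payee")["amount"].agg(["count", "sum", "mean", "max"])
print(summary)

# Duplicates: check numbers should never repeat in a sound control structure.
duplicates = checks[checks.duplicated(subset="check_no", keep=False)]
print(duplicates)

# Gaps: missing values in what should be a continuous check sequence.
expected = set(range(checks["check_no"].min(), checks["check_no"].max() + 1))
print("Missing check numbers:", sorted(expected - set(checks["check_no"])))
```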
Sample Application of DAS

An application of data utilization can be seen in the use of Benford's Law. This law, also known as the first-digit law, was named after physicist Frank Benford. It utilizes the formula P(d) = log10(d+1) - log10(d) = log10(1 + 1/d), where d is the leading digit. Benford's Law has been used in numerous situations, both in fraud and in other areas. McGregor (2009), for example, states that the Canada Revenue Agency uses Benford's Law to help identify which tax returns to examine more carefully. Citing Nigrini (1996), McGregor points out that an analysis of U.S. tax returns shows that deductions for mortgage payments tend to follow Benford's Law closely, but claims for charitable contributions tend to be "very messy" when sorted by their leading digits, indicating that charitable contribution deductions may be subject to greater fraud. In a different case, researchers examined data from all of the European nations looking for macroeconomic data that deviated from what Benford's Law would predict. The results showed that Greece had the largest deviation from Benford's Law of any country in the Union; two years later, the economy in Greece fell dramatically (Goldacre, 2011).

The expected frequencies of Benford's Law are shown in Table 2, and these frequencies show that there is a predictable pattern to the position of numbers in such documents as invoices, checks, and purchase orders. When fraudsters select numbers for a forged document, they often try to arrange those numbers in a random manner. An examination of Table 2, however, shows that the first digit in a numbered document is a one approximately 30% of the time, a two approximately 17.6% of the time, and a three approximately 12.5% of the time; it is a nine only about 4.6% of the time. Because DAS can process the entire population of a data set, Benford's Law results are more reliable: when a set of numbers is significantly at odds with Benford's Law, it can be a warning to the fraud examiner to look closer at those documents to see if there is evidence of fraud. Benford's Law can be applied to digital analysis by simple Excel formulas (Simkin, 2010). The continuous audit process, by utilizing Benford's Law, permits internal auditors to prevent and detect fraud by recognizing outliers and exceptions to the general rule (Ramaswamy & Leavins, 2007).

There are many data manipulation techniques that can be used in addition to Benford's Law, some simple and some very complicated. For example, Nigrini (1996) points out that when people invent fraudulent numbers, they tend to avoid numbers with two of the same digits following each other (for example, 155 or 773). A simple analysis of the data could show the absence or presence of such naturally occurring numbers; if the data does not have a sufficient set of numbers with two of the same digits following each other, the data can be more closely examined.
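A minimal sketch of a first-digit Benford test follows, computing the expected frequencies directly from the formula above and comparing them with the observed frequencies in a hypothetical invoice file; the file and column names are assumptions:

```python
import numpy as np
import pandas as pd

# Expected first-digit frequencies under Benford's Law: P(d) = log10(1 + 1/d).
digits = np.arange(1, 10)
expected = np.log10(1 + 1 / digits)

# Hypothetical invoice amounts (assumed file and column names).
amounts = pd.read_csv("invoices.csv")["amount"]

# First significant digit of each positive amount, via scientific notation.
first_digits = amounts[amounts > 0].map(lambda x: int(f"{x:e}"[0]))
observed = first_digits.value_counts(normalize=True).reindex(digits, fill_value=0)

# Large deviations from the expected pattern flag data for closer review.
print(pd.DataFrame({"expected": expected, "observed": observed.to_numpy()},
                   index=digits))
```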
Discussion

It is often said that technology is a sword that can cut both ways. Technology can create the opportunity element of the fraud triangle through ease of use and smooth access to financial and non-financial data by internal and external parties. However, technology also offers the accounting and fraud detection community the tools to fight this fraud. Accordingly, accountants and others have learned that in order to prevent and detect fraud, they must employ DAS. While DAS such as ACL and IDEA are primarily used by external auditing firms, Microsoft Excel and Access are mainly utilized by small internal audit departments (Gray, 2006).

This study is important because accounting professionals, Chief Executive Officers, and Chief Financial Officers at public firms are now required by Section 404(b) of the Sarbanes-Oxley Act to assess the effectiveness of the internal control of issuers of financial statements. This section further requires these individuals to attest to, and report on, management's assessment of its internal controls, and the Act provides severe penalties for these individuals if effective internal controls are not maintained. Because of these requirements, accountants must rethink the monitoring of internal controls, which is only possible by adopting a continuous audit process. The Chief Financial Officer, with the support of other financial executives, must take a leadership role in the migration of internal audit to a continuous audit approach (Heffes, 2006).

Author Biographies

Dr. Judy Lawrence is professor of accounting at Christian Brothers University in Memphis, Tennessee. Her primary teaching responsibilities are tax and governmental accounting. Dr. Lawrence has published over 30 professional papers in a variety of business and accounting journals and has made numerous presentations at such organizations as the American Accounting Association and the Institute of Management Accountants. Her current research is in the area of fraud detection.

Denny L. Weaver is chief financial officer at American National Diversified, Inc. in Texas. Mr. Weaver holds a Master of Accountancy from the University of Mississippi and has several years of experience in the area of logistics. He is currently focusing his learning and research in the area of fraud detection.

Dr. Howard Lawrence is a clinical professor of accounting at the University of Mississippi, teaching financial accounting and a variety of graduate courses. Dr. Lawrence has over 30 publications in such journals as Advances in Accounting Behavioral Research, the Soo Chow Journal of Economics, and The International Business & Economics Research Journal. His current research is in the area of international accounting and fraud detection.

References

Alali, F., & Pan, F. (2011, September). Use of audit software: Review and survey. Internal Auditing, 26(5), 29-36. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/898360418/fulltext/E6D277EE65A74880PQ/1?accountid=14588. (Document ID: 2485162081).

Bellino, C., & Hunt, S. (2007, July 1). Auditing application controls: Global Technology Audit Guide. The Institute of Internal Auditors. Retrieved from http://www.theiia.org/bookstore/downloads/freetomembers/0_1033.dl_gtag8.pdf.

Bizarro, P., & Garcia, A. (2011a, September). Sequel, the other audit tool-part II: Data visualization and analysis. Internal Auditing, 26(5), 16-20. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/898360434/fulltext/89F1C8805EDC4C69PQ/1?accountid=14588. (Document ID: 2485162061).

Bizarro, P., & Garcia, A. (2011b, July). Sequel: The other audit tool-part I: Basic completeness, and uniqueness procedures. Internal Auditing, 26(4), 10-16. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/892464166/fulltext/555FFE06178E4D17PQ/10?accountid=14588. (Document ID: 2460363871).

Chen, C., Zhang, D., Castro, P. S., Li, N., Sun, L., & Li, S. (2011). Real-time detection of anomalous taxi trajectories from GPS traces. Retrieved from http://asc.di.fct.unl.pt/mpc/1314/teoricas/docs/iboat.pdf.

Coderre, D. (2009). Computer-aided fraud prevention and detection: A step-by-step guide. Hoboken, NJ: John Wiley & Sons, Inc.
Cox, P. (2009). Cross tabulate your data. Strategic Finance, 91(4), 52-53. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/229826537/fulltext/3816BE4DC53A433CPQ/2?accountid=14588. (Document ID: 1881106151).

Georgiades, G. (2007, October). Practice issues and questions & answers relating to Statement on Auditing Standards No. 111, Amendment to Statement on Auditing Standards No. 39, "Audit Sampling". Miller GAAS Update Service, 7(20), 1-6. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/192391189/fulltext/56D03FDEF7B94267PQ/9?accountid=14588. (Document ID: 1768390181).

Goldacre, B. (2011, September 17). Bad science: The special trick that helps identify dodgy stats. The Guardian. Retrieved from http://www.theguardian.com/commentisfree/2011/sep/16/bad-science-dodgy-stats.

Gray, G. L. (2006). An array of technology tools. The Internal Auditor, 63(4), 56-62. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/202734269/fulltextPDF/26FF2C9BB39C486FPQ/179?accountid=14588. (Document ID: 1123228701).

Hall, T., Pierce, B., & Tsay, J. (2011, May). How to improve audit sampling efficiency with the Excel Solver. Internal Auditing, 26(3), 23-31. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/872082681/fulltextPDF/63A6F7A272F24A2BPQ/10?accountid=14588. (Document ID: 2376371001).

Harding, W. (2006, December). Data mining is crucial for detecting fraud in audits. Accounting Today, 20(22), 22-24. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/234361577/fulltextPDF/15D79EDCD72348F4PQ/20?accountid=14588. (Document ID: 1183608931).

Heffes, E. M. (2006, September). Theory to practice: Continuous auditing gains. Financial Executive, 22(7), 17-18. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/208880123/fulltext/85E19445D93C4106PQ/732?accountid=14588. (Document ID: 1127257111).

Juergens, M. (2006, March 1). Global Technology Audit Guide: Management of IT auditing. The Institute of Internal Auditors. Retrieved from http://www.theiia.org/bookstore/downloads/freetomembers/0_1012.dl_gtag4.pdf.
Kranacher, M. J., Riley, R. A., Jr., & Wells, J. T. (2011). Forensic accounting and fraud examination. Hoboken, NJ: John Wiley & Sons, Inc.

Love, W. J. (2004, September). Excel vs. Access: Another look. The Asset, 52(5), 23-25. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/208880123/fulltextPDF/85E19445D93C4106PQ/732?accountid=14588. (Document ID: 697131131).

McCutchen, M. (2009, August). CaseWare IDEA. CPA Technology Advisor, 19(5), 13. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/232910981/fulltextPDF/17C36224ECC04619PQ/5?accountid=14588. (Document ID: 1824912161).

McGinty, J. C. (2014, December 5). To find fraud, just do the math. The Wall Street Journal. Retrieved from http://www.wsj.com/articles/accountants-increasingly-use-data-analysis-to-catch-fraud-1417804886.

McGregor, G. (2009, April 30). Thinking about tricking the tax man? Beware the long arm of Benford's Law; Revenue agency uses first-digit rule to track cheaters. Ottawa Citizen. Retrieved from http://webmedia.newseum.org/newseum-multimedia/tfp_archive/2009-04-30/pdf/CAN_OC.pdf.

Nigrini, M. J. (1996). A taxpayer compliance application of Benford's Law. The Journal of the American Taxation Association, 18(1), 72. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/211023799/fulltextPDF/951E245236574D42PQ/1?accountid=14588. (Document ID: 9790469).

Ramaswamy, V., & Leavins, J. (2007, July). Continuous auditing, digital analysis, and Benford's Law. Internal Auditing, 22(4), 25-31. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/214394940/fulltextPDF/FF21F78CB3EF49ABPQ/1?accountid=14588. (Document ID: 1321118491).

Sayana, S. A. (2003, January). Using CAATs to support IS audit. Information Systems Control Journal, 1, 21-23. Retrieved from http://search.proquest.com.umiss.idm.oclc.org/accounting/docview/206892330/abstract/3EDC0BD3CDED4ABAPQ/1?accountid=14588. (Document ID: 307678961).
Simkin, M. (2010, January). Using spreadsheets and Benford's Law to test accounting data. ISACA Journal, 1, 47-51. Retrieved from http://www.isaca.org/Journal/archives/2010/Volume-1/Pages/Using-Spreadsheets-and-Benford-s-Law-to-Test-Accounting-Data1.aspx.
Appendix

Table 1
Databases Examined in this Study

1. Accounting and Tax. Comprehensive coverage of accounting and tax topics from key industry publications.
2. Bloomberg BNA: Business School Resource. International, federal, and state tax news, court cases, and analysis, plus tax and legal news and analysis in the antitrust, corporate, and employment & labor areas.
3. Wharton Research Data Service. Access to important databases in the fields of finance, accounting, banking, economics, management, marketing, and public policy.
4. Business Source Complete. Comprehensive scholarly business database, providing a collection of bibliographic and full-text content for scholarly business journals back as far as 1886.
5. Emerald Management 120. This collection contains 120 double-blind, peer-reviewed journals with a focus on business and management.
6. Google Scholar. Provides a search of scholarly literature across many disciplines and sources, including articles, theses, books, and abstracts.
7. LexisNexis Academic. Full-text coverage of general news, business, legal, government, and other topics.
8. Regional Business News. Incorporates 75 business journals, newspapers, and newswires covering all metropolitan and rural areas within the United States.
Table 2
Benford's Law: Expected Digital Frequencies

         Position in number
Digit    1st        2nd        3rd        4th
0        -          0.11968    0.10178    0.10018
1        0.30103    0.11389    0.10138    0.10014
2        0.17609    0.10882    0.10097    0.10010
3        0.12494    0.10433    0.10057    0.10006
4        0.09691    0.10031    0.10018    0.10002
5        0.07918    0.09668    0.09979    0.09998
6        0.06695    0.09337    0.09940    0.09994
7        0.05799    0.09035    0.09902    0.09990
8        0.05115    0.08757    0.09864    0.09986
9        0.04576    0.08500    0.09827    0.09982
Note. Adapted from "A Taxpayer Compliance Application of Benford's Law" by M. J. Nigrini, 1996, The Journal of the American Taxation Association, 18, p. 72. Copyright 1996 by The American Accounting Association.
U.S. Versus European Voluntary Earnings Forecasts: How Different Are They and Do They Vary by Economic Cycle?

Ronald A. Stunda, Valdosta State University

Abstract

This study provides empirical evidence regarding the credibility of management forecasts of earnings during differing economic cycles, namely, economic expansion and economic contraction, for both U.S. firms and a sample of firms from nine European countries. Results indicate that during periods of economic expansion, managers exert greater downward earnings management on the forecast (relative to actual earnings) for both U.S. and European firms. However, during periods of economic contraction, managers exert greater upward earnings management on the forecast (relative to actual earnings) for U.S. firms, while for European firms this is not observed. Information content results indicate that for U.S. firms during economic expansion, forecasts tend to exhibit a positive information-enhancing signal to users. However, during economic contraction, users interpret the forecast as noisier and potentially less informative. For European firms, forecasts tend to exhibit a positive information-enhancing signal to users during times of both economic expansion and economic contraction.

Keywords: Accounting, Forecasts, Security Markets
Introduction

Issues regarding the potential conversion of Generally Accepted Accounting Principles (GAAP) to International Financial Reporting Standards (IFRS), employed by the International Accounting Standards Board (IASB), continue within the Financial Accounting Standards Board (FASB). Although FASB continues to move away from full convergence with IFRS (Fitch Ratings Report, 2014), there are areas where convergence continues on track. New revenue recognition standards, which converge with IFRS, are scheduled to take effect for accounting periods beginning on or after January 1, 2017, with early application permitted (Fitch Ratings Report, 2014). For many companies, the new rules will affect the timing of revenue recognition and have the potential to make earnings less consistent over time. Some speculate that this new approach to revenue recognition may even affect the composition and release of voluntary earnings reports (Fogarty & Rogers, 2014).

Although U.S. GAAP may not be in line for full convergence with IFRS, the U.S. is taking small steps that bring the two much closer. As this happens, there is also renewed focus on the importance of earnings forecasts: what they contain, how often they are released, and whether similarities exist between European IFRS-based voluntary earnings releases and those in the U.S., which are GAAP-based for now and inching closer to IFRS-based each year.

Because of the global economy in which American companies operate, and American investors invest, managers and investors alike face uncertainty and risk. One way of minimizing this risk is through voluntary forecast information. Since U.S. public companies are required to release earnings performance data within 45 days after their year-end, such data is old and often not as meaningful as forward-looking information. It therefore behooves investors to seek out forecast information to enhance the decision-making process. One way for the investor and manager to compare current U.S. GAAP-based companies with European IFRS-based companies is to assess the information content associated with their respective voluntary earnings releases. In doing so, this analysis will attempt to show whether similarities exist that make accounting convergence more palatable for U.S. firms and investors, or whether significant differences have the potential to alarm the same parties.

The purpose of this paper is to assess differences between U.S. and European voluntary earnings forecasts during expansion and contraction periods of the economic cycle. This is important since in the past decade the U.S. has experienced both economic contractions and economic expansions, and is likely to see similar economic volatility within the next decade as well.
Review of Literature

There are many similarities in the economic conditions in which European and U.S. firms operate. All countries included in this study have developed economies and a high degree of economic interdependence, so there is broad homogeneity in the economic and social conditions in which the firms conduct business (Arnold & Moizer, 1984).

The justification for this type of research is, first and foremost, the importance of earnings forecasts to securities market practices. Forecasts are essentially produced for market participants. UK, German, and Dutch studies have found that forecasts of earnings per share (EPS) are an important factor in share appraisal methods (Arnold & Moizer, 1984; Vergossen, 1993; Pike, 1993). In many cases, EPS forecasts are a crucial component of stock selection models. Further evidence of the value of EPS forecasts is the amount of time and effort dedicated to producing such forecasts by commercially oriented analysts and brokers (Capstaff, 1995). In their analysis of U.S. consensus forecasts from 1974-1991, Dreman and Berry (1995) argued that average forecast errors are too large for investors to rely on their predictions, and only a small percentage of forecasts fall within a range considered acceptable to investors. Brown (1996) countered this interpretation by citing the overwhelming evidence that forecasts almost always provide the best available estimates when they are quarterly point forecasts. Therefore, forecasts might be used to devise profitable trading strategies by investors.

Accounting practices also affect the forecast information available. Rees (1998) reports a comparison of seven accounting measurement issues across 14 European countries. In only two pairs, Sweden and Norway, and Ireland and the UK, do countries use the same practices across the full set. It is apparent that even after the completion of the European Union (EU) harmonization effort, substantive differences in disclosure and measurement practices still exist within EU countries (a major reason why FASB is reluctant to commit to full convergence). Alford (1993) finds that only Ireland, the Netherlands, and the UK have accounting systems which are relatively free from the influence of taxation. In other European countries, managers have an incentive to manage earnings downward to minimize taxes.
The quality of disclosure in accounting statements can be expected to affect forecast accuracy. This has been demonstrated with regard to segment reporting (Baldwin, 1984; Hopwood, 1982), while Lang and Lundholm (1996) provide evidence that forecasts are more accurate for firms with more informative disclosure policies. Saudagaran and Biddle (2002) provide a ranking of effectiveness and rank the top nine European countries with the highest quality of disclosure. Basu, Hwang, and Jan (1998) confirm that forecasts emanating from these countries contain fewer forecast errors than those from other European countries with less informative financial disclosure. These nine countries are used as the basis for assessing European earnings forecasts, and they are listed in Table 1.

Some extant research concludes that earnings forecasts may be less beneficial during unsettled economic periods (Miller, 2009), and as a result fewer may be issued during such periods. Other literature concludes that earnings forecasts help to cut through the fog of economic uncertainty (Anilowski, Feng, & Skinner, 2010) and are encouraged to assist users particularly during such periods. An analysis of the Institutional Brokers Estimate System (IBES) and the Dow Jones News Retrieval Service (DJNRS) was made for U.S. firms, and of IBES and Worldscope data for European firms, for the years 2003-2012 in an attempt to determine the number of quarterly forecasts recorded during this time frame, which includes both periods of economic expansion (2003-2007) and periods of economic contraction (2008-2012). Results are shown in Table 2. As can be seen from Table 2, there appears to be no discernible drop-off in the number of voluntary earnings forecasts during the economic crisis of 2008-2012 versus the economic expansion period of 2003-2007. Having demonstrated this, the next step is to ascertain whether there are any inherent differences in the quality of the earnings forecast with respect to bias and information content during economic downturn periods (2008-2012) and economic growth periods (2003-2007).

Prior research in the study of voluntary earnings forecasts finds that managers release information that is unbiased relative to subsequently revealed earnings and that tends to contain more bad news than good news (Baginski, Hassel, & Waymire, 1994; Frankel, McNichols, & Wilson, 1995). Such releases are also found to contain information content (Patell, 1976; Pownall & Waymire, 1989; Waymire, 1984). Although forecast release is costly, credible disclosure will occur if sufficient incentives exist. These incentives include bringing investor/manager expectations in line (Ajinkya & Gift, 1984), removing the need for expensive sources of additional information (Diamond, 1985), reducing the cost of capital to the firm (Diamond & Verrecchia, 1987), and reducing potential lawsuits (Lees, 1981).
Methods
Hypotheses Overview

All of the aforementioned empirical studies have common characteristics: they assess voluntary earnings forecasts irrespective of economic climate (i.e., during both economic expansions and contractions), and they assess only U.S. firm forecasts. The research questions addressed in this study are: Do voluntary earnings forecasts differ depending upon whether they were issued in the U.S. or in the selected European countries, and do they differ based upon the economic environment? These questions link earnings management to voluntary disclosures of earnings. For several years, researchers have found that some degree of earnings management may exist in mandatory earnings disclosures. This study argues that the incentives leading to earnings management may manifest in voluntary disclosures as well. If the potential exists for voluntary disclosures to be managed, then to what extent do investors rely upon the forecast information, and does this information content differ by entity (i.e., U.S. versus Europe)?

In addressing these research questions, extant literature is relied upon that indicates potential earnings management during periods with differing incentive structures. DeAngelo (1988) shows that managers have incentives during management buyouts to manage earnings downward in an attempt to reduce buyout compensation. Collins and DeAngelo (1990) indicate that earnings management occurs during proxy contests, and that market reaction to earnings during these contests differs from that during non-contest periods. DeAngelo (1990) finds that managers have incentives during merger activities to manage earnings upward so as to convey to current stockholders that the potential merger will not adversely affect their investment. Perry and Williams (1994) find that management of accounting earnings occurs in the year preceding "going private" buyouts. Stunda (1996) finds that managers exert greater upward earnings management during mergers and acquisitions, and Stunda (2003) finds greater earnings management when a firm is under Chapter 11 protection.

This study assesses any differences that economic environment may have on management forecast credibility. It also assesses any differences that may be present in the information content of the voluntary forecast of earnings by U.S. firms versus selected European firms during these periods. In accomplishing this, the presence of earnings forecast management is tested by using bias measures along with the market reaction to the forecasts. The study focuses on firm forecasts during a period of relative economic expansion (2003-2007) versus firm forecasts during a period of relative economic contraction (2008-2012). Based upon statistical analysis, conclusions are reached that identify whether economic environment is a factor that has the potential to influence voluntary earnings forecasts. The results have implications for all public firms during both periods of economic expansion and contraction, in addition to investors and potential investors in those firms, both in the U.S. and Europe.
Hypotheses about Bias of the Management Forecast (hypotheses 1 and 2)
As previously noted, most past studies of voluntary earnings forecasts do not find evidence of bias in voluntary disclosures. These studies of management forecasts must be considered along with the earnings management literature. For instance, voluntary disclosures provide additional information to the investor at a lower acquisition cost. However, if only partial communication flows from management to investors and acquiring full information is costly, asymmetric information exists, along with the potential for earnings management of the forecast. If the same degree of earnings management (whether positive or negative) exists in both the forecast of earnings and actual earnings, the expectation is that there would be no difference in forecast error. If, however, the ability to perform earnings management is anticipated but not realized, some difference in forecast error would be present. If greater upward earnings management of the forecast occurs (or less actual earnings management), a negative forecast error should exist. If greater downward earnings management of the forecast occurs (or less
actual earnings management), a positive forecast error should result. Thus, the first hypothesis tests for the existence of forecast error. The null hypothesis tested is: H1:
Average management forecast error (actual EPS – management forecast of EPS) for U.S. firms equals zero regardless of economic environment.
Applying this same logic to the firms representing the nine selected European countries results in the second hypothesis, which also tests for the existence of forecast error. The null hypothesis tested is: H2:
Average management forecast error (actual EPS – management forecast of EPS) for European firms equals zero regardless of economic environment. The management forecasts of earnings must be related to actual earnings in order to
determine if bias exists. McNichols (1989) analyzes bias through the determination of forecast error. Stated in statistical form, the hypothesis is represented in Equation 1 (see Appendix). In order to test hypotheses 1 and 2, firm forecasts included in the combined study samples (i.e., both economic expansion and economic contraction) were analyzed. Statistical analysis is performed on the samples in order to determine whether the average forecast error is zero. McNichols (1989) and DeAngelo (1988) conducted a t-test on their respective samples in addition to a Wilcoxon signed-rank test. Lehman (1975) reports that the Wilcoxon test has an efficiency of about 95% relative to the t-test for normally distributed data, and that the Wilcoxon test can be more efficient than the t-test for non-normal distributions. Therefore, this analysis consists of performing a t-test and a Wilcoxon signed-rank test on the average cross-sectional differences between actual earnings per share and the management forecast of earnings per share.
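For readers who wish to see the mechanics, the following is a minimal sketch of how the H1/H2 bias tests could be run in Python with SciPy. The forecast-error array is synthetic and purely illustrative; it is not the study's data, and the variable names are assumptions.

```python
# Minimal sketch of the H1/H2 bias tests, assuming `fe` holds price-deflated
# forecast errors (actual EPS - management forecast of EPS) for one sample.
# The data below are synthetic placeholders, not the study sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fe = rng.normal(loc=0.03, scale=0.20, size=1997)  # hypothetical U.S. forecast errors

t_stat, t_p = stats.ttest_1samp(fe, popmean=0.0)  # t-test: mean forecast error = 0
w_stat, w_p = stats.wilcoxon(fe)                  # Wilcoxon signed-rank: median = 0

print(f"t = {t_stat:.2f} (p = {t_p:.4f}); Wilcoxon W = {w_stat:.0f} (p = {w_p:.4f})")
```

Pairing the two tests mirrors the design described above: the signed-rank test requires no normality assumption, so it serves as a robustness check on the t-test.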
Hypotheses Assessing State of the Economy (hypotheses 3 and 4)
Introducing a firm-specific control (i.e., a forecast for the same firm during economic expansion versus economic contraction) allows a test of the relative forecast error in both economic environments. If firms display the same degree of earnings management in both periods, the expectation is that there will be no difference in forecast error. If, however, there
exist different incentives to manage earnings (either upward or downward) during times of economic fluctuation, then a positive or negative forecast error would result. Stated in null form: H3: The average forecast error for U.S. firms is not significantly different during periods of economic expansion and economic contraction. Applying the above reasoning to the selected European firms results in the following hypothesis, stated in null form: H4: The average forecast error for European firms is not significantly different during periods of economic expansion and economic contraction. The third and fourth hypotheses introduce firm-specific and time-specific controls; namely, they assess potential bias of the management forecast across the two study periods for the same firms: forecasts made during economic expansion and forecasts made during economic contraction. This permits a test of the relative forecast error in these two respective periods. Stated in statistical form, the hypothesis is represented in Equation 2 (see Appendix).
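The text does not spell out the exact statistic used to compare the two regimes, so the following sketch is one plausible implementation: a two-sample comparison of forecast errors from the expansion and contraction samples, on synthetic data, with hypothetical variable names.

```python
# One plausible implementation of the H3/H4 regime comparison: a two-sample
# test on forecast errors from the expansion and contraction samples. The
# study does not specify this exact procedure; data and names are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fe_expansion = rng.normal(0.03, 0.20, size=1005)    # hypothetical 2003-2007 errors
fe_contraction = rng.normal(-0.12, 0.20, size=992)  # hypothetical 2008-2012 errors

t_stat, t_p = stats.ttest_ind(fe_expansion, fe_contraction, equal_var=False)  # Welch t
u_stat, u_p = stats.mannwhitneyu(fe_expansion, fe_contraction)  # rank-based check

print(f"mean difference = {fe_expansion.mean() - fe_contraction.mean():.3f}")
print(f"Welch t = {t_stat:.2f} (p = {t_p:.4f}); Mann-Whitney p = {u_p:.4f}")
```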
Hypotheses about Information Content of Accounting Earnings and Management Forecasts (hypotheses 5 and 6)
If mandatory disclosures of earnings contain some degree of earnings management, then voluntary disclosures may possess the potential for such earnings management as well. Investors may react to managed earnings in one of two ways: they may discount the information as additional noise, or they may view it as enhancing the properties of the signal (i.e., in terms of amount or variance). Research during the past two decades has shown that accounting earnings possess information content. Current literature finds that the information content of earnings announcements differs during non-routine periods (e.g., proxy contests, mergers and acquisitions, buyouts, Chapter 11 proceedings). If investors interpret managed earnings forecasts as just additional noise, the market would discount this information. If, however, investors view the managed earnings forecast as a positive (or negative) signal from management, the market would not discount the information. The expectation for the information content of management forecasts in varying economic
environments would revolve around these two notions, which suggest the following null hypothesis: H5:
The information content of management forecasts during periods of economic expansion is not significantly different from the information content of management forecasts during periods of economic contraction for U.S. firms. Applying the above notions results in the following hypothesis for European firms, stated
in the null form: H6:
The information content of management forecasts during periods of economic expansion is not significantly different from the information content of management forecasts during periods of economic contraction for European firms. The purpose of these tests is to assess the relative information content of management
earnings forecasts during periods of economic expansion and economic contraction. The model in Equation 3 (see Appendix) is used to evaluate information content. Using this model, two separate regressions are run, one for U.S. firm forecasts and the other for European firm forecasts. The coefficient a is the intercept. The coefficient b1 is the earnings response coefficient (ERC) for all firms during both periods of economic expansion and contraction. The coefficient b2 represents the incremental ERC for firm forecasts made during periods of economic expansion, and the coefficient b3 represents the incremental ERC for firm forecasts made during periods of economic contraction. The coefficients b4, b5, and b6 are contributions to the ERC for all firms in the sample. To investigate the effects of the information content of management forecasts on the ERC, there must be some control for variables shown by prior studies to be determinants of the ERC; for this reason, the variables represented by coefficients b4, b5, and b6 are included in the study. Unexpected earnings (UEi) are measured as the difference between the management earnings forecast (MFi) and security market participants' expectations for earnings (EXi), proxied by the consensus analyst forecast from the Institutional Brokers Estimate System (IBES). The unexpected earnings are scaled by the firm's stock price (Pi) 180 days prior to the forecast, as illustrated in Equation 4 (see Appendix). For each disclosure sample, an abnormal return (ARit) is generated for event days -1, 0, and +1, where day 0 is defined as the date of the forecast disclosure identified by the DJNRS for
U.S. firms and Worldscope for European firms. The market model is utilized along with the CRSP equally-weighted market index, and regression parameters are estimated between days -290 and -91. Abnormal returns are then summed to calculate a cumulative abnormal return (CARit). Hypotheses 5 and 6 are tested by examining the coefficients associated with unexpected earnings during economic expansion (b2) and economic contraction (b3).
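To make the event-study and regression machinery concrete, here is a hedged sketch using statsmodels on synthetic data. The window boundaries (-290 to -91 for estimation, -1 to +1 for the event) follow the text; the column names, the data, and the estimation layout are assumptions. Because UEE and UEC sum to UE by construction, the sketch recovers the overall ERC (b1) and the regime-specific ERCs from separate regressions, consistent with the separate sample sizes reported in Table 6.

```python
# Sketch of the event-study and Equation 3 mechanics on synthetic data.
# Window boundaries follow the text; column names, data, and the split
# into two regressions are assumptions (UEE + UEC = UE, so one pooled
# regression cannot carry all three terms at once).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)

# --- market-model abnormal returns and CAR for one forecast ---
days = np.arange(-290, 2)                                  # trading days around day 0
mkt = rng.normal(0.0004, 0.01, days.size)                  # market index returns
ret = 0.0002 + 1.1 * mkt + rng.normal(0, 0.01, days.size)  # firm returns

est = (days >= -290) & (days <= -91)                       # parameter-estimation window
alpha, beta = sm.OLS(ret[est], sm.add_constant(mkt[est])).fit().params

evt = (days >= -1) & (days <= 1)                           # event window: days -1, 0, +1
car = float(np.sum(ret[evt] - (alpha + beta * mkt[evt])))  # cumulative abnormal return
print(f"CAR over days -1..+1 = {car:.4f}")

# --- Equation 3 across forecasts ---
n = 600
df = pd.DataFrame({
    "UE": rng.normal(0, 0.01, n),        # (MF - EX) / P, per Equation 4
    "expansion": rng.integers(0, 2, n),  # 1 if issued in 2003-2007
    "MB": rng.normal(2.0, 0.5, n),       # growth/persistence proxy
    "B": rng.normal(1.0, 0.3, n),        # systematic-risk proxy
    "MV": rng.normal(8.0, 1.0, n),       # size proxy
})
df["UEE"] = df["UE"] * df["expansion"]
df["UEC"] = df["UE"] * (1 - df["expansion"])
df["CAR"] = 0.10 * df["UEE"] - 0.03 * df["UEC"] + rng.normal(0, 0.02, n)

b1 = sm.OLS(df["CAR"], sm.add_constant(df[["UE", "MB", "B", "MV"]])).fit().params["UE"]
fit = sm.OLS(df["CAR"], sm.add_constant(df[["UEE", "UEC", "MB", "B", "MV"]])).fit()
print(f"overall ERC b1 = {b1:.3f}; regime ERCs:", fit.params[["UEE", "UEC"]].round(3).to_dict())
```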
Data Sources
The sample consists of quarterly management forecast point estimates made during two sample periods: 2003-2007 (representing economic expansion) and 2008-2012 (representing economic contraction). The sample met the following criteria:
1) The management earnings forecast was recorded by the Dow Jones News Retrieval Service (DJNRS) for U.S. firms and Worldscope for European firms.
2) Security price data were available from the Center for Research in Security Prices (CRSP) for U.S. firms and the AMADEUS database for European firms.
3) Earnings data were available from Compustat and Compustat Global for U.S. and European firms, respectively.
4) Analyst forecast information was available on the Institutional Brokers Estimate System (IBES).
5) The samples consist of firms that made at least one management earnings forecast in each sample period.
Table 3 provides details on the samples.
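As an illustration of how such screens might be applied, the toy pandas sketch below keeps only forecasts from firms covered on all data sources and with at least one forecast in each sample period. The frame, keys, and the `covered` set are hypothetical stand-ins for the DJNRS/Worldscope, CRSP/AMADEUS, Compustat, and IBES lookups.

```python
# Toy sketch of the sample screens. `covered` stands in for firms that clear
# criteria 1-4 (forecast, price, earnings, and analyst data on all sources);
# the frame and keys are hypothetical, not the study's actual database layout.
import pandas as pd

forecasts = pd.DataFrame({
    "firm": ["A", "A", "B", "C"],
    "year": [2005, 2010, 2006, 2004],
})
covered = {"A", "B"}  # hypothetical firms passing criteria 1-4

f = forecasts[forecasts["firm"].isin(covered)]
flags = f.assign(exp=f["year"].between(2003, 2007), con=f["year"].between(2008, 2012))
per_firm = flags.groupby("firm")[["exp", "con"]].any()
keep = per_firm.index[per_firm["exp"] & per_firm["con"]]  # criterion 5
sample = f[f["firm"].isin(keep)]
print(sample)  # firm A survives: one forecast in each sample period
```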
Results
Hypotheses 1 and 2 Results
Tests of hypotheses 1 and 2 were conducted on the two samples combined (i.e., forecasts made during periods of economic expansion and forecasts made during periods of economic contraction), a total of 1,997 forecasts for U.S. firms and 1,889 forecasts for European firms. Table 4 contains the results of this test. Table 4 indicates that the mean forecast error for U.S. forecasts is 0.04 with a p-value of .05. European firms have a mean forecast error of 0.03 with a p-value of .05. Using the distribution-free rank test, significance is observed at the .01 level for both groups. These results
are consistent with the preponderance of extant earnings forecast literature, which indicates that management forecasts tend to reflect more bad news relative to actual earnings. As a result, Hypotheses 1 and 2, which state that average management forecast error equals zero regardless of economic environment, are rejected for both U.S. and European firms, since the forecasts in the sample, on average, exhibit downward bias of the management forecast.
Hypotheses 3 and 4 Results
Tests of hypotheses 3 and 4 were conducted on two samples: one including firm forecasts from 2003-2007 (economic expansion), and the other including firm forecasts from 2008-2012 (economic contraction). Table 5 contains the results of this test. Panel A of Table 5 reports results for the economic expansion sample of firm forecasts of earnings per share. Mean forecast error for U.S. firm forecasts is 0.03 with a p-value of .05. Using the distribution-free rank test, significance is observed at the .01 level. Mean forecast error for European firm forecasts is 0.04 with a p-value of .05. The distribution-free rank test is significant at the .01 level. As with hypotheses 1 and 2, these results are consistent with prior earnings forecast literature indicating that management forecasts tend to reflect more bad news relative to actual earnings. Panel B of Table 5 reports results for the economic contraction sample. Mean forecast error for U.S. firm forecasts is -0.12 with a p-value of .01. Using the distribution-free rank test, significance is observed at the .01 level. For European firms, however, mean forecast error is 0.03 with a p-value of .05. The distribution-free rank test provides significance at the .01 level. These results are inconsistent with those from Panel A for U.S. firms: U.S. forecasts issued during economic contraction tend to reflect more good news relative to actual earnings. For European firm forecasts, there appears to be no significant difference between forecasts released during economic upturns and downturns; both reflect bad-news content. Hypothesis 3, which states that there is no significant difference in forecast error between the two sample periods for U.S. forecasts, must be rejected, while Hypothesis 4, which states the same for European firm forecasts, cannot be rejected.
Hypotheses 5 and 6 Results
Hypotheses 5 and 6 tested the information content of management forecasts during periods of economic expansion and economic contraction. Table 6 reports the results of this test. As indicated in Panel A of Table 6, for U.S. firms, the coefficient representing the overall ERC for all firm forecasts in both study periods (b1) has a value of 0.14 with a p-value of .01. This is consistent with prior management forecast literature regarding information content. The coefficient representing the incremental ERC for firm forecasts during economic expansion (b2) has a value of 0.10 with a p-value of .01. The coefficient representing the incremental ERC for firm forecasts during economic contraction (b3) has a value of -0.03 with a p-value of .01. All other control variables are not significant at conventional levels. These findings indicate that not only do forecasts contain information content, but there is also a difference between the information content of forecasts made during periods of economic expansion and those made during economic contraction. Forecasts made during economic expansion carry an information-enhancing signal to investors and other users, while those made during economic contraction are interpreted by investors and other users as noisy information that may or may not be usable. Hypothesis 5, therefore, is rejected.
Discussion
This study provides empirical evidence regarding the credibility of management forecasts of earnings during differing economic cycles, namely economic expansion and economic contraction, for both U.S. firms and a sample of firms from nine European countries. Past research on earnings forecasts assesses the forecast over time periods that do not consider the effects of the economic cycle, and these studies focus almost entirely on U.S. firm forecasts. This study is the first to attempt to draw a distinction between U.S. firm forecasts and European firm forecasts during economic expansion and economic contraction periods. Earnings forecasts for U.S. and European firms were broken into two sample periods: an expansion period (2003-2007) and a contraction period (2008-2012). Firms that issued forecasts
in both of these sample periods were evaluated by respective grouping (i.e., U.S. and European). The evaluation consisted of conducting a study of bias for all firms in both periods combined, to assess whether results are comparable to previous studies. In addition, a study of bias was conducted for each sample separately to assess any differences between the expansion and contraction samples. Lastly, a regression analysis was performed for each sample period in order to assess any differences in the information content of the earnings forecasts between the two periods. Bias results indicate that during periods of economic expansion, managers exert greater downward earnings management on the forecast (relative to actual earnings) for both U.S. and European firms. This is consistent with prior management forecast literature. However, during periods of economic contraction, managers exert greater upward earnings management on the forecast (relative to actual earnings) for U.S. firms, while for European firms no significant difference from economic expansion periods is observed. Information content results indicate the presence of information content in management forecasts during both economic expansion and contraction periods. For U.S. firms during economic expansion, forecasts tend to exhibit a positive, information-enhancing signal to users. However, during economic contraction, users interpret the forecast as noisier and potentially less informative. For European firms, forecasts tend to exhibit a positive, information-enhancing signal to users during both economic expansion and economic contraction. As U.S. GAAP aligns more closely with IFRS standards over time, the analysis of U.S. and European firms becomes more critical from the perspective of global management and investment. The findings of this study have significant implications for managers and investors with current or potential international holdings.
Author Biography
Ronald Stunda is Associate Professor of Accounting at Valdosta State University. He also serves as a reviewer for several journals. His research deals mainly with market-based applied accounting research and has appeared in such journals as Academy of Business Journal, Journal of Business and Behavioral Sciences, Advances in Business Research, and the Journal of
Accounting and Financial Studies. He can be reached at Valdosta State University, 1500 North Patterson Street, Valdosta, GA 31698, rastunda@valdosta.edu.
References
Ajinkya, B., & Gift, M. (1984). Corporate managers' earnings forecasts and symmetrical adjustments of market expectations. Journal of Accounting Research, 22(2), 425-444.
Alford, A. (1993). Informativeness of accounting information disclosure in different countries. Journal of Accounting Research, 31(2), 183-223.
Anilowski, C., Feng, M., & Skinner, D. (2010). Does earnings guidance affect market returns? The Journal of Accounting and Economics, 44(1-2), 36-63.
Arnold, J., & Moizer, P. (1984). A survey of the methods used by U.K. investment analysts. Accounting and Business Research, 14, 195-207.
Baginski, S., Hassell, J., & Waymire, G. (1994). Some evidence on the news content of preliminary earnings estimates. The Accounting Review, 69(1), 265-271.
Baldwin, B. (1984). Segment earnings disclosures. The Accounting Review, 59(3), 376-389.
Basu, S., Hwang, L., & Jan, C. (1998). International variation in accounting measurement rules. Journal of Business Finance and Accounting, 25(9), 1207-1247.
Brown, L. (1996). Forecasting errors and their implication for security analysis. Financial Analysts Journal, 52(1), 40-47.
Capstaff, J. (1995). The accuracy and rationality of earnings forecasts in the U.K. Journal of Business, Finance and Accounting, 22(1), 69-87.
Collins, D., & DeAngelo, L. (1990). Accounting information and corporate governance. Journal of Accounting and Economics, 13, 213-247.
DeAngelo, L. (1988). Managerial competition, information costs, and corporate governance. Journal of Accounting and Economics, 10(1), 3-36.
DeAngelo, L. (1990). Equity valuations and corporate control. The Accounting Review, 65(1), 93-112.
Diamond, D. (1985). Optimal release of information by firms. The Journal of Finance, 40(4), 1071-1093.
Diamond, D., & Verrecchia, R. (1987). Constraints on short-selling and asset price adjustments to private information. Journal of Financial Economics, 18(2), 277-311.
Dreman, D., & Berry, M. (1995). Analyst forecasting errors and their implications. Financial Analysts Journal, 51(3), 30-41.
Fitch Ratings Report. (2014, May). https://www.fitchratings.com/gws/en/fitchwire/fitchwirearticle/NewRevenue-Recognition?pr_id=832095
Fogarty, F., & Rogers, J. (2014). IFRS adoption in the U.S.? Accounting Horizons Journal, 28(4), 28-45.
Frankel, R., McNichols, M., & Wilson, P. (1995). Discretionary disclosures and external financing. The Accounting Review, 70(1), 135-150.
Hopwood, W. (1982). The potential gains in predictive ability through segmented annual earnings. Journal of Accounting Research, 20(2), 724-732.
Lang, M., & Lundholm, R. (1996). Corporate disclosure policy and analysis behavior. The Accounting Review, 71(4), 467-492.
Lees, F. (1981). Public disclosure of corporate earnings forecasts. New York, NY: The New York Conference Board.
Lehman, E. (1975). Nonparametrics: Statistical methods based on ranks. San Francisco, CA: Holden-Day Press.
McNichols, J. (1989). A non-random walk down Wall Street. Journal of Business and Finance, 8, 97-112.
Miller, G. (2009). Should managers provide forecasts of earnings? Journal of Accounting Research, 40(1), 173-204.
Patell, J. (1976). Corporate forecasts of earnings per share and stock price behavior. Journal of Accounting Research, 14(2), 246-276.
Perry, S., & Williams, T. (1994). Earnings management preceding management buyout offers. Journal of Accounting and Economics, 18(2), 157-179.
Pike, R. (1993). The appraisal of shares in the U.K. and Germany. Accounting and Business Research, 23(92), 480-499.
Pownell, G., & Waymire, G. (1989). Voluntary disclosure choice and earnings information transfer. Journal of Accounting Research, 27, 85-105.
Rees, W. (1998). A valuation based test of accounting differences in Europe. American Accounting Association annual meeting presentation, August, New Orleans, LA.
Saudagaran, S., & Biddle, G. (2002). Financial disclosures and foreign stock exchanges. Journal of International Financial Management, 106-148.
Stunda, R. (1996). The credibility of management forecasts during mergers and acquisitions. American Academy of Accounting and Finance annual meeting presentation, December, New Orleans, LA.
Stunda, R. (2003). The effects of Chapter 11 bankruptcy on earnings forecasts. Accounting and Financial Studies, 7(2), 75-84.
Vergossen, R. (1993). The use of annual reports in the Netherlands. European Accounting Review, 2(2), 219-244.
Waymire, G. (1984). Additional evidence on the information content of management earnings forecasts. Journal of Accounting Research, 22(2), 703-718.
Tables and Figures

Table 1
List of European Countries Contained in the Study

Belgium
France
Germany
Ireland
Italy
Netherlands
Spain
Switzerland
United Kingdom

NOTE. Table 1 lists the European countries with the highest degree of disclosure information as determined by Saudagaran and Biddle (2002).
Table 2
Quarterly Firm Point Forecasts by Sample Group

Year    U.S. Firms    European Firms
2003    504           318
2004    489           314
2005    517           389
2006    476           362
2007    530           371
2008    521           328
2009    482           340
2010    509           337
2011    473           352
2012    495           361

NOTE. Table 2 indicates the numbers of quarterly earnings forecasts made by U.S. firms from 2003 through 2012, as reported by IBES and the Dow Jones News Retrieval Service, and by European firms for the same period, as reported by IBES and Worldscope.
Table 3
Study Samples by Sample Period

Economic Expansion Study Period
Year     Number of U.S. forecasts    Number of European forecasts
2003     215                         188
2004     189                         175
2005     207                         190
2006     176                         184
2007     218                         192
Total    1,005                       929

Economic Contraction Study Period
Year     Number of U.S. forecasts    Number of European forecasts
2008     204                         199
2009     180                         188
2010     212                         194
2011     178                         187
2012     218                         192
Total    992                         960

NOTE. Table 3 reflects the two study periods that are evaluated in this study. Years 2003-2007 reflect the years of economic expansion. Years 2008-2012 reflect the years of economic contraction. Forecasts reflect the firms selected in the sample after removing those eliminated for insufficient data as enumerated in the above methodology section. The information was obtained from the Dow Jones News Retrieval Service for U.S. firms and Worldscope for European firms.
Table 4
Average Management Forecast Error Deflated by Firm's Stock Price 180 Days Prior to Forecast

Model: (Σ fei) / n = 0

U.S. forecasts
n        Mean    Median     Minimum    Maximum    Standard Deviation    (t-statistic)
1,997    0.04    0.01***    -0.127     0.229      0.0017                (2.25)**

European forecasts
n        Mean    Median     Minimum    Maximum    Standard Deviation    (t-statistic)
1,889    0.03    0.02***    -0.138     0.322      0.0021                (2.27)**

** Significant at the .05 level (two-sided test).
*** Significant at the .01 level using the non-parametric sign-rank test.
fei = forecast error of firm i (actual EPS – management forecast of EPS)
n = number of firm forecasts during 2003-2012 (1,997 U.S.; 1,889 European)

NOTE. Table 4 assesses the bias of voluntary earnings forecasts for all quarterly forecasts included in both samples, that is, forecasts from the expansion study period and forecasts from the contraction study period. This analysis is made to determine a baseline measurement of all forecasts in this study to ensure that results are comparable with prior studies that assess forecast bias.
Table 5
Average Management Forecast Error Deflated by Firm's Stock Price 180 Days Prior to Forecast

Model: (Σ fei) / n(expansion) = (Σ fei) / n(contraction)

Panel A: Management forecasts during economic expansion (2003-2007)

U.S. forecasts
n        Mean    Median     Minimum    Maximum    Standard Deviation    (t-statistic)
1,005    0.03    0.01***    -0.027     0.429      0.0020                (2.26)**

European forecasts
n        Mean    Median     Minimum    Maximum    Standard Deviation    (t-statistic)
929      0.04    0.02***    -0.022     0.051      0.0029                (2.24)**

** Significant at the .05 level (two-sided test).
*** Significant at the .01 level using the non-parametric sign-rank test.
fei = forecast error of firm i (actual EPS – management forecast of EPS)

Panel B: Management forecasts during economic contraction (2008-2012)

U.S. forecasts
n        Mean     Median      Minimum    Maximum    Standard Deviation    (t-statistic)
992      -0.12    -0.05***    -0.220     0.121      0.0011                (-2.35)***

European forecasts
n        Mean    Median     Minimum    Maximum    Standard Deviation    (t-statistic)
960      0.03    0.02***    -0.019     0.048      0.0018                (2.27)**

** Significant at the .05 level (two-sided test).
*** Significant at the .01 level (two-sided test or non-parametric sign-rank test).
fei = forecast error of firm i (actual EPS – management forecast of EPS)
n = 1,005 firm forecasts during expansion periods and 992 firm forecasts during contraction periods

NOTE. Table 5 Panel A reflects forecasts of U.S. and European firms during expansion periods (2003-2007). Panel B reflects forecasts of U.S. and European firms during economic contraction (2008-2012).
Table 6
Test of Information Content of Management Forecasts

Model: CARit = a + b1UEit + b2UEEit + b3UECit + b4MBit + b5Bit + b6MVit + eit

Where:
CARit = cumulative abnormal return for forecast i, time t
a     = intercept term
UEit  = unexpected earnings for forecast i, time t
UEEit = unexpected earnings for forecast i, time t during economic expansion
UECit = unexpected earnings for forecast i, time t during economic contraction
MBit  = market-to-book value of equity as proxy for growth and persistence
Bit   = market model slope coefficient as proxy for systematic risk
MVit  = market value of equity as proxy for firm size
eit   = error term for forecast i, time t

Panel A: U.S. Firm Forecasts, Coefficients (t-statistics)

a        b1           b2           b3           b4        b5         b6        Adjusted R2
0.20     0.14         0.10         -0.03        0.11      -0.05      0.04      0.189
(.78)    (2.35)***    (2.40)***    (2.42)***    (0.32)    (-0.18)    (0.28)

*** Significant at the .01 level (two-sided test)
b1, b4, b5, and b6 sample = 1,997 firm forecasts; b2 sample = 1,005 firm forecasts; b3 sample = 992 firm forecasts

Panel B: European Firm Forecasts, Coefficients (t-statistics)

a        b1           b2           b3           b4        b5        b6        Adjusted R2
0.11     0.10         0.15         0.08         0.08      0.09      0.10      0.223
(.93)    (2.41)***    (2.38)***    (2.39)***    (0.27)    (0.19)    (0.31)

*** Significant at the .01 level (two-sided test)
b1, b4, b5, and b6 sample = 1,889 firm forecasts; b2 sample = 929 firm forecasts; b3 sample = 960 firm forecasts

NOTE. Table 6 reflects the results of the assessment of information content obtained by running the regression model above for both U.S. firm forecasts (Panel A) and European firm forecasts (Panel B). This includes the total forecast sample (b1 variable), the economic expansion forecast sample (b2 variable), and the economic contraction forecast sample (b3 variable). Other variables assessed in the model (b4, b5, b6) are variables shown in previous studies to provide some level of significance in the model.
Appendix
Equation 1

(Σ fei) / n = 0

This equation describes how average forecast error is determined, where fei = forecast error of firm i (forecast error = actual EPS – management forecast of EPS), deflated by the firm's stock price 180 days prior to the forecast.
Equation 2

(Σ fei) / n(expansion) = (Σ fei) / n(contraction)

This equation reflects the hypothesis, in null form, that forecast errors in expansion periods equal forecast errors in contraction periods.
Equation 3

CARit = a + b1UEit + b2UEEit + b3UECit + b4MBit + b5Bit + b6MVit + eit

Where:
CARit = cumulative abnormal return for forecast i, time t
a     = intercept term
UEit  = unexpected earnings for forecast i, time t
UEEit = unexpected earnings for forecast i, time t during economic expansion
UECit = unexpected earnings for forecast i, time t during economic contraction
MBit  = market-to-book value of equity as proxy for growth and persistence
Bit   = market model slope coefficient as proxy for systematic risk
MVit  = market value of equity as proxy for firm size
eit   = error term for forecast i, time t

This equation indicates the regression model used to assess the information content of the earnings forecasts for both the expansion and contraction study periods. In addition to assessing those two specific periods (the b2 and b3 variables), an assessment is also made for the total forecast samples (the b1 variable) and for other variables that have shown significance in prior studies, such as growth, risk, and size (the b4, b5, b6 variables).
Equation 4

UEi = (MFi – EXi) / Pi

This equation is used to assess unexpected earnings. Unexpected earnings are measured as the difference between the management forecast of earnings (MFi) and the expected earnings level as determined by the consensus analyst forecast per the Institutional Brokers Estimate System (EXi). This value is then deflated by the firm's stock price 180 days prior to the forecast (Pi).
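As a purely hypothetical numeric illustration (values invented for exposition, not study data): a management forecast of MFi = 1.05, an IBES consensus of EXi = 1.00, and a prior stock price of Pi = 25.00 yield

UEi = (1.05 – 1.00) / 25.00 = 0.002

that is, unexpected earnings equal to 0.2% of the share price.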
MANUSCRIPT SUBMISSION GUIDE

These revised guidelines become effective with the fall 2015 publications. If you are submitting to a spring 2015 publication, please use the previous guide.
GENERAL FORMATTING
American Psychological Association (APA) Sixth Edition Publication Guidelines
Microsoft Word or compatible format (do not send your manuscript as a PDF or it will be declined)
Letter-size (8.5 x 11 inches) format
1.5-spaced text
Times New Roman, 12-point font
One-inch margins
Two spaces following end punctuation
Left justification
Single column
Portrait orientation
First person
MANUSCRIPT ORDER (Please Note: Do not add a running head or page numbers.)
Cover Page: (This page will be removed prior to peer review.)
Manuscript Title
o The first letter of each major word should be capitalized.
o The title should be in font size 20 and bold.
Author(s) Name
o First name, middle initial(s), and last name (omit titles and degrees)
o The names should be font size 12, not bold.
Institutional Affiliation
o Educational affiliation – if no institutional affiliation, list city and state of author's residence
o This educational affiliation should be on the line directly under the author's name.
o If there are multiple authors, please place a space between each set of information (name and affiliation).
Author Biography
o If there are multiple authors, please label this section Author Biographies
o Please be sure to indent the paragraph before the biography begins. If there are multiple authors, please begin a new paragraph for each author.
Manuscript: (From this point forward, please be sure your manuscript is FREE of any identifying information.)
Abstract
o The abstract (150-word maximum) should effectively summarize your completed research and findings.
o The word "abstract" should be bold.
Keywords
o This line should be indented. The word "Keywords" should be italicized and followed by a colon and two spaces.
o Following the two spaces, list 3 or 4 keywords or key phrases that you would use if you were searching for your article online.
o Only the first keyword should be capitalized. The actual keywords are not italicized.
Body of Paper (sections)
ALL of the following sections MUST be present or your manuscript WILL be rejected.
o Introduction
o Literature Review
o Methodology
o Results/Findings
o Discussion
References – this heading is NOT bolded within the manuscript
o Manuscripts should be thoroughly cited and referenced using valid sources.
o References should be arranged alphabetically and strictly follow American Psychological Association (APA) sixth edition formatting rules.
o Only references cited in the manuscript are to be included.
Tables and Figures
o If tables and figures are deemed necessary for inclusion, they should be properly placed at the end of the text following the reference section.
o All tables and figures should be numbered sequentially using Arabic numerals, titled, acknowledged, and cited according to APA guidelines.
o If graphs or tables are too wide for portrait orientation, they must be resized or reoriented to be included.
Appendices (if applicable)
o Must be labeled alphabetically as they appear in the manuscript.
o Title centered at the top.
WHY READ OUR JOURNALS?

Continuing Education: Each of the CSI's peer-reviewed journals focuses on contemporary issues, scholarly research, discovery, and evidence-based practices that will elevate readers' professional development.

Germane Reference: The CSI's journals are a vital resource for students, practitioners, and professionals in the fields of education, business, and behavioral sciences interested in relevant, leading-edge academic research.

Diversity: The CSI's peer-reviewed journals highlight a variety of study designs, scientific approaches, experimental strategies, methodologies, and analytical processes representing diverse philosophical frameworks and global perspectives.

Broad Applicability: The CSI's journals provide research in the fields of education, business, and behavioral sciences specialties and dozens of related sub-specialties.

Academic Advantage: The CSI's academically and scientifically meritorious journal content significantly benefits faculty and students.

Scholarship: Subscribing to the CSI's journals provides a forum for and promotes faculty research, writing, and manuscript submission.

Choice of Format: Institutions can choose to subscribe to our journals in digital or print format.
Markov Models and the Dissemination of Tax-Incentive Choices: An Illustration
Charles R. Enis, The Pennsylvania State University

Auction Theory for Multi-unit Reverse Auctions: Testing Predictions in the Lab
William B. Holmes, Georgia Gwinnett College

The Use of Computer-Aided Audit Tools in Fraud Detection
Judy Ramage Lawrence, Christian Brothers University
Denny L. Weaver, American National Diversified, Inc.
Howard Lawrence, University of Mississippi

U.S. Versus European Voluntary Earnings Forecasts… How Different Are They and Do They Vary by Economic Cycle?
Ronald A. Stunda, Valdosta State University
Published by: Center for Scholastic Inquiry, LLC 4857 Hwy 67, Suite #2 Granite Falls, MN 56241 855‐855‐8764
ISSN: 2330-6815 (online) ISSN: 2330-6807 (print)