Statistical Fallacies
A statistical fallacy occurs when statistics are used to convince an observer of something other than what the data actually represent. Falsifying or misapplying statistical reasoning is therefore a statistical fallacy. Statistics can be misused to intimidate people who are uncomfortable with numbers, but they can also persuade numerate readers, who may assume that claims backed by figures must be correct. In both cases the statistics appeal to precision while lacking accuracy. Several pitfalls can arise in statistical reporting, including small samples, biased samples, biased methods, biased reporting, and unwarranted conclusions. Other sources of fallacy include the comparison of incomparable groups, the misuse of percentages, and over-reporting. This paper examines the fairness of presentation and the statistical errors, if any, in the article ‘Justice Department Statistics on Terrorism Faulted Most Numbers Inaccurate, Audit Shows’.
Firstly, there is the use of a small and biased sample: drawing on a small part of a larger population and using it to make hasty, sweeping generalizations. This occurs in the first paragraph of the excerpt, where the report states that most of the statistics by the Justice Department are highly inaccurate. Although there is data indicating erroneous reporting of cases by federal prosecutors, there is no conclusive evidence that the department's other terrorism statistics are wrong to a comparable degree. The claim is therefore a hasty generalization: inaccuracy by some federal prosecutors is taken to characterize the whole Justice Department, which in fact comprises many other subsections. Moreover, the wrongful recording of terrorism cases does not mean that most of the department's reports are highly inaccurate; the listing, reporting, and classification of cases make up only a small part of all the statistical data on terrorism.
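The instability behind this kind of hasty generalization can be illustrated with a short simulation. The population, error rate, and sample sizes below are invented purely for illustration and do not come from the article: estimates drawn from very small samples scatter widely around the true value, while estimates from larger samples cluster tightly near it.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 case records, 10% of which
# contain an error (1 = erroneous record, 0 = correct record).
population = [1] * 1_000 + [0] * 9_000

def error_rate(sample_size: int) -> float:
    """Estimate the population error rate from one random sample."""
    sample = random.sample(population, sample_size)
    return sum(sample) / sample_size

# Five estimates each from tiny samples and from large samples.
small_estimates = [error_rate(10) for _ in range(5)]
large_estimates = [error_rate(2_000) for _ in range(5)]

print("n=10   :", small_estimates)   # tends to scatter widely
print("n=2000 :", large_estimates)   # tends to stay close to 0.10
```

Generalizing from one of the small samples would be exactly the mistake the excerpt makes: treating a few observed errors as proof that "most" statistics are inaccurate.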
Secondly, there is biased reporting through crafty methods of representing numbers, since some ways of presenting data are more persuasive than others (Brewer & Venaik, 2014). The excerpt is biased because it reports selectively: much of the information that would help an observer understand what the numbers mean is left out. For example, the inclusion of a table ranking total nonnatural deaths by region is ambiguous. Information is also cherry-picked, as in the claim that Americans account for 53% of the nonnatural deaths. What about the deaths caused by terrorism, given that not all nonnatural deaths are terrorism-related? The focus should be on the actual number of American deaths caused by terrorists. Additionally, the causes of death are reported for the last four reporting periods, but the other periods are omitted.
Another fallacy is incomparable grouping through a lack of attention to the features of the groups under consideration (Keener, 2015). The main purpose of the excerpt is to show that the Justice Department reports terrorism statistics in a faulty manner. However, the data presented do not ultimately relate to terrorism or to the Justice Department, because deaths in a number of countries around the world are included, and the nations are then ranked by the number of deaths recorded in each over a given period. The grouping of the statistics is faulty because it is unrelated to the purpose of the research.
Finally, the excerpt misuses percentages. Percentages mislead observers when the calculations are based on a small sample or an inappropriate total, and that is what happens here because of sampling fluctuation. There is a graphical representation of the percentage of nonnatural deaths by year from 200t to 2006. The individual proportions are correct, but the conclusion that fatalities have fallen by approximately 10% is wrong: the change cannot be inferred directly from the percentages because the yearly totals are not known. Since the actual numbers are not presented, working with percentages without a base is invalid. These results are presented to magnify an effect without the required explanation of the base of calculation.
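The percentage-without-a-base problem can be made concrete with invented numbers (none of the figures below come from the article): when yearly totals differ, a category's share of deaths can fall by ten percentage points even while its absolute count rises, so a drop in percentage alone says nothing about the underlying fatalities.

```python
# Hypothetical two-year comparison: the share falls, the count rises.
year_a = {"total": 400, "category": 120}   # category is 30% of year A
year_b = {"total": 900, "category": 180}   # category is 20% of year B

share_a = year_a["category"] / year_a["total"]
share_b = year_b["category"] / year_b["total"]

print(f"Share fell from {share_a:.0%} to {share_b:.0%}")
# prints "Share fell from 30% to 20%"
print("But the count rose:", year_a["category"], "->", year_b["category"])
# prints "But the count rose: 120 -> 180"
```

Without the yearly totals as a base, the "10% reduction" read off the graph could mask exactly this kind of increase.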
The examples above show that the material was not fairly presented. The fundamental mistake in the article is over-reporting: the results are not confined to the question the research was originally intended to answer, and many incidental findings are reported without caution. For example, reporting nonnatural deaths across the globe is misguided, and there is no point in presenting the number of American deaths abroad unless those deaths are related to terrorism. Most of the material in the report is self-reported without sufficient statistical backing, which leaves the report without objective grounding. Together, these fallacies add up to a statistical analysis with a false conclusion. The material should have focused more narrowly on how the Justice Department reports statistics that lack accuracy.
References
Brewer, P., & Venaik, S. (2014). The ecological fallacy in national culture research. Organization Studies, 35(7), 1063-1086.
Keener, A. (2015). The arrival fallacy: collaborative research relationships in the digital humanities.