Math, asked by tr440636, 5 days ago

Which of the following probabilities indicates "the event is almost non-existent"? a) 0.0001 b) 0.00001

Answers

Answered by Shubhampro112

Answer:

These days, when I look at scientific research papers or review manuscripts, there seems to be almost a competition to report ever smaller p values as a means of presenting more significant findings. For example, a quick Internet search for "p < 0.0000001" turned up many papers reporting p values at this level. Can and should a smaller p value play such a role? In my opinion, it cannot. Current statistical software, by making p value-centered statistical reporting effortless, is, I believe, leading scientific inquiry into a quagmire and a dead end.

To fully understand why p value-centered inquiry is the wrong approach, let us first understand what the p value and hypothesis testing (HT) are, and examine how statistical hypothesis testing (SHT) was run before the computer era. While the p value and HT are both now used under the umbrella of SHT, they have different roots. The p value and its application in scientific inquiry are credited to the English statistician Sir Ronald Aylmer Fisher1 in 1925. In Fisher's inquiry system, a test statistic is converted to a probability, namely the p value, using the probability distribution of the test statistic under the null hypothesis. The p value was used solely as an aid, after data collection, to assess whether the observed statistic is simply a random event or indeed reflects a unique phenomenon fitting the researchers' scientific hypothesis.2 Furthermore, 0.05 and 0.01 are not the only p value cutoff scores for the decision. Thus, Fisher's p value inquiry system is an a posteriori decision system, which also features "flexibility, better suited for ad-hoc research projects, sample-based inferential, no power analysis and no alternative hypothesis" (p. 4).3
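The conversion Fisher's system relies on can be sketched in a few lines. The function below is a minimal illustration (not from the cited sources) of turning a z test statistic into a two-sided p value under a standard-normal null distribution, using only the Python standard library:

```python
from math import erfc, sqrt

def two_sided_p_value(z: float) -> float:
    """Two-sided p value for a z test statistic under the
    standard-normal null distribution.

    P(|Z| >= |z|) = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)),
    where Phi is the standard-normal CDF.
    """
    return erfc(abs(z) / sqrt(2))

# A test statistic of z = 1.96 yields p ≈ 0.05, the conventional
# (but, as Fisher stressed, not the only possible) cutoff.
print(two_sided_p_value(1.96))
```

In Fisher's usage this number is computed after the data are in hand and read as a continuous measure of surprise under the null, not compared against a pre-registered threshold.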

HT, on the other hand, is credited to the Polish mathematician Jerzy Neyman and the American statistician Egon Pearson4 in 1933, who sought to improve Fisher's method by proposing a system built around repeated experiments. Neyman and Pearson believed that a null hypothesis should not be considered unless at least one possible alternative was conceivable. In contrast to Fisher's system, the Type I error rate (the error the researchers want to control), along with the corresponding critical region and critical value of a test, must be set up first in the Neyman–Pearson system, which is therefore an a priori decision system. In addition, the Neyman–Pearson system is "more powerful, better suited for repeated sampling projects, deductive, less flexible than Fisher's system and defaults easily to the Fisher's system".
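The a priori character of the Neyman–Pearson system can be made concrete: the Type I error rate alpha is fixed before any data are collected, and it alone determines the critical value. The sketch below (an illustration, not taken from the cited sources) derives the two-sided z critical value from a chosen alpha using the standard library's `statistics.NormalDist`:

```python
from statistics import NormalDist

def z_critical(alpha: float) -> float:
    """Critical value for a two-sided z test at significance level
    alpha, fixed *before* data collection (Neyman–Pearson style)."""
    return NormalDist().inv_cdf(1 - alpha / 2)

# Decision rule set in advance: with alpha = 0.05, reject the null
# hypothesis if and only if the observed |z| exceeds about 1.96.
print(z_critical(0.05))
```

Note the reversal relative to Fisher: here the threshold comes first and the data only trigger a reject / fail-to-reject decision, whereas in Fisher's system the observed p value is interpreted after the fact.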

Step-by-step explanation:

b) 0.00001. An impossible event has probability 0, so the smaller the probability, the closer the event is to being non-existent; 0.00001 is smaller than 0.0001.
