In this cyberlecture, I'd like to outline a few of the important concepts relating to sample size. Generally, larger samples are good, and this is the case for a number of reasons. So, I'm going to try to show this in several different ways.
Bigger is Better
1. The first reason a large sample size is beneficial is simple. Larger samples more closely approximate the population. Because the primary goal of inferential statistics is to generalize from a sample to a population, the inferential leap is smaller when the sample size is large.
2. A second reason is kind of the opposite. Small samples are bad. Why? If we pick a small sample, we run a greater risk of the small sample being unusual just by chance. Choosing 5 people to represent the entire U.S., even if they are chosen completely at random, will often result in a sample that is very unrepresentative of the population. Imagine how easy it would be, just by chance, to select 5 Republicans and no Democrats, for instance.
Let's take this point a little further. If there is an increased probability of one small sample being unusual, that means that if we were to draw many small samples, as when a sampling distribution is created (see the second lecture), unusual samples would be more frequent. Consequently, there is greater sampling variability with small samples. This figure is another way to illustrate this:
Note: this is a dramatization to illustrate the effect of sample size; the curves depicted here are fictitious (in order to protect the innocent) and may or may not represent real statistical sampling curves. A more realistic depiction can be found on p. 163.
In the curve with the "small size samples," notice that there are fewer samples with means around the middle value, and more samples with means out at the extremes. Both the right and left tails of the distribution are "fatter." In the curve with the "large size samples," notice that there are more samples with means around the middle (and therefore closer to the population value), and fewer with sample means at the extremes. The differences in the curves represent differences in the standard deviation of the sampling distribution--smaller samples tend to have larger standard errors and larger samples tend to have smaller standard errors.
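The point above can be checked with a quick simulation. This is a sketch of my own, not part of the lecture: it draws many samples of size 5 and many of size 100 from the same made-up population and compares the spread (standard error) of their means. The population parameters (mean 100, standard deviation 15) are arbitrary choices for illustration.

```python
import random
import statistics

random.seed(42)

# A hypothetical population: 100,000 values from a normal-ish
# distribution with mean 100 and standard deviation 15.
population = [random.gauss(100, 15) for _ in range(100_000)]

def std_error_of_means(sample_size, n_samples=2_000):
    """Draw many samples of a given size and return the standard
    deviation of their means (the empirical standard error)."""
    means = [statistics.mean(random.sample(population, sample_size))
             for _ in range(n_samples)]
    return statistics.stdev(means)

small = std_error_of_means(5)    # small samples: means spread widely
large = std_error_of_means(100)  # large samples: means cluster tightly
print(f"Standard error with n=5:   {small:.2f}")
print(f"Standard error with n=100: {large:.2f}")
```

Running this, the n=5 means scatter far more widely than the n=100 means, which is exactly the "fatter tails" difference the two curves depict.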
3. This point about standard errors can be illustrated a different way. One statistical test is designed to see if a single sample mean is different from a population mean. A version of this test is the t-test for a single mean. The purpose of this t-test is to see if there is a significant difference between the sample mean and the population mean. The t-test formula looks like this:
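The formula itself does not appear to have survived in this copy of the lecture. The standard single-sample t-test, written out, is:

```latex
t = \frac{\bar{x} - \mu}{s / \sqrt{n}}
```

where \(\bar{x}\) is the sample mean, \(\mu\) the population mean, \(s\) the sample standard deviation, and \(n\) the sample size. Notice that \(n\) sits inside the denominator (the standard error): as the sample size grows, the standard error shrinks, so the same difference between the sample mean and the population mean produces a larger t value.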