Answer:
To determine the number of tests required to achieve a certain level of confidence with a given margin of error, we can use the formula for the confidence interval of the mean:
Margin of Error = (Z_(α/2) × σ) / √n
Where:
– Z_(α/2) is the z-score corresponding to the desired confidence level,
– σ is the standard deviation of the population,
– n is the sample size.
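As a quick sanity check, here is a minimal Python sketch of this margin-of-error formula (the function name margin_of_error is my own, purely illustrative):

    import math

    def margin_of_error(z, sigma, n):
        # Margin of Error = z * sigma / sqrt(n)
        return z * sigma / math.sqrt(n)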
We want to find the sample size (n) given the margin of error and the standard deviation.
Given that the allowable error in the yarn count is 2.56%, the mean count is 40’s Ne, and the standard deviation is 6’s Ne, we first convert the percentage error to Ne units:
Margin of Error (Ne) = 2.56% × Mean Count
Margin of Error (Ne) = 0.0256 × 40
Margin of Error (Ne) = 1.024 Ne
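In Python, this conversion is a one-liner (assuming, as above, that the 2.56% is taken relative to the nominal 40’s Ne mean count):

    relative_error = 0.0256   # 2.56% allowable error
    mean_count = 40           # nominal yarn count, Ne
    print(relative_error * mean_count)  # 1.024 Ne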
Next, we need the z-score corresponding to a 95% confidence level, which is approximately 1.96 (the two-sided critical value for α/2 = 0.025).
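If SciPy is available, this critical value can be computed rather than looked up in a table:

    from scipy.stats import norm
    z = norm.ppf(1 - 0.05 / 2)  # two-sided 95% interval, α/2 = 0.025
    print(round(z, 2))  # 1.96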
Now, let’s rearrange the formula to solve for sample size (n):
n = {(Z_(α/2) × σ) / (Margin of Error)}^2
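Here is a minimal Python sketch of this rearranged formula, rounding up to a whole number of tests (sample_size is an illustrative name of my own, not a standard function):

    import math

    def sample_size(z, sigma, margin_of_error):
        # n = (z * sigma / margin_of_error)^2, rounded up to a whole test
        return math.ceil((z * sigma / margin_of_error) ** 2)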
Substituting the given values:
n = {(1.96 × 6) / 1.024}^2
n = (11.484)^2
n ≈ 131.89
Since you can’t run a fraction of a test, round up to the nearest whole number. Therefore, you’ll need 132 tests to achieve a 95% confidence level with an error in yarn count of 2.56%.
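Plugging the values from above into the sample_size sketch reproduces this result, and feeding n = 132 back into the earlier margin_of_error sketch confirms the interval is as tight as required:

    print(sample_size(1.96, 6, 1.024))              # 132
    print(round(margin_of_error(1.96, 6, 132), 3))  # 1.024 Ne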