Why is the mean = 0 when calculating the confidence interval of a distribution using stats.norm?
I roughly understand how to calculate a confidence interval this way, but why does this code pass mean=0 to stats.norm?
import numpy as np
from scipy import stats

se = np.std(data) / np.sqrt(len(data))   # standard error of the mean
sample_mean = data.mean()
difference_means_distribution = stats.norm(0, se)   # <<<<< why mean=0 here?
lower, upper = (sample_mean + difference_means_distribution.ppf(0.025),
                sample_mean - difference_means_distribution.ppf(0.025))
Shouldn't it be stats.norm(sample_mean, se)?
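For reference, here is a self-contained comparison of the two parameterizations I ran; the synthetic data (the rng seed, loc=5.0, scale=2.0, size=100) is an assumption I added for illustration and is not from the original article:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)                  # assumed synthetic data, for illustration only
data = rng.normal(loc=5.0, scale=2.0, size=100)

se = np.std(data) / np.sqrt(len(data))          # standard error of the mean
sample_mean = data.mean()

# The article's form: a normal centered at 0, whose quantile is shifted by the sample mean.
centered = stats.norm(0, se)
lower0 = sample_mean + centered.ppf(0.025)      # ppf(0.025) is negative, so this is the lower bound
upper0 = sample_mean - centered.ppf(0.025)

# The form I expected: a normal centered directly at the sample mean.
shifted = stats.norm(sample_mean, se)
lower1, upper1 = shifted.ppf(0.025), shifted.ppf(0.975)

print(lower0, upper0)
print(lower1, upper1)

Both print the same interval, so the two forms appear numerically equivalent, but I would like to understand why the article prefers the 0-centered version.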
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow