Hypothesis Testing I: Prerequisites

15th July 2018 | Basics | Ajay Kumar N

Introduction

In our day-to-day routine we come across many questions such as:

  1. How many liters of water should be allotted on average to a household in a region?
  2. How much stock of apple juice should I order for my guests?
  3. By what amount should we increase credit card limits for a certain group of customers?
  4. Should we provide additional tuition for language courses to science students?

To figure out answers to these questions at the very basic level, you don’t really need any hardcore statistics knowledge. For example, let’s take the first and last questions from the list above.

To figure out how much water an average household in a region needs, we could simply ask every household. But that is easier said than done: there are too many households to cover, and the resources and time required would hardly justify the value of the answer. So what do we do instead? Rather than reaching out to each and every household, we can talk to a handful of them, or in other words take a small sample, and treat their average consumption as the general answer. We do need to be careful about which households we select for the sample. Water consumption needs may not be the same across all areas, and there will be differences between residential and commercial entities. Our sample should contain observations from all these different segments [strata in the population] in order to truly represent the entire region.

Taking up the next question of whether science students need additional tuition for language courses: the underlying thought here is that science students perform poorly in language courses compared to students from other streams. To figure out if that is the case we’ll collect student performance data and compare the average performance of students from the science and non-science streams. If science students are performing significantly worse than their counterparts, we’ll decide in favor of providing them with additional tuition.

The common theme in the strategy for solving both these problems was to collect data and then use it to verify or refute our claims, or to estimate the value of a parameter. What our methods lacked, though, was a rigorous statistical framework. We’ll build that as we progress through this post.

We’ll be discussing various concepts from here onwards which might seem slightly disconnected in the beginning, but everything will fall into place as we near the end. Starting with Population and Sample: we have already used these concepts above, and it’s time to formally introduce them.

Population & Samples

A population is a large collection of data points. This can be anything from the ages of all engineers who graduated in the last decade to the number of pages in every book published on statistics in the last century. If we want to figure out a parameter [any characteristic] value for this “population”, we can in theory measure each and every observation to find, say, the average value.

[Figure: population and sample]

As we witnessed earlier, this is rarely feasible. For all practical purposes, to estimate the value of a population parameter [such as the average, standard deviation, etc.] we work with a sample instead. These “sample” observations need to be chosen randomly in order to avoid personal bias. They should also represent all strata present within the population.

[Figure: stratified sampling]
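To make the idea concrete, here is a minimal sketch of stratified sampling in pandas. The households data below is entirely made up for illustration; only the sampling pattern matters.

import pandas as pd

# Hypothetical population: 800 residential and 200 commercial entities
households = pd.DataFrame({
    "segment": ["residential"] * 800 + ["commercial"] * 200,
    "water_usage": [150] * 800 + [900] * 200,  # liters per day (made-up numbers)
})

# Simple random sample: may over- or under-represent a segment by chance
random_sample = households.sample(frac=0.05, random_state=0)

# Stratified sample: draw 5% from each segment separately
stratified_sample = (
    households
    .groupby("segment", group_keys=False)
    .apply(lambda g: g.sample(frac=0.05, random_state=0))
)

print(random_sample["segment"].value_counts())
print(stratified_sample["segment"].value_counts())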

Estimates & Errors

The purpose of sampling from the population was to estimate a population parameter such as the average age of graduating engineers or the average number of pages in statistics books. We don’t really know the actual value of the population parameter; estimates give us an idea of what it might be. As your sample size grows bigger and bigger, the estimate gets closer to the real value of the population parameter.

Since these samples contain randomly chosen observations, each new sample will give you a different estimate of the population parameter. What this means is that an estimate from a sample will always contain error. The term error is not the same as mistake: errors are an inherent part of estimates by design. Hypothesis Testing is a framework to quantify these errors.
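As a rough illustration with simulated numbers [not data from this post], two random samples from the same population will typically give two slightly different estimates of its mean:

import numpy as np

np.random.seed(0)
population = np.random.normal(loc=35, scale=5, size=100000)  # e.g. simulated ages

sample_1 = np.random.choice(population, size=50, replace=False)
sample_2 = np.random.choice(population, size=50, replace=False)

# Each sample yields a slightly different estimate of the population mean
print(population.mean(), sample_1.mean(), sample_2.mean())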

Histograms, Probabilities, Cumulative Probabilities and Distributions

We have all seen those frequency bar charts at some point while working with Excel sheets. Let’s consider a dataset of used cars. The first few data points look like this:

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

%matplotlib inline
sns.set()
In [2]:
df = pd.read_csv("usedcars.csv")
df.head()
Out[2]:
year model price mileage color transmission
0 2011 SEL 21992 7413 Yellow AUTO
1 2011 SEL 20995 10926 Gray AUTO
2 2011 SEL 19995 7351 Silver AUTO
3 2011 SEL 17809 11613 Gray AUTO
4 2012 SE 17500 8367 White AUTO

We can convert this to frequency counts, which tell us which prices are the most common, and plot these counts as a bar plot.

In [3]:
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
# Bar plot of how often each distinct price occurs
df["price"].value_counts().plot(kind="bar", ax=ax)
Out[3]:
<matplotlib.axes._subplots.AxesSubplot at 0x144ac8bd6a0>

We can see, however, that this chart is not very informative: many frequencies are simply equal and many intervals remain blank. It doesn’t give us a very good idea of which kinds of values are more frequent. Since we have a small number of data points, we can bucket them into classes. Let’s see what the bar chart looks like once we do this.

In [4]:
df["price"].describe()
Out[4]:
count      150.000000
mean     12961.933333
std       3122.481735
min       3800.000000
25%      10995.000000
50%      13591.500000
75%      14904.500000
max      21992.000000
Name: price, dtype: float64
In [5]:
df["price"].plot(kind="hist")
Out[5]:
<matplotlib.axes._subplots.AxesSubplot at 0x144acbf6cc0>
In [6]:
# Divide by 5000 to limit the number of price categories
df["price_cat"] = np.ceil(df["price"] / 5000)
# Label those above 5 as 5
df["price_cat"].where(df["price_cat"] < 5, 5.0, inplace=True)
In [7]:
df["price_cat"].plot(kind="hist")
Out[7]:
<matplotlib.axes._subplots.AxesSubplot at 0x144ae93b320>

Let’s convert these counts to frequency percentages by dividing them by the total count. We can use value_counts() with normalize=True.

In [8]:
frequency_pct = df["price_cat"].value_counts(normalize=True)
frequency_pct
Out[8]:
3.0    0.613333
2.0    0.180000
4.0    0.180000
1.0    0.013333
5.0    0.013333
Name: price_cat, dtype: float64
In [9]:
frequency_pct.plot(kind="bar")
Out[9]:
<matplotlib.axes._subplots.AxesSubplot at 0x144ae93d400>

By looking at this frequency percent chart, and assuming that this sample represents the entire population of used car prices, I can say that about 61% of used car prices fall in “bucket 3”. Or in other words, if I randomly pick a used car’s price, the probability of it being in bucket 3 is about 0.61.

If you ask me what the probability is that a used car’s price falls between bucket 2 and bucket 4, I will simply add all the probabilities associated with “bucket 2” through “bucket 4”.

$$P(2 \leqslant Price \leqslant 4) = P(Price = 2) + P(Price = 3) + P(Price = 4)$$

$$P(2 \leqslant Price \leqslant 4) = 0.180000 + 0.613333 + 0.180000$$

$$P(2 \leqslant Price \leqslant 4) = 0.973333$$
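This isn’t part of the original notebook output, but the same number can be read straight off the frequency_pct series computed above:

# Sum the individual bucket probabilities for buckets 2 through 4
p_2_to_4 = frequency_pct.loc[[2.0, 3.0, 4.0]].sum()
print(p_2_to_4)  # ~0.9733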

This shows that we can compute the probability of occurrence in an interval from cumulative probabilities [addition of probabilities]. Also, by collecting more data we can make our “classes” finer and finer, which gives us an idea about the frequency of occurrence of values at a much finer level. We can keep collecting data until we have practically infinite data points. Now imagine that we drew a curve joining the tops of all these fine bars. We could still reason about probabilities by looking at the point on the curve associated with a given value of $x$.

In [10]:
df["price_cat"].plot(kind="hist")
Out[10]:
<matplotlib.axes._subplots.AxesSubplot at 0x144ae0db358>
In [11]:
import scipy.stats as stats
# Sort prices so the fitted curve can be drawn smoothly from left to right
price_sorted = np.sort(df["price"])
fit = stats.norm.pdf(price_sorted, np.mean(price_sorted), np.std(price_sorted))
In [12]:
plt.plot(df["price"], fit)
plt.hist(df["price"], normed=True) 
Out[12]:
(array([7.32922897e-06, 2.93169159e-05, 4.39753738e-05, 6.22984462e-05,
        6.59630607e-05, 1.90559953e-04, 9.89445910e-05, 4.03107593e-05,
        3.66461448e-06, 7.32922897e-06]),
 array([ 3800. ,  5619.2,  7438.4,  9257.6, 11076.8, 12896. , 14715.2,
        16534.4, 18353.6, 20172.8, 21992. ]),
 <a list of 10 Patch objects>)

Let’s say this curve is represented by $y=f(x)$: if you pass in a value of $x$, it gives the probability density, a measure of how likely values near $x$ are to occur. Now, instead of adding up individual probabilities of occurrence to get interval probabilities, we can use this:

$$P(a \leq x \leq b) = \int_a^b f(x)dx$$

[Integration is nothing but a summation of infinitely small elements. No, I’m not going to ask you to learn integration now; you just need to know what goes on at the back end. Eventually all of this will be done by software for you.]

Another property it will follow is that the density is never negative:

$$f(x) \geq 0$$

If you sum probabilities for all possible cases it’ll be equal to 1.

$$\int^{+\infty}_{-\infty} f(x)dx = 1$$
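As a quick illustration [a made-up density, not one from this post], take $f(x) = 2x$ on $[0, 1]$. Integrating it over an interval gives the probability of that interval, and integrating over the whole support gives 1:

from scipy.integrate import quad

f = lambda x: 2 * x                  # a valid density on [0, 1]
p_interval, _ = quad(f, 0.2, 0.5)    # P(0.2 <= x <= 0.5)
total, _ = quad(f, 0.0, 1.0)         # integral over the full support
print(p_interval, total)             # 0.21, 1.0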

This $f(x)$ is nothing but the distribution curve of the population. By looking at this curve you can get an idea about the probability of occurrence of values from your population. The normal distribution is one such curve. It has the following equation:

$$f(x) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$

Here $\mu$ and $\sigma$ are the population mean and population standard deviation respectively. They are the parameters of this function: for different values of $\mu$ and $\sigma$, you’ll have different normal distributions. This is similar to the general equation of a line, $y=mx+c$; for different values of the slope ($m$) and intercept ($c$), you’ll have different lines. Don’t let this weird-looking equation intimidate you. It’s just the equation associated with a particular distribution, and it satisfies the conditions given above for such a function.
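If you want to see the effect of the parameters yourself, here is a small sketch [not part of the original notebook] that plots the density for a few $(\mu, \sigma)$ pairs:

import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

x = np.linspace(-10, 10, 500)
for mu, sigma in [(0, 1), (0, 2), (3, 1)]:
    # Each (mu, sigma) pair gives a different normal curve
    plt.plot(x, stats.norm.pdf(x, loc=mu, scale=sigma),
             label=r"$\mu$={}, $\sigma$={}".format(mu, sigma))
plt.legend()
plt.show()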

Standardization

We are jumping to another topic; have patience, things will make sense eventually. Consider data points for a variable $X$: $x_1$, $x_2$, $x_3$ .... $x_n$. For these data points we can calculate the average $\overline{X}$ and standard deviation $S_x$ as follows.

$$\overline{X}=\frac{\sum_{i=1}^{n} x_i}{n}$$ and $$S_x = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \overline{X})^2}{n}}$$

Let’s say we now create another variable $Y$ which takes values $y_1$, $y_2$, $y_3$ .... $y_n$ such that:

$$y_i=\frac{x_i - \overline{X}}{S_x}$$

Let’s calculate the mean and standard deviation of this new variable $Y$.

$$\overline{Y}=\frac{\sum_{i=1}^{n}y_i}{n}$$

Putting in $y_i$ in terms of $x_i$.

$$\overline{Y}=\frac{\sum_{i=1}^{n}(x_i - \overline{X})}{S_xn}$$

but

$$\sum_{i=1}^{n}(x_i - \overline{X})=0$$

Therefore,

$$\overline{Y}=0$$

Now lets calculate standard deviation of $Y$.

$$S_y = \sqrt{\frac{\sum_{i=1}^{n}(y_i - \overline{Y})^2}{n}}$$

We have already found that $\overline{Y}$ is zero. Substituting $y_i$ in terms of $x_i$ in the equation above:

$$S_y = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \overline{X})^2}{S_x^2 \times n}}$$

$$S_y = \frac{1}{S_x} \times \sqrt{\frac{\sum_{i=1}^{n}(x_i - \overline{X})^2}{n}}$$

We know that the factor on the right side is the same as the formula for $S_x$. Substituting that gives us:

$$S_y=\frac{S_x}{S_x}=1$$

These two results, $\overline{Y} = 0$ and $S_y = 1$, are very important. Observe that we did not use any hard-coded numbers here: $X$ can be any variable, and if it is standardized in the same manner [subtract the mean and divide by the standard deviation] then the resulting variable will have mean zero and standard deviation 1. Also, if we are given the standardized values $y_i$ along with the mean $\overline{X}$ and standard deviation $S_x$ of the non-standardized variable $X$, we can recover the values of $x_i$ using: $x_i = S_x \times y_i + \overline{X}$
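A quick numerical check of these two results, using arbitrary simulated data:

import numpy as np

np.random.seed(0)
x = np.random.normal(loc=50, scale=12, size=1000)  # any variable X will do
y = (x - x.mean()) / x.std()                        # standardize

print(y.mean())  # ~0
print(y.std())   # 1.0

# Recover the original values from the standardized ones
x_back = y * x.std() + x.mean()
print(np.allclose(x_back, x))  # True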

Standard Normal Distribution

We have already seen the general equation of the normal distribution with parameters $\mu$ and $\sigma$.

$$f(x) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$

The Standard Normal Distribution is the specific normal distribution with $\mu = 0$ and $\sigma = 1$:

$$f(x) = \frac{1}{\sqrt{2\pi}}e^\frac{-x^2}{2}$$

Important Conclusions from Standard Normal Distribution

In the literature elsewhere you’ll find the standard normal variable being represented as $Z$; we’ll follow the same notation. The following result is self-explanatory, given that the mean of the standard normal distribution is 0 and that normal distributions in general are symmetric: $P(Z \geqslant 0) = P(Z \leqslant 0) = 0.50$

A few other results, which can also be broken into symmetrical halves as above, are as follows:

$$P(-1 \leqslant Z \leqslant 1) = 0.682$$

$$P(-2 \leqslant Z \leqslant 2) = 0.954$$

$$P(-3 \leqslant Z \leqslant 3) = 0.997$$

Since the distribution is symmetric, as mentioned before, one-sided probabilities as well as remainder probabilities can be easily calculated. For example, using the results above you can calculate:

$$P(0 \leqslant Z \leqslant 1) = \frac{0.682}{2} = 0.341$$

$$P(Z \geqslant 2) = 1 - P(Z \leqslant 0) - P(0 \leqslant Z \leqslant 2) = 1 - 0.5 - \frac{0.954}{2} = 0.023$$
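These numbers can be verified with scipy’s standard normal CDF; this snippet is a small check added here, not part of the original notebook:

import scipy.stats as stats

print(stats.norm.cdf(1) - stats.norm.cdf(-1))  # ~0.683, P(-1 <= Z <= 1)
print(stats.norm.cdf(2) - stats.norm.cdf(-2))  # ~0.954, P(-2 <= Z <= 2)
print(stats.norm.cdf(1) - stats.norm.cdf(0))   # ~0.341, P(0 <= Z <= 1)
print(1 - stats.norm.cdf(2))                   # ~0.023, P(Z >= 2)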

Confidence Intervals

I’m listing a few more results from the standard normal distribution, similar to the ones listed above.

$$P(-1.645 \leqslant Z \leqslant 1.645) = 0.90$$

$$P(-1.96 \leqslant Z \leqslant 1.96) = 0.95$$

$$P(-2.576 \leqslant Z \leqslant 2.576) = 0.99$$

These probability results are straightforward. One way to interpret any of them is: if you randomly pick an observation from a population which follows the standard normal distribution, there is a 90% chance/probability that it’ll fall in the interval $[-1.645, 1.645]$. Formally, $[-1.645, 1.645]$ is called the 90% confidence interval for the standard normal distribution. Of course, there can be other arbitrary intervals $[a,b]$ such that $P(a \leqslant Z \leqslant b) = 0.90$.

But there is only one 90% interval which is symmetric about the mean of the distribution. These symmetric intervals are called Confidence Intervals, often written as CI for short.
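The cut-off values 1.645, 1.96 and 2.576 can themselves be recovered from scipy, as a quick check:

import scipy.stats as stats

for conf in [0.90, 0.95, 0.99]:
    # Symmetric interval about 0 containing `conf` of the probability mass
    lower, upper = stats.norm.interval(conf)
    print(conf, round(lower, 3), round(upper, 3))
# 0.9  -1.645 1.645
# 0.95 -1.96  1.96
# 0.99 -2.576 2.576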

Extrapolating Results for General Normal Distribution

From our discussion of standardization we can say that if we have a variable $X$ following a general normal distribution, $X \sim \mathcal{N}(\mu, \sigma^2)$, we can standardize it as $Z = \frac{X - \mu}{\sigma}$.

Here $Z$ will follow a normal distribution with mean 0 and standard deviation 1, which is the standard normal distribution. So given any of the above probability results we can do the following:

$$P(-1.645 \leqslant Z \leqslant 1.645) = 0.90$$

$$P(-1.645 \leqslant \frac{X - \mu}{\sigma} \leqslant 1.645) = 0.90$$

$$P(\mu - 1.645\,\sigma \leqslant X \leqslant \mu + 1.645\,\sigma) = 0.90$$

Let’s assume that for $X$, $\mu = 10$ and $\sigma = 3$. Putting these values of $\mu$ and $\sigma$ into the above, we get

$$P(5.065 \leqslant X \leqslant 14.935) = 0.90$$

This tells us that the 90% confidence interval for $X \sim \mathcal{N}(10, 9)$ is $[5.065, 14.935]$. If you are wondering why this CI doesn’t look symmetric: it is, about the mean 10. It doesn’t make sense to pre-calculate these CI results for the infinitely many possible normal distributions; using the technique shown above we can calculate them for any normal distribution, with pre-calculated results for just the standard normal distribution. But there are many other possible distributions out there, so why are we so keen on exploring the normal distribution? The answer lies in the Central Limit Theorem.
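The same interval can be obtained directly from scipy by passing the mean and standard deviation [a quick check, not part of the original notebook]:

import scipy.stats as stats

lower, upper = stats.norm.interval(0.90, loc=10, scale=3)
print(lower, upper)  # ~5.07, ~14.93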

Central Limit Theorem

Averages of samples [of sufficiently large size] follow a normal distribution with mean $\mu$ and variance $\frac{\sigma^2}{n}$, where $(\mu, \sigma^2)$ are the mean and variance of the population and $n$ is the sample size. If the sample size is small, the normal distribution is replaced by the t-distribution. [This holds irrespective of the distribution followed by the variable’s values.] This statement is the backbone of what we are going to see in the hypothesis testing framework. Let’s try to understand what it is saying with some examples.

In [13]:
import numpy as np
import pandas as pd

np.random.seed(1)

X = np.random.beta(a=2, b=5, size=20000)
df = pd.DataFrame()
df["X"] = X
df.head()
Out[13]:
X
0 0.578971
1 0.283641
2 0.613243
3 0.746286
4 0.574108
In [14]:
ax = sns.distplot(df["X"], kde=True)
ax.set(xlabel='X', ylabel='Frequency Percent', title="$\mu$ {:.2f}, $\sigma$ {:.2f}".format(df["X"].mean(), df["X"].std()))
Out[14]:
[Text(0,0.5,'Frequency Percent'),
 Text(0.5,0,'X'),
 Text(0.5,1,'$\\mu$ 0.29, $\\sigma$ 0.16')]

Clearly the distribution of the variable $X$ here doesn’t look normal or symmetric. What we are going to do next is draw 10000 samples of size 100 each from this population, plot a histogram of their averages, and see whether that looks like a normal distribution.

In [15]:
df_new = pd.DataFrame(index=list(range(10000)), columns=["X"])
for i in range(10000):
    sampled = df["X"].sample(100)
    sampled_mean = sampled.mean()
    df_new.iloc[i] = sampled_mean
    
df_new["X"] = df_new["X"].astype(float)
df_new.head()
Out[15]:
X
0 0.282529
1 0.283714
2 0.268907
3 0.300482
4 0.289315
In [16]:
ax = sns.distplot(df_new["X"])
ax.set(xlabel='Sample Averages', ylabel='Frequency Percent', title="$\mu$ {:.2f}, $\sigma$ {:.2f}".format(df_new["X"].mean(), df_new["X"].std()))
Out[16]:
[Text(0,0.5,'Frequency Percent'),
 Text(0.5,0,'Sample Averages'),
 Text(0.5,1,'$\\mu$ 0.29, $\\sigma$ 0.02')]

And, surprisingly, it does. Let’s try this with another, more irregular distribution and see if the sample averages still follow a normal distribution.

In [17]:
t = np.random.uniform(size=20000)
k = np.random.uniform(size=20000)
df_3 = pd.DataFrame()
X = np.where(k > 0.5, np.sin(t), np.cos(t))
df_3["X"] = X
In [18]:
ax = sns.distplot(df_3["X"])
ax.set(xlabel='X', ylabel='Frequency Percent', title="$\mu$ {:.2f}, $\sigma$ {:.2f}".format(df_3["X"].mean(), df_3["X"].std()))
Out[18]:
[Text(0,0.5,'Frequency Percent'),
 Text(0.5,0,'X'),
 Text(0.5,1,'$\\mu$ 0.65, $\\sigma$ 0.28')]

This again is a distribution that is nowhere close to a normal distribution; let’s see how the sample averages behave in this case.

In [19]:
df_4 = pd.DataFrame(index=list(range(10000)), columns=["X"])
for i in range(10000):
    sampled = df_3["X"].sample(100)
    sampled_mean = sampled.mean(skipna=True)
    df_4.iloc[i] = sampled_mean
    
df_4["X"] = df_4["X"].astype(float)
df_4.head()
Out[19]:
X
0 0.677860
1 0.672585
2 0.639776
3 0.621896
4 0.688103
In [20]:
ax = sns.distplot(df_4["X"])
ax.set(xlabel='Sample Averages', ylabel='Frequency Percent', title="$\mu$ {:.2f}, $\sigma$ {:.2f}".format(df_4["X"].mean(), df_4["X"].std()))
Out[20]:
[Text(0,0.5,'Frequency Percent'),
 Text(0.5,0,'Sample Averages'),
 Text(0.5,1,'$\\mu$ 0.65, $\\sigma$ 0.03')]

And again, there it is: the sample averages seem to follow a normal distribution in this case as well. Without going into the mathematical details of the Central Limit Theorem, our takeaway from here is that irrespective of the underlying population distribution, sample averages follow a normal distribution.

All the concepts we saw above are prerequisites for Hypothesis Testing, which we will cover in the next article. There, we will connect everything discussed above and present a formalized version of Hypothesis Testing. Until then, stay tuned.

