
Hypothesis Testing

If you have read my previous article on statistics, you may have come across the term “Hypothesis Testing” under Inferential Statistics.

Hypothesis testing is a statistical technique in which an analyst tests an assumption about a population parameter. In simpler terms, it is a statistical test that determines whether or not a hypothesis stated for a sample of data also holds true for the full population. It helps in comparing two or more groups, and it provides evidence for or against a hypothesis about the data.

How Does It Work?

In hypothesis testing, a data analyst examines a random sample drawn from the population being studied. Based on that sample, the analyst tests two competing hypotheses, which I’ll go over in more detail later in this article.

Terms That You Should Be Familiar With Before Getting Into Hypothesis Testing

Population

A population is a group of items that we want to examine or evaluate. It refers to the total number of observations that are possible. For example, if we are researching the availability of hospitals in a particular location, the population will consist of all of the hospitals in that area.

Parameter

A parameter is any summary figure, such as an average or percentage, that describes the whole population. Two common population parameters are the population mean (μ) and the population proportion (p). For example, suppose we wish to know the average height of the Chinese population. That average would be a parameter because it reveals information about the entire population of China.

Statistic

A statistic is any summary figure, such as an average or percentage, computed from a sample. It is used to estimate the corresponding population parameter.
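
For instance, the short Python sketch below (using NumPy and made-up income figures) contrasts a parameter computed from an entire simulated population with the corresponding statistic computed from a random sample of it.

  import numpy as np

  # A hypothetical "population": yearly incomes (in thousands) of 100,000 people.
  rng = np.random.default_rng(42)
  population = rng.normal(loc=50, scale=12, size=100_000)

  # Parameter: a summary of the whole population (usually unknown in practice).
  population_mean = population.mean()

  # Statistic: the same summary computed from a random sample of 200 people.
  sample = rng.choice(population, size=200, replace=False)
  sample_mean = sample.mean()

  print(f"Population mean (parameter): {population_mean:.2f}")
  print(f"Sample mean (statistic):     {sample_mean:.2f}")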

Sampling Distribution

A sampling distribution displays every possible result a statistic can obtain from every potential sample from a population. It also shows how frequently each result occurs.
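
To make this concrete, here is a small illustrative simulation (the population, sample size, and number of samples are arbitrary choices) that approximates the sampling distribution of the sample mean by drawing many samples and recording each sample’s mean.

  import numpy as np

  rng = np.random.default_rng(0)
  population = rng.exponential(scale=10, size=100_000)  # a skewed population

  # Draw many samples of size 50 and record each sample mean.
  sample_means = np.array([
      rng.choice(population, size=50, replace=False).mean()
      for _ in range(5_000)
  ])

  # The collection of sample means approximates the sampling distribution.
  print(f"Mean of the sample means: {sample_means.mean():.2f}")
  print(f"Spread of the sample means (standard error estimate): {sample_means.std(ddof=1):.2f}")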

Standard Error

The standard error (SE) of a statistic is the standard deviation of its sampling distribution. It tells us how much difference we can expect between a sample’s mean and the population mean. It helps describe the features of sample data and interpret the results of statistical analysis.
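
As a rough illustration, the snippet below estimates the standard error of the sample mean with the usual formula SE = s / sqrt(n), where s is the sample standard deviation and n is the sample size; the data values are made up for the example.

  import numpy as np

  sample = np.array([4.1, 5.3, 4.8, 6.0, 5.1, 4.7, 5.5, 4.9])

  n = len(sample)
  sample_std = sample.std(ddof=1)           # sample standard deviation s
  standard_error = sample_std / np.sqrt(n)  # SE of the sample mean = s / sqrt(n)

  print(f"Sample mean:    {sample.mean():.3f}")
  print(f"Standard error: {standard_error:.3f}")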

Two Important Types of Hypothesis for Hypothesis Testing


Null Hypothesis

The null hypothesis is a general claim that there is no relationship between two variables. One can test, evaluate, and even reject a null hypothesis. The symbol H₀ represents a null hypothesis; we can pronounce H₀ as H-null, H-zero, or H-naught. Because the null hypothesis states that nothing changes, it is always written with an equality sign (‘=’, ‘≤’, or ‘≥’). If we fail to reject the null hypothesis, no change from the current assumption is needed.

Alternative Hypothesis

The alternative hypothesis is also called the research hypothesis. It indicates that two variables have a connection, implying that one influences the other. The symbol H₁ represents an alternative hypothesis. This hypothesis is essentially the alternative to the null hypothesis. An example of an alternative hypothesis is the claim that one of two groups being compared is superior or inferior to the other.

Different Types of Statistical Hypothesis Tests

One-tailed

A one-tailed test is a statistical hypothesis test in which the alternative hypothesis points in only one direction: it states that the sample mean is either greater than or less than the population mean, but not both. When performing a one-tailed test, the analyst is looking for evidence of an effect in only one direction. Before conducting a one-tailed test, the analyst must state a null hypothesis, an alternative hypothesis, and a significance level against which the probability value (p-value) will be compared.

Two-tailed

We use a two-tailed test of the null hypothesis when the alternative hypothesis does not specify a direction. In other words, a two-tailed test considers both tails of the distribution: the analyst is looking for evidence of an effect in either direction, not just one.
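
To illustrate the difference, the sketch below runs both versions of a one-sample t-test on the same made-up data. It assumes a reasonably recent version of SciPy in which ttest_1samp accepts the alternative argument; the hypothesized mean of 50 is an arbitrary example value.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(1)
  sample = rng.normal(loc=52, scale=10, size=40)  # hypothetical measurements

  # Two-tailed: H1 is "the mean differs from 50" (either direction).
  two_tailed = stats.ttest_1samp(sample, popmean=50, alternative='two-sided')

  # One-tailed: H1 is "the mean is greater than 50" (one direction only).
  one_tailed = stats.ttest_1samp(sample, popmean=50, alternative='greater')

  print(f"Two-tailed p-value: {two_tailed.pvalue:.4f}")
  print(f"One-tailed p-value: {one_tailed.pvalue:.4f}")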

Test Statistic

A test statistic is a numerical value we get from a statistical test of a hypothesis. It summarizes how far the sample deviates from what the null hypothesis predicts, which is important in deciding whether or not to reject the null hypothesis. If the test statistic is large (far from the value expected under the null hypothesis), the p-value will be low, increasing the likelihood of rejecting the null hypothesis. Similarly, if the test statistic is small, the p-value will be high, making it more likely that we fail to reject the null hypothesis.
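
As a simple illustration, the snippet below computes the one-sample t-statistic by hand, t = (sample mean − hypothesized mean) / (s / sqrt(n)), using made-up measurements and an example hypothesized mean of 50.

  import numpy as np

  sample = np.array([51.2, 48.9, 53.4, 50.8, 52.1, 49.5, 54.0, 51.7])
  hypothesized_mean = 50  # value claimed by the null hypothesis

  n = len(sample)
  t_statistic = (sample.mean() - hypothesized_mean) / (sample.std(ddof=1) / np.sqrt(n))
  print(f"t = {t_statistic:.3f}")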

P-value

The p-value is the probability of obtaining test results at least as extreme as those observed, assuming that the null hypothesis is true. It measures how likely it is that the observed discrepancy occurred by chance alone. P-value tables or statistical software help in calculating p-values. The lower the p-value, the stronger the evidence against the null hypothesis.
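
For example, given a test statistic from a t-test, the p-value can be read off the t-distribution as sketched below; the statistic and degrees of freedom here are illustrative numbers of roughly the size produced by the previous sketch.

  from scipy import stats

  t_statistic = 2.34   # hypothetical test statistic from a one-sample t-test
  df = 7               # degrees of freedom (n - 1 for a sample of size 8)

  # Two-tailed p-value: probability of a t-value at least this extreme
  # in either direction, assuming the null hypothesis is true.
  p_value = 2 * stats.t.sf(abs(t_statistic), df)
  print(f"p-value = {p_value:.4f}")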

Two Types of Errors

Assume you decide to get tested for Coronavirus since you’ve come across a few symptoms of the condition.

There are two errors that could possibly occur:

  1. The test result concludes that you have coronavirus when you actually don’t.
  2. The test result concludes that you don’t have coronavirus when you actually do.

Type I

A Type I error occurs when a false positive result is obtained. For example, you may conclude that the medicine you took improved your health when, in reality, it did not; factors other than the medicine improved your health.
This error occurs when the null hypothesis is rejected even though it is actually true. It amounts to believing that results are statistically significant when they arose purely by chance. The significance level, or alpha (α), is the probability of making a Type I error. The significance level is often set at 0.05, or 5%. This means that if the null hypothesis is true, there is a 5% chance of rejecting it anyway.
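
One quick way to see what that 5% means is to simulate many experiments in which the null hypothesis is true and count how often it is (wrongly) rejected. The simulation below is a rough sketch with arbitrary sample sizes, using SciPy’s one-sample t-test.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(7)
  alpha = 0.05
  false_positives = 0
  trials = 10_000

  # The null hypothesis is true in every trial: the population mean really is 50.
  for _ in range(trials):
      sample = rng.normal(loc=50, scale=10, size=30)
      _, p_value = stats.ttest_1samp(sample, popmean=50)
      if p_value < alpha:          # we (wrongly) reject a true null hypothesis
          false_positives += 1

  print(f"Observed Type I error rate: {false_positives / trials:.3f}")  # close to alpha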

Type II

A Type II error occurs when a false negative result is obtained: the null hypothesis is not rejected even though it is actually false. Beta (β) denotes the likelihood of committing a Type II error. For example, you may conclude that the medication you took had no effect on your health when, in fact, it did. In other words, a Type II error means failing to recognize an effect when one exists. The probability of this error is inversely related to a study’s statistical power: the greater the statistical power, the smaller the likelihood of committing a Type II error.
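
Similarly, the Type II error rate (and hence statistical power) can be estimated by simulating experiments in which the null hypothesis is false. The sketch below uses made-up population values and an arbitrary effect size.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(7)
  alpha = 0.05
  misses = 0
  trials = 10_000

  # The null hypothesis (mean = 50) is false: the true mean is 53.
  for _ in range(trials):
      sample = rng.normal(loc=53, scale=10, size=30)
      _, p_value = stats.ttest_1samp(sample, popmean=50)
      if p_value >= alpha:         # we fail to detect the real effect
          misses += 1

  beta = misses / trials
  print(f"Estimated Type II error rate (beta): {beta:.3f}")
  print(f"Estimated statistical power (1 - beta): {1 - beta:.3f}")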

Different Types of Hypothesis for Hypothesis Testing

Different forms of hypotheses used in data sampling help determine whether the examined samples support or contradict a claim. We have already discussed both the null and alternative hypotheses.

Other types of hypotheses include the following:

Non-Directional Hypothesis

The non-directional hypothesis states that there is no specified direction to the relationship between two variables. It simply indicates that the two variables are linked, without saying which variable influences the other, because no direction of effect is stated.

Directional Hypothesis

In contrast, the Directional hypothesis emphasizes the direction of effect of the connection that exists between two variables. Here, we know which variable influences which.

Statistical Hypothesis

A statistical hypothesis is a hypothesis that can be tested statistically using sampled data and statistical methods.

Steps to Conducting Hypothesis Testing


The following are the steps:

Generate the Hypotheses

Stating both the null and alternative hypotheses is the first step. It lays the groundwork for hypothesis testing. These hypotheses are important because they kick off the testing procedure, in which the analyst works with data samples. This analysis helps in determining whether to reject or fail to reject the null hypothesis.

Choose an Appropriate Test

After you’ve established your hypothesis, the next step is to devise a testing strategy. This involves gathering data samples and selecting which statistical approach should be used.

Evaluate Data Samples

The third stage is to examine the data samples in order to extract a pattern from them.
An analyst decides the following when examining the samples:

  • Level of Significance: We already know that the level of significance is the probability of a Type I error. It’s critical to strike a balance between the two types of errors.
  • Testing Technique: The testing technique depends on the type of sampling distribution and on the test statistic used, which together drive the hypothesis test.
  • P-value: To reach a conclusion, an analyst compiles information on whether the p-value is high or low.

Interpret the Outcomes

The interpretation of outcomes from the study of data samples indicates whether the alternative hypothesis should be supported or rejected.
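
Putting the four steps together, here is a compact, hypothetical example: the data values, the claimed mean of 30 minutes, and the 0.05 significance level are all made up for illustration.

  import numpy as np
  from scipy import stats

  # Step 1 - Generate the hypotheses:
  #   H0: the average delivery time is 30 minutes.
  #   H1: the average delivery time is not 30 minutes (two-tailed).

  # Step 2 - Choose an appropriate test: a one-sample t-test,
  # since we compare one sample mean against a claimed population mean.
  deliveries = np.array([31.5, 29.8, 34.2, 32.0, 30.9, 33.1, 28.7, 35.4, 31.2, 32.8])
  alpha = 0.05

  # Step 3 - Evaluate the data sample.
  t_statistic, p_value = stats.ttest_1samp(deliveries, popmean=30)

  # Step 4 - Interpret the outcome.
  print(f"t = {t_statistic:.3f}, p = {p_value:.4f}")
  if p_value < alpha:
      print("Reject H0: the data support the alternative hypothesis.")
  else:
      print("Fail to reject H0: the data do not provide enough evidence.")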
