Multiple Group Tests
Tests for comparing more than two groups simultaneously.
F-Tests
FIndependentTest
Bases: BaseTest
Performs a custom F-test (ANOVA-style) for comparing multiple independent groups.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Source code in aboba/tests/multiple/f_test.py
__init__
Independent F-test for comparing variances between groups.
This test compares the variances of two or more independent groups to determine if there are statistically significant differences between them.
| PARAMETER | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Examples:
```python
import pandas as pd
import numpy as np
from aboba.tests.multiple.f_test import FIndependentTest

# Create sample data
np.random.seed(42)
group1 = pd.DataFrame({'target': np.random.normal(10, 1, 50)})
group2 = pd.DataFrame({'target': np.random.normal(10, 2, 50)})
group3 = pd.DataFrame({'target': np.random.normal(10, 1.5, 50)})

# Perform the test
test = FIndependentTest(value_column='target')
result = test.test([group1, group2, group3], {})
print(f"P-value: {result.pvalue:.4f}")
```
Source code in aboba/tests/multiple/f_test.py
test
Perform the independent F-test on the provided groups.
| PARAMETER | DESCRIPTION |
|---|---|
groups
|
List of DataFrames representing the groups to compare.
TYPE:
|
artefacts
|
Dictionary to store additional results.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
TestResult
|
Object containing the p-value.
TYPE:
|
Source code in aboba/tests/multiple/f_test.py
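The source is not expanded here, but the classical one-way F statistic that an ANOVA-style independent test computes can be sketched as follows; `scipy.stats.f_oneway` is used only as a cross-check, and the group data are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
groups = [rng.normal(10, 2, 50), rng.normal(12, 2, 50), rng.normal(11, 2, 50)]

k = len(groups)                                  # number of groups
n = sum(len(g) for g in groups)                  # total observations
grand_mean = np.concatenate(groups).mean()

# Between-group sum of squares (k - 1 degrees of freedom)
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group sum of squares (n - k degrees of freedom)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
pvalue = stats.f.sf(f_stat, k - 1, n - k)

f_ref, p_ref = stats.f_oneway(*groups)
assert np.isclose(f_stat, f_ref) and np.isclose(pvalue, p_ref)
```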
FRelatedTest
Bases: BaseTest
Performs a custom F-test for comparing multiple related (paired) groups, akin to repeated-measures ANOVA.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Source code in aboba/tests/multiple/f_test.py
__init__
Related (paired) F-test for comparing variances between groups.
This test compares the variances of two or more related groups (repeated measures) to determine if there are statistically significant differences between them.
| PARAMETER | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Examples:
```python
import pandas as pd
import numpy as np
from aboba.tests.multiple.f_test import FRelatedTest

# Create sample paired data
np.random.seed(42)
subjects = 30
treatments = 3
data = []
for i in range(subjects):
    base = np.random.normal(10, 2)
    for j in range(treatments):
        data.append({
            'subject': i,
            'treatment': j,
            'target': base + np.random.normal(0, 0.5) + j * 0.5
        })
df = pd.DataFrame(data)

# Split into groups
groups = [df[df['treatment'] == i][['target']] for i in range(treatments)]

# Perform the test
test = FRelatedTest(value_column='target')
result = test.test(groups, {})
print(f"P-value: {result.pvalue:.4f}")
```
Source code in aboba/tests/multiple/f_test.py
test
Executes an F-test for multiple related groups by computing an overall between-group and within-subject variability measure.
| PARAMETER | DESCRIPTION |
|---|---|
| `groups` | A list of DataFrames, each representing a group/sample. |
| `artefacts` | A dictionary for storing or retrieving additional test information. |

| RETURNS | DESCRIPTION |
|---|---|
| `TestResult` | A `TestResult` object containing the computed p-value. |
Source code in aboba/tests/multiple/f_test.py
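The description above — treatment variability measured against within-subject error — matches the standard repeated-measures F statistic, which can be sketched as follows. This is a sketch of the textbook procedure, not necessarily the class's exact implementation, and the simulated data are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, k = 30, 3
base = rng.normal(10, 2, n_subjects)
# rows = subjects, columns = treatments (paired measurements)
data = base[:, None] + rng.normal(0, 0.5, (n_subjects, k)) + 0.5 * np.arange(k)

grand = data.mean()
ss_total = ((data - grand) ** 2).sum()
# treatment (between-condition) sum of squares
ss_treat = n_subjects * ((data.mean(axis=0) - grand) ** 2).sum()
# between-subject sum of squares, removed from the error term
ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
ss_error = ss_total - ss_treat - ss_subj

df_treat, df_error = k - 1, (k - 1) * (n_subjects - 1)
f_stat = (ss_treat / df_treat) / (ss_error / df_error)
pvalue = stats.f.sf(f_stat, df_treat, df_error)
print(f"F = {f_stat:.3f}, p-value = {pvalue:.4f}")
```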
FOneWayIndependentTest
Bases: BaseTest
Performs a one-way ANOVA test using SciPy's built-in f_oneway function
for multiple independent groups.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Source code in aboba/tests/multiple/f_test.py
test
Executes a one-way ANOVA on the provided groups using scipy.stats.f_oneway.
| PARAMETER | DESCRIPTION |
|---|---|
| `groups` | A list of DataFrames, each representing a group/sample. |
| `artefacts` | A dictionary for storing or retrieving additional test information. |

| RETURNS | DESCRIPTION |
|---|---|
| `TestResult` | A `TestResult` object containing the computed p-value. |
Source code in aboba/tests/multiple/f_test.py
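This class has no Examples section, but since it delegates to SciPy's `f_oneway`, the underlying call can be exercised directly (the group data below are illustrative):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
group1 = rng.normal(10, 2, 50)
group2 = rng.normal(12, 2, 50)
group3 = rng.normal(11, 2, 50)

# One-way ANOVA across the three independent samples
stat, pvalue = f_oneway(group1, group2, group3)
print(f"F = {stat:.3f}, p-value = {pvalue:.4f}")
```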
Variance Tests
BartletIndependentTest
Bases: BaseTest
Performs Bartlett's test to check if groups have equal variance.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Source code in aboba/tests/multiple/bartlet.py
__init__
Bartlett's test for equal variances across multiple groups.
This test checks the null hypothesis that all input samples are from populations with equal variances. It is commonly used before performing ANOVA to verify the assumption of homoscedasticity.
| PARAMETER | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Examples:
```python
import pandas as pd
import numpy as np
from aboba.tests.multiple.bartlet import BartletIndependentTest

# Create sample data with equal variances
np.random.seed(42)
group1 = pd.DataFrame({'target': np.random.normal(10, 2, 50)})
group2 = pd.DataFrame({'target': np.random.normal(12, 2, 50)})
group3 = pd.DataFrame({'target': np.random.normal(11, 2, 50)})

# Perform the test
test = BartletIndependentTest(value_column='target')
result = test.test([group1, group2, group3], {})
print(f"P-value: {result.pvalue:.4f}")

# Create data with unequal variances
group1_unequal = pd.DataFrame({'target': np.random.normal(10, 1, 50)})
group2_unequal = pd.DataFrame({'target': np.random.normal(12, 3, 50)})
group3_unequal = pd.DataFrame({'target': np.random.normal(11, 2, 50)})
result_unequal = test.test([group1_unequal, group2_unequal, group3_unequal], {})
print(f"P-value (unequal variances): {result_unequal.pvalue:.4f}")
```
Source code in aboba/tests/multiple/bartlet.py
test
Perform Bartlett's test for equal variances.
| PARAMETER | DESCRIPTION |
|---|---|
| `groups` | List of DataFrames representing the groups to compare. |

| RETURNS | DESCRIPTION |
|---|---|
| `TestResult` | Object containing the p-value. |
Source code in aboba/tests/multiple/bartlet.py
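Independently of this class, the same test is available standalone as `scipy.stats.bartlett`, which makes a convenient cross-check (the two data scenarios below mirror the example above and are illustrative):

```python
import numpy as np
from scipy.stats import bartlett

rng = np.random.default_rng(42)
# Three groups drawn with a common sigma = 2
equal = [rng.normal(m, 2, 50) for m in (10, 12, 11)]
# Three groups with clearly different spreads (sigma = 1, 3, 2)
unequal = [rng.normal(10, 1, 50), rng.normal(12, 3, 50), rng.normal(11, 2, 50)]

stat_eq, p_eq = bartlett(*equal)
stat_ne, p_ne = bartlett(*unequal)
print(f"equal variances p = {p_eq:.4f}, unequal variances p = {p_ne:.4f}")
```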
Post-Hoc Tests
HSDTukeyTest
Bases: BaseTest
Performs Tukey's HSD (honestly significant difference) test for multiple comparison of group means.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Source code in aboba/tests/multiple/hsd.py
__init__
Tukey's Honestly Significant Difference (HSD) test for multiple comparisons.
This post-hoc test is used to find means that are significantly different from each other after an ANOVA test indicates significant differences exist. It controls the family-wise error rate.
| PARAMETER | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Examples:
```python
import pandas as pd
import numpy as np
from aboba.tests.multiple.hsd import HSDTukeyTest

# Create sample data with three groups
np.random.seed(42)
group1 = pd.DataFrame({'target': np.random.normal(10, 2, 50)})
group2 = pd.DataFrame({'target': np.random.normal(12, 2, 50)})
group3 = pd.DataFrame({'target': np.random.normal(11, 2, 50)})

# Perform the test
test = HSDTukeyTest(value_column='target')
result = test.test([group1, group2, group3], {})
print(f"Minimum p-value: {result.pvalue:.4f}")
Source code in aboba/tests/multiple/hsd.py
test
Perform Tukey's HSD test for multiple comparisons.
| PARAMETER | DESCRIPTION |
|---|---|
| `groups` | List of DataFrames representing the groups to compare. |

| RETURNS | DESCRIPTION |
|---|---|
| `TestResult` | Object containing the minimum p-value from all pairwise comparisons. |
Source code in aboba/tests/multiple/hsd.py
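The class reports only the minimum pairwise p-value; the same procedure is available standalone as `scipy.stats.tukey_hsd` (SciPy >= 1.8), which exposes the full pairwise matrix. This is a standalone sketch with illustrative data, not this class's internals:

```python
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(42)
group1 = rng.normal(10, 2, 50)
group2 = rng.normal(12, 2, 50)
group3 = rng.normal(11, 2, 50)

res = tukey_hsd(group1, group2, group3)
print(res.pvalue)  # k x k matrix of pairwise p-values
# Minimum p-value over the distinct pairs (upper triangle)
print(res.pvalue[np.triu_indices(3, k=1)].min())
```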
PostHocDunnTest
Bases: BaseTest
Performs a post-hoc Dunn test, typically used following a Kruskal-Wallis test, for multiple comparisons between groups.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
| `p_adjust` | Method used for p-value adjustment (e.g., 'bonferroni', 'holm', etc.). |
Source code in aboba/tests/multiple/dunn.py
__init__
Post-hoc Dunn's test for multiple comparisons.
This test performs pairwise comparisons between groups after an omnibus test (like Kruskal-Wallis) indicates significant differences exist. It is a non-parametric alternative to Tukey's HSD test.
| PARAMETER | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
| `p_adjust` | Method for adjusting p-values for multiple comparisons. Default is 'bonferroni'. Other options include 'holm', 'holm-sidak', 'simes-hochberg', 'hommel', 'fdr_bh', 'fdr_by'. |
Examples:
```python
import pandas as pd
import numpy as np
from aboba.tests.multiple.dunn import PostHocDunnTest

# Create sample data with three groups
np.random.seed(42)
group1 = pd.DataFrame({'target': np.random.normal(10, 2, 50)})
group2 = pd.DataFrame({'target': np.random.normal(12, 2, 50)})
group3 = pd.DataFrame({'target': np.random.normal(11, 2, 50)})

# Perform the test
test = PostHocDunnTest(value_column='target', p_adjust='bonferroni')
artefacts = {}
result = test.test([group1, group2, group3], artefacts)
print(f"Minimum p-value: {result.pvalue:.4f}")
print("Pairwise comparison results:")
print(artefacts['post_hoc_dunn_result'])
```
Source code in aboba/tests/multiple/dunn.py
test
Perform Dunn's post-hoc test for multiple comparisons.
| PARAMETER | DESCRIPTION |
|---|---|
| `groups` | List of DataFrames representing the groups to compare. |
| `artefacts` | Dictionary to store additional results, including the full pairwise comparison matrix under the key 'post_hoc_dunn_result'. |

| RETURNS | DESCRIPTION |
|---|---|
| `TestResult` | Object containing the minimum p-value from all pairwise comparisons. |
Source code in aboba/tests/multiple/dunn.py
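A minimal sketch of the Dunn z statistic behind these pairwise comparisons, assuming continuous data (so the tie-correction term is omitted); Bonferroni adjustment simply multiplies each p-value by the number of pairs. The data are illustrative and this is not this class's exact implementation:

```python
import itertools
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(42)
groups = [rng.normal(m, 2, 50) for m in (10, 12, 11)]

pooled = np.concatenate(groups)
ranks = rankdata(pooled)
N = len(pooled)
# Mean rank of each group within the pooled ranking
splits = np.split(ranks, np.cumsum([len(g) for g in groups])[:-1])
mean_ranks = [s.mean() for s in splits]

pairs = list(itertools.combinations(range(len(groups)), 2))
pvalues = {}
for i, j in pairs:
    # Standard error of the mean-rank difference (no tie correction)
    se = np.sqrt(N * (N + 1) / 12 * (1 / len(groups[i]) + 1 / len(groups[j])))
    z = (mean_ranks[i] - mean_ranks[j]) / se
    # Two-sided p-value, Bonferroni-adjusted over all pairwise comparisons
    pvalues[(i, j)] = min(1.0, 2 * norm.sf(abs(z)) * len(pairs))
print(pvalues)
```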
Non-Parametric Tests
KruskalIndependentTest
Bases: BaseTest
Performs a Kruskal-Wallis H-test for multiple independent samples, a non-parametric alternative to one-way ANOVA.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Source code in aboba/tests/multiple/kruskal.py
__init__
Kruskal-Wallis H-test for comparing distributions between independent groups.
This non-parametric test compares the distributions of two or more independent groups to determine if they come from the same distribution. It's an alternative to one-way ANOVA when the assumptions of normality are not met.
| PARAMETER | DESCRIPTION |
|---|---|
| `value_column` | Name of the column containing the values to test. |
Examples:
```python
import pandas as pd
import numpy as np
from aboba.tests.multiple.kruskal import KruskalIndependentTest

# Create sample non-normal data
np.random.seed(42)
group1 = pd.DataFrame({'target': np.random.exponential(2, 50)})
group2 = pd.DataFrame({'target': np.random.exponential(3, 50)})
group3 = pd.DataFrame({'target': np.random.exponential(2.5, 50)})

# Perform the test
test = KruskalIndependentTest(value_column='target')
result = test.test([group1, group2, group3], {})
print(f"P-value: {result.pvalue:.4f}")
```
Source code in aboba/tests/multiple/kruskal.py
average_ranks
staticmethod
Average ranks with tie counts (for tie correction).
Source code in aboba/tests/multiple/kruskal.py
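The behaviour described — average ("midrank") ranks together with tie counts for the Kruskal-Wallis tie correction — can be sketched with `scipy.stats.rankdata`; the sample values and the correction-factor formula shown are illustrative of the standard procedure, not this method's exact code:

```python
import numpy as np
from scipy.stats import rankdata

values = np.array([3.0, 1.0, 2.0, 2.0, 5.0, 2.0])
ranks = rankdata(values)          # tied values receive the average of their ranks
# Count how many times each distinct value occurs (the tie sizes t)
_, tie_counts = np.unique(values, return_counts=True)

print(ranks)        # the three 2.0s share rank (2 + 3 + 4) / 3 = 3
# Kruskal-Wallis tie correction factor: 1 - sum(t^3 - t) / (N^3 - N)
N = len(values)
correction = 1 - (tie_counts**3 - tie_counts).sum() / (N**3 - N)
print(correction)
```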
test
Executes the Kruskal-Wallis H-test on the provided groups.
| PARAMETER | DESCRIPTION |
|---|---|
| `groups` | A list of DataFrames, each representing a group/sample. |

| RETURNS | DESCRIPTION |
|---|---|
| `TestResult` | A `TestResult` object containing the computed p-value. |