I can see my two means are different. What information can a t test add?
Methods such as t-tests and ANOVA assess the difference between two means using a statistical procedure. This seems strange to a beginner like me: one could simply compute the mean of each group and see whether the two means are different or not.
What is the intuition behind these methods?
Tags: anova, t-test, mean
asked yesterday by Yash (new contributor) · edited yesterday by Harvey Motulsky
Hi Yash, welcome to Cross Validated. If you calculate the means from the observed data, you will see that they are numerically different. However, when testing hypotheses we treat the data as realizations of an underlying data-generating process, so it is more insightful to work with them as random variables. Once we treat them as random variables, we need a way to determine whether the observed difference in sample means reveals a real difference in the underlying parameters. T-tests and ANOVA are examples of such procedures. I hope this helps. – MauOlivares, yesterday
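To make the comment's point concrete, here is a minimal sketch (Python standard library only; the distribution parameters are invented for illustration): two samples drawn from the *same* data-generating process still produce different sample means, purely from sampling noise.

```python
import random
from statistics import mean

random.seed(1)

# Two samples drawn from the SAME process: the true (population) means are equal.
a = [random.gauss(10, 2) for _ in range(30)]
b = [random.gauss(10, 2) for _ in range(30)]

# The sample means are numerically different even though nothing distinguishes
# the two groups. A t test asks whether a gap of the observed size would be
# surprising under this assumption of equal population means.
print(mean(a), mean(b))
```

Simply seeing that the two printed means differ therefore tells you nothing by itself; the question is whether the gap is larger than sampling noise alone would produce.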
2 Answers
This is a fundamental question about statistics, and every stats text will discuss it, so I'll be brief.
The t test is not asking whether the means you observed in your sample are different. As you say, you can see those. The t test is more abstract: it asks about the population or distribution the data were sampled from. Given its assumptions (normal distributions; equal standard deviations; random sampling...), the t test quantifies what can be known about the difference between the means of the populations (or distributions) the data were drawn from.
That may be too cryptic, but it should point you to the right chapters in texts to read.
answered yesterday by Harvey Motulsky
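As an illustration of that abstraction, here is a small sketch of the Welch two-sample t statistic (Python standard library only; the data are made up): the same observed gap between sample means gives strong or weak evidence about the populations depending on how noisy the data are.

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch t statistic: the observed mean gap scaled by its standard error."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

# Both pairs of groups have the same gap between sample means (1.0)...
tight_a, tight_b = [9.9, 10.1, 10.0, 9.8, 10.2], [10.9, 11.1, 11.0, 10.8, 11.2]
loose_a, loose_b = [7.0, 13.0, 10.0, 6.0, 14.0], [8.0, 14.0, 11.0, 7.0, 15.0]

# ...but the tight data make the same gap far more convincing
# (|t| = 10 vs. about 0.45).
print(welch_t(tight_a, tight_b))
print(welch_t(loose_a, loose_b))
```

This is why the sample means alone are not enough: the t statistic also folds in the spread and the sample sizes, which is exactly the information needed to reason about the underlying populations.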
You bring up a good point. In some fields, such as neurobiology, statistical hypothesis tests are often just supplementary, because the data come from direct causal manipulations. Assumptions are frequently broken, but the difference between groups is typically clear-cut and the results of the statistical tests reflect that. This is one reason why fields that use animal models can "get away with" small samples.
But in most cases you are not looking at predominantly causal designs, and that approach is not recommended. Even with a causal experimental design, the t-test is not redundant: it can help you understand how well your manipulation worked.
Building off of that and Harvey's point, it is also worth thinking in terms of measuring how much the dependent variable is affected by the independent variable. A t-test doesn't just report a "significant difference" via the p-value. It is often more useful to use statistical tests to measure the extent to which your experimental manipulation (IV) explains the change in the DV, as measured by the partial eta squared value.
For a t-test example: hospital employees are randomly assigned to have a socialization break period (experimental group) or not (control group), to see if and how it affects employee satisfaction ratings. The partial eta squared value for "break or no break" represents the extent to which the break period explains the variance in satisfaction ratings, so a partial eta squared of 0.137 would tell you that the break period explains about 13.7% of the variance in the ratings.
Hopefully that makes sense. I think this StackExchange answer helps; it is not exclusively about t-tests, but the idea is much the same.
answered yesterday by atamalu
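The variance-explained idea can be sketched directly (Python standard library only; the ratings below are invented for illustration, not from any real study). Eta squared is the between-group sum of squares divided by the total sum of squares, and in a one-way design with a single factor it coincides with partial eta squared.

```python
from statistics import mean

def eta_squared(*groups):
    """Proportion of total variance in the outcome explained by group
    membership (SS_between / SS_total). In a one-way design this equals
    partial eta squared, since SS_total = SS_between + SS_error."""
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_total = sum((x - grand) ** 2 for g in groups for x in g)
    return ss_between / ss_total

# Hypothetical satisfaction ratings: no-break control vs. break group.
control = [6, 5, 7, 6, 5]
breaks = [7, 8, 6, 8, 7]
print(eta_squared(control, breaks))  # about 0.47
```

Here group membership would explain roughly 47% of the variance in the ratings, which is the same kind of statement as the 13.7% figure in the example above.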