Family-wise Type I error OLS regression


Why is it advised to control the Type I error rate (e.g. with Tukey's HSD) when conducting several pairwise comparisons, but not when assessing the significance of several coefficient estimates in, say, OLS regression?










Tags: regression, multiple-comparisons, type-i-and-ii-errors






asked 8 hours ago by Chernoff





















          1 Answer



















          If your goal is confirmation through hypothesis tests, you should correct for the FWER (or FDR), regardless of the type of model used. If you have a source for the claim to the contrary, please include it in your question.
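To see why uncorrected testing is risky regardless of the model, consider how the family-wise error rate compounds with the number of tests. A minimal sketch, assuming the tests are independent (dependence changes the exact number but not the qualitative picture):

```python
# Family-wise error rate for m independent tests, each at level alpha:
# FWER = P(at least one false positive) = 1 - (1 - alpha)**m
def fwer(alpha, m):
    return 1 - (1 - alpha) ** m

# Ten tests at the conventional 0.05 level already give roughly a 40%
# chance of at least one spurious "significant" result.
print(round(fwer(0.05, 10), 3))  # 0.401
```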



          However, confirmation isn't the only reason someone would use linear regression. You may want to simply predict the outcome variable, or you might just be interested in the magnitude of the effects that the explanatory variables have on the outcome. Personally, I am rarely interested in the $p$-values of my linear models.



          Even if you are interested in the $p$-values of a linear regression, which $p$-values you should correct for multiple testing depends on what you are doing, for example:




          • Whether the intercept differs significantly from $0$ is rarely interesting. Including this $p$-value in the correction can inflate the type II error rate by increasing the number of tests, or even increase the type I error rate by including a nonsense significant result (in case of FDR correction);

          • If your research question revolves around the effect of a single explanatory variable, but you want to include potential confounders, there is no need to even look at those other variables' $p$-values;

          • Similarly, if your research question concerns the presence of a (significant) interaction effect, the significance of the marginal effects may be irrelevant.


          For this reason, there is no standard multiple testing correction applied to most of the default summaries of linear models, but you can of course apply your own after deciding which $p$-values matter.
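Such a do-it-yourself correction is straightforward. Below is a sketch (the variable names and p-values are hypothetical) of applying a Holm step-down correction to only the coefficient p-values of interest, leaving the intercept out:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values (controls the FWER)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running_max = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        # Multiply the k-th smallest p-value by (m - k + 1); enforce monotonicity
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

# Hypothetical coefficient p-values from a fitted linear model: drop the
# intercept's p-value before correcting, since its test is rarely of interest.
raw = {"(Intercept)": 0.80, "x1": 0.012, "x2": 0.030, "x3": 0.200}
of_interest = {k: p for k, p in raw.items() if k != "(Intercept)"}
adj = dict(zip(of_interest, holm_adjust(list(of_interest.values()))))
# x1 adjusts to about 0.036, x2 to about 0.06, x3 stays at 0.20
```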



          Contrast this with Tukey's honest significant difference: you are comparing every group with every other group. Not only is this the maximum number of pairwise hypothesis tests you can perform, which increases the risk of poor inference without some standard correction applied, but the procedure exists exclusively to perform comparisons, whereas linear regression in general can be used for all kinds of purposes.
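Tukey's HSD uses the studentized range distribution rather than a plain per-test split, but the number of comparisons it must protect against is easy to see, and it grows quadratically in the number of groups:

```python
from math import comb

def n_pairwise(k):
    """Number of pairwise comparisons among k groups: k*(k-1)/2."""
    return comb(k, 2)

# 3 groups -> 3 tests, 5 groups -> 10 tests, 10 groups -> 45 tests.
# A naive Bonferroni bound would test each pair at alpha / n_pairwise(k),
# e.g. 0.05 / 45 for ten groups.
for k in (3, 5, 10):
    print(k, n_pairwise(k))
```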






answered 8 hours ago by Frans Rodenburg, edited 4 hours ago





























