Why the normality assumption in linear regression?

My question is very simple: why do we choose the normal distribution as the distribution that the error term follows in the assumptions of linear regression? Why don't we choose others, like the uniform or the $t$ distribution?

Tags: regression, mathematical-statistics, normal-distribution, error, linear

asked 2 hours ago by Master Shi (161)










  • We don't choose the normal assumption. It just happens to be the case that when the error is normal, the model coefficients exactly follow a normal distribution and an exact F-test can be used to test hypotheses about them. – AdamO, 1 hour ago












  • Because the math works out easily enough that people could use it before modern computers. – Nat, 1 hour ago




















1 Answer

You can choose another error distribution; doing so basically just changes the loss function.



This is certainly done.



Laplace (double exponential) errors correspond to least absolute deviations regression/$L_1$ regression (which numerous posts on this site discuss). Regressions with $t$-errors are occasionally used (in some cases because they're more robust to gross errors), though they can have a disadvantage -- the likelihood (and therefore the negative of the loss) can have multiple modes.
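
For concreteness, here's a minimal sketch (not part of the original answer) of $L_1$ regression fit by directly minimizing the sum of absolute residuals; the simulated data and starting values are assumptions for illustration only:

    # Minimal sketch: least absolute deviations (L1) regression by
    # numerically minimizing the sum of absolute residuals.
    # Simulated data; the coefficients 2.0 and 0.5 are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n = 200
    x = rng.uniform(0, 10, n)
    y = 2.0 + 0.5 * x + rng.laplace(scale=1.0, size=n)  # Laplace errors
    X = np.column_stack([np.ones(n), x])                # intercept + slope

    def l1_loss(beta):
        # Sum of absolute residuals: the negative Laplace log-likelihood
        # up to additive and multiplicative constants.
        return np.abs(y - X @ beta).sum()

    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS fit as a start
    beta_l1 = minimize(l1_loss, beta_ols, method="Nelder-Mead").x
    print("OLS:", beta_ols, "  L1:", beta_l1)

In practice you'd use a dedicated routine (e.g. median/quantile regression), since the $L_1$ objective is non-smooth; the point is only that swapping the error distribution swaps the loss.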



Uniform errors correspond to an $L_\infty$ loss (minimize the maximum deviation); such regression is sometimes called Chebyshev approximation (though beware, since there's another technique with essentially the same name). Again, this is sometimes done: for simple regression on smallish data sets with bounded, constant-spread errors, the fit is often easy enough to find by hand, directly on a plot, though in practice you can use linear programming methods or other algorithms. Indeed, the $L_\infty$ and $L_1$ regression problems are duals of each other, which can lead to convenient shortcuts for some problems.
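
Here's a sketch of the linear-programming route just mentioned (again, not from the original answer), reusing `X` and `y` from the previous snippet: minimize the largest absolute residual $t$ subject to $-t \le y_i - x_i^\top \beta \le t$.

    # Minimal sketch: L-infinity (Chebyshev) regression as a linear
    # program. Decision variables are (beta_0, ..., beta_{p-1}, t),
    # where t bounds the largest absolute residual.
    from scipy.optimize import linprog

    n, p = X.shape
    c = np.zeros(p + 1)
    c[-1] = 1.0                                 # objective: minimize t

    ones = np.ones((n, 1))
    #  X @ beta - t <= y    and    -(X @ beta) - t <= -y
    A_ub = np.vstack([np.hstack([X, -ones]),
                      np.hstack([-X, -ones])])
    b_ub = np.concatenate([y, -y])

    bounds = [(None, None)] * p + [(0, None)]   # betas free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print("L_inf fit:", res.x[:p], "  max |residual|:", res.x[-1])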



Many other choices are possible and quite a few have been used in practice.



[Note that if you have additive, independent, constant-spread errors with a density of the form $k\,\exp(-c\,g(\varepsilon))$, maximizing the likelihood will correspond to minimizing $\sum_i g(e_i)$, where $e_i$ is the $i$th residual.]
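
Spelling that note out (a short derivation, using the answer's notation): with independent errors of density $k\,\exp(-c\,g(\varepsilon))$, the log-likelihood of the sample is

$$\log \mathcal{L}(\beta) = \sum_{i=1}^{n} \log\!\left(k\,e^{-c\,g(e_i)}\right) = n\log k \;-\; c\sum_{i=1}^{n} g(e_i),$$

and since $k$ and $c>0$ do not depend on $\beta$, maximizing $\mathcal{L}$ is exactly minimizing $\sum_i g(e_i)$. Taking $g(\varepsilon)=\varepsilon^2$ recovers least squares (normal errors); $g(\varepsilon)=|\varepsilon|$ gives $L_1$ regression (Laplace errors).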






answered 2 hours ago by Glen_b (212k), edited 18 mins ago












