How to remove duplicate rows from a file?

I have a file like this:

TABLE1  
-------
1234

TABLE1
-------
9555

TABLE1
-------
87676

TABLE1
-------
2344


I want the output to be like this:



TABLE1  
-------
1234
9555
87676
2344









Tags: shell shell-script

asked May 6 '16 at 9:16 by pmaipmui, edited May 6 '16 at 9:47 by Rahul

          5 Answers

2 votes, answered May 6 '16 at 9:46 by Rahul

Here is a one-liner using sed and awk:



          sed '/^$/d' filename | awk '!a[$1]++' 


A combination of grep and awk:



          grep . filename | awk '!a[$1]++'


As @cas suggested, you can also do it with a single awk command:



          awk '!x[$1]++ && ! /^[[:blank:]]*$/' filename
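
For readers new to the idiom: a[$1] (or x[$1]) is an awk associative array keyed on the first field, so !a[$1]++ is true only the first time a given value of $1 is seen; the blank-line test is needed because an empty line would otherwise count as its own "first" key. A quick check against the sample input (a sketch, assuming the sample is saved as filename):

    awk '!x[$1]++ && ! /^[[:blank:]]*$/' filename
    # expected output:
    # TABLE1
    # -------
    # 1234
    # 9555
    # 87676
    # 2344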





1 vote, answered May 6 '16 at 9:50 by deosha, edited May 6 '16 at 9:59 by cas

You can use awk '!x[$1]++' file > file_new



While trying this command on the file you gave, I was getting one extra blank line in the output.



I modified it to awk '!x[$1]++' file | sed '/^$/d' > file_new, which should solve your problem for this case.
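
Note that the redirection must go to a new file: a pipeline cannot write back onto its own input file, because the shell truncates the output file before the first command reads it. A common pattern for an in-place update (my sketch, not part of the original answer):

    awk '!x[$1]++' file | sed '/^$/d' > file_new
    mv file_new file    # replace the original only after the filtered copy is complete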






• how is that any different to @arzyfex's earlier answer? anyway, in both answers, neither sed nor grep is even necessary: awk '!x[$1]++ && ! /^[[:blank:]]*$/' filename.txt – cas, May 6 '16 at 10:01

• @cas Thank you. I had forgotten to update; anyway, I have updated my answer as per your suggestion. – Rahul, May 6 '16 at 10:23

• @cas: I think I was typing while arzyfex posted the answer :P Anyway, yes, a single awk will be enough. awk and sed accomplish basically the same tasks, with awk offering deeper text-processing options, but when sed alone can do the job I refrain from using awk: awk processes about 69 million characters per second while sed processes about 82 million, so sed is the faster of the two. – deosha, May 6 '16 at 10:45

0 votes, answered May 6 '16 at 9:35 by feitingen

I usually use sort and uniq together to get rid of duplicates, like this:



                cat file | sort | uniq


            However, with your input, it will end up like this:



    -------
    1234
    2344
    87676
    9555
    TABLE1


            This command removes all but the numbers and adds the header afterwards:



    cat file | grep '^[[:digit:]]*$' | grep -v '^$' | sed '1i TABLE1\n-------'


            and gives you this result:



    TABLE1
    -------
    1234
    9555
    87676
    2344
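
As an aside (my note, not the answerer's): the cat and the second grep are not strictly necessary. Assuming GNU grep and GNU sed (the \n escape in the text of the i command is a GNU extension), an equivalent sketch is:

    # '+' requires at least one digit, so blank lines are excluded
    # without a separate grep -v '^$'
    grep -E '^[[:digit:]]+$' file | sed '1i TABLE1\n-------'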





0 votes, answered May 6 '16 at 9:18 by mazs, edited May 6 '16 at 10:00

With the uniq command you can remove duplicate entries, like this:



              cat file | sort -r | uniq


But in this specific case it does not produce exactly the expected result: the file must be sorted for uniq to work, since uniq only detects duplicate lines when they are adjacent.

Another solution would be to read the file and skip the lines containing TABLE or ---- (except for the first occurrence):



              count_t=0   # TABLE header lines seen so far
              count_d=0   # ----- underline lines seen so far
              while IFS= read -r line; do
                  if [[ $line == "TABLE"* ]]; then
                      if [[ $count_t -eq 0 ]]; then
                          ((count_t++))
                      else
                          continue    # skip every TABLE header after the first
                      fi
                  fi
                  if [[ $line == "-----"* ]]; then
                      if [[ $count_d -eq 0 ]]; then
                          ((count_d++))
                      else
                          continue    # skip every underline after the first
                      fi
                  fi
                  echo "$line"
              done < file


              The awk and sed solutions posted by others are better though.
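
For comparison (my sketch, not from the original answer), the same skip-after-first-occurrence logic fits in a single awk command; the bare NF condition additionally drops the blank lines:

    # t and d start at 0, so the first header and underline pass through;
    # every later occurrence hits next and is skipped
    awk '/^TABLE/ { if (t++) next }  /^-----/ { if (d++) next }  NF' file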






• could you please tell me what the command is? @mazs – pmaipmui, May 6 '16 at 9:22

• I have tried cat File | uniq > File1, but it's giving me the same output as the original file – pmaipmui, May 6 '16 at 9:25

• yes, I want this actually... "Another solution would be to read the file and skip the lines containing TABLE or ---- (except for the first occurrence)"... then what will be the command? I have tried cat file | egrep -v "TABLE|-------" – pmaipmui, May 6 '16 at 9:36

• You cannot write it to the same file; you need to change the output file name – Raghvendra, May 6 '16 at 9:38

• I cannot come up with a one-liner, but I'll post a small script – mazs, May 6 '16 at 9:40

0 votes, answered yesterday by mrbrich

Even though this is an old thread, I would like to contribute this answer, which uses only a single sed command:



              sed '1,2p;/^[[:digit:]]/!d;' file


It prints the first two lines (the heading and its underline), then deletes every line that doesn't start with a digit.
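
A short breakdown (my annotation, assuming sed's default auto-printing is on):

    sed '1,2p;/^[[:digit:]]/!d;' file
    # 1,2p              explicitly print lines 1 and 2 (heading and underline)
    # /^[[:digit:]]/!d  delete every line not starting with a digit; this also
    #                   suppresses the auto-print of lines 1 and 2, so the
    #                   heading is not printed twice
    # the numeric lines fall through and are emitted by the auto-print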





