Why doesn't grep remove lines of terminal output from find command by default? [duplicate]
This question already has an answer here:
Why doesn't grep using pipe work here?
4 answers
I am constantly frustrated by this simple command:
find / | fgrep somestuff.ext
When I don't use sudo, I get line after line of "Permission denied" - which is fair enough, but why isn't this output ignored when grep reads it from the pipe?
Why is this form of output sent straight to the terminal window and not passed into the pipe (what I suspect must be happening) and subsequently ignored by grep, while the same lines produced by cat (say I had permission denied messages stored in a text file) would correctly go into the pipe and be ignored by my grep pattern?
I feel like there is something about the STDIN/STDOUT process I'm not understanding here.
bash shell grep find pipe
marked as duplicate by muru, Jeff Schaller, mosvy, roaima, JigglyNaga 2 days ago
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
asked Nov 26 at 22:44 by MJHd (new contributor)
edited Nov 28 at 0:55 by George Vasiliou
2 Answers
Accepted answer (0 votes), answered Nov 28 at 0:48 by George Vasiliou
While choroba's answer cures your problem, the reason for the behavior you noticed is the default pipeline behavior in bash (and, I suspect, in most other shells as well).
As described in the Pipelines section of man bash:
The standard output of command is connected via a pipe to the standard
input of command2. This connection is performed
before any redirections specified by the command (see REDIRECTION below).
This means that the stderr of command1 is not fed to command2 through the pipe by default; it is sent to your tty instead, the default destination for stderr.
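You can see this with any command that writes to both streams; for instance, a quick test in an interactive bash session:
{ echo "to stdout"; echo "to stderr" >&2; } | grep -c stderr
# grep counts 0 matching lines, yet "to stderr" still shows up on the
# terminal, because it bypassed the pipe and went straight to the tty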
Bash manual also says:
If |& is used, command's standard error, in addition to its standard output, is connected to command2's standard input through the pipe; it is shorthand for 2>&1 |.
So in your case, if you want the errors from find (which by default go to stderr) to reach grep through the pipe as well, you need to use one of these two forms:
find / |& fgrep somestuff.ext
find / 2>&1 | fgrep somestuff.ext
Your question could also be titled "Why is stderr ignored by pipes?". The answer is that this is how bash and Linux behave by default: stdout is treated differently from stderr, so that the user can log or handle the two streams separately.
For example, you can pipe the stdout of command1 to the stdin of command2 and, at the same time, send the stderr of command1 to a log file using 2>errorlog.txt.
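Applied to your command, that would look something like this (errorlog.txt is just an example filename):
find / 2>errorlog.txt | fgrep somestuff.ext
# matching paths go through the pipe to fgrep, while the
# "Permission denied" messages are collected in errorlog.txt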
Actually, when you run a command without any redirections specified, like
find /
it is equivalent to
find / 1>/dev/stdout 2>/dev/stderr
which is finally resolved to
find / 1>/dev/tty1 2>/dev/tty1 #assuming you are logged in on tty1
as can be verified with a quick ls:
ls -all /dev/st*
lrwxrwxrwx 1 root root 15 Nov 25 15:36 /dev/stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 Nov 25 15:36 /dev/stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root 15 Nov 25 15:36 /dev/stdout -> /proc/self/fd/1
ls -all /proc/self/fd/2
lrwx------ 1 root root 64 Nov 28 02:46 /proc/self/fd/2 -> /dev/tty1
ls -all /proc/self/fd/1
lrwx------ 1 root root 64 Nov 28 02:46 /proc/self/fd/1 -> /dev/tty1
If for any reason you want to "join" the stdout and stderr of a command, you need to declare your intent to bash explicitly, using |& (for pipelines) or 2>&1 (for any kind of output redirection).
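For instance, to capture both streams in a single file (all.txt is just an example name):
find / >all.txt 2>&1
# stdout is redirected to all.txt first, then fd 2 is made a copy of fd 1,
# so both the file list and the error messages end up in all.txt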
Answer (7 votes), answered Nov 26 at 22:47 and edited Nov 27 at 1:35 by choroba
The permission denied messages are not sent to stdout from find
but to stderr. You can redirect the whole stderr to the bit bucket:
find 2>/dev/null | fgrep somestuff.ext
Also, to find the given file, you don't need any grepping:
find . -name somestuff.ext
to which you can still apply the 2>/dev/null.
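Put together for your original search, that would be something like:
find / -name somestuff.ext 2>/dev/null
# search the whole filesystem for that exact filename and
# discard the "Permission denied" messages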
To only suppress the permission denied messages, you can use
2> >(grep -v 'Permission denied' >&2)
in bash.
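In full, that could look something like this (other, unexpected errors stay visible on stderr):
find / 2> >(grep -v 'Permission denied' >&2) | fgrep somestuff.ext
# a second grep, running in a bash process substitution, filters stderr:
# "Permission denied" lines are dropped, everything else is written back to stderr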
I see - and to further clarify, the pipe then normally connects the fd of 2 processes, in my case I'm connecting standard output produced by find / to grep, which reads from standard input. So maybe it's the need for pipe I don't get here, if grep just blindly reads standard input anyway, why do I need a pipe? Why couldn't I say: "find / & grep stuff" or "find / > grep stuff" instead? (To be clearer yet, I understand why those examples SPECIFICALLY will fail, but conceptually I still don't understand) Why do I pipe output to grep if it only cares about the global standard input anyway?
– MJHd
Nov 26 at 23:00
find . | fgrep somestuff.ext looks for somestuff.ext anywhere in the line (which means it's broken for multiline file paths) while find . -name somestuff.ext only matches the filename portion exactly. find . -path '*somestuff.ext*' would be a closer equivalent (and fix the problems with multiline file paths but introduce one with filenames containing sequences of bytes not forming valid characters).
– Stéphane Chazelas
Nov 26 at 23:04
@StéphaneChazelas: I guess .ext means extension, so searching for .ext* is doing more than they need. But the edge cases are important to consider, especially when you don't manually check the results and the script does something important to the selected files.
– choroba
Nov 26 at 23:09
@MJHd: Why you pipe output to grep is something you should know. Probably because you don't know how to process the stderr?
– choroba
Nov 26 at 23:09
Thank you so much for all the help; again, I really wasn't looking for a solution to this specific problem, as I don't want a fish - I want to learn to fish. George's answer explains that it is the behavior of the pipe I was misunderstanding, and how it behaves vs how I thought it was behaving... Thanks for taking the time though! Cheers :)
– MJHd
2 days ago