[sf-lug] comments/explanations/context (was: much can be said for ... for i in /usr/share/man/man*; do man $i/*; done
Michael Paoli
Michael.Paoli at cal.berkeley.edu
Tue Aug 23 22:26:30 PDT 2016
> From: GoOSSBears <acohen36 at linuxwaves.com>
> To: <sf-lug at linuxmafia.com>
> Subject: [sf-lug] much can be said for ... for i in
> /usr/share/man/man*; do man $i/*; done
> Date: Tue, 23 Aug 2016 08:27:12 -0700
Well ... :-)
> First quoting Michael Paoli (Michael.Paoli at cal.berkeley.edu):
>>> I like looking over stuff in the directories on PATH, for anything
>>> in them that I don't recognize or not all that sure what it does or
>>> what it's for - and having a look at the man page. :-)
>>>
>>> Have found some really cool stuff by such, or similar means.
>>> E.g. tac(1) - so very handy.
>
> Then quoting Rick Moen (rick at linuxmafia.com)
>> Oh yeah. Looking at the man pages for new discoveries
>> can be a heady and revelatory experience!
>
> Well "RTFM ;)" certainly applies for tac(1).
> I also don't think simple command line pipelines such as 'for i in
> /usr/share/man/man*; do man $i/*; done' would require much in the
> way of RTFM or further comments.
>
> OTOH, it would certainly help readers a great deal if slightly more
> complex command line pipelines are better described elsewhere, e.g.
> clearing up what the lines of the pipeline found at
> http://linuxmafia.com/pipermail/sf-lug/2016q2/011832.html actually
> perform:
> ~~~~~~ quoting ~~~~~~
> $ (for ns in $(dig -t NS github.com. +short | sort); do echo $(dig \
>> @"$ns" +noall +answer github.com. A github.com. AAAA) "[$ns]"; done)
Might and/or might not do or get around to such. The general typical
essence of (Linux) User Groups (LUGs) is volunteering, sharing
information, learning, ... oooh, and maybe even having *fun*! :-)
So ... I certainly don't feel inclined to always be explaining
everything to everybody. It's often more like *share*, *cooperate*, and
"meeting in the middle" - or at least somewhere between. As, at least
typically, some other(s) have oft described it, doing (most) all the
work for someone else ... that's what one pays a consultant or employee
or contractor or the like to do. With LUGs, it's generally more along
the lines of cooperative - share some information - generally/typically
in some public forum (e.g. this list), so folks can build upon it.
It's often more about whetting the appetite - not necessarily providing
several days of full-course meals, including desserts, and luxury
accommodations to go with it. Probably closer to a cook book, some
basic ingredients, a few sample dishes - maybe even appetizers, or
dessert, and directions to the nearest store, and maybe a video or live
example of making a dish.
Let's see, at most recent count, we've got ... 311 folks subscribed on
this list ... I'd certainly think at least some fair number could also
potentially describe the various snippets of code and examples and such
... hopefully even some of whom could describe them better, more
accurately, and more eloquently than myself. Might also be well more
interesting/useful to see some alternative descriptions,
interpretations, etc. ... perhaps along with that some folks would even
chime in with ways to improve the code - in its function and/or clarity.
So, yeah, ... I certainly don't always feel the need to explain - and
certainly not in great detail - any and all examples.
Context also matters. E.g. when I'm giving an example showing some DNS
analysis/troubleshooting, I may be less inclined to go to (great, or
possibly even any) lengths to explain the relatively minimal shell bits
where it's used. In other contexts, I may give quite a bit of
commenting, e.g. ...
http://www.rawbw.com/~mp/unix/sh/examples/
... largest file I have there ...
$ wget -q -O - http://www.rawbw.com/~mp/unix/sh/examples/shrink2fs | wc
319 1126 7730
... 319 lines, 1126 "words", 7730 bytes
And ... comments? Let's see how I did ... not that I wrote that one
specifically as an example, but did use it for such ...
$ wget -q -O - http://www.rawbw.com/~mp/unix/sh/examples/shrink2fs
... 33 lines of comments (ignoring blank lines and first #! line,
and other false positive non-comment lines containing #)
And, for 319 lines, if we ignore blank lines ...
$ wget -q -O - http://www.rawbw.com/~mp/unix/sh/examples/shrink2fs |
> sed -e '/^[ ]*$/d' | wc -l
294 lines, so, we have ...
$ echo 'scale=2; 33/294*100' | bc -l
11.00 - about an 11% comment ratio ... not too shabby - especially since
it wasn't written specifically to be an example. But wait, there's more
...
http://www.rawbw.com/~mp/unix/sh/examples/README.txt
... another 9 lines documenting shrink2fs
so, adding those, we have:
$ echo 'scale=2; (33+9)/(294+9)*100' | bc -l
13.00 - so about 13% documentation ratio ... not too horrible.
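If one wanted to do that comment counting a bit less by eyeball, a rough
sketch (not necessarily exactly how I counted, and it'll still pick up
some false positives, e.g. # within strings):
$ wget -q -O - http://www.rawbw.com/~mp/unix/sh/examples/shrink2fs |
> sed -e 1d -e '/^[[:space:]]*$/d' | grep -c '^[[:space:]]*#'
... drop the first (#!) line and blank lines, count lines whose first
non-blank character is #, and feed the counts to bc as above.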
And if one looks where I gave shell syntax examples
for:
http://www.rawbw.com/~mp/unix/sh/
under:
http://www.rawbw.com/~mp/unix/sh/syntax_with_examples/
We see ... well, let's again pick the longest of those ...
$ wget -q -O - \
> http://www.rawbw.com/~mp/unix/sh/syntax_with_examples/while_syntax |
> wc -l
66
So, we have 66 lines ...
if we ignore #! invocation and blank lines ... 47 lines
And of that, how many lines are comments ...
well, we have 13 lines of pseudo-comment - which are actually data used
in a here document that's written to /dev/null (more on that trick just
below) ... plus another 11 lines
of comments. Pretty substantial ratio of comments. But again, context,
there it's to illustrate and explain the syntax of shell itself,
rather than some incidental use of teensy bit (like e.g. 2 lines or
less) of shell to illustrate some DNS testing/diagnosis.
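For those who've not seen that pseudo-comment trick, a minimal made-up
illustration (not the actual while_syntax content) looks roughly like:
cat <<'__EOF__' >/dev/null
This text is effectively a block "comment" - it's just here document
data fed to cat, and cat's output goes to /dev/null, so the shell
never executes any of it as commands.
__EOF__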
Also, a whole lot of those are written-on-the-fly one (or two or three
...) liners that are basically 'throw away scripts' - sufficiently
trivial (at least for me) that I write them on the fly to do a very specific
task at hand, then they're "discarded" (well, go on shell history and
eventually rotate out). Typically not worth saving, not gonna bother to
comment them as I'm using (and discarding) them. Now, ... when it's
sufficiently useful and non-trivial, that I find I want to use it again,
and it's enough work that I wish it hadn't rotated out of my history
and I find myself recreating it ... that's about when I typically create
an actual script for that ... and I'll generally structure that so it's
much more readable, and may even often comment it - depending on how
non-trivial and/or unclear it might otherwise be.
So ... let's look at some cited example(s) ... *and context* ...
http://linuxmafia.com/pipermail/sf-lug/2016q2/011832.html
Hmmm, I don't find the cited code snippet at that URL ...
Ah, I found it in here:
http://linuxmafia.com/pipermail/sf-lug/2016q2/011875.html
$ (for ns in $(dig -t NS github.com. +short | sort); do echo $(dig \
> @"$ns" +noall +answer github.com. A github.com. AAAA) "[$ns]"; done)
So ... context is DNS(/resolver?) issue/investigation.
It's a pretty simple bit of shell - two lines - and I only made it two for
better readability in email clients - I'd actually execute that as one
(long) logical line - as a "throw away". Anyway, fairly basic shell;
were I to explain it, something like this:
() - since I did it from the CLI, I did it in a subshell, to not
"pollute" my shell with extraneous shell variables once I'd completed
executing it.
Basic for variable in word(s) do ... done loop.
$() - command substitution - the command's stdout is substituted in its
place, with some slight bits of further interpolation (see sh(1)) -
trailing newlines are stripped, and (unquoted, as here) the remaining
newlines effectively become blanks via word splitting.
| - pipe - effectively connects stdout of the command before to stdin of
the command after.
echo - echoes its arguments.
\ - escapes the following newline (line continuation), so I continue
the command on the next line (done for readability); the leading > on
the 2nd line is the PS2 prompt.
dig(1) - hey, has its own man page 'n all. Anyway,
within the command substitution I got the NS records for the domain, no
more, no less, and sorted those.
I then iterate over those in the loop, setting ns to each NS record (just
the name) in sequence.
Within the loop, I do the lookup against that specific nameserver:
I tell dig to give me nothing (+noall), then add to that just the
answer data (+answer), and I have it look for A and AAAA records.
I then use echo and tag those results at the end with the NS I looked it
up against, so I can see which results came from which nameserver.
Essentially no more, no less.
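Or, were one to bother writing it out as a commented script (which,
again, I wouldn't for a throw-away like this), roughly:
#!/bin/sh
# For each authoritative nameserver of github.com. (NS names only,
# sorted) ...
for ns in $(dig -t NS github.com. +short | sort)
do
    # ... query that specific nameserver for the A and AAAA records,
    # showing only the answer data; the unquoted $(...) within echo
    # collapses the answer lines onto one line, and "[$ns]" tags which
    # nameserver the answers came from.
    echo $(dig @"$ns" +noall +answer github.com. A github.com. AAAA) "[$ns]"
done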
But that was over 4 months ago, with about 310 folks besides myself on
the list; if anyone else wanted to describe it for others, they certainly
could - no need/reason for me to do so, and I'm sure there must be
others that could've described that better and more
eloquently than I just did. Anyway, "throw away" script - not highly
probable I'm going to care about those details of that domain anytime
soon - if ever again - and if I had need to, it'd be easier to
reconstruct such a command than to try and figure out what I called it
and where I saved it - if I'd so much as bothered to save it earlier.
And the other cited bit of shell:
#!/bin/sh
exec strace -fv -eall -s2048 ${1+"$@"}
Oh, that's *highly* trivial, just exec the sucker, passing along
argument(s). And ${1+"$@"} is a quite useful common construct in
shell. One can go over the man page, and work out piece-by-piece
exactly what it does and why/how. Once you know, you'll recognize its
usefulness, and one might often spot its use in shell scripts (I
probably first saw it in some other shell script, figured out what it
did, and realized, "Oh, that's exactly what I need [in certain common
scenario]" - and noted it, and used it relatively regularly ever since.)
$ 2>&1 Strace ping -c 3 github.com. | fgrep github | tail -n 1
Oh, going back to the content:
http://linuxmafia.com/pipermail/sf-lug/2016q2/011832.html
I find that pretty dang well described ... again context was
DNS/resolver.
And the shell bit is pretty dang trivial - just a pair of pipes
that use what I'd just shown before that (which is also a very
trivial trace of shell).
fgrep, tail - perfectly fine man pages; I don't feel the need to
explain those.
strace(1) has a perfectly fine man page - more complex, but I think
for the context I sufficiently explained what was being done - notably
tracing system calls and displaying at least one of particular
note/interest.
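One small note on the shell bit there, since a leading redirection looks
odd to some: the 2>&1 merges strace's trace output (which goes to
stderr) onto stdout so the pipeline sees it, and since a redirection may
appear anywhere in a simple command, that line is equivalent to:
$ Strace ping -c 3 github.com. 2>&1 | fgrep github | tail -n 1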
Anyway, likewise on that one - it's been over 4 months, times about 310
other folks on the list besides myself. Others can jump in and explain
it if they
feel so inclined. Were there specific questions, I might've been
inclined to answer those. There certainly were lots of
questions/responses on the thread in general, but I don't recall many
(if any?) questions specifically regarding shell - and there were quite
a number of list postings on the thread - I'm certainly not about to go
through and reread 'em all to determine if there might be some shell
question that was asked somewhere within there that none of the 311 or
so folks on the list have yet addressed or responded to.
Anyway, yes, good to have comments in code. Especially *why* things
were done a certain way, as opposed to other possibility(/ies),
potential "gotcha"s of note, and for bits non-trivial or otherwise not
quite immediately clear/obvious, what the code is overall designed to do
or attempt to do. I generally figure in programs, those reading the
program should be reasonably competent (or be able to figure it out with
the relevant references) - at least insofar as the language itself. So,
for the most part, putting in comments about what the code is doing is
relatively useless "noise" ... notable exception is when some chunk of
code isn't very clear as to what it's doing (or attempting to do) and/or
why. E.g. sometimes I'll have a comment that looks roughly like:
  The code block below is the logical equivalent of the more intuitive:
    [alternative example code]
  but instead we use the more efficient equivalent:
... essentially use comments to explain most notably what's otherwise
confusing or counter-intuitive.
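A made-up shell illustration of that sort of comment (the specifics are
just for illustration):
# The line below is the logical equivalent of the more intuitive:
#   base=$(basename "$0")
# but instead we use the more efficient equivalent (pure parameter
# expansion, no extra process) - close enough for typical values of $0:
base=${0##*/}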
Oh, and remember, "comments lie" ... uhm, well, hopefully not, but
comments don't always correlate to reality - so do always keep in mind
that comments and code may not properly match.
And I tend to do many (e.g. not uncommonly up to 20 or even many more
per day) "throw-away" bits of shell code on CLI ... and/or using or
with other utilities and programs/languages, e.g. sed, awk, perl, much
etc. Those are typically used, not commented, and discarded in
relatively short order ... that's why when I show a similar example,
it's pretty much as I'd execute it on-the-fly on the command line ...
and I might not bother to comment it. Hey, if I can put it together on
CLI that quickly and easily, it's "trivial enough" to me, I'm generally
not finding any need to comment it - certainly at least not for myself,
anyway. I suppose I could show it as if it were written to a shell
script file ... but why? In the vast majority of cases that's not how
I'm actually doing and using it - it needs to be a fair bit more complex
and non-trivial - and likely to be useful again - before I'll typically
bother to put it in an actual script or other program file.