[conspire] The first rule ... & DNS & SOA serial numbers (everything(?) you [n]ever wanted to know about DNS SOA serial numbers, but were afraid to ask ; -))

Michael Paoli Michael.Paoli at cal.berkeley.edu
Mon Mar 7 06:10:09 PST 2016


Very good explanations, Rick, as always.  :-)

However, I'll cover a few finer details that are sometimes overlooked (and
that can bite one, and are among at least some of the more common errors).

Anyway, reference excerpts below, and I add a bunch more detail and
comments, and some related references.

> Date: Tue, 1 Mar 2016 04:08:28 -0800
> From: Rick Moen <rick at linuxmafia.com>
> To: conspire at linuxmafia.com
> Subject: [conspire] The first rule
> Message-ID: <20160301120827.GB12323 at linuxmafia.com>
> Content-Type: text/plain; charset=utf-8

> Being diligent, I also watch for signs of trouble, though.  Some days
> ago, a major Linux guy's domain triggered an ongoing error in secondary
> nameserver because suddenly his domain zonefile started trying to send
> mine a S/N value _lower_ than the one in my secondary nameserver.

Yes, a whole lot o' ways to screw up zone serial numbers, and you well
covered many of the common booboos, and most of the relevant
explanatory bits.  But to cover it a fair bit more fully ... :-)

> It was trying to offer allegedly updated S/N '21150228' when my
> nameserver already had '2013040202'.  Which, you will perceive, is a
> higher number.

Yes, common booboo - forgetting to update ("increase") a serial number
- miss that and the data mostly doesn't propagate to slaves.  That's
probably the most common serial number blunder - but also probably the
simplest to correct.

Probably the next most common (I'm guesstimating a bit) is a serial number
"decrease" - an update has a "lower" serial number, and generally
doesn't propagate to slaves.

What's with all the "generally" and explicit quoting ("")?  Don't we
all know what "lower" and "higher" and "increase" and "decrease" and
such are?  And "generally"?  What, are there exceptions?  Well, on the
"generally", yes, there are some (fairly limited) exceptions.  And as
to lower/higher/increase/decrease/greater ... those are a teensy bit
counter-intuitive ... or more like not quite *fully* intuitive, when it
comes to DNS SOA serial numbers.  If the serial numbers allowed for and
used any and all real numbers, then increase, decrease, etc., would be
quite clear.  But, as one may well guess, computers being
digital and all that ... they don't.  They don't take irrationals, they
don't even take all rationals, or even all integers, or all whole
numbers (non-negative integers).  So ... okay, a fixed range of
integers.  Yes, quite that.  So, ... always increase, ... uhm, ... what
happens when one runs out or hits the end?  They well anticipated that
long ago.  They effectively "wrap around" ... not in an exceedingly
intuitive way, but ... pretty close.  First of all, the range - 32
bits, unsigned integer, so that's 0 through 4294967295 (0 through
2^32-1; the -1 since it's 0-based), for 2^32 (4294967296) distinct values.
So, as I often explain it (and some others do too), think of it like a
clock wrapping around.  12 hour clock, except let's replace the 12 with
0, so we have 0 through 11 for hour values.  So, what's "higher" and
"lower" as time passes, given 2 different hour values on such a
clock?  But you're not given the day or AM or PM.  So, 2 and 4, which
is "higher"/later on our clock?  What about 1 and 11?  Well, with the
clock analogy it goes like this.  Which is the shorter distance between
the two, clockwise, or counterclockwise?  If the distance is shorter
going clockwise, then the starting number is considered "lower", or
"older" if you will, and the ending, "higher" or "newer".  If the
shorter distance is counterclockwise, then the relationship is reversed
- the starting number in that shorter direction is then
"higher"/"newer", and the ending number in that shorter
counterclockwise direction is "lower"/"older" ... and here by "newer"
and "older", I'm not saying that's when it happened or was created, but
that's how the algorithm treats it and presumes it to be.  So ... what
about 3 and 9?  Precisely the same distance either way around!  In that
case it's undefined which is "newer"/"higher" and which "older"/"lower".
For DNS, it's kind'a like our 0 1 2 3 ... 11 for our 12 hour clock,
except for DNS it's 0 1 2 3 ... 4294967295, for our 4294967296 possible
DNS serial numbers.  For DNS, this was very well clarified (from
earlier descriptions) in
RFC1982 https://www.ietf.org/rfc/rfc1982.txt
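
To make RFC1982's "greater" concrete, here's a minimal sketch of that
comparison as a shell function (my own illustrative code, not anything
from the RFC; it assumes a shell with 64-bit arithmetic, e.g. bash or ksh):

serial_gt() {
    # is serial $2 "greater" than serial $1, per RFC1982?
    # both arguments presumed to be integers in 0..4294967295
    # d = "clockwise" distance from $1 to $2, i.e. ($2 - $1) mod 2^32
    d=$(( ( $2 - $1 + 4294967296 ) % 4294967296 ))
    # "greater" iff 0 < d < 2^31; d of exactly 2^31 is the undefined case
    [ "$d" -gt 0 ] && [ "$d" -lt 2147483648 ]
}
serial_gt 2013040202 21150228 && echo greater || echo 'not greater'
# prints: not greater

So per that arithmetic, 21150228 is *not* "greater" than 2013040202, and
slaves already holding 2013040202 will (correctly) decline to transfer it.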

> Now, this was some sort of hapless editing screwup.  The recommended
> value for a zonefile S/N is based on the current day's date, and is
> YYYYMMDDnn, where nn starts at 00 and end at 99, for a total of 1000
> revisions you can make to a domain in 24 hours.  So, today, March 1st,
> the first S/N value for a zonefile would be 2016030100, then 201603101,
> etc.

So ... among all kinds of possibilities, what could possibly go wrong
with SOA serial numbers?  So, ... RFC1982 well covers - using
"sequence space arithmetic"/"serial number arithmetic" - how it defines,
in that context, "greater": like our clock analogy, it wraps around,
and we only have a finite set of integers to work with.

So, yes, various RFC(s), etc., *recommend* (but do not *require*) the serial
number format YYYYMMDDnn, where YYYYMMDD is 4 digit year, 2 digit
month number, 2 digit day of month, and nn is a sequence 00 through 99.
That allows up to 100 serial numbers for any given date, and is pretty
intuitive to read and gives an idea of at least approximately when it
was last updated (but is also ambiguous as to timezone).  What if one
wants more?  E.g. I've seen folks - more than one - suggest YYYYMMDDHHMM
or YYYYMMDDHHMMSS, where instead of nn, one uses 4 or 6 digits, for
hour and minute, or that plus seconds.  But there's a problem there.
Can you see it? (or maybe more easily if you're viewing with a fixed
width font?):
     YYYYMMDDnn
     4294967295=2^32-1
   YYYYMMDDHHMM
 YYYYMMDDHHMMSS
Yes, the two latter forms are too large, and are integers that exceed
what can be used.  And, what if one uses those in a configuration
anyway?  It really depends on one's DNS software.  If one's relatively
lucky, the software will throw an error and refuse to load the zone
entirely.  But some DNS software does otherwise, e.g. takes the number
mod 2^32 and uses that result - sometimes silently altering it, and
quite surprising the unwary unsuspecting DNS admin (oops) ... and
perhaps the admins of their slaves too.  So, yes, if one suddenly sees
the serial number on the master(s) for one's slaves go from 2013040202 to
21150228 - which is *not* "greater" than 2013040202 per the relevant
RFCs - then that's a problem.  I'm guessing perhaps 21150228 may have
been intended to be 20150228 or 20160228, but in any case, it's not a
valid straight jump from 2013040202.
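
As a quick illustration of that "out of range" bit (this is just shell
arithmetic - I'm not claiming any particular DNS implementation does
exactly this, only showing the numbers involved):

$ echo $(( 201603020217 > 4294967295 ))
1
$ echo $(( 201603020217 % 4294967296 ))
4034524601

I.e. a YYYYMMDDHHMM-style value such as 201603020217 doesn't fit in 32
bits, and software that silently reduces it mod 2^32 would end up
serving something quite different from what was put in the zone file.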

Yes, they may have majorly typoed it, or changed scheme (in a broken
manner to get to where they may want to be), or ... they may have put
in a value that's out of range, and their DNS software may have altered
the data it's serving ... e.g. they may look and see in their master
zone file that they've got something like: 201603020217 and wonder why
the slave is seeing it as 21150228 or some other value (because
201603020217 isn't a possible SOA serial number, so dear knows what the
software might serve up if it actually takes that in a configuration).
But if they use, e.g. dig(1) to query the data on their master, they'll
see what serial number it's actually serving up - which may or may not
match what they put in the master zone file - most notably if they put
in some invalid value in that file.

Also, I'll mention, as I occasionally do:
YYYYMMDDnn
is the *recommended* format; it's not required.  Many use other schemes.
E.g. just start with 1, and increment by 1 each time it's updated.  Or
use seconds since the Unix/Linux epoch (or for indefinite lifetime,
those seconds since epoch mod 2^32, and just update at least once every
2^31-1 seconds ... well, a bit more frequently than that, to cover
other various relevant TTLs and expirations configured).  There are
other schemes, or one can do pretty arbitrary ones, as long as one
conforms to the mandatory bits - that's most notably the "sequence
space arithmetic"/"serial number arithmetic" "greater" ...  and other
relevant timing/timeout considerations (TTLs, expirations, etc.).
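
For instance, a couple of ways one might generate serials under such
schemes (just a sketch; GNU date(1) assumed, and the sample values shown
are from around when this was written - yours will differ):

$ date -u '+%Y%m%d00'       # first YYYYMMDDnn serial for today (UTC)
2016030700
$ date '+%s'                # seconds since the epoch (still well under 2^32)
1457359809
$ echo $(( $(date '+%s') % 4294967296 ))        # same, explicitly mod 2^32
1457359809

Whatever the scheme, the hard requirement is only that each serial
actually served be "greater" (per RFC1982) than the one before it.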

> The iron rule of S/Ns is that they always must ascend or you get serious
> trouble, because your secondaries will think they already have a later
> zone version than what the primary is trying to send to replace it:
> The secondary always says 'Ah, newer S/N; I seem to need to accept new
> data.'  If not, no revision.

One other slight exception - and this might also vary by DNS server
software: a serial number of exactly 0 may be treated specially.
Some/many implementations take that as "update now" ... but I don't know
that such is specified as mandatory in the relevant RFC(s) ... and if not,
one can't rely upon such a (mis?)feature.  Other than that, no serial
number is treated specially (and even that bit with 0 may not be
universal/required behavior).

> So, the domain owner made a hapless edit error.  Bad, but it happens.  I
> sent him mail saying what was wrong and that he needed to fix.  A few
> hours later, he said it was fixed.

Hapless edit error or value out of range - those would be my guesses.

> This is where the First Rule comes in.  My nameserver kept showing the
> same problem (reported to me by logcheck).  Which meant, no, he didn't
> fix it at all.

> What did he fix?  He's a little unclear on this, but it seems likely he
> made some local edit, but _never queried DNS_.

Yes, excellent point - one needs to *check*, and *verify*.  That means, e.g.,
using dig(1).  Just because one thinks the data is or looks right in
the zone file doesn't mean it's being served up, or served up as
expected.  E.g. most DNS server software, when given a misconfigured
zone file, will generally continue to serve the data they got from the
zone file last time they successfully loaded it, and will ignore a bad
zone file (and will typically complain about it to logging facility or
log file).  If the DNS admin isn't paying attention, they could miss
the fact that the updated zone file wasn't loaded at all due to some
error it contains.  Often the facilities used to reload the zone file
might not directly report that there was an error at all; one may need
to look at log(s) and/or test to ensure a successful load (that still
doesn't ensure the data is as desired, but would at least confirm that
it got loaded).
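
E.g., a quick way to see what serial is actually being served (the zone
and server names here are just borrowed from the example zone excerpt
further down - substitute one's own):

$ dig +short SOA sf-lug.com. @ns1.sf-lug.com.
ns1.sf-lug.com. jim.well.com. 1456841404 10800 3600 1209600 10800

and, to compare the serial across all the listed nameservers:

$ for ns in $(dig +short NS sf-lug.com.); do
>     printf '%s %s\n' "$ns" "$(dig +short SOA sf-lug.com. "@$ns" | awk '{print $3}')"
> done

If the master zone file says one thing and dig says another, it's dig's
answer that the rest of The Internet sees.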

> Problem.
>
> If the task is 'DNS must be serving up a correct S/N over the network',
> then the task isn't complete until you've demonstrated that DNS is
> serving a correct S/N over the network.
>
> The correct tool for this is /usr/bin/dig (or nslookup if that's all you
> have, as is true by default for Windows users -- but nslookup is buggy
> and deprecated).  'dig' queries the public DNS.

nslookup *was* deprecated.  It *was* going to go away (and I say good
riddance - dig is so much better/nicer - though it does take a wee bit
of getting used to when first switching from nslookup).  But sometime
subsequent to that, nslookup got a reprieve (darn), and was no longer
slated to go bye-bye forever.  Not sure if it's changed again since
then, but that's the last I recall reading on the matter some moderate
number of years ago.  (Likely there's an answer on Wikipedia ... let's see
...)  Yup, ... *was* deprecated ... then changed to not deprecated in
2004
https://en.wikipedia.org/wiki/Nslookup
https://lists.isc.org/pipermail/bind-announce/2004-September/000155.html
If one knows of more recent change in its status than that, certainly
feel free to update Wikipedia and include relevant authoritative
reference(s) there.  :-)

> Editing a zonefile and staring at it is _not_ querying the public DNS.
> It is notoriously common for people to edit such a zonefile and, e.g.,
> fail to reload it.  The only _relevant test_ is to query it exactly like
> a public user of the DNS -- e.g., using dig.

Yup ... gotta look at the actual DNS data and DNS server behavior, not
just the data in master zone file(s).

> This guy didn't, so he was completely unaware that he'd totally failed
> to address his problem.

> It also turns out, this guy's idea of how to update a zone's record was
> to edit it and then _restart the nameserver software_ (BIND9).  I
> pointed out that restarting BIND9 just to reload a single zone is like
> rebuilding an automobile engine just to change the oil.  I mean, it
> works, but it's awesomely slow and inefficient.  Turns out, this guy had
> never heard of 'rndc', the BIND9 tool that can (among other things)
> signal the BIND9 daemon to reload into memory a zonefile revised
> on-disk.

Whatever service manager and/or init system one has on one's operating
system usually also includes a reload capability - so most of the time
one needn't even know the capabilities of rndc (but for administrators of
large DNS sites, it's recommended to learn rndc reasonably well - it
has, for example, the capability to reload just one single zone - thus
skipping the rereading of other zone files - which can be a huge factor
for large DNS sites).
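
For example (run as root or with appropriate rndc key/permissions; the
service/unit name varies by distribution - named, bind9, etc. - so
adjust accordingly):

$ rndc reload sf-lug.com        # reload just that one zone (BIND 9)
$ rndc reload                   # reload configuration and any changed zones
$ systemctl reload named        # or via one's init system / service manager

and rndc(8) has plenty more, e.g. rndc status, rndc flush.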

Yes, unnecessarily restarting a nameserver is generally a bad idea.
Sure, sometimes there are good reasons to (e.g. a security vulnerability
that remains exploitable until the nameserver is restarted, so that it's
then executing the newer code (executable(s) and/or library(/ies))).  E.g. I actually did
that on two wee 'lil DNS servers within the past 24 hours (well, at
least when I was first drafting this up) - but probably the first time
in months or more since I'd done any explicit restart of those DNS
servers.

Some of the very good reasons to generally *not* restart a DNS server:
Most notably, if one mucks up a zone file, in many cases the DNS
server will (if it's a data/config error that makes the zone invalid) typically not
load the zone file, will log the error (or at least the failure to load
the zone), and will continue to serve the older valid zone data it
loaded.  Contrast that with a restart.  With a restart, there's at least
some small bit of outage - and then, for that zone, potentially a much larger one:
if upon (re)start the zone file is bad such that the DNS server won't
load it, now one's DNS server is no longer serving that zone's data at all.
The DNS server should generally be authoritative for that zone, and now
there's a problem, as it can't serve that zone's data, but the DNS
server is probably up and running after the (re)start.  "Oops".  That's
a significantly worse scenario than if one had done a reload rather than
a restart.  Another reason not to restart: with restart, all the
non-persistent cached data the DNS server has is lost.  This basically
makes for more work and traffic, and loss of efficiency.  I say
non-persistent, as I'd call slave data (saved to files) persistent -
and one can call that a kind of cache ... though when talking about
DNS, slave data isn't generally what's being talked about when one is
referring to DNS cached data.

So, in summary, reasons to not restart:
o outage (even if brief) with restart
o bad zone file: restart: not served, reload: older data still served
o loss of cached data with restart
and reason(s) to restart:
o good need/reason (e.g. security) to use newer updated binary(/ies)

> So, he didn't know the basics of the tools that underlie his entire
> Internet presence.
>
> But the far worse problem is that he had a screwball notion of what
> determines whether a task is completed.
>
> And anyone can get _that_ part right.

More bits that can and do go wrong - even with, or quite closely
related to, SOA serial numbers.  Notably timing.  Let's say one has been
using a certain scheme for serial numbers, and one wants to change it.
Or, other case, a booboo was made with serial numbers.  If the serial
number(s) where we soon need or want to be are not
greater (per RFC) than those that may still be cached and/or have been
loaded by one or more slaves, then it can be at best a bit tricky to
fully get the DNS data to where we want/need it to be.

For merely cached DNS data, we have TTLs (and sometimes also
MINIMUM / Negative Cache TTL) to be concerned with on timing, and for
slaves, we additionally need to be concerned with various SOA timings
(and typically worst case, most notably EXPIRY).  We also have to
potentially worry about unlisted slaves and the like ... e.g. if we
allow more than just slaves to do AXFR on our zone, then any IPs that
may have done so may have outdated data "stuck" on bad/old serial
numbers ... so if one allows "any" to do AXFR, then, at least in
theoretical potential, most any and all IPs could potentially be slaves
to one's DNS server(s) - even if they're not listed/documented as such.

Depending on when caching and/or slaves may have gotten their data -
including slaves potentially being brand new slaves (or otherwise
restarted and their old slave data removed or lost or whatever), any
potentially unlisted/undocumented/non-delegated slaves, etc. - some
slaves may be "stuck" on the older data, as the newer data has a serial
number that's not greater, whereas some other slaves - e.g. let's say
a slave relaunched anew, and didn't preserve the older slave data that
would otherwise normally be saved - that slave, not having already
saved the zone data, would grab whatever zone the master was serving
up, as it had no older zone saved to compare it to.  Now slaves have at
least two different sets of zone data - one of which will track
(henceforward) proper updates on the master relative to the current master
serial number, and one of which is likely "stuck" for a while - at
least without proactive corrective action on the master.  But even then
... sure, one might be able to get any known slave DNS
servers to take explicit action to pick up the new zone (by having their
admins manually do so ... while they grumble that it's one's mess-up with
the master data, that they ought not be having to fix a mistake that's
yours, and that their fix may not correct any and all DNS data from the
zone that may otherwise be "out there" on The Internet) - but even that
would only cover those slaves.  One can't likewise have any and all DNS
data cached anywhere and everywhere flush out those caches and
pick up the newer data ... "oops".

Anyway, to fix a "broken" serial
number, and/or to change scheme, there are ways to properly and safely
do so (I think one of the RFCs even explicitly gives some example(s) on
that).  So, we covered the
"sequence space arithmetic"/"serial number arithmetic" "greater"
bits ... that's half (well, maybe more like 3/4) of the battle.  But
where there was a serial number booboo, or one wants to change scheme,
and, going from the old existing serial to the desired new one, the new is
not "greater" than the old, then one needs to proceed carefully and in an
appropriate manner to ensure all gets properly updated.  Notably, in the
DNS SOA, and in caches of DNS data, there are various timeout parameters.
First of all, in general, to get from "old" to "new", where "new" isn't
"greater", we can solve that by picking some number (or range) between
"old" and "new" - we'll call it "between" - selected such that, per the
"greater" requirements:
"between" is "greater" than "old"
"new" is "greater" than "between"
and yes, such a "between" can be found even if/when
"new" is not "greater" than "old"
... again
"sequence space arithmetic"/"serial number arithmetic"
... or think back to our clock analogy and how things wrap around.
With our clock analogy, going from "old" to "new" traveling clockwise
would *not* be shorter than going counterclockwise.  So we add a "between"
on that clockwise route from "old" to "new", picking the "between"
such that the shorter route from "old" to "between" is clockwise, and
likewise from "between" to "new", the shorter route again is clockwise.
Want example numbers on our 0 to 11 clock of 12 hours?  We have 3, we
want to get to 1.  We pick a "between" of 8: going from 3 to 8 is 5
hours - shorter than counterclockwise - and likewise from 8 to 1 is 5
hours - which is also shorter than going counterclockwise.  So we do
the same kind of thing with DNS serial numbers, but over the 0 to
2^32-1 range.
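
To put rough numbers on that (shell arithmetic again; the old/new values
here are simply the ones from my RCS log further below, and this is only
a sketch of the serial arithmetic - the timing considerations below still
apply between steps):

old=2015123000                  # old YYYYMMDDnn-style serial
new=1456820360                  # desired new epoch-seconds-style serial
# one candidate "between": just under half-way around from "old"
between=$(( (old + 2147483647) % 4294967296 ))
echo "$old -> $between -> $new"
# prints: 2015123000 -> 4162606647 -> 1456820360

Each of those two hops is "greater" per RFC1982, even though going
straight from 2015123000 to 1456820360 is not - and, not coincidentally,
that 4162606647 is the "DO NOT EXCEED" value showing up in the RCS log
below.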
Ah, but timing and caches, etc.  You don't know when
slaves, or other DNS queries, may read or have read the DNS data.  And
they generally cache that data, and slaves also hold onto zone data.
But for how long?  Well, the good news is: not forever.
In the SOA data ... well, let me show a commented example ...:

# sed -ne '1,/)/p' sf-lug.com
$TTL 86400; 24H
@       IN      SOA     (
                         ns1             ; MNAME
                         jim.well.com.   ; RNAME
                         1456841404      ; SERIAL ; date '+%s'
                         10800           ; 3H REFRESH
                         3600            ; 1H RETRY
                         1209600         ; 2W EXPIRY
                         10800           ; 3H MINIMUM; Negative Cache TTL
                 )
#

TTL shown is default for the zone - entries not specifying a TTL will
use that default for the zone - that's essentially how long most DNS
data will (or at least may at maximum) be cached by anything and
everything receiving that DNS data - it starts ticking down the seconds
once it's received it, and will drop that data from cache not later
than at its TTL expiration.  This TTL also applies to things like our SOA
data - e.g. we don't have an explicit TTL on the SOA record, so anything
caching it may presume that SOA data - serial number included - remains
unchanged for that TTL amount of time.  But
individual records within the zone may have explicitly set TTLs that
differ.

REFRESH is, minimally, how often slaves should check to see if there's
updated data (they might check at other times, e.g. if they're
restarted, or possibly if they receive a NOTIFY - but no guarantees
they'll check earlier) - they check the SOA SERIAL, and if it's
"greater", they try to grab an updated copy of that zone (AXFR).  If
it's not "greater" they ignore it (they may possibly log that bit about
the serial number(s)).  And why don't they grab it?  Because they're
properly presuming it's older data, and they already have newer data -
because it's the serial number that tells the slaves that - nothing
else.  Slaves may have multiple masters, and even with the same master
IP address, it may in fact have multiple servers behind it - if they
don't all get the same data at the same time, some may have older data,
with a serial number that's not "greater".  The slaves are smart about
that, only updating their data if the serial number is "greater" ...
that's all fine and good ... until someone screws up zone serial
number(s).

RETRY tells the slaves, if they fail to get the zone, they should wait
that long before reattempting (there are also some exceptions to that).

EXPIRY, that's the maximum time the slaves consider the data to still
be valid after they got the zone and haven't been able to revalidate
the SOA serial number with the master.  If a slave ever gets to that
EXPIRY, they'll then cease to serve that outdated data.  If after that
point the master offers them zone data they can pull, they will *then* pull
the zone data, regardless of the serial number (as at that point they
have no saved data for the zone, as they've dropped/expired it).  But
that EXPIRY can be quite a long time, and typically is - it's usually a
tradeoff between having slaves continue to serve data if master(s) are
dead/down/unreachable for extended time (e.g. buildings flattened by
hurricane, or what have you), yet eventually completely dropping that
older data to force picking up newer data or force no longer using the
old data.  Very long EXPIRY can seriously complicate fully and assuredly
changing serial number schemes or fixing broken serial numbers - and
especially so if similar has been done before within EXPIRY time.

MINIMUM / Negative Cache TTL - that's not really a "minimum" per se.
It's more like a maximum.  Negative caching is for when an answer
basically says the thing asked for doesn't exist.  E.g. one asks for an A
record of a name, there's a successful response, but that response
essentially says we're authoritative, and there's no such record for that
item.  So, there's the negative caching, where DNS clients cache the
result that it doesn't exist (mostly to avoid repeatedly and soon asking
the same thing yet again).  And, since the item doesn't exist within the
zone, it's necessarily a setting for the entire zone - no way to specify
different negative cache TTLs for different non-existent items within
the zone.
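
One can see that in action with dig - ask for a name that doesn't exist,
and the SOA comes back in the AUTHORITY section, which is what
negative-caching resolvers key off of (output trimmed and approximate, and
the name queried is of course just a made-up example):

$ dig A no-such-name.sf-lug.com.
;; ->>HEADER<<- ... status: NXDOMAIN ...
;; AUTHORITY SECTION:
sf-lug.com.   10800   IN   SOA   ns1.sf-lug.com. jim.well.com. 1456841404 10800 3600 1209600 10800

Note the 10800 there - the negative cache TTL - which is what limits how
long resolvers hang onto that "no such name" answer.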

So ... now that you know all that :-)
one can see it's not *quite* as simple as merely sequencing the serial
numbers in a manner that's valid, as one must also account for any
slaves and/or caching communicating with - or failing to communicate
with - the DNS server(s)/master(s) at any given point in time along the
way - including also caching and/or slave DNS servers possibly dropping
all of their saved data about the zone or entries within the zone, or not
... or Murphy's Law, caching/saving the absolutely worst, least
opportune set of data, and hanging onto it as long as they can per
RFCs.  I'll leave full timing algorithms taking that into account as an
exercise (hint: it's pretty well covered in the relevant RFC(s)), but as an
additional hint, I'll give an example from not too long ago, where I
changed serial number scheme.  This example doesn't show all the data
and changes, but just those of particular relevance.  Oh, and where you
see a regular expression character class ([]) with two or more
consecutive whitespace characters within, that's generally a space and
a tab (and the tab may be converted to spaces before being emailed, or by
email clients).

# 2>>/dev/null co -p1.25 sf-lug.com | sed -ne '1,/)/p'
$TTL 86400
@       IN      SOA     (
                         ns1.sf-lug.com. ; MNAME
                         jim.well.com.   ; RNAME
                         2015123000      ; SERIAL
                         10800           ; 3H REFRESH
                         3600            ; 1H RETRY
                         1209600         ; 2W EXPIRY
                         10800           ; 3H MINIMUM; Negative Cache TTL
                 )
# (o=25;n=$(expr "$o" + 1); while [ "$n" -le 28 ]; do echo ";;;;; 1.$o  
--> 1.$n ;;;;;"; 2>&1 rlog -r1."$n" sf-lug.com | sed -ne '/^date:  
/,${/^date: /d;$d;p}'; 2>&1 rcsdiff -r1."$o" -r1."$n" sf-lug.com | sed  
-ne '/^diff/,$p'; o="$n"; n=$(expr "$n" + 1); done) | sed -e 's/[       
  ][      ]*/ /g'
;;;;; 1.25 --> 1.26 ;;;;;
begin change of serial number scheme, adjust format
diff -r1.25 -r1.26
5c5
< 2015123000 ; SERIAL
---
> 4162605648 ; SERIAL ; DO NOT EXCEED 4162606647 before  
> 2016-01-16T14:38:43-0800; migrating towards: perl -e  
> 'print(time%(2**32));'
;;;;; 1.26 --> 1.27 ;;;;;
bumped serial number - checking possible issue with he.net DNS slaves  
- apparently pulling zones, but otherwise not updating?
diff -r1.26 -r1.27
5c5
< 4162605648 ; SERIAL ; DO NOT EXCEED 4162606647 before  
2016-01-16T14:38:43-0800; migrating towards: perl -e  
'print(time%(2**32));'
---
> 4162605649 ; SERIAL ; DO NOT EXCEED 4162606647 before  
> 2016-01-16T14:38:43-0800; migrating towards: perl -e  
> 'print(time%(2**32));'
;;;;; 1.27 --> 1.28 ;;;;;
diff -r1.27 -r1.28
5c5
< 4162605649 ; SERIAL ; DO NOT EXCEED 4162606647 before  
2016-01-16T14:38:43-0800; migrating towards: perl -e  
'print(time%(2**32));'
---
> 1456820360 ; SERIAL ; date '+%s'

And in case one was wondering or doesn't know, in Perl:
time%(2**32)
will be the same as GNU's
date +%s
until at least:
2038-01-19T03:14:07+0000
Hence I changed the reference to date +%s to make it a bit more
straight-forward/clear for a typical Linux systems administrator / DNS
admin.  ... In/by 2038 or so, I might change the reference back ... or
so I can hope.  ;-)
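
I.e., on such a system the two produce the same number (sample values
from around when this was written; they'll stay in agreement until 2038
or so):

$ perl -e 'print(time%(2**32), "\n");'
1457359809
$ date '+%s'
1457359809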

> Date: Tue, 1 Mar 2016 04:16:15 -0800
> From: Rick Moen <rick at linuxmafia.com>
> To: conspire at linuxmafia.com
> Subject: Re: [conspire] The first rule
> Message-ID: <20160301121615.GC8471 at linuxmafia.com>
> Content-Type: text/plain; charset=utf-8
>
> Because it was 4am, I typoed:
>
>> Now, this was some sort of hapless editing screwup.  The recommended
>> value for a zonefile S/N is based on the current day's date, and is
>> YYYYMMDDnn, where nn starts at 00 and end at 99, for a total of 1000
>                                                                   ^^^^
>> revisions you can make to a domain in 24 hours.  So, today, March 1st,
>> the first S/N value for a zonefile would be 2016030100, then 201603101,
>> etc.
>
> '100'

Typos happen.  Sometimes in SOA serial numbers.  8-O

Anyway, I will also mention that what I covered is within the realm of the
more typical.  There are also yet more exceptions.  E.g. some DNS
software may behave a fair bit differently, but the key bit is that it
needs to play by the same RFC standards - if they're doing DNS, it's
*mandatory* they do what the RFCs say they *must*.




