[conspire] COVID-19, 100,000, ...
paulz at ieee.org
Thu Jun 18 10:37:45 PDT 2020
An exponential curve is probably not the ideal formula now.
The math of an exponential curve assumes that all past cases still
contribute to growth. That is exact for compound interest and close for
bacteria in a Petri dish.
With COVID-19, cases more than a couple of weeks old are no longer
carriers. Epidemiologists have more complex models that estimate how
many cases are active.
Now we have efforts to track cases and isolate them before they spread.
The data presented by Michael shows the number of deaths grew by a
factor of 100,000 from the beginning of February to the end of May.
Since then it has grown by 16%.
If the growth really were exponential over the last few weeks, the time
to double would be several months. Three weeks of data isn't enough to
compute that time accurately.
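For a rough check with bc(1), in the same style as below: 16% growth
over the 21 days implies a daily base of 1.16^(1/21), and a doubling
time of
$ echo 'l(2)/((1/21)*l(1.16))' | bc -l
98.07...
i.e. about 98 days - a bit over three months.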
In any event, the re-opening process will add more variables to the
mix. And it takes 2-3 weeks for a change in policy to show up in the
rate of new cases.
On Thursday, June 18, 2020, 3:08:12 AM PDT, Michael Paoli <michael.paoli at cal.berkeley.edu> wrote:
Good news, bad news.
So, let's look some more at the US numbers -
official COVID-19 deaths. Sure, probably not 100% accurate - probably
undercounted by some significant percentage - but regardless, let's
work with those numbers as an at least rough/crude approximation.
Good news ... doesn't look like 1,000,000 dead by 2020-06-18.
Bad news - the numbers continue to go up.
Let's look a bit closer.
So, from earlier:
http://linuxmafia.com/pipermail/conspire/2020-May/010796.html
we have date and US COVID-19 total deaths:
2020-02-06 1
2020-05-27 100,000
And most current:
2020-06-17 115,980
So, what does that give us - as a rough/gross approximation - for
exponential bases covering these periods?:
period
2020-02-06--2020-05-27
2020-05-27--2020-06-17
2020-02-06--2020-06-17
Ddiff(){
  # days between two dates: Ddiff LATER_DATE EARLIER_DATE
  echo \($(TZ=GMT0 date +%s -d "$1"'T12:00:00+00:00') - \
  $(TZ=GMT0 date +%s -d "$2"'T12:00:00+00:00')\)/3600/24 |
  bc -l | sed -e 's/\.00*$//'
}
Ddiff 2020-05-27 2020-02-06
111
Ddiff 2020-06-17 2020-05-27
21
Ddiff 2020-06-17 2020-02-06
132
period days (increment)
2020-02-06--2020-05-27 111
2020-05-27--2020-06-17 21
2020-02-06--2020-06-17 132
So ... how far into the future (at what dates) until we hit these
numbers?:
c='200000 500000 1000000 2000000'
echo "base $c"
Dpredictor(){
(
  # usage: Dpredictor LATER_DATE EARLIER_DATE LATER_COUNT EARLIER_COUNT
  # prints the implied per-day exponential base, then the projected
  # date for each future count in $c
  d2="$1"
  d1="$2"
  c2="$3"
  c1="$4"
  # future c #s to predict for:
  set -- $c
  # elapsed days:
  d=$(
    echo \($(TZ=GMT0 date +%s -d "$d2"'T12:00:00+00:00') - \
    $(TZ=GMT0 date +%s -d "$d1"'T12:00:00+00:00')\)/3600/24 |
    bc -l | sed -e 's/\.00*$//'
  )
  # per-day base: b = (c2/c1)^(1/d)
  b=$(
    # leading TABs in the actual code, NOT spaces
    bc -l <<- __EOT__
scale=50
define p(x,y){
return e(y*l(x))
}
p(($c2/$c1),(1/$d))
__EOT__
  )
  printf "%1.10f" $b
  for c3; do
    # days from d1 until count c3: dp = ln(c3/c1)/ln(b)
    dp=$(
      # leading TABs in the actual code, NOT spaces
      bc -l <<- __EOT__
scale=50
l($c3/$c1)/l($b)
__EOT__
    )
    # round dp to the nearest whole day:
    case "$dp" in
    *.[5-9]*)
      dp=$(echo "$dp" | sed -e 's/\.[0-9]*$//')
      dp=$(expr "$dp" + 1)
      ;;
    *)
      dp=$(echo "$dp" | sed -e 's/\.[0-9]*$//')
      ;;
    esac
    echo -n " $(date -I -d "${d1}T12:00:00-07:30 + $dp days")"
  done
  echo
)
}
Dpredictor 2020-05-27 2020-02-06 100000 1
Dpredictor 2020-06-17 2020-05-27 115980 100000
Dpredictor 2020-06-17 2020-02-06 115980 1
base 200000 500000 1000000 2000000
1.1092898649 2020-06-03 2020-06-12 2020-06-18 2020-06-25
1.0070843848 2020-09-02 2021-01-10 2021-04-18 2021-07-25
1.0923618862 2020-06-23 2020-07-04 2020-07-11 2020-07-19
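(As a quick sanity check of, e.g., the most recent period's base,
straight from the raw numbers - 115,980 vs. 100,000 over 21 days:
$ echo 'e((1/21)*l(115980/100000))' | bc -l
1.00708438...
matching the second row above.)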
So ... first of all, we have our various base numbers,
based on our data, and crudely presuming the same base rate throughout
each period. We see the highest base for the earliest period, the
lowest for the most recent, and something in between for the period
covering the full range.
Reality likely lies somewhere between the most recent period and the
overall range, so, excluding the earliest-only period, that leaves us
with:
base 200000 500000 1000000 2000000
1.0070843848 2020-09-02 2021-01-10 2021-04-18 2021-07-25
1.0923618862 2020-06-23 2020-07-04 2020-07-11 2020-07-19
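Translating those two bases into doubling times, d=l(2)/l(b):
$ for b in 1.0070843848 1.0923618862; do echo "l(2)/l($b)" | bc -l; done
98.18...
7.84...
I.e., roughly 98 days to double at the recent rate, vs. under 8 days
at the overall-period rate.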
The most recent period is almost certainly artificially low on the
base: mortality numbers lag behind current spread - probably by roughly
2 weeks (somewhere between 5 and 15 days, anyway) - and the period
covers shelter-in-place/lockdown close to its maximum/peak use, before
things started opening up more - particularly when also considering the
spread --> death lag factor.
Likewise, the overall period likely shows a somewhat artificially high
rate, as it covers much of the period before any lockdown /
shelter-in-place.
And this still doesn't cover the exponential --> logistic transition -
the logistic curve becomes the more accurate model as the number no
longer vulnerable to infection (immune, or dead) goes up. Realizing,
too, that "immune" may be partial or temporary - but "enough" and "long
enough" to take as effectively so over the periods examined. But we're
still just showing a simple exponential here, not logistic.
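(For the curious, a minimal bc(1) sketch of the logistic form - not
fitted to the data above; k, the eventual saturation level, and r, the
daily rate (ln of the base), are placeholders:
$ bc -lq
define f(k,a,r,t){
return k/(1+((k-a)/a)*e(-r*t))
}
quit
$
Here a is the count at t=0; while a is small relative to k this tracks
the exponential a*e^(r*t), and it flattens out toward k as t grows.)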
So, with the earlier-only period dropped (the two rows above), it
looks like we're still tracking toward the ominous
milestones of 200,000, 500,000, etc. - though for the
latter ones, the model eventually breaks down (need to go
to logistic, or data greatly changes when safe effective
vaccine becomes widely available and deployed).
So ... between summer, late summer, early next year, and into spring
next year, we'll know much better what numbers we're tracking on and
towards.
Some folks may well think we're done with this virus,
but alas, it's definitely not done with us.
references/excerpts:
http://linuxmafia.com/pipermail/conspire/2020-May/010796.html
https://en.wikipedia.org/wiki/COVID-19_pandemic_deaths
> From: "Michael Paoli" <Michael.Paoli at cal.berkeley.edu>
> Subject: 100,000, bc(1), & miscellaneous
> Date: Thu, 28 May 2020 00:41:34 -0700
> 2020-05-27 100,000 US dead; less than 4 months ago we were at 1 US dead
> Rate may not have been consistent (variations in
> lockdown/shelter-in-place/reopening/...), and theoretically more
> logistic than exponential, but still far from saturation / herd
> immunity, so if we approximate using exponential,
> and do gross approximation of consistent rate, we have ...
>
> First US confirmed fatality ...
> was earlier thought to be 2020-02-28 in Seattle, WA, but (autopsies)
> later confirmed first was:
> 2020-02-06 in Santa Clara County, CA
>
> So ...
> $ echo \($(TZ=GMT0 date +%s -d '2020-05-27T12:00:00+00:00') - \
> $(TZ=GMT0 date +%s -d '2020-02-06T12:00:00+00:00')\)/3600/24 |
> bc -l
> 111.00000000000000000000
> 111 days ago - or
> 111 days from 2020-02-06 (1) to 2020-05-27 (100,000)
>
> So, ... we'll make the (gross) approximation of consistent
> exponential growth rate throughout.
>
> So, then what per-day growth rate?
> b^111=100,000 = e^(111*ln(b))
> b=100,000^(1/111) (or 111th root of 100,000) = e^((1/111)*ln(100,000))
> ~=
> 1.10928986489522280772, e.g.: $ echo 'e((1/111)*l(100000))' | bc -l
> Remember, exponentials get big/small fast - unless the base is 1:
> if the base is >1 they grow, if the base is <1 they shrink.
> So, at ~1.11, we go from 1 to 100,000 in 111 days
>
> If we presume same exponential, when would we hit 1,000,000?
> b^x=1,000,000
> x=log-base-b of 1,000,000 = ln(1,000,000)/ln(b) =
> ln(1,000,000)/ln(e^((1/111)*ln(100,000))) =
> ln(1,000,000)/((1/111)*ln(100,000)) ~=
> 133.20000000000000014454, e.g.:
> $ echo 'l(1000000)/((1/111)*l(100000))' | bc -l
> So, 10x growth in ~22 (133-111) days
> $ TZ=GMT0 date -I -d '2020-02-06T12:00:00+00:00 + 133 days'
> So,
> 2020-06-18 for 1,000,000 (gross approximation of continuous unchanged
> exponential)
>
> And, when for 10,000,000, and 100,000,000 ...?
> The model would break down by/before then, logistic curve would be
> much more appropriate fit (and even then, if we do gross approximation
> of no factors changing along the way).
> Exponential doesn't take into account immune / no longer vulnerable
> to infection or any limits on pool available to be infected, whereas
> logistic does.
> Essentially no longer vulnerable to infection (at least to sufficient
> model approximation) happens one of two ways:
> o immune (vaccine, infected and recovered ... "immune" not necessarily
> permanent immunity, but "long enough" to cover the period under
> examination).
> o deceased (not the best way to get removed from pool of vulnerable to
> infection)
>
> So, if we've got ~10x growth in ~22 days, and again, our crude gross
> approximation exponential modeling (presuming consistent rate of
> spread), how many days for, e.g. 2x (doubling), 10x, 100x, 1024x,
> ...?
> Let's call x our multiplier (e.g. for 2x, 10x, etc. growth factor).
> Let's use d for number of days.
> From our earlier, we have base (call it b) of exponent,
> for our daily growth factor (b^1 = our 1 day multiplier)
> b=100,000^(1/111) (or 111th root of 100,000) = e^((1/111)*ln(100,000))
> ~=
> 1.10928986489522280772, e.g.: $ echo 'e((1/111)*l(100000))' | bc -l
> We'll use (relatively) standard notation, and convert to bc(1)
> syntax/format:
> b=e((1/111)*l(100000))
> b^d=x
> d=log-base-b of x
> d=ln(x)/ln(b)
> d=l(x)/l(b)
> And, wee bit 'o shell:
> b='e((1/111)*l(100000))'
> for x in 2 10 100 1024 1048576; do
> echo -n "x=$x d="
> echo "l($x)/l($b)" | bc -l
> done
> And we have:
> x=2 d=6.68286590374038254157
> x=10 d=22.20000000000000002616
> x=100 d=44.40000000000000005242
> x=1024 d=66.82865903740382541642
> x=1048576 d=133.65731807480765083285
>
> "Of course" ... "reality" ... the base continually changes, and may be
> quite region/locality/country/... specific, altered by factors such as:
> o physical/social distancing
> o shelter-in-place / lockdown / reopenings / social and other
> gatherings
> o hygiene and other relevant practices, especially as regards
> SARS-CoV-2 --> COVID-19 infection/spread pathways
> Anyway, the more and the longer the base can be pushed below 1, the
> quicker the infections get to 0. Until then, it spreads/grows,
> notwithstanding immunity(/death) numbers becoming so large that the
> model starts to substantially differ from exponential and better
> approximates logistic.
>
> Bit more on bc(1) ... want a handy power (x^y) function in bc(1)?
> bc(1) goes relatively minimal in some ways; it gives one the basic
> functions needed to define most other functions/capabilities that may
> be needed. E.g. it has natural logarithm/exponentiation, but no other
> bases, as those can be derived from the natural base e.
> Likewise trig, it has sine and cosine functions, but no tangent
> function, as tangent can be calculated from sine and cosine.
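> E.g., tangent from the built-in sine s() and cosine c():
> define t(x){
> return s(x)/c(x)
> }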
> So, arbitrary base to arbitrary exponent:
> x^y = e^(y*ln(x))
> So, we can define a function, call it p (for power) in bc:
> define p(x,y){
> return e(y*l(x))
> }
> e.g.:
> $ bc -lq
> define p(x,y){
> return e(y*l(x))
> }
> p(2,10)
> 1023.99999999999999992594
> p(144,1/2)
> 11.99999999999999999988
> quit
> $
> bc(1) also handles relatively arbitrary precision, and other
> number bases, quite well. E.g.:
> $ bc -lq
> scale=66
> 4*a(1)
> 3.141592653589793238462643383279502884197169399375105820974944592304
> obase=16
> 4*a(1)
> 3.243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA3
> ibase=2
> 110001011001001101101000010110100110000000000001011100000010011100000000
> C593685A6001702700
> quit
> $
>
> references/excerpts:
> http://linuxmafia.com/pipermail/conspire/2020-May/010792.html
> http://linuxmafia.com/pipermail/conspire/2020-March/010315.html
> bc(1)
> sh(1)
> https://www.youtube.com/playlist?list=PLIOESHELJOCnqaaUqq7AzTOGp-k-2KzKY
>
> And now for slightly cheerier note(s):
> France Musique Le Boléro de Ravel par l'Orchestre national de France en
> https://youtu.be/Sj4pE_bgRQI