[conspire] BALE software, upcoming SVLUG presence at CABAL, upcoming Rick downtime

Rick Moen rick at linuxmafia.com
Fri Mar 13 14:23:01 PDT 2009


Quoting Christian Einfeldt (einfeldt at gmail.com):

> I am grateful for Rick Moen's BALE calendar, because it runs on Free
> Software.

Hey, if you'd ever like to play with the setup, I can easily make the
database setup, Python scripts, and PHP available.  That's mostly 
Deirdre's work based on my original design, and I'm 99.9% sure she
wouldn't mind, and I'm pretty sure she's already said "of course"
(though I'll verify).

The physical machine required to run BALE's setup is, um, pathetically
minimal.  As a reminder, uncle-enzo.linuxmafia.com is still a 1998-era
VA Research model 500, a PIII/500 with 256 MB RAM and a pair of 9GB SCSI
drives.  That particular machine's RAM is its limiting factor:  It keeps
bottoming out because both Karsten Self and I are greedy users of RAM at
the server command line, each of us leaving a bunch of processes running
all the time under GNU screen.

Here's what Karsten's running at the moment, and y'all might remember
the command line from an earlier post about how to understand memory
usage:


 $  ps -eo pid,user,group,%mem,rss,vsz,args | grep karsten
 9908 karsten  utmp      0.3   824  10308 SCREEN
 9909 karsten  karsten   0.1   484   4976 /bin/bash
 9917 karsten  karsten   0.5  1520  10956 mutt -f Mail/Inbox/
 9923 karsten  karsten   0.3   796  18076 irssi -c irc.freenode.net
 9945 karsten  karsten   3.9 10176  17028 mutt -f Mail/noend/
 8205 karsten  karsten   0.1   476   4976 /bin/bash
 8214 karsten  karsten   0.3  1008  14412 mutt -f Mail/sent/
 4673 karsten  karsten   0.9  2540  16908 irssi -c irc.freenode.net
22875 karsten  utmp      0.1   476   4372 SCREEN
22876 karsten  karsten   0.1   476   4972 /bin/bash
22888 karsten  karsten   0.1   476   5696 vi yahd.html
10144 root     root      0.7  1908   8472 sshd: karsten [priv]
10153 karsten  karsten   0.7  2028   8472 sshd: karsten at pts/2
10156 karsten  karsten   0.4  1212   4968 -bash
10943 karsten  karsten   0.2   728   4080 screen -rd 9908

The key memory-usage column, as always, is "RSS", the Resident Set Size.
Karsten is using up 25128 kilobytes of it (25 MB).  But _I'm_ a good bit
worse:
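If you want to check a total like that yourself, summing the RSS column
is a one-liner.  Here's a sketch that reproduces Karsten's 25128 KB
total from the RSS figures in the listing above (on a live system,
you'd feed the same awk from ps instead, e.g.
"ps -eo user,rss --no-headers | awk '$1==\"karsten\"{t+=$2} END{print t}'"):

```shell
# Sum the RSS (KB) figures from the process listing above.
awk '{ total += $1 } END { print total }' <<'EOF'
824
484
1520
796
10176
476
1008
2540
476
476
476
1908
2028
1212
728
EOF
```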

~ $  ps -eo pid,user,group,%mem,rss,vsz,args | grep rick   
27637 rick     utmp      0.7  2000   5812 SCREEN
27638 rick     rick      0.1   476   4916 /bin/bash
16204 rick     rick      0.1   456   4912 /bin/bash
17995 rick     rick      0.1   424   4924 /bin/bash
24345 rick     rick      2.2  5712  14352 mutt -f inboxes/lists
23140 rick     rick      1.4  3828  11984 mutt
30074 rick     rick      0.2   584   4928 /bin/bash
32729 rick     rick      1.3  3352   9496 mutt -f inboxes/svlug
 8005 root     root      0.6  1684   8448 sshd: rick [priv]
 8011 rick     rick      0.7  1816   8612 sshd: rick at pts/0 
 8012 rick     rick      0.4  1172   4916 -bash
14106 root     root      0.7  1828   8472 sshd: rick [priv]
14112 rick     rick      0.7  2016   8472 sshd: rick at pts/9 
14113 rick     rick      0.4  1204   4896 -bash
14120 rick     rick      0.2   748   4092 screen -d -r
17008 rick     rick      1.0  2660   5508 vim
/tmp/mutt-linuxmafia-1000-24345-2157
17044 rick     rick      0.8  2136   4916 /bin/bash
17060 rick     rick      1.9  5044   9460 mutt -f Mail/conspire

(I've excluded the ps, grep, and matching shell from that list of
processes, because they terminated immediately after I ran the command
and closed that shell.)

That's 37140 kilobytes of RSS, or about 37 MB of my personal processes.

Processes being run by the system, as opposed to by me or Karsten,
amount to 233292 kilobytes or 233 MB of RSS -- out of 256 MB total
system physical RAM.  Obviously, with that much being taken up just by
Apache httpd, BIND9, Mailman, Exim4, and miscellaneous system processes,
there's just not a lot left over for us two RAM-hungry users -- but we
use it anyway.  Thank heavens for swap.  But, every week or so, Linux 
exhausts swap, too:

# egrep 'failed|killing' /var/log/messages | grep -v Warning
Mar  8 15:59:42 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  8 15:59:42 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  8 15:59:42 linuxmafia kernel: VM: killing process exim4
Mar  8 15:59:58 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0xf0/0)
Mar  8 16:00:05 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  8 16:00:05 linuxmafia kernel: VM: killing process cron
Mar  8 16:00:05 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  8 16:00:05 linuxmafia kernel: VM: killing process exim4
Mar  8 21:51:56 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  8 21:51:56 linuxmafia kernel: VM: killing process exim4
Mar  8 21:52:58 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  8 21:52:58 linuxmafia kernel: VM: killing process exim4
Mar  8 22:47:50 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  8 22:47:50 linuxmafia kernel: VM: killing process apache
Mar  8 22:47:50 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1f0/0)
Mar  9 12:00:53 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  9 12:00:54 linuxmafia kernel: VM: killing process mutt
Mar  9 12:02:29 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  9 12:02:29 linuxmafia kernel: VM: killing process apache
Mar  9 12:02:29 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  9 12:02:29 linuxmafia kernel: VM: killing process exim4
Mar  9 12:05:30 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  9 12:05:30 linuxmafia kernel: VM: killing process apache
Mar  9 12:05:30 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar  9 12:05:30 linuxmafia kernel: VM: killing process exim4
Mar 11 11:22:28 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar 11 11:22:28 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar 11 11:22:28 linuxmafia kernel: VM: killing process exim4
Mar 11 11:22:28 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar 11 11:22:28 linuxmafia kernel: VM: killing process exim4
Mar 12 21:09:29 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar 12 21:09:29 linuxmafia kernel: VM: killing process spfd
Mar 12 21:09:29 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar 12 21:09:29 linuxmafia kernel: VM: killing process modprobe
Mar 12 21:09:29 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar 12 21:09:29 linuxmafia kernel: VM: killing process exim4
Mar 12 21:33:46 linuxmafia kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
Mar 12 21:33:46 linuxmafia kernel: VM: killing process exim4


What you're seeing, there, is the kernel hitting a hard limit on memory
exhaustion:  the swapper thread (thus "VM" = virtual memory) decides
that it has no choice but to find some process and kill it, so that the
system doesn't fall over.  In the period covered by the most recent
system logs, the victims happened to be instances of Apache httpd, the
Exim4 SMTP server, the cron daemon, the SPF (Sender Policy Framework)
daemon spfd, a kernel module utility (modprobe), and a running instance
of the mutt mail reader (one of mine, in fact).

Most of the time, when system daemon processes get killed, not _all_
instances of a given daemon die, so replacements respawn -- but
occasionally I suddenly realise that the Web server software, or the
mail server software, or my DNS nameserver, or something like that is
completely offline, and I have to restart it manually.
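For what it's worth, that manual-restart chore could be roughed out as
a periodic cron job along these lines -- a hypothetical sketch, not what
linuxmafia.com actually runs, with Debian-style init-script paths and
service names assumed:

```shell
#!/bin/sh
# Hypothetical watchdog sketch: if the OOM killer has taken down every
# instance of a key daemon, restart it.  Each pair below is
# process-name:init-script; note that BIND9's process is "named".
for pair in apache:apache exim4:exim4 named:bind9; do
    proc=${pair%%:*}
    svc=${pair##*:}
    if ! pgrep -x "$proc" >/dev/null 2>&1; then
        /etc/init.d/"$svc" restart
    fi
done
```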


Two other things to be aware of, coming up:

1.  SVLUG's installfest will be joining us on the 4th-Saturday CABAL
dates during March, April, and May, because its normal 3rd-Saturday
venue at Evergreen Valley College isn't available for those months.
That's March 21st, April 25th, and May 23rd.


2.  Five days after that April date, on Thursday, April 30th, I'll be 
having in-patient surgery for a (curable) serious health problem, which
will have me in Kaiser Hospital, Santa Clara for at least a day, maybe
several, and in recovery for some unknown number of weeks afterwards.

This certainly won't affect the April 25th CABAL meeting, but, if all
goes well, I'll be sporting some shiny new scars starting with the May
8th meeting, plus some mobility impairment (or worse) -- but otherwise
will (again) have every prospect of excellent health.


Which reminds me:  Guys and gals, but in particular guys, _do_ make sure
you have health screenings AS RECOMMENDED.  I did, and it probably just 
saved my life.  Please see and print out:
https://members.kaiserpermanente.org/kpweb/pdf/cal/nocal_prevention_guidelines.pdf
