[sf-lug] What are the best practices for Linux partitioning & Mount points for Production systems

Rick Moen rick at linuxmafia.com
Fri Mar 2 16:00:25 PST 2012


Quoting nk oorda (nk.oorda at gmail.com):

> i need some suggestion for defining the partition size for my production
> systems.

Your partitioning is logically dictated by what you're trying to
achieve, including what threat models you're attempting to protect
against.

Some of the concerns that might drive partitioning design for a server
are laid out here:
http://linuxmafia.com/pipermail/conspire/2012-February/006925.html
http://linuxmafia.com/pipermail/conspire/2012-February/006970.html
http://linuxmafia.com/pipermail/conspire/2012-February/006921.html

Themes mentioned:
1.  Partitions carved out in order to use ext2 for high performance.
2.  Partitions carved out to enable use of custom mount options,
    e.g., noatime, nodev, nosuid (see the fstab sketch after this
    list).
3.  Partitions carved out to cluster most-accessed parts of the file
    tree around the swap partition for minimum average seek
    distance/time within a spindle (where spinning media is used).
4.  Partitions carved out to keep them normally read-only as a 
    protection against sysadmin error.

One might add: 
5.  Partitions made to be NOT part of the root filesystem, to better
    protect the root FS from getting overfilled or damaged.
6.  Partitions kept separate because they're network-shared, e.g., via NFS.
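
To make item 2 concrete, here's a rough sketch of how such options
might look in /etc/fstab.  Device names, filesystem types, and the
particular mix of options are hypothetical, purely for illustration:

  # hypothetical /etc/fstab excerpt -- per-partition option tuning
  /dev/sda2  /      ext3  defaults,errors=remount-ro     0  1
  /dev/sda5  /var   ext3  defaults,noatime,nodev         0  2
  /dev/sda6  /tmp   ext2  defaults,noatime,nodev,nosuid  0  2
  /dev/sda7  /home  ext3  defaults,noatime,nodev,nosuid  0  2

The point isn't those specific choices; it's that each partition
boundary gives you a knob you can set independently for that part of
the file tree.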

Poster rgmoore on LWN wrote (https://lwn.net/Articles/484332/):

  The idea is that you should be able to have a separate partition for
  each different kind of data. It should be possible to keep read-only
  data (or data that is only supposed to be written by a sysadmin) on a
  separate partition from data that's frequently written, data that's
  specific to a particular machine separate from data that can be shared
  across multiple machines, and data that is volatile across a reboot
  separate from data that needs to be preserved across reboots. So the
  idea is that standard partitions are supposed to be:

  /      Machine specific, read-only
  /var   Machine specific, read-write, stable across reboots
  /tmp   Machine specific, read-write, volatile across reboots
  /usr   Shared, read-only
  /home  Shared, read-write

Exactly so.
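
As an entirely hypothetical sketch of that schema: /usr can be mounted
read-only, and /home can come from a shared NFS server.  The hostname
and device below are invented for illustration only:

  # hypothetical fstab lines -- the read-only vs. shared split
  /dev/sda3                 /usr   ext3  defaults,ro,nodev  0  2
  fileserver:/export/home   /home  nfs   rw,nosuid,nodev    0  0

With /usr mounted ro, installing packages requires a deliberate
'mount -o remount,rw /usr' first -- exactly the protection against
sysadmin error that theme 4 above is after.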

> What i am able to get from the google search is:

What you should be concentrating on finding is _why_ a particular
division was used, i.e., towards what purpose or benefit.

My URL #2 (above) includes a brief schema of filesystems on the 
server that runs this mailing list -- and some of the reasons.  If
anyone's interested, I'd be glad to elaborate more about that.




