Early UNIX systems had no centralized logging system, leaving each
application to decide on its own log policies. Log-related decisions
included where to place logfiles, how much information to store, how long
to store it, and whether to warn some or all logged-in users about
particular events or conditions. Of course, most applications simply
logged to a text file and didn't do any alerting or logfile rotation, but
that's unimportant. ;)
One of the early UNIX hackers at UC Berkeley (Eric Allman, then working
on sendmail) took note of the situation and added a system logging
facility to BSD UNIX. It should be noted that if he had foreseen the
impact of this impromptu system, he certainly would have designed it
differently (at a minimum he would surely have included a year field in
the log file format).
The way this centralized logging worked was that a program used the system
API (a UNIX system call) to log information. The application using the
system call (syscall from here on) included information on which subsystem
the message was coming from (e.g. ftp, mail, kernel -- the "facility") and
also information on the importance or severity of the message (e.g.
informational, critical failure, debugging information). Armed with this
information, the UNIX system manager could configure a centralized logging
mechanism to log according to a single set of guidelines (where to place
logfiles, how much information to store, and so on).
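For illustration, here is a minimal sketch of what such a call looks like
using the classic C interface (openlog()/syslog() from <syslog.h>). The
program name and message text are made up for the example; the facility
and severity constants are the standard ones:

    #include <syslog.h>

    int main(void)
    {
        /* Identify ourselves and ask for the "mail" facility. */
        openlog("example-mailer", LOG_PID, LOG_MAIL);

        /* An informational message and a critical failure. */
        syslog(LOG_INFO, "connection from host %s", "somehost");
        syslog(LOG_CRIT, "cannot open mail queue: giving up");

        closelog();
        return 0;
    }

The application only states the facility and severity; where (or whether)
the message is stored is decided entirely by the logging configuration.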
The original use of a syslog daemon (syslogd) on UNIX was to collect the
log messages sent via the syslog syscall(s) (there are usually two similar
syslog syscalls, but that's not important here). A single configuration
file, usually stored at /etc/syslog.conf, was used to configure it. The
information from the syscall(s) arrived via a UNIX domain socket in the
filesystem, usually at /dev/log. Originally there was no network
transmission involved, simply some local collection and storage of log
information.
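As a rough illustration (the file locations here are just examples and
vary between systems), a traditional /etc/syslog.conf pairs a
facility.severity selector on the left with an action, typically a
logfile, on the right:

    # Mail messages of informational priority and above go to one file.
    mail.info                /var/log/maillog
    # All kernel messages go to their own file.
    kern.*                   /var/log/kernel
    # Anything critical or worse, from any facility, also hits the console.
    *.crit                   /dev/console

(On older syslogd implementations the whitespace between the selector and
the action had to be a tab, a classic source of confusion.)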