A new Oracle DBA can find here detailed information about the cron scheduler on Unix operating systems and how to schedule scripts to run automatically.
CRON
There are two methods of editing the crontab file. First, you can use "crontab -l > filename" to list the contents and pipe them to a file. Once you've edited the file you can then apply it using "crontab filename":
- Login as root
- crontab -l > newcron
- Edit newcron file.
- crontab newcron
Alternatively you can use the "crontab -e" option to edit the crontab file directly.
The entries have the following elements:
field allowed values
----- --------------
minute 0-59
hour 0-23
day of month 1-31
month 1-12
day of week 0-7 (both 0 and 7 are Sunday)
user Valid OS user (system crontab /etc/crontab only)
command Valid command or script.
The first 5 fields can be specified using the following rules:
* - All available values or "first-last".
3-4 - A single range representing each possible value from the start to the end of the range inclusive.
1,2,5,6 - A specific list of values.
1-3,5-8 - A specific list of ranges.
0-23/2 - Every other value in the specified range.
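As a sketch, the rules above combine into entries like the following (the script paths are hypothetical examples):

```
# Every day at 22:30
30 22 * * * /u01/app/oracle/dba/daily_backup
# On the hour, every other hour (00:00, 02:00, ..., 22:00)
0 0-23/2 * * * /u01/app/oracle/dba/check_alert_log
# At 06:15 on Monday, Wednesday and Friday
15 6 * * 1,3,5 /u01/app/oracle/dba/stats_report
```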
The following entry runs a cleanup script at 01:00 each Sunday. Any output or errors from the script are redirected to /dev/null to prevent a build-up of mail to root:
0 1 * * 0 /u01/app/oracle/dba/weekly_cleanup > /dev/null 2>&1
Cluster Wide CRON Jobs On Tru64
On clustered systems cron is node-specific. If you need a job to fire once per cluster, rather than once per node you need an alternative approach to the standard cron job. One approach is put forward in the HP best practices document (Using cron in a TruCluster Server Cluster), but in my opinion a more elegant solution is proposed by Jason Orendorf of HP Tru64 Unix Enterprise Team (TruCluster Clustercron).
In his solution Jason creates a file called /bin/cronrun with the following contents:
#!/bin/ksh
set -- $(/usr/sbin/cfsmgr -F raw /)
shift 12
[[ "$1" = "$(/bin/hostname -s)" ]] && exit 0
exit 1
This script returns TRUE (0) only on the node that is the CFS server for cluster_root.
All cluster wide jobs should have a crontab entry on each node of the cluster like:
5 * * * * /bin/cronrun && /usr/local/bin/myjob
Although the cron jobs fire on all nodes, the "/bin/cronrun &&" part of the entry ensures the script only runs on the node currently acting as the CFS server for cluster_root.
A crontab is a schedule of scripts to run, and it can contain entries for many scripts. The syntax of each entry is:
min hr dom mon dow command/script
The script should:
– be executable
– set all the environment variables it needs, such as HOME, PATH, etc.
– refer to commands and files by their full path names, e.g. /usr/bin/ls /dir/file
– redirect its output and errors, e.g. /usr/bin/ls /dir/file > /path/to/mylog 2>&1
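As a minimal sketch, a cron-ready script following the points above might look like this (the log file name is a hypothetical example):

```shell
#!/bin/sh
# Set the environment explicitly: cron runs jobs with a minimal environment,
# so do not rely on your interactive shell's PATH or other variables.
PATH=/usr/bin:/bin
export PATH

# Refer to commands and files by full path, and redirect output and errors.
/bin/ls /tmp > /tmp/mylog 2>&1
```

The script must also be made executable (e.g. chmod u+x) before cron can run it.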
To create or edit a crontab:
EDITOR=vi ; export EDITOR
crontab -e
This opens a temporary file containing all existing job schedules (or an empty file if there are none). Add or delete schedules in this file, then save and exit. The temporary file is edited using vi commands.
You may add schedules such as:
0 12 * * * /path/to/myscript
For more information see:
man crontab
man cron