TOPIC: JOB SCHEDULING
Managing Cron jobs on Linux systems
Cron jobs are the workhorses of Linux automation, running silently in the background to handle everything from nightly backups to log rotation. For system administrators, understanding how to list, inspect and modify these scheduled tasks is not merely useful knowledge; it is a core skill that keeps infrastructure running smoothly. Whether you are auditing an unfamiliar server or tracking down a misfiring script, knowing where to look and what commands to use will save you considerable time and frustration.
Listing Cron Jobs for the Current User
The starting point for any cron investigation is crontab -l, which displays the scheduled jobs belonging to the user who runs it. Running this command in a terminal will either show a list of entries or print a message such as no crontab for [username], confirming that no jobs have been set. Each line in the output represents a separate scheduled task, formatted with five time fields followed by the command to execute. If you are new to writing that five-field schedule expression, Crontab Guru is a useful browser-based tool that translates cron syntax into plain English as you type.
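As an illustration of that format, a line like the following (the script path is hypothetical) would run at 02:30 every day; the five fields are minute, hour, day of month, month and day of week:

```
# m  h  dom mon dow  command
30   2  *   *   *    /usr/local/bin/nightly-backup
```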
Listing Cron Jobs for Other Users
When you need to inspect jobs belonging to a different account, the crontab -u flag allows you to specify a username, though this requires root or sudo privileges. To audit every user on a system in one pass, administrators often pair the cut command with a loop that reads usernames from /etc/passwd, cycling through each account in turn. A simple shell loop to achieve this looks like the following:
for user in $(cut -f1 -d: /etc/passwd); do
    echo "Crontab for $user:"
    sudo crontab -u "$user" -l 2>/dev/null
    echo
done
Running this as root will surface any scheduled task on the machine, regardless of which account owns it.
System-Wide Cron Locations
Beyond user-specific crontabs, several system-wide locations hold scheduled tasks that apply more broadly. The /etc/crontab file is the main system crontab, which differs from user crontabs in that it includes an additional field specifying which user should run each command. The /etc/cron.d/ directory serves a similar purpose, allowing packages and administrators to drop in individual configuration files rather than editing a single shared file. nixCraft's thorough guide to listing cron jobs covers all of these locations in detail and is a useful reference to keep to hand.
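To illustrate the extra field, a line in /etc/crontab carries the user between the schedule and the command; the entry below (close to the Debian default) runs the hourly scripts as root:

```
17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
```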
User crontab files are stored separately, typically under /var/spool/cron/crontabs/ on Debian and Ubuntu systems and under /var/spool/cron/ on Red Hat-based distributions such as CentOS and Fedora. Archiving both these directories and the /etc/cron* locations before a major system change is a sensible precaution, as it preserves a full picture of the scheduled workload.
A Critical Naming Convention
One pitfall that catches many administrators is the filename convention enforced by run-parts, a utility used to execute scripts in the /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly directories. Filenames in these locations must consist entirely of upper and lower-case letters, digits, underscores and hyphens. This means that a script named myscript.sh will be silently ignored because the dot in the filename causes run-parts to skip it. Renaming the file as myscript is all that is needed to bring it back into service.
The same rule applies to files placed in /etc/cron.d/. The convention exists partly to prevent cron from acting on package management residue files such as .dpkg-dist backups, which can linger after software updates. It is worth running run-parts --test /etc/cron.daily to verify which scripts will actually execute before assuming that everything in a directory is active.
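The rule can be demonstrated safely in a throwaway directory; the script names below are hypothetical, and run-parts must be installed, as it is on Debian-family systems:

```shell
# Create a scratch directory holding two executable scripts,
# one with a dot in its name and one without.
dir=$(mktemp -d)
touch "$dir/myscript.sh" "$dir/myscript"
chmod +x "$dir"/*

# --test prints what would be run without executing anything:
# only "myscript" is listed; "myscript.sh" is skipped because of the dot.
run-parts --test "$dir"
```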
BusyBox Cron on Alpine Linux
The cron landscape changes on systems using BusyBox, the lightweight utility suite at the heart of Alpine Linux. The BusyBox crond implementation does not read from /etc/cron.d/ at all. Instead, it looks to /etc/crontabs/ for per-user crontab files and relies on /etc/periodic/ subdirectories (such as /etc/periodic/hourly and /etc/periodic/daily) for the familiar interval-based tasks. Any administrator accustomed to placing files in /etc/cron.d/ on a Debian or Red Hat system will find that approach simply does not work on Alpine, and must adapt accordingly.
The filename restriction for scripts in /etc/periodic/ directories is even stricter under the default BusyBox configuration. Scripts must not include a dot anywhere in their filename, meaning that even backup.sh will be overlooked. The safest approach is to use names such as backup or daily-backup, without any extension.
systemd Timers as a Modern Alternative
The rise of systemd has introduced a complementary approach to job scheduling through systemd.timer units. Each timer is paired with a corresponding service unit, giving the scheduled task all the benefits of a regular systemd service, including detailed logging via journalctl, dependency management and resource controls. Traditional cron daemons such as vixie-cron and its successors remain widely used, but systemd timers offer capabilities that cron cannot easily replicate, such as triggering a task a set interval after system boot rather than at a fixed clock time.
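As a sketch of how that pairing looks, a hypothetical backup.timer (which would need a matching backup.service) might read as follows; OnBootSec is the boot-relative trigger that classic cron lacks:

```
# /etc/systemd/system/backup.timer (hypothetical example)
[Unit]
Description=Run backup 15 minutes after boot and daily thereafter

[Timer]
OnBootSec=15min
OnUnitActiveSec=1d
Persistent=true

[Install]
WantedBy=timers.target
```

Such a timer is then activated with systemctl enable --now backup.timer.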
To view all active systemd timers on a machine, the following command lists them alongside the time of their last run and their next scheduled activation:
systemctl list-timers
This gives a single, clear view of systemd-managed schedules across the whole system. On systems that use both traditional cron and systemd timers, checking both sources is necessary for a complete picture of what is scheduled to run.
Combining the Approaches
A thorough audit of a Linux system therefore involves checking several locations: user crontabs via crontab -l or the loop described above, the system-wide /etc/crontab file, the files in /etc/cron.d/ and the periodic directories, and finally the output of systemctl list-timers. On Alpine Linux, the audit instead covers /etc/crontabs/ and the /etc/periodic/ directories. It is also worth verifying that the cron daemon itself is running, as a stopped service explains why perfectly valid job entries are not executing. On systemd-based distributions, this is checked with systemctl status cron (or systemctl status crond on Red Hat-based systems).
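Putting those pieces together, a sketch of such an audit might look like the following; it assumes a Debian-style layout, skips anything it cannot read, and needs to run as root to see other users' crontabs:

```shell
#!/bin/sh
# Whole-system cron audit sketch; paths assume a Debian-style layout.
audit_cron() {
    echo "== System crontab (/etc/crontab) =="
    [ -f /etc/crontab ] && cat /etc/crontab

    echo "== Drop-in files in /etc/cron.d =="
    [ -d /etc/cron.d ] && ls /etc/cron.d

    echo "== Per-user crontabs =="
    for user in $(cut -f1 -d: /etc/passwd); do
        crontab -u "$user" -l 2>/dev/null
    done

    echo "== systemd timers =="
    command -v systemctl >/dev/null 2>&1 && systemctl list-timers --all

    echo "== Audit complete =="
}

audit_cron
```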
In Summary
Cron job management rewards attention to detail because the consequences of a missed naming convention or an overlooked directory can be silent and difficult to diagnose. The commands and locations covered here provide a reliable foundation for listing, auditing and verifying scheduled tasks across the main Linux environments in common use today. Combining familiarity with traditional cron, an understanding of BusyBox quirks for container and lightweight deployments and a working knowledge of systemd timers will equip any administrator to keep their automation running with confidence. For those who want to go deeper, A Comprehensive Guide to Using Cronjobs from SitePoint and Linuxize's guide on running cron jobs at specific short intervals are both worth reading once the fundamentals are in place.
Fixing Crontab editor permissions for www-data
There are times when I set jobs to run using the web server account www-data for various website maintenance tasks. To ensure that I am doing this for the right account, I issue the following command:
sudo -u www-data crontab -e
However, doing so on this server yielded the following:
touch: cannot touch '/var/www/.selected_editor': Permission denied
Unable to create directory
/var/www/.local/share/nano/: No such file or directory
It is required for saving/loading search history or cursor positions.
No modification made
While things otherwise worked as they should with nano as the editor, I felt it best to avoid such output if I could. Thus, I modified the command like this:
sudo -u www-data HOME=/tmp crontab -e
This sets the home directory for www-data to /tmp for the duration of the command, giving the editor somewhere writable to keep its state, at least on an ephemeral basis. The root cause of the messages is that www-data is a system account whose home directory is /var/www, a location it has no permission to write to. The workaround above gets around that without the artificiality of creating a www-data folder in the /home directory. Some might sidestep the whole business by using the Vi editor, but nano suits me better.
When CRON is stalled by incorrect file and folder permissions
During the past week, I rebooted my system only to find that a number of things no longer worked, and my Pi-hole DNS server was among them. Having exhausted other possibilities by testing things out on another machine, I did a status check and spotted a line like the following in my system logs, which sent me investigating further:
cron[322]: (root) INSECURE MODE (mode 0600 expected) (crontabs/root)
It turned out to be more significant than I had expected: this was why every CRON job was failing, including the network set-up needed by Pi-hole; a script executed using the @reboot directive accomplishes this, and I got Pi-hole working again by running it manually. The evening before, I had introduced some changes to file permissions under /var/www, and I was not expecting them to affect other parts of /var, though some forgotten heavy-handedness may have played a part. The cure was to reset the permissions on the crontab files themselves (the directory's own mode should be left untouched) with a command like the following in a terminal session:
sudo sh -c 'chmod 600 /var/spool/cron/crontabs/*'
Then, CRON itself needed to be started, since it had not been running at all; executing this command did the needful without restarting the system:
sudo systemctl start cron
That outcome was proved by executing the following command, which issues terminal output that includes the welcome text "active (running)" highlighted in green:
sudo systemctl status cron
There was also newly updated output from a frequently executing job that checks on web servers for me, which was added confirmation. It was a simple solution to a perplexing situation that led me up all sorts of blind alleys before I alighted on the right answer.
Running cron jobs using the www-data system account
When you set up your own web server or use a private server (virtual or physical), you will find that web servers typically run under the www-data account. That means website files need to be accessible to that system account, if not owned by it. The latter is mandatory if you want WordPress to be able to update itself without needing FTP details.
It also means that you probably need scheduled jobs to be executed using the privileges possessed by the www-data account. For instance, I use WP-CLI to automate spam removal and updates to plugins, themes and WordPress itself. Spam removal can be done without the www-data account, but the updates need file access and cannot be completed without this. Therefore, I got interested in setting up cron jobs to run under that account and the following command helps to address this:
sudo -u www-data crontab -e
For that to work, your own account needs to be listed in /etc/sudoers or be assigned to the sudo group in /etc/group. If it is either of those, then entering your own password will open the cron file for www-data, and it can be edited as for any other account. Closing and saving the session will update cron with the new job details.
In fact, the same approach can be taken for a variety of commands where files can only be accessed as www-data, including copying, moving and deleting files as well as executing WP-CLI commands. WP-CLI issues a striking warning if you run it as root, a pervasive temptation given what that account allows, and any alternative to doing so has to be better from a security standpoint.
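For example, once the www-data crontab is open for editing, an entry along these lines (the site path and schedule are hypothetical) would run nightly plugin updates with WP-CLI:

```
# Runs as www-data: update all plugins at 03:15 every night
15 3 * * * cd /var/www/example.com && wp plugin update --all --quiet
```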
Controlling clearance of /tmp on Linux systems
While some may view the behaviour in a less favourable light, I have always liked the way that Linux can clear its /tmp directory every time the system is restarted. The setting for this is in /etc/default/rcS and the associated line looks something like:
TMPTIME=0
The value of 0 means that the directory is flushed completely every time the system is restarted, but there are other options. A setting of -1 makes the directory behave like any other one on the system, where any file deletions are manual affairs. Using other positive integer values like 7 will specify the number of days that a file can stay in /tmp before it is removed.
What brought me to this topic was the observation that my main Linux Mint system was accumulating files in /tmp and the cause was the commenting out of the TMPTIME=0 line in /etc/default/rcS. This is not the case on Ubuntu, and using that is how I got accustomed to automatic file removal from /tmp in the first place.
All of this discussion so far has pertained to PCs, where systems are turned off or restarted regularly. Things are different for servers, of course, and I have seen tools like tmpreaper and tmpwatch given a mention. As if to prove that there is more than one way to do anything on Linux, shell scripting and cron remain an ever-present fallback.
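As an illustration of that fallback, a find-based sweep can be scheduled from cron; the sketch below demonstrates the expiry logic on a scratch directory rather than the real /tmp (the 7-day threshold mirrors the TMPTIME example above, and touch -d assumes GNU coreutils):

```shell
# A cron entry like this (hypothetical schedule) would sweep /tmp daily at 04:00:
#   0 4 * * * find /tmp -type f -mtime +7 -delete

# Demonstrate the expiry logic safely in a scratch directory.
dir=$(mktemp -d)
touch -d "8 days ago" "$dir/old-file"   # older than the 7-day threshold
touch "$dir/new-file"                   # fresh file, should survive

find "$dir" -type f -mtime +7 -delete

ls "$dir"   # only new-file remains
```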
A peculiarity with PROC EXPORT
I have just encountered an issue with PROC EXPORT that I did not expect: it needs to run in a windowing environment. I found this out when a SAS macro running as part of a batch job in a headless UNIX session stopped dead, and the job had to be killed; the log contained a message mentioning SAS/FSP and SAS/AF, which does explain things. Still, this was not something I would have expected with an export to a CSV file; the behaviour sounds more like what you see with the likes of PROC GPLOT or PROC REPORT. As it happened, adding the -noterminal option to the batch command line sorted things out.