Managing Cron jobs on Linux systems
Cron jobs are often-used workhorses of Linux automation, running silently in the background to handle everything from nightly backups to log rotation. For system administrators, understanding how to list, inspect and modify these scheduled tasks is not merely useful knowledge, it is a core skill that keeps infrastructure running smoothly. Whether you are auditing an unfamiliar server or tracking down a misfiring script, knowing where to look and what commands to use will save you considerable time and frustration.
Listing Cron Jobs for the Current User
The starting point for any cron investigation is crontab -l, which displays the scheduled jobs belonging to the user who runs it. Running this command in a terminal will either show a list of entries or print a message such as no crontab for [username], confirming that no jobs have been set. Each line in the output represents a separate scheduled task, formatted with five time fields followed by the command to execute. If you are new to writing that five-field schedule expression, Crontab Guru is a useful browser-based tool that translates cron syntax into plain English as you type.
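To make the five-field format concrete, the short sketch below takes a made-up entry (the backup path is invented for the example) and splits it into its parts, which appear in the order minute, hour, day of month, month and day of week:

```shell
#!/bin/sh
# A hypothetical crontab entry: run a backup script at 02:30 every day.
# Field order: minute, hour, day of month, month, day of week, then the command.
entry='30 2 * * * /usr/local/bin/backup'

set -f             # turn off globbing so the * fields stay literal
set -- $entry      # split the entry on whitespace into positional parameters
set +f
minute=$1 hour=$2 dom=$3 month=$4 dow=$5 command=$6

echo "runs at $hour:$minute, command: $command"
```

The asterisks mean "every value" in their respective fields, so this entry fires once a day.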
Listing Cron Jobs for Other Users
When you need to inspect jobs belonging to a different account, the crontab -u flag allows you to specify a username, though this requires root or sudo privileges. To audit every user on a system in one pass, administrators often pair the cut command with a loop that reads usernames from /etc/passwd, cycling through each account in turn. A simple shell loop to achieve this looks like the following:
for user in $(cut -f1 -d: /etc/passwd); do
  echo "Crontab for $user:"
  sudo crontab -u "$user" -l 2>/dev/null
  echo
done
Running this as root will surface any scheduled task on the machine, regardless of which account owns it.
System-Wide Cron Locations
Beyond user-specific crontabs, several system-wide locations hold scheduled tasks that apply more broadly. The /etc/crontab file is the main system crontab, which differs from user crontabs in that it includes an additional field specifying which user should run each command. The /etc/cron.d/ directory serves a similar purpose, allowing packages and administrators to drop in individual configuration files rather than editing a single shared file. nixCraft's thorough guide to listing cron jobs covers all of these locations in detail and is a useful reference to keep to hand.
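For illustration, here is what such entries look like, with the extra user field sitting between the schedule and the command; the first line is typical of the stock file on Debian-style systems, while the second is an invented example using a hypothetical backup account:

```text
# m  h  dom mon dow  user    command
17 *   *   *   *    root    cd / && run-parts --report /etc/cron.hourly
30 2   *   *   *    backup  /usr/local/bin/nightly-backup
```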
User crontab files are stored separately, typically under /var/spool/cron/crontabs/ on Debian and Ubuntu systems and under /var/spool/cron/ on Red Hat-based distributions such as CentOS and Fedora. Archiving both these directories and the /etc/cron* locations before a major system change is a sensible precaution, as it preserves a full picture of the scheduled workload.
A Critical Naming Convention
One pitfall that catches many administrators is the filename convention enforced by run-parts, a utility used to execute scripts in the /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly directories. Filenames in these locations must consist entirely of upper- and lower-case letters, digits, underscores and hyphens. This means that a script named myscript.sh will be silently ignored because the dot in the filename causes run-parts to skip it. Renaming the file to myscript is all that is needed to bring it back into service.
The same rule applies to files placed in /etc/cron.d/. The convention exists partly to prevent cron from acting on package management residue files such as .dpkg-dist backups, which can linger after software updates. It is worth running run-parts --test /etc/cron.daily to verify which scripts will actually execute before assuming that everything in a directory is active.
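The rule can be checked without run-parts itself; the function below is my own approximation of its default filename test, not the utility's actual code:

```shell
#!/bin/sh
# Approximation of run-parts' default filename rule: only letters, digits,
# underscores and hyphens are allowed; anything else gets the file skipped.
is_runnable() {
  case "$1" in
    *[!A-Za-z0-9_-]*) return 1 ;;  # contains a disallowed character (e.g. a dot)
    '')               return 1 ;;  # empty name
    *)                return 0 ;;
  esac
}

is_runnable myscript.sh && echo "run" || echo "skipped"   # skipped: the dot
is_runnable myscript    && echo "run" || echo "skipped"   # run
```

The same check explains why a leftover myscript.dpkg-dist file is harmless: the dot disqualifies it.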
BusyBox Cron on Alpine Linux
The cron landscape changes on systems using BusyBox, the lightweight utility suite at the heart of Alpine Linux. The BusyBox crond implementation does not read from /etc/cron.d/ at all. Instead, it looks to /etc/crontabs/ for per-user crontab files and relies on /etc/periodic/ subdirectories (such as /etc/periodic/hourly and /etc/periodic/daily) for the familiar interval-based tasks. Any administrator accustomed to placing files in /etc/cron.d/ on a Debian or Red Hat system will find that approach simply does not work on Alpine, and must adapt accordingly.
The filename restriction for scripts in /etc/periodic/ directories is even stricter under the default BusyBox configuration. Scripts must not include a dot anywhere in their filename, meaning that even backup.sh will be overlooked. The safest approach is to use names such as backup or daily-backup, without any extension.
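A quick way to see what BusyBox cron will act on is to walk the Alpine locations directly; the sketch below simply skips directories that do not exist, so it is harmless to run on other distributions:

```shell
#!/bin/sh
# List the BusyBox cron locations used on Alpine; directories that are
# absent on other distributions are silently skipped.
audit_busybox_cron() {
  for d in /etc/crontabs /etc/periodic/15min /etc/periodic/hourly \
           /etc/periodic/daily /etc/periodic/weekly /etc/periodic/monthly; do
    if [ -d "$d" ]; then
      echo "== $d =="
      ls "$d"
    fi
  done
  echo "audit complete"
}

audit_busybox_cron
```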
systemd Timers as a Modern Alternative
The rise of systemd has introduced a complementary approach to job scheduling through systemd.timer units. Each timer is paired with a corresponding service unit, giving the scheduled task all the benefits of a regular systemd service, including detailed logging via journalctl, dependency management and resource controls. Traditional cron daemons such as vixie-cron and its successors remain widely used, but systemd timers offer capabilities that cron cannot easily replicate, such as triggering a task a set interval after system boot rather than at a fixed clock time.
To view all active systemd timers on a machine, the following command lists them alongside the time of their last run and their next scheduled activation:
systemctl list-timers
This gives a single, clear view of systemd-managed schedules across the whole system. On systems that use both traditional cron and systemd timers, checking both sources is necessary for a complete picture of what is scheduled to run.
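To illustrate the pairing, here is a minimal sketch of a service and its timer; the backup name, path and timings are invented for the example. The first block would live in a file such as /etc/systemd/system/backup.service and the second in /etc/systemd/system/backup.timer, after which systemctl enable --now backup.timer activates the schedule:

```ini
# backup.service (hypothetical): the work to be done
[Unit]
Description=Nightly backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup

# backup.timer (hypothetical): when to do it
[Unit]
Description=Run backup 15 minutes after boot, then every 24 hours

[Timer]
OnBootSec=15min
OnUnitActiveSec=24h

[Install]
WantedBy=timers.target
```

The OnBootSec and OnUnitActiveSec settings express the boot-relative scheduling mentioned above, something a plain cron expression cannot do.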
Combining the Approaches
A thorough audit of a Linux system therefore involves checking several locations: user crontabs via crontab -l or the loop described above, the system-wide /etc/crontab file, the files in /etc/cron.d/ and the periodic directories, and finally the output of systemctl list-timers. On Alpine Linux, the audit instead covers /etc/crontabs/ and the /etc/periodic/ directories. It is also worth verifying that the cron daemon itself is running, as a stopped service explains why perfectly valid job entries are not executing. On systemd-based distributions, this is checked with systemctl status cron (or systemctl status crond on Red Hat-based systems).
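Those checks can be rolled into a single pass; the sketch below assumes a Debian- or Red Hat-style layout and suppresses errors from any tools or locations that are absent, so missing sources simply print as empty:

```shell
#!/bin/sh
# One-pass cron audit for a conventional (non-BusyBox) Linux system.
# Missing locations or tools simply produce empty sections.
audit_cron() {
  echo "== /etc/crontab =="
  cat /etc/crontab 2>/dev/null
  echo "== /etc/cron.d and periodic directories =="
  ls /etc/cron.d /etc/cron.hourly /etc/cron.daily \
     /etc/cron.weekly /etc/cron.monthly 2>/dev/null
  echo "== per-user crontabs =="
  cut -f1 -d: /etc/passwd | while read -r user; do
    crontab -u "$user" -l 2>/dev/null | sed "s/^/$user: /"
  done
  echo "== systemd timers =="
  systemctl list-timers --all 2>/dev/null
}

audit_cron   # run as root to see every user's jobs
```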
In Summary
Cron job management rewards attention to detail because the consequences of a missed naming convention or an overlooked directory can be silent and difficult to diagnose. The commands and locations covered here provide a reliable foundation for listing, auditing and verifying scheduled tasks across the main Linux environments in common use today. Combining familiarity with traditional cron, an understanding of BusyBox quirks for container and lightweight deployments and a working knowledge of systemd timers will equip any administrator to keep their automation running with confidence. For those who want to go deeper, A Comprehensive Guide to Using Cronjobs from SitePoint and Linuxize's guide on running cron jobs at specific short intervals are both worth reading once the fundamentals are in place.
How to persist R packages across remote Windows server sessions
Recently, I was using R to automate some code changes that needed to be made when porting code from a vendor system to a client one. While doing so, I noticed that packages had to be reinstalled every time I logged into the system, because by default they were being installed into a temporary location. The solution was to define another location where the packages could persist.
That meant creating a .Renviron file, a manoeuvre that Windows Explorer makes awkward to the point of being impossible to complete. PowerShell was the solution: there, I could use the following command to do what I needed:
New-Item -ItemType File "$env:USERPROFILE\Documents\.Renviron" -Force
That gave me an empty .Renviron file, to which I could add the following line to set where the packages should be kept (the path may differ on your system):
R_LIBS_USER=C:/R/packages
The paths shown here are examples rather than the real ones, a deliberate choice made for reasons of client confidentiality. After restarting RStudio to get a fresh R session, I could install packages using commands like this one:
install.packages("tidyverse")
In my case, version constraints meant compilation from source, making for a long wait before completion. Once that was done, though, there was no need to repeat the operation.
One final remark is that file creation and population could be done in the same command in PowerShell:
'R_LIBS_USER=C:/R/packages' | Out-File -Encoding ascii "$env:USERPROFILE\Documents\.Renviron"
It places the text into a new file, or completely overwrites an existing one, so it is something you want to do only once; should you decide to add further settings to .Renviron later on, append them with Add-Content or a text editor instead.
Running cron jobs using the www-data system account
When you set up your own web server or use a private server (virtual or physical), you will find that web servers run using the www-data account. That means that website files need to be accessible to that system account, if not owned by it. The latter is mandatory if you want WordPress to be able to update itself without needing FTP details.
It also means that you probably need scheduled jobs to be executed using the privileges possessed by the www-data account. For instance, I use WP-CLI to automate spam removal and updates to plugins, themes and WordPress itself. Spam removal can be done without the www-data account, but the updates need file access and cannot be completed without it. Therefore, I became interested in setting up cron jobs to run under that account, and the following command addresses this:
sudo -u www-data crontab -e
For that to work, your own account needs to be listed in /etc/sudoers or belong to the sudo group in /etc/group. If either applies, entering your own password will open the cron file for www-data, and it can be edited as for any other account. Saving and closing the session will update cron with the new job details.
In fact, the same approach can be taken for a variety of commands where files can only be accessed as www-data. This includes copying, pasting and deleting files as well as executing WP-CLI commands. WP-CLI issues a striking warning if you run a command using the root account, a pervasive temptation given what it allows; any alternative to that has to be better from a security standpoint.
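For illustration, the www-data crontab opened by the command above might contain entries along these lines; the WordPress path and the timings are invented for the example:

```text
# Update plugins, themes and WordPress core nightly from 03:00
0 3 * * *  /usr/local/bin/wp --path=/var/www/html plugin update --all
5 3 * * *  /usr/local/bin/wp --path=/var/www/html theme update --all
10 3 * * * /usr/local/bin/wp --path=/var/www/html core update
```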
Killing Windows processes from the command line
In my day job, I often hear about the need to restart a server because something has gone awry with it. This made me wonder whether you can kill processes from the command line, as you do on Linux and UNIX. A recent need to reset Windows Update on a Windows 10 machine gave me reason enough to answer the question.
Because I already knew the names of the services, I had no need to look at the Services tab in the Task Manager as you otherwise would. It then was a matter of opening a command line session with Administrator privileges and issuing a command like the following (replacing [service name] with the name of the service):
sc queryex [service name]
From the output of the above command, you can find the process identifier, or PID. With that information, you can execute a command like the following in the same command line session (replacing [PID] with the actual numeric value of the PID):
taskkill /f /pid [PID]
After the above, the process no longer exists and the service can be restarted. On any system, you first need to find the service that is stuck before you can kill it, but that would be the subject of another posting. What I have not got around to testing is whether these commands work in PowerShell, since I used them in the legacy command line instead. Along with processes belonging to software applications (think Word, Excel, Firefox and so on), that may be something else to try should the occasion arise.