Cron is a Unix utility, created in the 1970s, that executes tasks at specified times or intervals. It runs as a daemon, a background process that runs continuously. Cron's configuration file is called a crontab, and it holds the schedules and commands for the jobs cron will execute. To simplify configuration, most cron installations also provide several directories where scripts can live (hourly, daily, weekly, and monthly). This capability is implemented by four cron jobs, one per interval, each of which runs all the scripts in the corresponding directory.
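On many Linux distributions, this directory-based dispatch is wired up in the system-wide `/etc/crontab`. As a rough sketch (exact times, paths, and the use of `run-parts` or `anacron` vary by distribution), it looks something like this:

```shell
# /etc/crontab — system-wide crontab (illustrative; layout varies by distro)
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user  command
17 *   * * *    root   cd / && run-parts --report /etc/cron.hourly
25 6   * * *    root   cd / && run-parts --report /etc/cron.daily
47 6   * * 7    root   cd / && run-parts --report /etc/cron.weekly
52 6   1 * *    root   cd / && run-parts --report /etc/cron.monthly
```

Each entry simply invokes `run-parts`, which executes every script in the named directory, so dropping a script into `/etc/cron.daily` is enough to have it run once a day without writing a schedule yourself.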
Cron was created to let systems run commands, scripts, and programs on a specified, repeating schedule, effectively automating tasks that would otherwise have to be performed manually. When cron was introduced, the reasons programmers and admins had to run tasks regularly were likely somewhat limited. Today, though, cron is used extensively by virtually all IT professionals who work with Linux or Unix-based systems, and its use cases are too numerous to list.
There are three primary challenges IT admins and developers face when using cron:
- Arcane format – the crontab scheduling syntax is a relic of the 1970s and has not really been updated. Setting the dates and times at which scripts or commands will run becomes a trial-and-error exercise, and the schedule often ends up wrong.
- Reliability – a cron job may have many dependencies that must be satisfied for it to run correctly. If any of them is wrong (an environment variable, directory permissions, and so on), the job can fail silently. You may not notice until the expected behavior is missing (log files fill up, data is not archived, software breaks for lack of maintenance processing), and your only recourse is to scan log files to figure out what happened.
- Visibility – every user on a system has their own crontab file, so every user can set up their own scheduled jobs. Many of these can conflict, duplicate one another, or simply waste resources. Further, when a job runs and fails, there is no notification or detail behind the failure; sysadmins have to write that code themselves to gain visibility into the results.
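The first two pain points can be softened with careful crontab hygiene. As a sketch (the script path and log location here are hypothetical), a user crontab entry can set its environment explicitly and capture output so failures at least leave a trace:

```shell
# Edit the current user's crontab with: crontab -e
# Field order: minute hour day-of-month month day-of-week command
# Cron runs jobs with a minimal environment, so set PATH explicitly.
PATH=/usr/local/bin:/usr/bin:/bin

# Run a (hypothetical) backup script every day at 02:30, appending
# stdout and stderr to a log file so there is something to inspect
# when the job fails silently.
30 2 * * * /opt/scripts/backup.sh >> /var/log/backup-cron.log 2>&1
```

Even so, this only localizes the problem: someone still has to remember to check that log, which is exactly the visibility gap described above.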
Despite these challenges, millions of IT admins and developers continue to leverage cron. There is, however, a way to keep cron's benefits while eliminating these challenges. JumpCloud, through its Directory-as-a-Service® platform, can execute commands, scripts, and programs on an ad hoc, scheduled, or triggered basis. JumpCloud's command execution capabilities give you all of the benefits of cron without the challenges.
Repetitive tasks or jobs can be scheduled easily from a web-based interface. Because the system is centralized, tasks can be allocated to one or more systems. Scheduling is done through a schedule editor, making it simple to set when a job will run. JumpCloud's command execution system reports whether each task executed and captures its output (stdout). Through the JumpCloud console it is easy to see every scheduled job across the infrastructure and whether each one succeeded or failed. Cron functionality is critical to IT admins, and having a centralized, reliable cron capability is a significant step up.
If you are leveraging cron but would like an industrial-strength, centralized version, give JumpCloud a try. You can sign up for a free account and use it on as many servers as you like for free. You only pay if you have more than 10 users.