
PostgreSQL job scheduler - pgBucket 2.0

We are glad to announce pgBucket 2.0-beta, which has evolved from version 1.0. We would also like to thank all of you who supported us and gave wonderful feedback about this tool; it helped us improve it and make it a bit better. We hope everyone will like the features we have embedded in this new version.

Latest updated options

pgBucket usage is as below

     -h --help            Display this message
     -v --version         Display version details
     -I --init            Initialize catalog tables
     -O --drop            Drop catalog tables
     -f --configfile      Jobs configuration file
     -D --startdaemon     Start pgBucket daemon
                               F   -> Foreground mode
                               B   -> Background mode
     -Q --quitdaemon      Quit pgBucket daemon
                               n   -> Normal
                               f   -> Force
     -o --reload          Reload the configuration settings
     -R --refresh         Reload buckets
     -S --status          Jobs status
                               A   -> All Success/Running/Failed
                               S   -> Success
                               E   -> Enabled
                               D   -> Disable
                               F   -> Failed
                               R   -> Running
     -n --now             Daemon instant actions
                               RN  -> Run a job
                               RF  -> Run a job forcefully
                               E   -> Enable a job
                               D   -> Disable a job
                               SN  -> Stop a job with SIGTERM
                               SF  -> Stop a job with SIGKILL
                               PQ  -> Print job queue instance
                               PH  -> Print job hash instance
                               PCP -> Print connection pool state
                               SKN -> Skip job's next run
     -s --serialid        Job result for the serial id
     -L --limit           Number of jobs (Default 10)
                               -1  -> Limit all
     -e --extended        Extended table print mode
     -i --insert          Insert jobs config entries
     -u --update          Update jobs config entries
     -x --delete          Delete given job
     -j --jobid           Jobid for the specific job maintenance

Features

In version 2.0, we have added features that give you more control over jobs and the daemon process. Now, let us see all the options pgBucket 2.0 offers.

1. Dedicated configuration file

In the previous version 1.0, we used to drive the pgBucket daemon with a few environment variables; these have been removed in this version and are maintained in a configuration file instead. However, we still need one environment variable, PGBUCKET_CONFIG_FILE, which tells pgBucket where the configuration file is located.
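For example (the path here is illustrative):

export PGBUCKET_CONFIG_FILE=/etc/pgbucket/pgbucket.conf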

This is what the configuration file looks like:

[CONFIG]
sock_dir=/tmp/
pid_dir=/tmp/
child_process_mode=posix_spawn
pgbucket_host_addr=127.0.0.1
pgbucket_dbname=postgres
pgbucket_username=postgres
pgbucket_port=5432
pgbucket_password=postgres
pgbucket_dbpool_connections=10
log_location=/tmp/pgBucket.log
debug=off
dispatch_limit=100
dispatch_delay=1000

[JOBS]
{
    ...
}

sock_dir

This parameter tells the daemon where to create the socket used for process communication. It cannot be reloaded online.

pid_dir

This parameter tells the daemon where to store the process pid file. It cannot be reloaded online.

child_process_mode

This parameter tells the daemon which approach to follow when forking a new OS job process. It accepts only the two values below and can be reloaded online.

posix_spawn

This is the recommended setting for OS-related jobs; the OS takes care of the resources required to clone the process.

fork_exec

This is the regular child-process forking mechanism, where the new child process image is loaded into the freshly forked process. Currently, this method cannot track an OS job's error messages.
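For reference, here is a minimal C++ sketch of the two strategies (illustrative only; pgBucket's actual implementation may differ):

#include <spawn.h>
#include <unistd.h>

extern char **environ;

// posix_spawn(): the OS creates and execs the child in a single call
void run_with_posix_spawn(char *const argv[]) {
    pid_t pid;
    posix_spawn(&pid, argv[0], nullptr, nullptr, argv, environ);
}

// fork_exec: clone the process, then load the new image into the child
void run_with_fork_exec(char *const argv[]) {
    if (fork() == 0)
        execv(argv[0], argv);
}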

pgbucket_host_addr

This parameter tells the daemon the target PostgreSQL server host address. It cannot be reloaded online.

pgbucket_dbname

This parameter tells the daemon the target PostgreSQL database name where the pgBucket catalogs are created. It cannot be reloaded online.

pgbucket_username

This parameter tells the daemon the target PostgreSQL server user name. It cannot be reloaded online.

pgbucket_port

This parameter tells the daemon the target PostgreSQL server port number. It cannot be reloaded online.

pgbucket_password

This parameter tells the daemon the target PostgreSQL server user's password. It cannot be reloaded online.

pgbucket_dbpool_connections

This parameter tells the daemon how many source database connections to use. It cannot be reloaded online. In version 1.0, the daemon used a single database connection, which all jobs shared to report their status; as the number of parallel jobs grew, each job spent more time synchronizing its report. With this dedicated connection pool, each job can get its own database connection to report its results.

log_location

This parameter tells the daemon the log file location. It cannot be reloaded online. If the daemon is running in background mode, it writes all log messages to this file.

debug

This parameter tells the daemon whether to log debug messages. It can be reloaded online.

dispatch_limit

This parameter tells the daemon how many jobs to dispatch at once. It can be reloaded online.

dispatch_delay

This parameter tells the daemon how many milliseconds to sleep once the dispatch limit is reached. It can be reloaded online. For example, with dispatch_limit=100 and dispatch_delay=1000, the daemon dispatches at most 100 jobs and then sleeps for one second before dispatching more.

2. Event Jobs

Previously, we only had scheduled jobs. Now we have introduced continuous job flows driven by job execution status: once a job completes, whether it passes or fails, we can configure the next action based on its status.

We introduced a job class for each entry in the configuration file. That is, when defining a job in the configuration file, we need to specify its class type with one of the options below.

JOB

This is the option to use for all scheduled jobs. Each JOB-class entry must have a specific schedule.

EVT

This is the option to use for all cascaded/event jobs. An EVT-class entry must not have a schedule.

Here is a simple demonstration using event jobs: it tries to build the pgBucket binaries every hour. We divided the build into multiple steps, and each step has its own pass/fail cascaded jobs. Below is the configuration for this demonstration:

Job id -> 1

Here, job id 1 is configured with the OS command true, which always succeeds and moves to its event job id 31 (JOBPASSEVNTS = 31). This job is configured to run every hour.

[CONFIG]
sock_dir=/tmp/
pid_dir=/tmp/
child_process_mode=posix_spawn
pgbucket_host_addr=127.0.0.1
pgbucket_dbname=postgres
pgbucket_username=postgres
pgbucket_port=5432
pgbucket_password=postgres
pgbucket_dbpool_connections=10
log_location=/tmp/pgBucket.log
debug=off
dispatch_limit=100
dispatch_delay=1000

[JOBS]
# Starting the building phase
#
{
JOBCLASS = JOB
JOBID = 1 
JOBNAME = Job initial phase
ENABLE = True
JOBTYPE = OS 
JOBRUNFREQ = Day:*  Month:*  Hour:*  Minute:0 Second:0
JOBPASSEVNTS = 31
CMD     = true
}

Job id -> 31

This is an event (EVT) job, which tries to insert a record into the database and store the job's result. The pgBucket daemon generally stores database results in CSV format and prepends the CSV-formatted column names to the results. If you do not want to store the column headers, set RECDBRESCOLNAMES = False; you will see the advantage of this parameter in the further steps. Below is the description of the table we created for this demonstration.

# Record the pgBucket build start phase
#
{
JOBCLASS = EVT
JOBID = 31
JOBNAME = Refreshing the pgBucket local source
ENABLE = True
JOBTYPE = DB 
JOBPASSEVNTS = 33
DBCONN = postgresql://postgres:postgres@127.0.0.1:5432/postgres 
CMD     = INSERT INTO build_status(lable, start_time) VALUES('Starting build', now()) RETURNING id;
RECDBRESCOLNAMES = False
}
postgres=# \d+ build_status
                                                         Table "public.build_status"
   Column   |            Type             |                         Modifiers                         | Storage  | Stats target | Description 
------------+-----------------------------+-----------------------------------------------------------+----------+--------------+-------------
 id         | integer                     | not null default nextval('build_status_id_seq'::regclass) | plain    |              | 
 lable      | text                        |                                                           | extended |              | 
 start_time | timestamp without time zone |                                                           | plain    |              | 
 end_time   | timestamp without time zone |                                                           | plain    |              | 
 emsg       | text                        |                                                           | extended |              | 
 result     | text                        |                                                           | extended |              | 
 status     | boolean                     |                                                           | plain    |              | 

Job id -> 32

Here, job id 32 is an EVT (event) job, which updates the build status in our build table. Notice that this job uses special command tags such as __pissuccess__. Let us discuss all the special tags pgBucket offers; they are parsed before job execution. Each command tag is prefixed and suffixed by two underscore symbols. To have these tags parsed, we need to explicitly set another job property, PARSECMDPARAMS = True.

# Record the pgBucket build status in database
#
{
JOBCLASS = EVT
JOBID = 32
JOBNAME = Storing job status
ENABLE = True
JOBTYPE = DB 
CMD = UPDATE build_status SET status=__pissuccess__, end_time=now(), emsg=$$__perror__$$, result=$$__presult__$$ WHERE id = 31@__presult__
PARSECMDPARAMS = True
DBCONN = postgresql://postgres:postgres@127.0.0.1:5432/postgres
}
Command Tag     Usage
__pname__       Get the caller's job name
__pjid__        Get the caller's job id
__perror__      Get the caller's job error
__presult__     Get the caller's job result
__pruncnt__     Get the caller's job run count
__pissuccess__  Get the caller's job run status (False/True)
__pschstatus__  Get the caller's job schedule status

The job scheduler reports one of the following schedule statuses.

Schedule Status
INITIALIZED
SCHEDULED
RUNNING
RUNNING_EVENTJOB
DISPATCHED
COMPLETED
SKIPPED
KILLED
INVALID

Whenever PARSECMDPARAMS is enabled, pgBucket looks for command tags in the job's command and replaces them with the caller job's properties. In the event job command above, we also used a command tag of the form 31@__presult__, which is prefixed with another job id, 31. That means that while executing the UPDATE statement, pgBucket fetches the result of job 31 and replaces the tag with job 31's result. This is why we skipped storing the column names during the execution of job 31, by setting RECDBRESCOLNAMES = False in its configuration.
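As an illustration of the substitution (the values here are hypothetical), suppose job 31's INSERT had returned the id 42 and the calling job succeeded with an empty error and the result "done"; job 32's command would then execute roughly as:

UPDATE build_status SET status=True, end_time=now(), emsg=$$$$, result=$$done$$ WHERE id = 42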

Job id -> 33

Here, job id 33 is configured to create a brand new folder in /tmp/ and clear the previous entries using the rm command. If this command succeeds, the flow goes to its event job 34; otherwise it goes to 32, where the command tags are replaced with job 33's details. For demonstration purposes we added JOBFAILIFRESULT = NONE: the job result is compared with this parameter value, and the job execution status is changed to failed if the output equals NONE.

# Job, which refresh the local pgBucket repo
#
{
JOBCLASS = EVT
JOBID = 33
JOBNAME = Refreshing the pgBucket local source
ENABLE = True
JOBTYPE = OS
JOBFAILIFRESULT = NONE
JOBPASSEVNTS = 34
JOBFAILEVNTS = 32
CMD     = mkdir -p /tmp/pgBucket_Build; rm -rf /tmp/pgBucket_Build/*
}

Job id -> 34

Here, job id 34 is configured to pull the pgBucket source from Bitbucket and then check out a branch called eventjobs. If it succeeds, the flow goes to event job id 35; otherwise it goes to 32.

# Job, which pull the repo from bitbucket
#
{
JOBCLASS = EVT
JOBID = 34
JOBNAME = Pull the source from bitbucket
ENABLE = True
JOBTYPE = OS
JOBFAILIFRESULT = NONE
JOBPASSEVNTS = 35
JOBFAILEVNTS = 32
CMD     = cd /tmp/pgBucket_Build; git clone https://bitbucket.org/dineshopenscg/pgbucket.git; cd /tmp/pgBucket_Build/pgbucket; git checkout eventjobs
}

Job id -> 35

Here, job id 35 is configured to run the make command after exporting the pg_config path. If it succeeds, the flow goes to event job 36; otherwise it goes to job 32. In this job's configuration, we used a new feature called auto job disable via the DISABLEIFFAILCNT = 3 option. That is, if this job fails 3 times in a row, it is disabled automatically, and the flow will end here the next time job id 1 starts executing as per its schedule.

# Job, which builds the source
#
{
JOBCLASS = EVT
JOBID = 35
JOBNAME = Build the source
ENABLE = True
JOBTYPE = OS
JOBPASSEVNTS = 36
JOBFAILEVNTS = 32
CMD     = cd /tmp/pgBucket_Build/pgbucket/pgBucket; export PATH=$PATH:/Users/dinesh/PostgreSQL/pg96/bin/; make
DISABLEIFFAILCNT = 3
}

Job id -> 36

Here, job id 36 is configured to test whether the pgBucket binary works. If this job succeeds, it goes to job id 32; if it fails, it goes to 32 to record job 36's result and then starts a fresh build by going to 31.

# Job, which tests the pgBucket binary
#
{
JOBCLASS = EVT
JOBID = 36
JOBNAME = Testing pgBucket binary
ENABLE = True
JOBTYPE = OS
JOBFAILEVNTS = 32,31
JOBPASSEVNTS = 32
CMD     = cd /tmp/pgBucket_Build/pgbucket/pgBucket; ./pgBucket
}

A simple pictorial representation of the above process is shown below; we hope it gives you a better idea of what we are trying to achieve here. In the diagram, the RED line indicates the job failure path and the GREEN line indicates the job success path.

[Diagram: eventJobs1.png]

CAUTION:

Please be aware of the stack size configured at the OS level. The thread stack may overflow during deep event job recursion, which may crash the daemon.
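On Linux, for example, you can check and raise the per-thread stack limit from the shell before starting the daemon (the value below is illustrative):

ulimit -s          # show the current stack size limit (in KB)
ulimit -s 16384    # raise it to 16 MB for this shell session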

DEMO OUTPUT

A sample run for the above demonstration is as below.

./pgBucket -DF
  * Starting pgBucket daemon...
2017/4/30 10:31:0 IST NOTICE  Starting pgBucket daemon...
2017/4/30 10:31:1 IST NOTICE  Job id ->1 is processing...
2017/4/30 10:31:1 IST NOTICE  Event Job id ->31 is processing...
2017/4/30 10:31:1 IST NOTICE  Event Job id ->31 is completed with PID(5327) with duration 0 seconds.
2017/4/30 10:31:1 IST NOTICE  Event Job id ->33 is processing...
2017/4/30 10:31:1 IST NOTICE  Event Job id ->33 is completed with PID(5328) with duration 0 seconds.
2017/4/30 10:31:1 IST NOTICE  Event Job id ->34 is processing...
2017/4/30 10:31:18 IST WARNING Got error while executing the Job id ->34. Error: Cloning into 'pgbucket'...
Switched to a new branch 'eventjobs'

2017/4/30 10:31:18 IST NOTICE  Event Job id ->34 is completed with PID(5331) with duration 17 seconds.
2017/4/30 10:31:18 IST NOTICE  Event Job id ->35 is processing...
2017/4/30 10:31:31 IST NOTICE  Event Job id ->35 is completed with PID(5339) with duration 13 seconds.
2017/4/30 10:31:31 IST NOTICE  Event Job id ->36 is processing...
2017/4/30 10:31:31 IST NOTICE  Event Job id ->36 is completed with PID(5469) with duration 0 seconds.
2017/4/30 10:31:31 IST NOTICE  Event Job id ->32 is processing...
2017/4/30 10:31:31 IST NOTICE  Event Job id ->32 is completed with PID(5471) with duration 0 seconds.
2017/4/30 10:31:31 IST NOTICE  Job id ->1 is completed with PID(5326) with duration 0 seconds. Execution path is: START->1->31->33->34->35->36->32->END

Let us get job 1's status by using the job status command as shown below.

./pgBucket -SA -e 1 -j 1
                    All  Jobs
--------------------------------------------------
Serial ID  | 23                                   
ID         | 1                                    
Name       | Job initial phase                    
Command    | true                                 
Start Time | 2017-04-30 10:31:32                  
End Time   | 2017-04-30 10:32:03                  
Duration   | 00:00:31                             
PID        | 5472                                 
Next Run   | No more schedules today              
Exec Path  | START->1->31->33->34->35->36->32->END
JobStatus  | Success                              
Error      |                                      

3. Extended Table Format

pgBucket is inspired by psql's extended table format, and we got a chance to extend it even further: we can specify how many record columns to display per row rather than just one. For example, consider the output below.

./pgBucket -SE -e 2 -L 3
                                                                                                                  Enabled  Jobs
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ID             | 1                                                                                                                                    | 31                                                                                       
Name           | Job initial phase                                                                                                                    | Refreshing the pgBucket local source                                                     
Type           | OS                                                                                                                                   | DB                                                                                       
Command        | true                                                                                                                                 | INSERT INTO build_status(lable, start_time) VALUES('Starting build', now()) RETURNING id;
Run Frequency  | Day: * Mon: * H: * M: * S: *                                                                                                         | NONE                                                                                     
Class          | JOB                                                                                                                                  | EVT                                                                                      
FailEventIds   | {}                                                                                                                                   | {}                                                                                       
PassEventIds   | {31}                                                                                                                                 | {33}                                                                                     
DisableFailCnt | 0                                                                                                                                    | 3                                                                                        
FailIfResult   |                                                                                                                                      |                                                                                          
PraseCmd       | f                                                                                                                                    | f                                                                                        
RecordColNames | f                                                                                                                                    | f                                                                                        
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ID             | 32                                                                                                                                  
Name           | Storing job status                                                                                                                  
Type           | DB                                                                                                                                  
Command        | UPDATE build_status SET status=__pissuccess__, end_time=now(), emsg=$$__perror__$$, result=$$__presult__$$ WHERE id = 31@__presult__
Run Frequency  | NONE                                                                                                                                
Class          | EVT                                                                                                                                 
FailEventIds   | {}                                                                                                                                  
PassEventIds   | {}                                                                                                                                  
DisableFailCnt | 0                                                                                                                                   
FailIfResult   |                                                                                                                                     
PraseCmd       | t                                                                                                                                   
RecordColNames | f                                                                                                                                   
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(rows 3)

In the example above, we retrieved 3 records (-L 3) and printed 2 record columns (-e 2) per row; the -e option controls how many columns are printed side by side.

Another example of the same extended output is below.

./pgBucket -nPEH -e 3
                                                 Event hash instance
----------------------------------------------------------------------------------------------------------------------
ID            | 35                                   | 36                       | 34                                  
Name          | Build the source                     | Testing pgBucket binary  | Pull the source from bitbucket      
PID           | 1808                                 | 0                        | 1823                                
Enabled       | Enable                               | Enable                   | Enable                              
SchStatus     | COMPLETED                            | INITIALIZED              | RUNNING                             
PrevRunStatus | Fail                                 | Unknown                  | Success                             
StartTime     | Fri Apr 28 20:57:42 2017             |                          | Fri Apr 28 20:57:43 2017            
EndTime       | Fri Apr 28 20:57:42 2017             |                          |                                     
RunCount      | 1                                    | 0                        | 2                                   
----------------------------------------------------------------------------------------------------------------------
ID            | 33                                   | 32                       | 31                                  
Name          | Refreshing the pgBucket local source | Storing job status       | Refreshing the pgBucket local source
PID           | 1816                                 | 1811                     | 1814                                
Enabled       | Enable                               | Enable                   | Enable                              
SchStatus     | RUNNING_EVENTJOB                     | COMPLETED                | RUNNING_EVENTJOB                    
PrevRunStatus | Success                              | Success                  | Success                             
StartTime     | Fri Apr 28 20:57:43 2017             | Fri Apr 28 20:57:42 2017 | Fri Apr 28 20:57:43 2017            
EndTime       | Fri Apr 28 20:57:43 2017             | Fri Apr 28 20:57:42 2017 | Fri Apr 28 20:57:43 2017            
RunCount      | 2                                    | 1                        | 2                                   
----------------------------------------------------------------------------------------------------------------------
(rows 6)

4. Online reload configuration settings

Since this version introduces a dedicated configuration file, we can change the daemon settings online and reload them using the -o option.

./pgBucket -o
  * Initiating sighup signal...

5. Auto job disable

In this version, we have introduced a new job-level parameter, DISABLEIFFAILCNT: a job is disabled automatically once its count of consecutive failures reaches this value.

6. Custom job failure setting

In this version, we have introduced a new job-level parameter, JOBFAILIFRESULT: a job treats itself as failed when its execution result matches this setting's value.
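A minimal job entry combining both of these settings might look like the following sketch (the job itself is hypothetical):

{
JOBCLASS = JOB
JOBID = 7
JOBNAME = NightlyHealthCheck
ENABLE = True
JOBTYPE = OS
JOBRUNFREQ = Day:*  Month:*  Hour:0  Minute:0 Second:0
# Mark the run as failed if its output equals NONE
JOBFAILIFRESULT = NONE
# Automatically disable the job after 3 consecutive failures
DISABLEIFFAILCNT = 3
CMD     = sh /home/dinesh/dba/healthCheck.sh
}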

Besides the features mentioned above, we have improved the pgBucket daemon's stability and tightened the coding standards a bit.

Previous version 1.0

pgBucket is a simple concurrent job scheduler for a database server: using this tool we can schedule either OS jobs or DB jobs with a cron-style syntax (for example, JOBRUNFREQ = Day:* Month:* Hour:0 Minute:0 Second:0 runs a job every day at midnight). The tool is implemented in C++11 (gcc 4.9.3).

The pgBucket internal implementation is shown below.

[Diagram: pgBucket_New1.png]

All jobs scheduled for today are loaded into a job queue after a job hash is instantiated. The job queue is divided into 4 buckets, and while loading jobs from the hash into the queue, each job goes into its respective bucket based on its next dispatch time.

The pgBucket dispatcher always points at the 1Hour bucket and, at 1-second intervals, looks for jobs to dispatch from it. Once a job is dispatched, it is the job's responsibility to complete the given task and report its status, along with the result, to the data source.

After one hour, that is, 3600 seconds, the daemon merges the 1Hour bucket with the 2Hour bucket to pick up the next jobs to dispatch. The 2Hour, 3Hour and GT3Hour (greater than 3 hours) buckets are updated accordingly.

At the first second of every day, that is, at 0 hours 0 minutes 0 seconds, the instance is refreshed automatically.
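The bucketing can be pictured with a small C++ sketch (illustrative only, assuming buckets are keyed by seconds until the next dispatch; this is not pgBucket's actual code):

enum class Bucket { OneHour, TwoHour, ThreeHour, GT3Hour };

// Place a job into a bucket based on how far away its next dispatch is.
Bucket pickBucket(long secsToNextRun) {
    if (secsToNextRun <= 3600)  return Bucket::OneHour;   // due within the hour
    if (secsToNextRun <= 7200)  return Bucket::TwoHour;
    if (secsToNextRun <= 10800) return Bucket::ThreeHour;
    return Bucket::GT3Hour;                               // more than 3 hours away
}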

Now, let us see how to build this tool and how to use it.

How to build

Add the pg_config directory to your PATH environment variable.
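For example, if PostgreSQL is installed under /opt/PostgreSQL/9.5 as below:

export PATH=/opt/PostgreSQL/9.5/bin:$PATH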

pgbucket@ubuntu:~/pgBucket$ which pg_config
/opt/PostgreSQL/9.5/bin/pg_config

Go to pgBucket source and run make

pgbucket@ubuntu:~/pgBucket$ make
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/utils/utilities.cpp
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/logger.cpp
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/jobConfig.cpp
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/dbpref.cpp
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/schDbOps.cpp
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/pgBucket.cpp
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/bucketDaemon.cpp
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/bucketSocket.cpp
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/jobQueue.cpp
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/jobs.cpp
g++ -I  /opt/PostgreSQL/9.5/include -I ./include -std=c++11 -O0 -fexceptions -c -fmessage-length=0 source/jobsRunner.cpp
g++ utilities.o logger.o jobConfig.o dbpref.o schDbOps.o pgBucket.o bucketDaemon.o bucketSocket.o jobQueue.o jobs.o jobsRunner.o -L  /opt/PostgreSQL/9.5/lib -o pgBucket -lpthread -lpq
pgBucket installation is completed.
rm -f utilities.o logger.o jobConfig.o dbpref.o schDbOps.o pgBucket.o bucketDaemon.o bucketSocket.o jobQueue.o jobs.o jobsRunner.o

Let us check the pgBucket binary.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -v
pgBucket version 1.0

CLI Options

To run pgBucket we need the data source details. We can either set the libpq environment variables or use pgBucket's CLI options to define the data source. I would encourage you to set them as environment variables, as below.

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/PostgreSQL/9.5/lib
export PGHOSTADDR=127.0.0.1
export PGUSER=postgres
export PGDATABASE=postgres
export PGPORT=5432
export PGPASSWORD=postgres
export PGBUCKET_SOCK_DIR=/tmp/
export PGBUCKET_PID_DIR=/tmp/

Now let us see the usage of pgBucket options.

pgBucket usage is as below

     -h --help            Display this message
     -I --init            Initialize catalog tables
     -o --drop            Drop catalog tables
     -f --configfile      Jobs configuration file
     -D --startdaemon     Start pgBucket daemon
     -C --sockdir         Sock directory
     -Q --quitdaemon      Quit pgBucket daemon
                               n   -> Normal
                               f   -> Force
     -R --refresh         Reload buckets
     -S --status          Jobs status
                               A   -> All Success/Running/Failed
                               S   -> Success
                               E   -> Enabled
                               D   -> Disable
                               F   -> Failed
                               R   -> Running
     -l --logfile         Logfile location
     -n --now             Daemon instant actions
                               RN  -> Run a job
                               RF  -> Run a job forcefully
                               E   -> Enable a job
                               D   -> Disable a job
                               SN  -> Stop a job with SIGTERM
                               SF  -> Stop a job with SIGKILL
                               PQ  -> Print job queue instance
                               PH  -> Print job hash instance
                               SKN -> Skip job's next run
     -s --serialid        Job result for the serial id
     -L --limit           Number of jobs (Default 10)
                               -1  -> Limit all
     -i --insert          Insert jobs config entries
     -u --update          Update jobs config entries
     -x --delete          Delete given job
     -j --jobid           Jobid
     -b --debug           Enable debug messages
     -H --hostip          Bucket db hostip
     -U --user            Bucket login username
     -d --database        Bucket dbname
     -p --port            Bucket db port
     -w --password        Bucket login password
     -C --sockdir         Daemon socket directory
     -P --piddir          Daemon pid directory

Options

I
    This option initializes the pgBucket catalog tables in the given data source.

o
    This option drops the catalog tables.

f
    This option specifies the jobs config file.

D
    This option runs the pgBucket as daemon process.

C
    This option provides the SOCK DIR path.

Q
    This option will quit the daemon.

        n -> It will use SIGTERM to terminate the daemon process
        f -> It will use SIGKILL to terminate the daemon process

R
    This option clears the job hash and job queue, and re-instantiates the daemon process.

S
    This option needs further options which will provide more information about configured jobs.

        A -> Display all successful or running or failed jobs
        S -> Display only jobs which are executed successfully
        E -> Display only enabled jobs
        D -> Display only disabled jobs
        F -> Display only jobs which are failed in execution
        R -> Display only running jobs

l
    This option specifies logfile location.

n
    This option needs further options, which will work on daemon instance.

       RN  -> Run a job normally.
            If a job is disabled or its next run has been skipped, this option won't force the given job to run now.

       RF  -> Run a job forcefully
            This option will run the given job forcefully.

       E   -> Enable a job
            This option will enable the job only for today, since the instance will be re-instantiated for the next day.

       D   -> Disable a job
            This option will disable the job only for today, since the instance will be re-instantiated for the next day.

       SN  -> Stop a job with SIGTERM

       SF  -> Stop a job with SIGKILL
            This option won't work on DB-level jobs, because DB connections are terminated using pg_terminate_backend.

       PQ  -> Print job queue instance
            This option prints the current job queue.

       PH  -> Print job hash instance
            This option prints the current job hash.

       SKN -> Skip job's next run
            This option will skip the job's next run.

s
    This option prints the complete job information for the given serial id.

L
    This option limits the number of results while printing tables; it works only with the "-S*" options.
    The default limit value is 10.

i
    Using this option we can insert jobs into the data source/instance from the config file.

u
    Using this option we can update a job in the data source/instance from a config file.

x
    Using this option we can delete a job from the data source; it will be cleared from the instance once the instance is re-instantiated.

j
    This option takes the jobid.

b
    This option will run the daemon in DEBUG mode.

H
    This option will provide data source hostip.

U
    This option will provide data source username.

d
    This option will provide data source database.

p
    This option will provide data source port.

w
    This option will provide data source password.

C
    This option defines the local socket directory.

P
    This option defines the local pid directory.

Catalog Initialization

pgBucket requires a data source to keep the configured jobs' information as well as the jobs' results.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -I
    * pgBucket catalog is created

Preparing Jobs

Create a config file with the required jobs, as below.

pgbucket@ubuntu:~/pgBucket$ cat jobs.config
{
# Jobid for this job

JOBID = 1
# Jobname for this job

JOBNAME = DailyDump        

# Job status
ENABLE = True

# Job schedule cron style

JOBRUNFREQ = Day:*  Month:*  Hour:0  Minute:0 Second:0

# Job type: OS or DB
JOBTYPE = OS

# Job command to run
CMD     = sh /home/dinesh/dba/dailyDump.sh

# For DB level jobs, specify the database connection string as like below

DBCONN  = postgresql://postgres:postgres@127.0.0.1:5432/template1
}

Parse/Push config jobs to data source

For demonstration, let us prepare some sample jobs and then push them to the data source.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -i -f jobs.config
  * Jobs parsing is started..
  * Pushing jobs to database..
  * 4 jobs processed.

Now let us see whether the jobs are registered.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -SE
                                         Enabled jobs
Jobid | Jobname                 | Jobtype | Command                            | Run frequency | 
------------------------------------------------------------------------------------------------
1     | DailyDump               | OS      | sh /home/dinesh/dba/dailyDump.sh   | * * 0 0 0     | 
2     | DailyBIJobs             | OS      | sh /home/dinesh/dba/dailyBIjobs.sh | * * 0 0 0     | 
3     | ServerAvailabilityCheck | OS      | sh /home/dinesh/dba/serverAvail.sh | * * * * */30  | 
4     | DBAvailabilityCheck     | DB      | SELECT 1                           | * * * * */10  | 
(4 rows)

Start daemon process

Let us start the daemon process with a logfile.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -D -l /tmp/pgBucket.log
    * Starting pgBucket daemon...

Let us tail the log file and see what it is writing.

pgbucket@ubuntu:~/pgBucket$ tail -10f /tmp/pgBucket.log
2016/7/3 17:40:3 IST NOTICE  Jobid -> 3 is completed with pid(5675) with duration 3 secs.
2016/7/3 17:40:10 IST NOTICE  Jobid -> 4 is processing...
2016/7/3 17:40:10 IST NOTICE  Jobid -> 4 is completed with pid(5679) with duration 0 secs.
2016/7/3 17:40:20 IST NOTICE  Jobid -> 4 is processing...
2016/7/3 17:40:20 IST NOTICE  Jobid -> 4 is completed with pid(5684) with duration 0 secs.

Track the job queue/hash instance

pgBucket provides a way to interact with the running daemon process via a socket.

Now let us look at the job queue instance and check which jobs are ready to dispatch.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -nPQ
  * Trying to connect local socket...
  * Connected to local socket..
pgBucket jobQ instance snapshot
Job Bucket | Job id | Next run(sec) |
-------------------------------------
1Hour      | 4      | 9             |
           | 3      | 9             |
(2 rows)

Also, let us look at the job hash instance and check the jobs' live metrics.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -nPH
  * Trying to connect local socket...
  * Connected to local socket..
                                                                           pgBucket jobHash instance snapshot
Jobid | Jobname                 | Pid  | Enabled | Schedule Status | Prev Run Status | Start Time               | End Time                 | Next Run                 | Run Count | 
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
3     | ServerAvailabilityCheck | 5866 | Enable  | SCHEDULED       | Success         | Sun Jul  3 17:48:00 2016 | Sun Jul  3 17:48:03 2016 | Sun Jul  3 17:48:30 2016 | 18        | 
1     | DailyDump               | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          |                          | 0         | 
2     | DailyBIJobs             | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          |                          | 0         | 
4     | DBAvailabilityCheck     | 5865 | Enable  | SCHEDULED       | Success         | Sun Jul  3 17:48:00 2016 | Sun Jul  3 17:48:00 2016 | Sun Jul  3 17:48:10 2016 | 53        | 
(4 rows)

From the above table, we see that jobs 1 and 2 are INITIALIZED but not SCHEDULED: these jobs were configured to run today, but their scheduled time has already passed.

Track the status from data source

We can also track job status from the data source, which gives us more flexible options. Now, let us get all the jobs' execution statuses as below.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -SA
                                                                                 All jobs
Serial id | Job id | Job name                | Command                            | Start time          | End time            | Duration | Pid  | Nextrun             | JobStatus | Error | 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
393       | 4      | DBAvailabilityCheck     | SELECT 1                           | 2016-07-03 18:28:20 | 2016-07-03 18:28:20 | 00:00:00 | 7039 | 2016-07-03 18:28:30 | Success   |       | 
392       | 4      | DBAvailabilityCheck     | SELECT 1                           | 2016-07-03 18:28:10 | 2016-07-03 18:28:10 | 00:00:00 | 7036 | 2016-07-03 18:28:20 | Success   |       | 
391       | 3      | ServerAvailabilityCheck | sh /home/dinesh/dba/serverAvail.sh | 2016-07-03 18:28:00 | 2016-07-03 18:28:03 | 00:00:03 | 7032 | 2016-07-03 18:28:30 | Success   |       | 
...
...
...
387       | 3      | ServerAvailabilityCheck | sh /home/dinesh/dba/serverAvail.sh | 2016-07-03 18:27:30 | 2016-07-03 18:27:33 | 00:00:03 | 7021 | 2016-07-03 18:28:00 | Success   |       | 
386       | 4      | DBAvailabilityCheck     | SELECT 1                           | 2016-07-03 18:27:30 | 2016-07-03 18:27:30 | 00:00:00 | 7020 | 2016-07-03 18:27:40 | Success   |       | 
385       | 4      | DBAvailabilityCheck     | SELECT 1                           | 2016-07-03 18:27:20 | 2016-07-03 18:27:20 | 00:00:00 | 7017 | 2016-07-03 18:27:30 | Success   |       | 
384       | 4      | DBAvailabilityCheck     | SELECT 1                           | 2016-07-03 18:27:10 | 2016-07-03 18:27:10 | 00:00:00 | 7014 | 2016-07-03 18:27:20 | Success   |       | 
(10 rows)

It seems we got only the latest 10 rows from the data source, which is the default row limit. If you want more of the jobs' results, use -L along with the -S options; use -1 if you don't want to limit the number of rows.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -SA -L -1
                                                                             All jobs
Serial id | Job id | Job name                | Command                            | Start time          | End time            | Duration | Pid  | Nextrun             | JobStatus | Error | 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
407       | 3      | ServerAvailabilityCheck | sh /home/dinesh/dba/serverAvail.sh | 2016-07-03 18:30:00 | 2016-07-03 18:30:03 | 00:00:03 | 7080 | 2016-07-03 18:30:30 | Success   |       | 
406       | 4      | DBAvailabilityCheck     | SELECT 1                           | 2016-07-03 18:30:00 | 2016-07-03 18:30:00 | 00:00:00 | 7079 | 2016-07-03 18:30:10 | Success   |       | 
405       | 4      | DBAvailabilityCheck     | SELECT 1                           | 2016-07-03 18:29:50 | 2016-07-03 18:29:50 | 00:00:00 | 7076 | 2016-07-03 18:30:00 | Success   |       | 
...
...
...
3         | 3      | ServerAvailabilityCheck | sh /home/dinesh/dba/serverAvail.sh | 2016-07-03 17:39:30 | 2016-07-03 17:39:33 | 00:00:03 | 5659 | 2016-07-03 17:40:00 | Success   |       | 
2         | 4      | DBAvailabilityCheck     | SELECT 1                           | 2016-07-03 17:39:30 | 2016-07-03 17:39:30 | 00:00:00 | 5658 | 2016-07-03 17:39:40 | Success   |       | 
1         | 4      | DBAvailabilityCheck     | SELECT 1                           | 2016-07-03 17:39:20 | 2016-07-03 17:39:20 | 00:00:00 | 5655 | 2016-07-03 17:39:30 | Success   |       | 
(407 rows)

Run a job now

As there is a way to communicate with the daemon, we can tell the daemon to run a given job manually.

For example, let us run jobid -> 1, which is scheduled to run every day at midnight.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -nRN -j 1
  * Trying to connect local socket...
  * Connected to local socket..
  * Jobid -> 1 is dispatched.

The message above says that jobid -> 1 was dispatched. Let us validate its status from the hash and the data source.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -nPH
  * Trying to connect local socket...
  * Connected to local socket..
                                                                       pgBucket jobHash instance snapshot
Jobid | Jobname                 | Pid  | Enabled | Schedule Status | Prev Run Status | Start Time               | End Time                 | Next Run                 | Run Count | 
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
3     | ServerAvailabilityCheck | 7232 | Enable  | SCHEDULED       | Success         | Sun Jul  3 18:36:00 2016 | Sun Jul  3 18:36:03 2016 | Sun Jul  3 18:36:30 2016 | 114       | 
1     | DailyDump               | 7192 | Enable  | COMPLETED       | Success         | Sun Jul  3 18:34:31 2016 | Sun Jul  3 18:34:56 2016 |                          | 1         | 
2     | DailyBIJobs             | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          |                          | 0         | 
4     | DBAvailabilityCheck     | 7231 | Enable  | SCHEDULED       | Success         | Sun Jul  3 18:36:00 2016 | Sun Jul  3 18:36:00 2016 | Sun Jul  3 18:36:10 2016 | 341       | 
(4 rows)

From the above table, it seems our job executed successfully.

Let us validate the same from the data source as well.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -SA -j 1
                                                                   All jobs
Serial id | Job id | Job name  | Command                          | Start time          | End time            | Duration | Pid  | Nextrun                 | JobStatus | Error | 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
444       | 1      | DailyDump | sh /home/dinesh/dba/dailyDump.sh | 2016-07-03 18:34:31 | 2016-07-03 18:34:56 | 00:00:24 | 7192 | No more schedules today | Success   |       | 
(1 rows)

Now let us look at the script and validate whether the dump file was generated.

    pgbucket@ubuntu:~/pgBucket$ cat /home/dinesh/dba/dailyDump.sh
    pg_dump -Fc -d postgres > /tmp/pgDump.dmp

    pgbucket@ubuntu:~/pgBucket$ ls -lh /tmp/pgDump.dmp 
    -rw-rw-rw- 1 dinesh dinesh 36M Jul  3 18:34 /tmp/pgDump.dmp

Insert/update a job

To configure a new job or update an existing job, we use the jobs configuration file.

For example, let us insert a new job which tracks the current database sessions from pg_stat_activity. Append the new job entry to the existing config file, like below.

{
JOBID = 5
JOBNAME = GetAllActiveSessions
ENABLE = True
JOBRUNFREQ = Day:*  Month:*  Hour:*  Minute:* Second:0
JOBTYPE = DB
CMD     = SELECT * FROM pg_stat_activity WHERE active=true
DBCONN  = postgresql://postgres:postgres@127.0.0.1:5432/template1
}

Now, let us insert this job into the data source and the instance as well.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -f jobs.config -i
  * Jobs parsing is started..
  * Jobid -> 1 already exists. Skipping it..
  * Jobid -> 2 already exists. Skipping it..
  * Jobid -> 3 already exists. Skipping it..
  * Jobid -> 4 already exists. Skipping it..
  * Pushing jobs to database..
  * 1 job processed.

It seems it skipped the existing jobs and processed only the new job.

Now, let us validate it from the data source and the instance.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -nPH
  * Trying to connect local socket...
  * Connected to local socket..
                                                                           pgBucket jobHash instance snapshot
Jobid | Jobname                 | Pid  | Enabled | Schedule Status | Prev Run Status | Start Time               | End Time                 | Next Run                 | Run Count | 
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
5     | GetAllActiveSessions    | 8060 | Enable  | SCHEDULED       | Fail            | Sun Jul  3 19:09:00 2016 | Sun Jul  3 19:09:00 2016 | Sun Jul  3 19:10:00 2016 | 7         | 
3     | ServerAvailabilityCheck | 8057 | Enable  | SCHEDULED       | Success         | Sun Jul  3 19:09:00 2016 | Sun Jul  3 19:09:03 2016 | Sun Jul  3 19:09:30 2016 | 180       | 
1     | DailyDump               | 7192 | Enable  | COMPLETED       | Success         | Sun Jul  3 18:34:31 2016 | Sun Jul  3 18:34:56 2016 |                          | 1         | 
2     | DailyBIJobs             | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          |                          | 0         | 
4     | DBAvailabilityCheck     | 8073 | Enable  | SCHEDULED       | Success         | Sun Jul  3 19:09:10 2016 | Sun Jul  3 19:09:10 2016 | Sun Jul  3 19:09:20 2016 | 540       | 
(5 rows)

From the above results, we can see that the job was dispatched 7 times and its "Prev Run Status" is Fail.

Now let us validate the same from the data source.

pgbucket@ubuntu:~/pgBucket$  ./pgBucket -SA -j 5 -L 1
                                                                                                                                                                    All jobs
Serial id | Job id | Job name             | Command                                          | Start time          | End time            | Duration | Pid  | Nextrun             | JobStatus | Error                                                                                                                                                                   | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
754       | 5      | GetAllActiveSessions | SELECT * FROM pg_stat_activity WHERE active=true | 2016-07-03 19:12:00 | 2016-07-03 19:12:00 | 00:00:00 | 8152 | 2016-07-03 19:13:00 | Failed    | Query execution failed: ERROR:  column "active" does not exist
LINE 1: SELECT * FROM pg_stat_activity WHERE active=true
                                             ^
 | 
(1 rows)

Ah, it seems we have an error in the given SQL statement. Let us update the job by fixing the SQL statement in the jobs.config file.

{
JOBID = 5
JOBNAME = GetAllActiveSessions
ENABLE = True
JOBRUNFREQ = Day:*  Month:*  Hour:*  Minute:* Second:0
JOBTYPE = DB
CMD     = SELECT * FROM pg_stat_activity WHERE state = 'active' 
DBCONN  = postgresql://postgres:postgres@127.0.0.1:5432/template1
}

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -u -f jobs.config -j 5
  * Jobs parsing is started..
  * Given job -> 5 is not matching with config file job -> 1. Hence, skipping it ..
  * Given job -> 5 is not matching with config file job -> 2. Hence, skipping it ..
  * Given job -> 5 is not matching with config file job -> 3. Hence, skipping it ..
  * Given job -> 5 is not matching with config file job -> 4. Hence, skipping it ..
  * Found the given job -> 5 to update
  * Pushing jobs to database..
  * 1 job processed.

Now let us validate both the data source and the instance.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -nPH
  * Trying to connect local socket...
  * Connected to local socket..
                                                                           pgBucket jobHash instance snapshot
Jobid | Jobname                 | Pid  | Enabled | Schedule Status | Prev Run Status | Start Time               | End Time                 | Next Run                 | Run Count | 
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
5     | GetAllActiveSessions    | 8244 | Enable  | SCHEDULED       | Success         | Sun Jul  3 19:15:00 2016 | Sun Jul  3 19:15:00 2016 | Sun Jul  3 19:16:00 2016 | 13        | 
3     | ServerAvailabilityCheck | 8242 | Enable  | RUNNING         | Success         | Sun Jul  3 19:15:00 2016 |                          |                          | 191       | 
1     | DailyDump               | 7192 | Enable  | COMPLETED       | Success         | Sun Jul  3 18:34:31 2016 | Sun Jul  3 18:34:56 2016 |                          | 1         | 
2     | DailyBIJobs             | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          |                          | 0         | 
4     | DBAvailabilityCheck     | 8241 | Enable  | SCHEDULED       | Success         | Sun Jul  3 19:15:00 2016 | Sun Jul  3 19:15:00 2016 | Sun Jul  3 19:15:10 2016 | 575       | 
(5 rows)
pgbucket@ubuntu:~/pgBucket$ ./pgBucket -SA -j 5 -L 1
                                                                                     All jobs
Serial id | Job id | Job name             | Command                                               | Start time          | End time            | Duration | Pid  | Nextrun             | JobStatus | Error | 
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
781       | 5      | GetAllActiveSessions | SELECT * FROM pg_stat_activity WHERE state = 'active' | 2016-07-03 19:15:00 | 2016-07-03 19:15:00 | 00:00:00 | 8244 | 2016-07-03 19:16:00 | Success   |       | 
(1 rows)

It seems we fixed the jobid -> 5 issue.

Get a job result

pgBucket stores an OS job's output as-is in the data source, whereas DB job outputs are stored in CSV format.

For example, if we want to see jobid -> 5's output, we have to use its "serial id". The "serial id" is nothing but the job's execution sequence number; we can get it while fetching jobs from the data source.

From the above example, let us get the output for serialid -> 781.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -s 781
                                             Serial id 781 result
Serial id | Job id | Command                                               | Result                                                                                                                                                                                                                                                                                                                                                                                                                                                | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
781       | 5      | SELECT * FROM pg_stat_activity WHERE state = 'active' | datid,datname,pid,usesysid,usename,application_name,client_addr,client_hostname,client_port,backend_start,xact_start,query_start,state_change,waiting,state,backend_xid,backend_xmin,query
1,template1,8244,10,postgres,,127.0.0.1,,49464,2016-07-03 08:15:00.445606-05:30,2016-07-03 08:15:00.447459-05:30,2016-07-03 08:15:00.447459-05:30,2016-07-03 08:15:00.44746-05:30,f,active,,2704310,SELECT * FROM pg_stat_activity WHERE state = 'active'
 | 
(1 rows)

Refresh instance

We have added refresh (-R) as an add-on utility to pgBucket, which performs an online refresh of the complete pgBucket instance by creating new job hash and job queue instances. However, it won't disturb the currently running jobs; all running jobs will continue to be managed concurrently.

Let us refresh the instance once.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -R
* Initiated instance refresh

Now validate the hash.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -nPH
  * Trying to connect local socket...
  * Connected to local socket..
                                                                           pgBucket jobHash instance snapshot
Jobid | Jobname                 | Pid  | Enabled | Schedule Status | Prev Run Status | Start Time               | End Time                 | Next Run                 | Run Count | 
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
3     | ServerAvailabilityCheck | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          | Sun Jul  3 19:53:00 2016 | 0         | 
1     | DailyDump               | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          |                          | 0         | 
2     | DailyBIJobs             | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          |                          | 0         | 
5     | GetAllActiveSessions    | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          | Sun Jul  3 19:53:00 2016 | 0         | 
4     | DBAvailabilityCheck     | 9353 | Enable  | SCHEDULED       | Success         | Sun Jul  3 19:52:50 2016 | Sun Jul  3 19:52:50 2016 | Sun Jul  3 19:53:00 2016 | 1         | 
(5 rows)

From the above details, we see that every job's schedule status is set to "INITIALIZED" and its run count is reset to 0.

Let us validate again after a couple of minutes.

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -nPH
  * Trying to connect local socket...
  * Connected to local socket..
                                                                           pgBucket jobHash instance snapshot
Jobid | Jobname                 | Pid  | Enabled | Schedule Status | Prev Run Status | Start Time               | End Time                 | Next Run                 | Run Count | 
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
3     | ServerAvailabilityCheck | 9442 | Enable  | SCHEDULED       | Success         | Sun Jul  3 19:56:00 2016 | Sun Jul  3 19:56:03 2016 | Sun Jul  3 19:56:30 2016 | 7         | 
1     | DailyDump               | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          |                          | 0         | 
2     | DailyBIJobs             | 0    | Enable  | INITIALIZED     | Unknown         |                          |                          |                          | 0         | 
5     | GetAllActiveSessions    | 9445 | Enable  | SCHEDULED       | Success         | Sun Jul  3 19:56:00 2016 | Sun Jul  3 19:56:00 2016 | Sun Jul  3 19:57:00 2016 | 4         | 
4     | DBAvailabilityCheck     | 9447 | Enable  | SCHEDULED       | Success         | Sun Jul  3 19:56:10 2016 | Sun Jul  3 19:56:10 2016 | Sun Jul  3 19:56:20 2016 | 21        | 
(5 rows)

Disable/Enable job

We can enable/disable a job either by updating the jobs.config file or at the instance level.

If we disable a job at the instance level, it will be re-enabled when the instance is refreshed. So, please keep this in mind before disabling any job at the instance level.

pgbucket@ubuntu:~/pgBucket$  ./pgBucket -nD -j 1
  * Trying to connect local socket...
  * Connected to local socket..
  * Disabled the jobid -> 1 in an instance.
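To re-enable the job at the instance level, the E action (from the usage options above) can be used in the same way:

pgbucket@ubuntu:~/pgBucket$ ./pgBucket -nE -j 1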

Tested Platforms

  • Mac with clang Apple LLVM version 7.3.0 (clang-703.0.31)
  • Ubuntu with gcc (Ubuntu 4.9.3-8ubuntu2~14.04) 4.9.3
  • CentOS 7 with Manual installation of gcc 4.9.3

My special thanks

I am thankful to my wife, Manoja, who gave me great guidance and support to make this tool more effective.