| author | jason | 2019-01-15 17:01:32 -0700 |
|---|---|---|
| committer | jason | 2019-01-16 11:17:30 -0700 |
| commit | 0bed1a072d15ec01822b421bee51965a2d6910e7 (patch) | |
| tree | cbb01780f216ea9544a2452d7984da89b611b6d1 /docs | |
| parent | eedb40e2f1c6f3ba46518baba3401578ae20cf2b (diff) | |
| download | eventmq-0bed1a072d15ec01822b421bee51965a2d6910e7.tar.gz eventmq-0bed1a072d15ec01822b421bee51965a2d6910e7.zip | |
Scheduler: adds optional support for redis and specifying redis client (feature/scheduler_redis_cluster_support)
- add optional support for running the scheduler without a redis
backend for persistent schedule storage. The default is to exit on
a startup error; the user can set `redis_startup_error_hard_kill` to
false if they really want to run the scheduler with no backend db.
- allow users to specify a redis client class. This lets them use
redis-cluster if they wish. They can install the appropriate module
and define `redis_client_class` (and, if necessary, class keyword
arguments via `redis_client_class_kwargs`).
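A minimal sketch of how the new options might sit together in a settings file, assuming an INI-style config with a `[scheduler]` section (the section name and file layout are assumptions, not taken from this commit; values mirror the defaults and examples documented below):

```ini
[scheduler]
# Swap the default redis.StrictRedis client for the redis-py-cluster one
redis_client_class=rediscluster.StrictRedisCluster
# JSON object of extra constructor kwargs; this one is the AWS ElastiCache
# workaround documented below
redis_client_class_kwargs={"skip_full_coverage_check": true}
# Keep the default hard-kill behaviour: exit if redis is unreachable at startup
redis_startup_error_hard_kill=True
```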
Diffstat (limited to 'docs')
| -rw-r--r-- | docs/settings_file.rst | 70 |
1 file changed, 66 insertions, 4 deletions
diff --git a/docs/settings_file.rst b/docs/settings_file.rst
index 04fe451..e9f4b04 100644
--- a/docs/settings_file.rst
+++ b/docs/settings_file.rst
| @@ -19,6 +19,13 @@ Default: False | |||
| 19 | 19 | ||
| 20 | Enable most verbose level of debug statements | 20 | Enable most verbose level of debug statements |
| 21 | 21 | ||
| 22 | hide_heartbeat_logs | ||
| 23 | =================== | ||
| 24 | Default: True | ||
| 25 | |||
| 26 | This hides heartbeat messages from the logs. Disabling this will result in very | ||
| 27 | noisy log output. | ||
| 28 | |||
| 22 | max_sockets | 29 | max_sockets |
| 23 | =========== | 30 | =========== |
| 24 | Default: 1024 | 31 | Default: 1024 |
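If the heartbeats are ever needed for debugging, the new setting can simply be flipped; a sketch, assuming the same INI layout (the ``[global]`` section name is an assumption):

```ini
[global]
# Stop hiding heartbeat messages; expect very noisy log output
hide_heartbeat_logs=False
```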
| @@ -91,15 +98,55 @@ Default: '' | |||
| 91 | 98 | ||
| 92 | Password to use when connecting to redis | 99 | Password to use when connecting to redis |
| 93 | 100 | ||
| 101 | redis_client_class | ||
| 102 | ================== | ||
| 103 | Default: ``redis.StrictRedis`` | ||
| 104 | |||
| 105 | The class to use as the redis client. This can be overridden if you want to use | ||
| 106 | a different module to connect to redis. For example | ||
| 107 | ``rediscluster.StrictRedisCluster``. Note: You may get errors if you don't use | ||
| 108 | a strict mode class. | ||
| 109 | |||
| 110 | redis_client_class_kwargs | ||
| 111 | ========================= | ||
| 112 | Default: {} | ||
| 113 | |||
| 114 | This is a JSON hash map of keyword arguments to pass to the Python class | ||
| 115 | constructor. This is useful for using ``redis-py-cluster`` on AWS ElastiCache. | ||
| 116 | When using ElastiCache this value should be set to | ||
| 117 | ``{"skip_full_coverage_check": true}`` to prevent startup errors. | ||
| 118 | |||
| 119 | redis_startup_error_hard_kill | ||
| 120 | ============================= | ||
| 121 | Default: True | ||
| 122 | |||
| 123 | If there is an error connecting to the Redis server for persistent schedule | ||
| 124 | storage on startup, then kill the app. This is useful if you want to prevent | ||
| 125 | accidentally accepting schedules that can't be saved to a persistent store. If | ||
| 126 | you would like to use redis, you will need to ``pip install redis`` or | ||
| 127 | ``pip install redis-py-cluster`` and define the necessary options. | ||
| 128 | |||
| 94 | *********** | 129 | *********** |
| 95 | Job Manager | 130 | Job Manager |
| 96 | *********** | 131 | *********** |
| 97 | 132 | ||
| 133 | default_queue_name | ||
| 134 | ================== | ||
| 135 | Default: default | ||
| 136 | |||
| 137 | This is the default queue a job manager will listen on if nothing is specified. | ||
| 138 | |||
| 139 | default_queue_weight | ||
| 140 | ==================== | ||
| 141 | Default: 10 | ||
| 142 | |||
| 143 | This is the default weight for the default queue if it is not explicitly set. | ||
| 144 | |||
| 98 | concurrent_jobs | 145 | concurrent_jobs |
| 99 | =============== | 146 | =============== |
| 100 | Default: 4 | 147 | Default: 4 |
| 101 | 148 | ||
| 102 | This is the number of concurrent jobs the indiviudal job manager should execute | 149 | This is the number of concurrent jobs the individual job manager should execute |
| 103 | at a time. If you are using the multiprocess or threading model this number | 150 | at a time. If you are using the multiprocess or threading model this number |
| 104 | becomes important as you will want to control the load on your server. If the | 151 | becomes important as you will want to control the load on your server. If the |
| 105 | load equals the number of cores on the server, processes will begin waiting for | 152 | load equals the number of cores on the server, processes will begin waiting for |
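A sketch of the job-manager basics documented in this hunk, again assuming an INI layout with a ``[jobmanager]`` section (the section name is an assumption; the values shown are the documented defaults):

```ini
[jobmanager]
# Queue to listen on when none is specified, and its implicit weight
default_queue_name=default
default_queue_weight=10
# Jobs executed at a time; tune against the host's core count so
# processes don't end up waiting on busy cores
concurrent_jobs=4
```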
| @@ -115,7 +162,7 @@ queues | |||
| 115 | ====== | 162 | ====== |
| 116 | Default: [[10, "default"]] | 163 | Default: [[10, "default"]] |
| 117 | 164 | ||
| 118 | Comma seperated list of queues to process jobs for with their weights. This list | 165 | Comma separated list of queues to process jobs with their weights. This list |
| 119 | must be valid JSON otherwise an error will be thrown. | 166 | must be valid JSON otherwise an error will be thrown. |
| 120 | Example: ``queues=[[10, "data_process"], [15, "email"]]``. With these | 167 | Example: ``queues=[[10, "data_process"], [15, "email"]]``. With these |
| 121 | weights and the ``CONCURRENT_JOBS`` setting, you should be able to tune managers | 168 | weights and the ``CONCURRENT_JOBS`` setting, you should be able to tune managers |
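A sketch of the weighting idea from the tuning note that continues in the next hunk: give the same queue different weights on two differently sized boxes, keeping a ``default`` entry on both so unassigned work still runs (the hostnames in the comments are illustrative):

```ini
# eventmq.conf on the large box: strongly prefer q1 jobs
queues=[[100, "q1"], [10, "default"]]

# eventmq.conf on the small box: pick up q1 only as overflow
queues=[[20, "q1"], [10, "default"]]
```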
| @@ -131,14 +178,29 @@ number until the large box is ready to accept another q1 job. | |||
| 131 | default queue so that anything that is not explicitly assigned will still be | 178 | default queue so that anything that is not explicitly assigned will still be |
| 132 | run. | 179 | run. |
| 133 | 180 | ||
| 134 | setup_callabe/setup_path | 181 | setup_callable/setup_path |
| 135 | ======================== | 182 | ========================= |
| 136 | Default: '' (Signifies no task will be attempted) | 183 | Default: '' (Signifies no task will be attempted) |
| 137 | 184 | ||
| 138 | Strings containing path and callable to be run when a worker is spawned | 185 | Strings containing path and callable to be run when a worker is spawned |
| 139 | if applicable to that type of worker. Currently the only supported worker is a | 186 | if applicable to that type of worker. Currently the only supported worker is a |
| 140 | MultiProcessWorker; this is useful for pulling any global state into memory. | 187 | MultiProcessWorker; this is useful for pulling any global state into memory. |
| 141 | 188 | ||
| 189 | job_entry_func | ||
| 190 | ============== | ||
| 191 | Default: '' (Signifies no function will be executed) | ||
| 192 | |||
| 193 | The function to execute before **every** job a worker thread executes. For | ||
| 194 | example: cleaning up stale database connections. (Django's | ||
| 195 | ``django.db.connections[].close_if_unusable_or_obsolete()``) | ||
| 196 | |||
| 197 | job_exit_func | ||
| 198 | ============= | ||
| 199 | Default: '' (Signifies no function will be executed) | ||
| 200 | |||
| 201 | The function to execute **after** every job a worker thread executes. For | ||
| 202 | example: closing any database handles that were left open. | ||
| 203 | |||
| 142 | max_job_count | 204 | max_job_count |
| 143 | ============= | 205 | ============= |
| 144 | Default: 1024 | 206 | Default: 1024 |
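Finally, a sketch of wiring the worker hooks, assuming a hypothetical project module ``myapp/emq_hooks.py`` that defines the callables (the module, the function names, and the exact value format these settings expect are all assumptions; the Django connection cleanup mirrors the example in the text):

```ini
[jobmanager]
# Before every job: e.g. a wrapper that calls Django's
# close_if_unusable_or_obsolete() on each connection
job_entry_func=myapp.emq_hooks.close_stale_connections
# After every job: close any handles the job left open
job_exit_func=myapp.emq_hooks.close_open_handles
# On MultiProcessWorker spawn: pull global state into memory
setup_path=myapp.emq_hooks
setup_callable=warm_global_state
```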