path: root/docs/settings_file.rst
author    jason 2019-01-15 17:01:32 -0700
committer jason 2019-01-16 11:17:30 -0700
commit    0bed1a072d15ec01822b421bee51965a2d6910e7 (patch)
tree      cbb01780f216ea9544a2452d7984da89b611b6d1 /docs/settings_file.rst
parent    eedb40e2f1c6f3ba46518baba3401578ae20cf2b (diff)
Scheduler: adds optional support for redis and specifying redis client (feature/scheduler_redis_cluster_support)
- Allow optional support for running the scheduler without a redis backend for
  persistent schedule storage. The default is to exit on a startup error; the
  user can set `redis_startup_error_hard_kill` to false if they really want to
  run the scheduler with no backend db.
- Allow users to specify a redis client class. This will let them use
  redis-cluster if they wish. They can install the appropriate module and
  define `redis_client_class` (and, if necessary, class keyword arguments via
  `redis_client_class_kwargs`).
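Put together, the settings this commit introduces might look like the following in an EventMQ config file. The ``[scheduler]`` section name and the exact INI syntax are assumptions for illustration; the option names and the JSON kwargs value come from the docs below.

```ini
[scheduler]
; Don't exit if redis is unavailable at startup (default is True)
redis_startup_error_hard_kill=False
; Use redis-py-cluster's strict client instead of the default redis.StrictRedis
redis_client_class=rediscluster.StrictRedisCluster
; Keyword arguments passed to the client constructor, as JSON
redis_client_class_kwargs={"skip_full_coverage_check": true}
```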
Diffstat (limited to 'docs/settings_file.rst')
 -rw-r--r-- docs/settings_file.rst | 70
 1 file changed, 66 insertions(+), 4 deletions(-)
diff --git a/docs/settings_file.rst b/docs/settings_file.rst
index 04fe451..e9f4b04 100644
--- a/docs/settings_file.rst
+++ b/docs/settings_file.rst
@@ -19,6 +19,13 @@ Default: False
 
 Enable most verbose level of debug statements
 
+hide_heartbeat_logs
+===================
+Default: True
+
+This hides heartbeat messages from the logs. Disabling this will result in very
+noisy log output.
+
 max_sockets
 ===========
 Default: 1024
@@ -91,15 +98,55 @@ Default: ''
 
 Password to use when connecting to redis
 
+redis_client_class
+==================
+Default: ``redis.StrictRedis``
+
+The class to use as the redis client. This can be overridden if you want to use
+a different module to connect to redis, for example
+``rediscluster.StrictRedisCluster``. Note: you may get errors if you don't use
+a strict-mode class.
+
+redis_client_class_kwargs
+=========================
+Default: {}
+
+This is a JSON hash map of keyword arguments to pass to the Python class
+constructor. This is useful for using ``redis-py-cluster`` on AWS ElastiCache.
+When using ElastiCache this value should be set to
+``{"skip_full_coverage_check": true}`` to prevent startup errors.
+
+redis_startup_error_hard_kill
+=============================
+Default: True
+
+If there is an error connecting to the Redis server for persistent schedule
+storage on startup, kill the app. This is useful if you want to prevent
+accidentally accepting schedules that can't be saved to a persistent store. If
+you would like to use redis you will need to ``pip install redis`` or
+``pip install redis-py-cluster`` and define the necessary options.
+
 ***********
 Job Manager
 ***********
 
+default_queue_name
+==================
+Default: default
+
+This is the default queue a job manager will listen on if nothing is specified.
+
+default_queue_weight
+====================
+Default: 10
+
+This is the default weight for the default queue if it is not explicitly set.
+
 concurrent_jobs
 ===============
 Default: 4
 
-This is the number of concurrent jobs the indiviudal job manager should execute
+This is the number of concurrent jobs the individual job manager should execute
 at a time. If you are using the multiprocess or threading model this number
 becomes important as you will want to control the load on your server. If the
 load equals the number of cores on the server, processes will begin waiting for
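A dotted path like ``redis.StrictRedis`` or ``rediscluster.StrictRedisCluster`` has to be resolved to a Python class at runtime. The following is a minimal sketch of one way such a setting could be loaded, not EventMQ's actual loader; it is demonstrated with a stdlib class so it runs without redis installed.

```python
import importlib
import json


def load_client_class(dotted_path):
    """Resolve a dotted "module.ClassName" string to the class object.

    Illustrative only: a setting such as redis_client_class could be
    resolved this way before instantiating the client.
    """
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)


# A stdlib class stands in for a redis client class here.
klass = load_client_class("collections.OrderedDict")
# redis_client_class_kwargs is a JSON hash map passed as **kwargs.
kwargs = json.loads("{}")
client = klass(**kwargs)
```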
@@ -115,7 +162,7 @@ queues
 ======
 Default: [[10, "default"]]
 
-Comma seperated list of queues to process jobs for with their weights. This list
+Comma separated list of queues to process jobs with their weights. This list
 must be valid JSON otherwise an error will be thrown.
 Example: ``queues=[[10, "data_process"], [15, "email"]]``. With these
 weights and the ``CONCURRENT_JOBS`` setting, you should be able to tune managers
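The ``queues`` value is plain JSON, so parsing and ranking it is straightforward. A small sketch, using the example value from the docs; the sorting shown is illustrative of weight preference, not EventMQ's actual scheduling logic:

```python
import json

# The queues setting from the example above: [[weight, name], ...]
queues_setting = '[[10, "data_process"], [15, "email"]]'
queues = json.loads(queues_setting)

# Higher weight first: this manager would prefer "email" jobs.
by_preference = [name for weight, name in sorted(queues, reverse=True)]
```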
@@ -131,14 +178,29 @@ number until the large box is ready to accept another q1 job.
 default queue so that anything that is not explicitly assigned will still be
 run.
 
-setup_callabe/setup_path
-========================
+setup_callable/setup_path
+=========================
 Default: '' (Signifies no task will be attempted)
 
 Strings containing path and callable to be run when a worker is spawned
 if applicable to that type of worker. Currently the only supported worker is a
 MultiProcessWorker, and it is useful for pulling any global state into memory.
 
+job_entry_func
+==============
+Default: '' (Signifies no function will be executed)
+
+The function to execute before **every** job a worker thread executes. For
+example: cleaning up stale database connections (Django's
+``django.db.connections[].close_if_unusable_or_obsolete()``).
+
+job_exit_func
+=============
+Default: '' (Signifies no function will be executed)
+
+The function to execute **after** every job a worker thread executes. For
+example: closing any database handles that were left open.
+
 max_job_count
 =============
 Default: 1024
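The entry/exit hook behaviour described for ``job_entry_func`` and ``job_exit_func`` amounts to wrapping each job between two optional callables. A minimal sketch under that assumption; the function name and signature here are hypothetical, not EventMQ's worker API:

```python
def run_job(job, entry_func=None, exit_func=None):
    """Run a job with optional before/after hooks.

    entry_func runs before the job (e.g. closing stale database
    connections); exit_func runs after it, even if the job raises
    (e.g. closing any database handles that were left open).
    """
    if entry_func:
        entry_func()
    try:
        return job()
    finally:
        if exit_func:
            exit_func()


# Record the call order to show the hooks bracket the job.
calls = []
result = run_job(lambda: calls.append("job") or "done",
                 entry_func=lambda: calls.append("entry"),
                 exit_func=lambda: calls.append("exit"))
```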