-rw-r--r--  docs/settings_file.rst      |  70
-rw-r--r--  etc/eventmq.conf-dist       |   1
-rw-r--r--  eventmq/__init__.py         |   2
-rw-r--r--  eventmq/client/messages.py  |   4
-rw-r--r--  eventmq/conf.py             |   1
-rw-r--r--  eventmq/scheduler.py        | 211
-rw-r--r--  setup.py                    |   1
7 files changed, 186 insertions, 104 deletions
diff --git a/docs/settings_file.rst b/docs/settings_file.rst
index 04fe451..e9f4b04 100644
--- a/docs/settings_file.rst
+++ b/docs/settings_file.rst
@@ -19,6 +19,13 @@ Default: False
19 19
20Enable most verbose level of debug statements 20Enable most verbose level of debug statements
21 21
22hide_heartbeat_logs
23===================
24Default: True
25
26This hides heartbeat messages from the logs. Disabling this will result in very
27noisy log output.
28
22max_sockets 29max_sockets
23=========== 30===========
24Default: 1024 31Default: 1024
@@ -91,15 +98,55 @@ Default: ''
91 98
92Password to use when connecting to redis 99Password to use when connecting to redis
93 100
101redis_client_class
102==================
103Default: ``redis.StrictRedis``
104
105The class to use as the redis client. This can be overridden if you want to use
106a different module to connect to redis. For example,
107``rediscluster.StrictRedisCluster``. Note: You may get errors if you don't use
108a strict mode class.
109
110redis_client_class_kwargs
111=========================
112Default: {}
113
114This is a JSON hash map of keyword arguments to pass to the Python class
115constructor. This is useful for using ``redis-py-cluster`` on AWS ElastiCache.
116When using ElastiCache this value should be set to
117``{"skip_full_coverage_check": true}`` to prevent startup errors.
118
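The kwargs map is parsed from JSON and splatted into the client class
constructor. A minimal sketch of how such a setting could be consumed (the
names here are illustrative, not EventMQ's actual internals):

```python
import json

# Raw value as it would appear in the settings file (assumption: the
# setting arrives as a JSON string before being parsed).
raw_setting = '{"skip_full_coverage_check": true}'

# Parse the JSON hash map into a Python dict of keyword arguments.
client_kwargs = json.loads(raw_setting)

# The dict is then passed straight through to the client constructor,
# e.g. SomeRedisClass.from_url(url, **client_kwargs).
print(client_kwargs)
```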
119redis_startup_error_hard_kill
120=============================
121Default: True
122
123If there is an error connecting to the Redis server for persistent schedule
124storage on startup, then kill the app. This is useful if you want to prevent
125accidentally accepting schedules that can't be saved to a persistent store. If
126you would like to use redis you will need to ``pip install redis`` or
127``pip install redis-py-cluster`` and define the necessary options.
128
94*********** 129***********
95Job Manager 130Job Manager
96*********** 131***********
97 132
133default_queue_name
134==================
135Default: default
136
137This is the default queue a job manager will listen on if nothing is specified.
138
139default_queue_weight
140====================
141Default: 10
142
143This is the default weight for the default queue if it is not explicitly set.
144
98concurrent_jobs 145concurrent_jobs
99=============== 146===============
100Default: 4 147Default: 4
101 148
102This is the number of concurrent jobs the indiviudal job manager should execute 149This is the number of concurrent jobs the individual job manager should execute
103at a time. If you are using the multiprocess or threading model this number 150at a time. If you are using the multiprocess or threading model this number
104becomes important as you will want to control the load on your server. If the 151becomes important as you will want to control the load on your server. If the
105load equals the number of cores on the server, processes will begin waiting for 152load equals the number of cores on the server, processes will begin waiting for
@@ -115,7 +162,7 @@ queues
115====== 162======
116Default: [[10, "default"]] 163Default: [[10, "default"]]
117 164
118Comma seperated list of queues to process jobs for with their weights. This list 165Comma separated list of queues to process jobs with their weights. This list
119must be valid JSON otherwise an error will be thrown. 166must be valid JSON otherwise an error will be thrown.
120Example: ``queues=[[10, "data_process"], [15, "email"]]``. With these 167Example: ``queues=[[10, "data_process"], [15, "email"]]``. With these
121weights and the ``CONCURRENT_JOBS`` setting, you should be able to tune managers 168weights and the ``CONCURRENT_JOBS`` setting, you should be able to tune managers
@@ -131,14 +178,29 @@ number until the large box is ready to accept another q1 job.
131 default queue so that anything that is not explicitly assigned will still be 178 default queue so that anything that is not explicitly assigned will still be
132 run. 179 run.
133 180
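One way to picture the weights above: treat them as relative odds when a
manager picks which queue to service next. A hypothetical weighted pick
(a sketch only, not EventMQ's actual queue-selection code):

```python
import json
import random

# The setting must be valid JSON: a list of [weight, queue_name] pairs.
queues = json.loads('[[10, "data_process"], [15, "email"]]')

def pick_queue(queue_weights, rng=random):
    """Pick a queue name with probability proportional to its weight."""
    names = [name for _, name in queue_weights]
    weights = [weight for weight, _ in queue_weights]
    return rng.choices(names, weights=weights, k=1)[0]

# Over many picks, "email" (weight 15) is chosen more often than
# "data_process" (weight 10), at roughly a 15:10 ratio.
picks = [pick_queue(queues) for _ in range(1000)]
```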
134setup_callabe/setup_path 181setup_callable/setup_path
135======================== 182=========================
136Default: '' (Signifies no task will be attempted) 183Default: '' (Signifies no task will be attempted)
137 184
138Strings containing path and callable to be run when a worker is spawned 185Strings containing path and callable to be run when a worker is spawned
139if applicable to that type of worker. Currently the only supported worker is a 186if applicable to that type of worker. Currently the only supported worker is a
140MultiProcessWorker, and is useful for pulling any global state into memory. 187MultiProcessWorker, and is useful for pulling any global state into memory.
141 188
189job_entry_func
190==============
191Default: '' (Signifies no function will be executed)
192
193The function to execute before **every** job a worker thread executes. For
194example: cleaning up stale database connections. (Django's
195``django.db.connections[].close_if_unusable_or_obsolete()``)
196
197job_exit_func
198=============
199Default: '' (Signifies no function will be executed)
200
201The function to execute **after** every job a worker thread executes. For
202example: closing any database handles that were left open.
203
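Conceptually the two hooks bracket every job the worker runs. A simplified
sketch of that wrapping (the function names and wrapper here are hypothetical,
not EventMQ's worker internals):

```python
calls = []

# Stand-ins for the configured callables; in a real settings file these
# would be dotted paths resolved to functions (assumption for illustration).
def job_entry_func():
    calls.append("entry")   # e.g. close stale database connections

def job_exit_func():
    calls.append("exit")    # e.g. close handles the job left open

def run_job(job, entry=job_entry_func, exit_=job_exit_func):
    """Run one job, bracketed by the entry and exit hooks."""
    entry()
    try:
        return job()
    finally:
        exit_()             # runs even if the job raises

result = run_job(lambda: calls.append("job") or "done")
```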
142max_job_count 204max_job_count
143============= 205=============
144Default: 1024 206Default: 1024
diff --git a/etc/eventmq.conf-dist b/etc/eventmq.conf-dist
index 96dbad4..7882e0c 100644
--- a/etc/eventmq.conf-dist
+++ b/etc/eventmq.conf-dist
@@ -19,6 +19,7 @@ scheduler_addr=tcp://eventmq:47291
19[router] 19[router]
20 20
21[scheduler] 21[scheduler]
22redis_client_class = redis.StrictRedis
22 23
23[jobmanager] 24[jobmanager]
24worker_addr=tcp://127.0.0.1:47290 25worker_addr=tcp://127.0.0.1:47290
diff --git a/eventmq/__init__.py b/eventmq/__init__.py
index 4143854..018a540 100644
--- a/eventmq/__init__.py
+++ b/eventmq/__init__.py
@@ -1,5 +1,5 @@
1__author__ = 'EventMQ Contributors' 1__author__ = 'EventMQ Contributors'
2__version__ = '0.3.9' 2__version__ = '0.3.10-rc1'
3 3
4PROTOCOL_VERSION = 'eMQP/1.0' 4PROTOCOL_VERSION = 'eMQP/1.0'
5 5
diff --git a/eventmq/client/messages.py b/eventmq/client/messages.py
index 9e49ea8..c178c30 100644
--- a/eventmq/client/messages.py
+++ b/eventmq/client/messages.py
@@ -226,14 +226,14 @@ def send_request(socket, message, reply_requested=False, guarantee=False,
226 } 226 }
227 } 227 }
228 Args: 228 Args:
229 socket (socket): Socket to use when sending `message` 229 socket: Socket (Sender or Receiver) to use when sending `message`
230 message: message to send to `socket` 230 message: message to send to `socket`
231 reply_requested (bool): request the return value of func as a reply 231 reply_requested (bool): request the return value of func as a reply
232 guarantee (bool): (Give your best effort) to guarantee that func is 232 guarantee (bool): (Give your best effort) to guarantee that func is
233 executed. Exceptions and things will be logged. 233 executed. Exceptions and things will be logged.
234 retry_count (int): How many times should be retried when encountering 234 retry_count (int): How many times should be retried when encountering
235 an Exception or some other failure before giving up. (default: 0 235 an Exception or some other failure before giving up. (default: 0
236 or immediatly fail) 236 or immediately fail)
237 timeout (int): How many seconds should we wait before killing the job 237 timeout (int): How many seconds should we wait before killing the job
238 default: 0 which means infinite timeout 238 default: 0 which means infinite timeout
239 queue (str): Name of queue to use when executing the job. Default: is 239 queue (str): Name of queue to use when executing the job. Default: is
diff --git a/eventmq/conf.py b/eventmq/conf.py
index f15a1ef..8634ebb 100644
--- a/eventmq/conf.py
+++ b/eventmq/conf.py
@@ -90,6 +90,7 @@ RQ_DB = 0
90RQ_PASSWORD = '' 90RQ_PASSWORD = ''
91REDIS_CLIENT_CLASS = 'redis.StrictRedis' 91REDIS_CLIENT_CLASS = 'redis.StrictRedis'
92REDIS_CLIENT_CLASS_KWARGS = {} 92REDIS_CLIENT_CLASS_KWARGS = {}
93REDIS_STARTUP_ERROR_HARD_KILL = True
93 94
94MAX_JOB_COUNT = 1024 95MAX_JOB_COUNT = 1024
95 96
diff --git a/eventmq/scheduler.py b/eventmq/scheduler.py
index 2ff726a..e7c13ec 100644
--- a/eventmq/scheduler.py
+++ b/eventmq/scheduler.py
@@ -18,14 +18,15 @@
18Handles cron and other scheduled tasks 18Handles cron and other scheduled tasks
19""" 19"""
20from hashlib import sha1 as emq_hash 20from hashlib import sha1 as emq_hash
21import importlib
21import json 22import json
22from json import dumps as serialize 23from json import dumps as serialize
23from json import loads as deserialize 24from json import loads as deserialize
24import logging 25import logging
26import sys
25 27
26from croniter import croniter 28from croniter import croniter
27import redis 29from six import iteritems, next
28from six import next
29 30
30from eventmq.log import setup_logger 31from eventmq.log import setup_logger
31 32
@@ -56,7 +57,6 @@ class Scheduler(HeartbeatMixin, EMQPService):
56 57
57 logger.info('EventMQ Version {}'.format(__version__)) 58 logger.info('EventMQ Version {}'.format(__version__))
58 logger.info('Initializing Scheduler...') 59 logger.info('Initializing Scheduler...')
59 import_settings()
60 super(Scheduler, self).__init__(*args, **kwargs) 60 super(Scheduler, self).__init__(*args, **kwargs)
61 self.outgoing = Sender() 61 self.outgoing = Sender()
62 self._redis_server = None 62 self._redis_server = None
@@ -85,30 +85,28 @@ class Scheduler(HeartbeatMixin, EMQPService):
85 85
86 self.poller = Poller() 86 self.poller = Poller()
87 87
88 self.load_jobs()
89
90 self._setup() 88 self._setup()
91 89
92 def load_jobs(self): 90 def load_jobs(self):
93 """ 91 """
94 Loads the jobs that need to be scheduled 92 Loads the jobs from redis that need to be scheduled
95 """ 93 """
96 try: 94 if self.redis_server:
97 interval_job_list = self.redis_server.lrange( 95 try:
98 'interval_jobs', 0, -1) 96 interval_job_list = self.redis_server.lrange(
99 if interval_job_list is not None: 97 'interval_jobs', 0, -1)
100 for i in interval_job_list: 98 if interval_job_list is not None:
101 logger.debug('Restoring job with hash %s' % i) 99 for i in interval_job_list:
102 if (self.redis_server.get(i)): 100 logger.debug('Restoring job with hash %s' % i)
103 self.load_job_from_redis( 101 if self.redis_server.get(i):
104 message=deserialize(self.redis_server.get(i))) 102 self.load_job_from_redis(
105 else: 103 message=deserialize(self.redis_server.get(i)))
106 logger.warning('Expected scheduled job in redis,' + 104 else:
107 'but none was found with hash %s' % i) 105 logger.warning(
108 except redis.ConnectionError: 106 'Expected scheduled job in redis, but none '
109 logger.warning('Could not contact redis server') 107 'was found with hash {}'.format(i))
110 except Exception as e: 108 except Exception as e:
111 logger.warning(str(e)) 109 logger.warning(str(e), exc_info=True)
112 110
113 def _start_event_loop(self): 111 def _start_event_loop(self):
114 """ 112 """
@@ -126,7 +124,6 @@ class Scheduler(HeartbeatMixin, EMQPService):
126 msg = self.outgoing.recv_multipart() 124 msg = self.outgoing.recv_multipart()
127 self.process_message(msg) 125 self.process_message(msg)
128 126
129 # TODO: distribute me!
130 for hash_, cron in self.cron_jobs.items(): 127 for hash_, cron in self.cron_jobs.items():
131 # If the time is now, or passed 128 # If the time is now, or passed
132 if cron[0] <= ts_now: 129 if cron[0] <= ts_now:
@@ -145,10 +142,9 @@ class Scheduler(HeartbeatMixin, EMQPService):
145 seconds_until(cron[0])) 142 seconds_until(cron[0]))
146 143
147 cancel_jobs = [] 144 cancel_jobs = []
148 for k, v in self.interval_jobs.iteritems(): 145 for k, v in iteritems(self.interval_jobs):
149 # TODO: Refactor this entire loop to be readable by humankind
150 # The schedule time has elapsed
151 if v[0] <= m_now: 146 if v[0] <= m_now:
147 # The schedule time has elapsed
152 msg = v[1] 148 msg = v[1]
153 queue = v[3] 149 queue = v[3]
154 150
@@ -168,23 +164,25 @@ class Scheduler(HeartbeatMixin, EMQPService):
168 v[0] = next(v[2]) 164 v[0] = next(v[2])
169 # Persist the change to redis if there are run 165 # Persist the change to redis if there are run
170 # counts still left 166 # counts still left
171 try: 167 if self.redis_server:
172 message = deserialize( 168 try:
173 self.redis_server.get(k)) 169 message = deserialize(
174 new_headers = [] 170 self.redis_server.get(k))
175 for header in message[1].split(','): 171 new_headers = []
176 if 'run_count:' in header: 172 for header in message[1].split(','):
177 new_headers.append( 173 if 'run_count:' in header:
178 'run_count:{}'.format(v[4])) 174 new_headers.append(
179 else: 175 'run_count:{}'.format(
180 new_headers.append(header) 176 v[4]))
181 message[1] = ",".join(new_headers) 177 else:
182 self.redis_server.set( 178 new_headers.append(header)
183 k, serialize(message)) 179 message[1] = ",".join(new_headers)
184 except Exception as e: 180 self.redis_server.set(
185 logger.warning( 181 k, serialize(message))
186 'Unable to update key in redis ' 182 except Exception as e:
187 'server: {}'.format(e)) 183 logger.warning(
184 'Unable to update key in redis '
185 'server: {}'.format(e))
188 else: 186 else:
189 cancel_jobs.append(k) 187 cancel_jobs.append(k)
190 188
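The persistence step above rewrites only the ``run_count`` header inside the
stored message's comma-separated header string. In isolation, that string
manipulation looks like this (a sketch with illustrative header values):

```python
# A message's header field is a comma-separated string (per the loop above).
headers = "guarantee,run_count:5"
remaining_runs = 4  # v[4] in the scheduler: runs left after this execution

# Replace the run_count header, keeping every other header untouched.
new_headers = []
for header in headers.split(','):
    if 'run_count:' in header:
        new_headers.append('run_count:{}'.format(remaining_runs))
    else:
        new_headers.append(header)
headers = ",".join(new_headers)
```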
@@ -204,21 +202,45 @@ class Scheduler(HeartbeatMixin, EMQPService):
204 202
205 @property 203 @property
206 def redis_server(self): 204 def redis_server(self):
207 # Open connection to redis server for persistance
208 if self._redis_server is None: 205 if self._redis_server is None:
206 # Open connection to redis server for persistence
207 cls_split = conf.REDIS_CLIENT_CLASS.split('.')
208 cls_path, cls_name = '.'.join(cls_split[:-1]), cls_split[-1]
209 try: 209 try:
210 self._redis_server = \ 210 mod = importlib.import_module(cls_path)
211 redis.StrictRedis(host=conf.RQ_HOST, 211 redis_cls = getattr(mod, cls_name)
212 port=conf.RQ_PORT, 212 except (ImportError, AttributeError) as e:
213 db=conf.RQ_DB, 213 errmsg = 'Unable to import redis_client_class {} ({})'.format(
214 password=conf.RQ_PASSWORD) 214 conf.REDIS_CLIENT_CLASS, e)
215 return self._redis_server 215 logger.warning(errmsg)
216
217 if conf.REDIS_STARTUP_ERROR_HARD_KILL:
218 sys.exit(1)
219 return None
220
221 url = 'redis://{}:{}/{}'.format(
222 conf.RQ_HOST, conf.RQ_PORT, conf.RQ_DB)
216 223
224 logger.info('Connecting to redis: {}{}'.format(
225 url, ' with password' if conf.RQ_PASSWORD else ''))
226
227 if conf.RQ_PASSWORD:
228 url = 'redis://:{}@{}:{}/{}'.format(
229 conf.RQ_PASSWORD, conf.RQ_HOST, conf.RQ_PORT,
230 conf.RQ_DB)
231
232 try:
233 self._redis_server = \
234 redis_cls.from_url(url, **conf.REDIS_CLIENT_CLASS_KWARGS)
217 except Exception as e: 235 except Exception as e:
218 logger.warning('Unable to connect to redis server: {}'.format( 236 logger.warning('Unable to connect to redis server: {}'.format(
219 e)) 237 e))
220 else: 238 if conf.REDIS_STARTUP_ERROR_HARD_KILL:
221 return self._redis_server 239 sys.exit(1)
240
241 return None
242
243 return self._redis_server
222 244
223 def send_request(self, jobmsg, queue=None): 245 def send_request(self, jobmsg, queue=None):
224 """ 246 """
@@ -238,21 +260,28 @@ class Scheduler(HeartbeatMixin, EMQPService):
238 return msgid 260 return msgid
239 261
240 def on_disconnect(self, msgid, message): 262 def on_disconnect(self, msgid, message):
263 """Process request to shut down."""
241 logger.info("Received DISCONNECT request: {}".format(message)) 264 logger.info("Received DISCONNECT request: {}".format(message))
242 self._redis_server.connection_pool.disconnect()
243 sendmsg(self.outgoing, KBYE) 265 sendmsg(self.outgoing, KBYE)
244 self.outgoing.unbind(conf.SCHEDULER_ADDR) 266 self.outgoing.unbind(conf.SCHEDULER_ADDR)
267
268 if self._redis_server:
269 # Check the internal var. No need to connect if we haven't already
270 # connected by this point
271 try:
272 self._redis_server.connection_pool.disconnect()
273 except Exception:
274 pass
245 super(Scheduler, self).on_disconnect(msgid, message) 275 super(Scheduler, self).on_disconnect(msgid, message)
246 276
247 def on_kbye(self, msgid, msg): 277 def on_kbye(self, msgid, msg):
278 """Process router going offline"""
248 if not self.is_heartbeat_enabled: 279 if not self.is_heartbeat_enabled:
249 self.reset() 280 self.reset()
250 281
251 def on_unschedule(self, msgid, message): 282 def on_unschedule(self, msgid, message):
252 """ 283 """Unschedule an existing schedule job, if it exists."""
253 Unschedule an existing schedule job, if it exists 284 logger.debug("Received new UNSCHEDULE request: {}".format(message))
254 """
255 logger.info("Received new UNSCHEDULE request: {}".format(message))
256 285
257 schedule_hash = self.schedule_hash(message) 286 schedule_hash = self.schedule_hash(message)
258 # TODO: Notify router whether or not this succeeds 287 # TODO: Notify router whether or not this succeeds
@@ -263,11 +292,10 @@ class Scheduler(HeartbeatMixin, EMQPService):
263 Cancels a job if it exists 292 Cancels a job if it exists
264 293
265 Args: 294 Args:
266 schedule_hash (str): The schedule's unique hash. See 295 schedule_hash (str): The schedule's unique hash.
267 :meth:`Scheduler.schedule_hash` 296 See :meth:`Scheduler.schedule_hash`
268 """ 297 """
269 if schedule_hash in self.interval_jobs: 298 if schedule_hash in self.interval_jobs:
270
271 # If the hash wasn't found in either `cron_jobs` or `interval_jobs` 299 # If the hash wasn't found in either `cron_jobs` or `interval_jobs`
272 # then it's safe to assume it's already deleted. 300 # then it's safe to assume it's already deleted.
273 try: 301 try:
@@ -282,18 +310,18 @@ class Scheduler(HeartbeatMixin, EMQPService):
282 # Double check the redis server even if we didn't find the hash 310 # Double check the redis server even if we didn't find the hash
283 # in memory 311 # in memory
284 try: 312 try:
285 if (self.redis_server.get(schedule_hash)): 313 if self.redis_server:
286 self.redis_server.delete(schedule_hash)
287 self.redis_server.lrem('interval_jobs', 0, schedule_hash) 314 self.redis_server.lrem('interval_jobs', 0, schedule_hash)
288 self.redis_server.save() 315 except Exception as e:
289 except redis.ConnectionError as e: 316 logger.warning(str(e), exc_info=True)
290 logger.warning('Could not contact redis server: {}'.format(e)) 317 try:
318 if self.redis_server and self.redis_server.get(schedule_hash):
319 self.redis_server.delete(schedule_hash)
291 except Exception as e: 320 except Exception as e:
292 logger.warning(str(e), exc_info=True) 321 logger.warning(str(e), exc_info=True)
293 322
294 def load_job_from_redis(self, message): 323 def load_job_from_redis(self, message):
295 """ 324 """Parses and loads a message from redis as a scheduler job."""
296 """
297 from .utils.timeutils import IntervalIter 325 from .utils.timeutils import IntervalIter
298 326
299 queue = message[0].encode('utf-8') 327 queue = message[0].encode('utf-8')
@@ -329,8 +357,7 @@ class Scheduler(HeartbeatMixin, EMQPService):
329 self.cron_jobs[schedule_hash] = [c_next, message[3], c, queue] 357 self.cron_jobs[schedule_hash] = [c_next, message[3], c, queue]
330 358
331 def on_schedule(self, msgid, message): 359 def on_schedule(self, msgid, message):
332 """ 360 """Create a new scheduled job."""
333 """
334 logger.info("Received new SCHEDULE request: {}".format(message)) 361 logger.info("Received new SCHEDULE request: {}".format(message))
335 362
336 queue = message[0] 363 queue = message[0]
@@ -380,19 +407,16 @@ class Scheduler(HeartbeatMixin, EMQPService):
380 self.interval_jobs.pop(schedule_hash) 407 self.interval_jobs.pop(schedule_hash)
381 408
382 # Persist the scheduled job 409 # Persist the scheduled job
383 try: 410 if self.redis_server:
384 if schedule_hash not in self.redis_server.lrange( 411 try:
385 'interval_jobs', 0, -1): 412 if schedule_hash not in self.redis_server.lrange(
386 self.redis_server.lpush('interval_jobs', schedule_hash) 413 'interval_jobs', 0, -1):
387 self.redis_server.set(schedule_hash, serialize(message)) 414 self.redis_server.lpush('interval_jobs', schedule_hash)
388 self.redis_server.save() 415 self.redis_server.set(schedule_hash, serialize(message))
389 logger.debug('Saved job {} with hash {} to redis'.format( 416 logger.debug('Saved job {} with hash {} to redis'.format(
390 message, schedule_hash)) 417 message, schedule_hash))
391 except redis.ConnectionError: 418 except Exception as e:
392 logger.warning('Could not contact redis server. Unable to ' 419 logger.warning(str(e))
393 'guarantee persistence.')
394 except Exception as e:
395 logger.warning(str(e))
396 420
397 # Send a request in haste mode, decrement run_count if needed 421 # Send a request in haste mode, decrement run_count if needed
398 if 'nohaste' not in headers: 422 if 'nohaste' not in headers:
@@ -410,21 +434,17 @@ class Scheduler(HeartbeatMixin, EMQPService):
410 return run_count 434 return run_count
411 435
412 def on_heartbeat(self, msgid, message): 436 def on_heartbeat(self, msgid, message):
413 """ 437 """Noop command. The logic for heart beating is in the event loop."""
414 Noop command. The logic for heartbeating is in the event loop.
415 """
416 438
417 @classmethod 439 @classmethod
418 def schedule_hash(cls, message): 440 def schedule_hash(cls, message):
419 """ 441 """Create a unique identifier to store and reference a message later.
420 Create a unique identifier for this message for storing
421 and referencing later
422 442
423 Args: 443 Args:
424 message (str): The serialized message passed to the scheduler 444 message (str): The serialized message passed to the scheduler
425 445
426 Returns: 446 Returns:
427 int: unique hash for the job 447 str: unique hash for the job
428 """ 448 """
429 449
430 # Get the job portion of the message 450 # Get the job portion of the message
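``schedule_hash`` boils the job portion of the message down to a stable
identifier: sha1 over the serialized job. Roughly, in isolation (a sketch;
the exact fields EventMQ includes are defined in the method body):

```python
from hashlib import sha1 as emq_hash  # same alias the scheduler imports
import json

def schedule_hash(job_portion):
    """Hash the serialized job so identical schedules map to one key."""
    # sort_keys makes the hash independent of dict ordering (assumption
    # for this sketch; the real method normalizes the message itself).
    serialized = json.dumps(job_portion, sort_keys=True)
    return emq_hash(serialized.encode('utf-8')).hexdigest()

# Identical jobs produce identical hashes, so re-scheduling de-duplicates.
h1 = schedule_hash({"queue": "default", "interval": 300})
h2 = schedule_hash({"interval": 300, "queue": "default"})
```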
@@ -452,18 +472,17 @@ class Scheduler(HeartbeatMixin, EMQPService):
452 """ 472 """
453 setup_logger("eventmq") 473 setup_logger("eventmq")
454 import_settings() 474 import_settings()
455 self.__init__() 475 import_settings('scheduler')
456 self.start(addr=conf.SCHEDULER_ADDR)
457
458 476
459# Entry point for pip console scripts 477 self.load_jobs()
460def scheduler_main(): 478 self.start(addr=conf.SCHEDULER_ADDR)
461 s = Scheduler()
462 s.scheduler_main()
463 479
464 480
465def test_job(*args, **kwargs): 481def test_job(*args, **kwargs):
466 """ 482 """
467 Simple test job for use with the scheduler 483 Simple test job for use with the scheduler
468 """ 484 """
485 from pprint import pprint
469 print("hello!") # noqa 486 print("hello!") # noqa
487 pprint(args) # noqa
488 pprint(kwargs) # noqa
diff --git a/setup.py b/setup.py
index 1427d67..dceac7f 100644
--- a/setup.py
+++ b/setup.py
@@ -22,7 +22,6 @@ setup(
22 'six==1.10.0', 22 'six==1.10.0',
23 'monotonic==0.4', 23 'monotonic==0.4',
24 'croniter==0.3.10', 24 'croniter==0.3.10',
25 'redis==2.10.3',
26 'future==0.15.2', 25 'future==0.15.2',
27 'psutil==5.0.0', 26 'psutil==5.0.0',
28 'python-dateutil>=2.1,<3.0.0'], 27 'python-dateutil>=2.1,<3.0.0'],