You are reading the documentation for Celery 3.1. Documentation for the development version is available here.
celery.bin.worker
The celery worker command (previously known as celeryd)
-
-c, --concurrency
Number of child processes processing the queue. The default
is the number of CPUs available on your system.
-
-P, --pool
Pool implementation:
prefork (default), eventlet, gevent, solo or threads.
-
-f, --logfile
Path to log file. If no logfile is specified, stderr is used.
-
-l, --loglevel
Logging level, choose between DEBUG, INFO, WARNING,
ERROR, CRITICAL, or FATAL.
-
-n, --hostname
Set custom hostname, e.g. 'w1.%h'. Expands: %h (hostname),
%n (name) and %d (domain).
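The placeholders behave like simple string substitution against the machine's hostname. A minimal, hypothetical sketch of that expansion (this is not Celery's own code; `expand_nodename` and the default `name` are assumptions for illustration):

```python
import socket

def expand_nodename(template, name="celery"):
    """Illustrative expansion of the -n/--hostname placeholders."""
    hostname = socket.gethostname()        # %h: the machine's hostname
    domain = hostname.partition(".")[2]    # %d: everything after the first dot
    return (template
            .replace("%h", hostname)
            .replace("%n", name)
            .replace("%d", domain))

print(expand_nodename("w1.%h"))  # e.g. "w1.myhost.example.com"
```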
-
-B, --beat
Also run the celery beat periodic task scheduler. Please note that
there must only be one instance of this service.
-
-Q, --queues
List of queues to enable for this worker, separated by commas.
By default all configured queues are enabled.
Example: -Q video,image
-
-I, --include
Comma separated list of additional modules to import.
Example: -I foo.tasks,bar.tasks
-
-s, --schedule
Path to the schedule database if running with the -B option.
Defaults to celerybeat-schedule. The extension ".db" may be
appended to the filename.
-
-O
Apply optimization profile. Supported: default, fair
-
--scheduler
Scheduler class to use. Default is celery.beat.PersistentScheduler
-
-S, --statedb
Path to the state database. The extension '.db' may
be appended to the filename. Default: {default}
-
-E, --events
Send events that can be captured by monitors like celery events,
celerymon, and others.
-
--without-gossip
Do not subscribe to other workers' events.
-
--without-mingle
Do not synchronize with other workers at startup.
-
--without-heartbeat
Do not send event heartbeats.
-
--purge
Purges all waiting tasks before the daemon is started.
WARNING: This is unrecoverable, and the tasks will be
deleted from the messaging server.
-
--time-limit
Enables a hard time limit (in seconds, int/float) for tasks.
-
--soft-time-limit
Enables a soft time limit (in seconds, int/float) for tasks.
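The difference between the two limits: a soft limit interrupts the task with an exception it can catch to clean up (Celery raises `SoftTimeLimitExceeded`), while a hard limit terminates the worker process outright. A stand-alone sketch of the soft behaviour using `SIGALRM` (Unix-only; this is not Celery's actual pool mechanism, just an analogy):

```python
import signal
import time

class SoftLimitExceeded(Exception):
    """Stand-in for Celery's SoftTimeLimitExceeded."""

def _on_alarm(signum, frame):
    raise SoftLimitExceeded()

def run_with_soft_limit(func, seconds):
    # Arm a timer that interrupts the task with a catchable exception.
    signal.signal(signal.SIGALRM, _on_alarm)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        return func()
    except SoftLimitExceeded:
        return "cleaned up"  # the task gets a chance to release resources
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)  # disarm the timer

def slow_task():
    time.sleep(5)
    return "done"

print(run_with_soft_limit(slow_task, 0.1))     # interrupted: "cleaned up"
print(run_with_soft_limit(lambda: "fast", 5))  # finishes in time: "fast"
```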
-
--maxtasksperchild
Maximum number of tasks a pool worker can execute before it’s
terminated and replaced by a new worker.
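This is commonly used to contain slow memory leaks in task code. The replacement behaviour can be illustrated with a hypothetical simulation (not Celery's pool code) counting how many child processes a run of tasks consumes:

```python
def simulate_pool(tasks, max_per_child):
    """Count child processes needed when each child is replaced
    after executing max_per_child tasks (illustrative only)."""
    children = 1
    executed = 0
    for _ in tasks:
        if executed == max_per_child:  # child retired, spawn a replacement
            children += 1
            executed = 0
        executed += 1
    return children

print(simulate_pool(range(10), 4))  # 10 tasks, 4 per child -> 3 children
```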
-
--pidfile
Optional file used to store the worker's pid.
The worker will not start if this file already exists
and the pid is still alive.
-
--autoscale
Enable autoscaling by providing
max_concurrency, min_concurrency. Example:
(always keep 3 processes, but grow to 10 if necessary)
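The two numbers bound the pool size: the autoscaler grows toward max_concurrency when work is queued and shrinks back to min_concurrency when idle. A hypothetical sketch of that clamping logic (not Celery's autoscaler; the function and its policy are assumptions):

```python
def next_pool_size(current, queued, min_c=3, max_c=10):
    """Grow when there is a backlog, shrink when idle,
    always staying within [min_c, max_c]."""
    if queued > current:   # backlog: try to grow toward the demand
        return min(max_c, queued)
    if queued == 0:        # idle: shrink back down
        return min_c
    return max(min_c, current)

print(next_pool_size(3, queued=8))   # grows toward the backlog -> 8
print(next_pool_size(10, queued=0))  # idle: shrink to the minimum -> 3
```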
-
--autoreload
Enable autoreloading.
-
--no-execv
Don’t do execv after multiprocessing child fork.
-
class celery.bin.worker.worker(app=None, get_app=None, no_color=False, stdout=None, stderr=None, quiet=False, on_error=None, on_usage_error=None)[source]
Start worker instance.
Examples:
celery worker --app=proj -l info
celery worker -A proj -l info -Q hipri,lopri
celery worker -A proj --concurrency=4
celery worker -A proj --concurrency=1000 -P eventlet
celery worker --autoscale=10,0
-
doc = (the full help text for the celery worker command; a verbatim copy of the option reference shown above)
-
enable_config_from_cmdline = True
-
get_options()[source]
-
maybe_detach(argv, dopts=[u'-D', u'--detach'])[source]
-
namespace = u'celeryd'
-
run(hostname=None, pool_cls=None, app=None, uid=None, gid=None, loglevel=None, logfile=None, pidfile=None, state_db=None, **kwargs)[source]
-
run_from_argv(prog_name, argv=None, command=None)[source]
-
supports_args = False
-
with_pool_option(argv)[source]
-
celery.bin.worker.main(app=None)[source]