Note
Alternate routing concepts like topic and fanout may not be available for all transports; please consult the transport comparison table.
The simplest way to do routing is to use the CELERY_CREATE_MISSING_QUEUES setting (on by default).
With this setting on, a named queue that is not already defined in CELERY_QUEUES will be created automatically. This makes it easy to perform simple routing tasks.
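If you would rather require every queue to be declared explicitly in CELERY_QUEUES, the setting can be turned off (a minimal configuration sketch):

CELERY_CREATE_MISSING_QUEUES = False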
Say you have two servers, x and y, that handle regular tasks, and one server, z, that only handles feed-related tasks. You can use this configuration:
CELERY_ROUTES = {'feed.tasks.import_feed': {'queue': 'feeds'}}
With this route enabled, import feed tasks will be routed to the “feeds” queue, while all other tasks will be routed to the default queue (named “celery” for historical reasons).
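As long as the route is in place, the caller doesn't need to pass any routing arguments; calling the task normally is enough (a sketch, assuming the feed.tasks module named in the route above):

>>> from feed.tasks import import_feed
>>> import_feed.delay('http://cnn.com/rss')   # routed to the "feeds" queue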
Now you can start server z to only process the feeds queue like this:
user@z:/$ celery worker -Q feeds
You can specify as many queues as you want, so you can make this server process the default queue as well:
user@z:/$ celery worker -Q feeds,celery
You can change the name of the default queue by using the following configuration:
from kombu import Exchange, Queue
CELERY_DEFAULT_QUEUE = 'default'
CELERY_QUEUES = (
    Queue('default', Exchange('default'), routing_key='default'),
)
The point of this feature is to hide the complexities of AMQP from users with only basic needs. However, you may still be interested in how these queues are declared.
A queue named “video” will be created with the following settings:
{'exchange': 'video',
'exchange_type': 'direct',
'routing_key': 'video'}
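In other words, the automatically created queue is roughly equivalent to declaring it yourself with kombu's Queue and Exchange (a sketch):

from kombu import Exchange, Queue

CELERY_QUEUES = (
    Queue('video', Exchange('video', type='direct'), routing_key='video'),
)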
Non-AMQP backends like ghettoq do not support exchanges, so they require the exchange to have the same name as the queue. Using this design ensures it will work for them as well.
Say you have two servers, x and y, that handle regular tasks, and one server, z, that only handles feed-related tasks; you can use this configuration:
from kombu import Queue
CELERY_DEFAULT_QUEUE = 'default'
CELERY_QUEUES = (
    Queue('default', routing_key='task.#'),
    Queue('feed_tasks', routing_key='feed.#'),
)
CELERY_DEFAULT_EXCHANGE = 'tasks'
CELERY_DEFAULT_EXCHANGE_TYPE = 'topic'
CELERY_DEFAULT_ROUTING_KEY = 'task.default'
CELERY_QUEUES is a list of Queue instances. If you don’t set the exchange or exchange type values for a queue, these will be taken from the CELERY_DEFAULT_EXCHANGE and CELERY_DEFAULT_EXCHANGE_TYPE settings.
To route a task to the feed_tasks queue, you can add an entry in the CELERY_ROUTES setting:
CELERY_ROUTES = {
    'feeds.tasks.import_feed': {
        'queue': 'feed_tasks',
        'routing_key': 'feed.import',
    },
}
You can also override this using the routing_key argument to Task.apply_async(), or send_task():
>>> from feeds.tasks import import_feed
>>> import_feed.apply_async(args=['http://cnn.com/rss'],
... queue='feed_tasks',
... routing_key='feed.import')
To make server z consume from the feed queue exclusively, you can start it with the -Q option:
user@z:/$ celery worker -Q feed_tasks --hostname=z@%h
Servers x and y must be configured to consume from the default queue:
user@x:/$ celery worker -Q default --hostname=x@%h
user@y:/$ celery worker -Q default --hostname=y@%h
If you want, you can even have your feed processing worker handle regular tasks as well, maybe in times when there’s a lot of work to do:
user@z:/$ celery worker -Q feed_tasks,default --hostname=z@%h
If you want to add another queue that uses a different exchange, just specify a custom exchange and exchange type:
from kombu import Exchange, Queue
CELERY_QUEUES = (
    Queue('feed_tasks', routing_key='feed.#'),
    Queue('regular_tasks', routing_key='task.#'),
    Queue('image_tasks', exchange=Exchange('mediatasks', type='direct'),
          routing_key='image.compress'),
)
If you’re confused about these terms, you should read up on AMQP.
See also
In addition to the AMQP Primer below, there’s Rabbits and Warrens, an excellent blog post describing queues and exchanges. There’s also AMQP in 10 minutes: Flexible Routing Model, and Standard Exchange Types. For users of RabbitMQ the RabbitMQ FAQ could be useful as a source of information.
A message consists of headers and a body. Celery uses headers to store the content type of the message and its content encoding. The content type is usually the serialization format used to serialize the message. The body contains the name of the task to execute, the task id (UUID), the arguments to apply it with and some additional metadata – like the number of retries or an ETA.
This is an example task message represented as a Python dictionary:
{'task': 'myapp.tasks.add',
'id': '54086c5e-6193-4575-8308-dbab76798756',
'args': [4, 4],
'kwargs': {}}
The client sending messages is typically called a publisher, or a producer, while the entity receiving messages is called a consumer.
The broker is the message server, routing messages from producers to consumers.
You are likely to see these terms used a lot in AMQP related material.
The steps required to send and receive messages are:
1. Create an exchange
2. Create a queue
3. Bind the queue to the exchange.
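As a rough sketch of these steps using kombu directly (assuming a RabbitMQ broker on localhost; the exchange and queue names are just examples):

from kombu import Connection, Exchange, Queue

media_exchange = Exchange('media', type='direct')
video_queue = Queue('video', exchange=media_exchange, routing_key='video')

with Connection('amqp://guest:guest@localhost//') as conn:
    producer = conn.Producer(serializer='json')
    # declare=[video_queue] declares the exchange and the queue,
    # and binds the queue to the exchange before publishing.
    producer.publish({'hello': 'world'},
                     exchange=media_exchange,
                     routing_key='video',
                     declare=[video_queue])

    # Fetch the message back off the queue and acknowledge it.
    simple = conn.SimpleQueue(video_queue)
    message = simple.get(block=True, timeout=1)
    print(message.payload)
    message.ack()
    simple.close()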
Celery automatically creates the entities necessary for the queues in CELERY_QUEUES to work (except if the queue’s auto_declare setting is set to False).
Here’s an example queue configuration with three queues; one for video, one for images, and one default queue for everything else:
from kombu import Exchange, Queue
CELERY_QUEUES = (
    Queue('default', Exchange('default'), routing_key='default'),
    Queue('videos', Exchange('media'), routing_key='media.video'),
    Queue('images', Exchange('media'), routing_key='media.image'),
)
CELERY_DEFAULT_QUEUE = 'default'
CELERY_DEFAULT_EXCHANGE_TYPE = 'direct'
CELERY_DEFAULT_ROUTING_KEY = 'default'
The exchange type defines how the messages are routed through the exchange. The exchange types defined in the standard are direct, topic, fanout and headers. Also non-standard exchange types are available as plug-ins to RabbitMQ, like the last-value-cache plug-in by Michael Bridgen.
Direct exchanges match by exact routing keys, so a queue bound by the routing key video only receives messages with that routing key.
Topic exchanges matches routing keys using dot-separated words, and the wildcard characters: * (matches a single word), and # (matches zero or more words).
With routing keys like usa.news, usa.weather, norway.news and norway.weather, bindings could be *.news (all news), usa.# (all items in the USA) or usa.weather (all USA weather items).
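As a sketch, bindings like these could be expressed as CELERY_QUEUES entries (the 'news' exchange and queue names here are only illustrative):

from kombu import Exchange, Queue

news_exchange = Exchange('news', type='topic')

CELERY_QUEUES = (
    # '*' matches exactly one word: news items from any country.
    Queue('all_news', news_exchange, routing_key='*.news'),
    # '#' matches zero or more words: everything published under 'usa.'.
    Queue('usa_items', news_exchange, routing_key='usa.#'),
)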
Celery comes with a tool called celery amqp that is used for command line access to the AMQP API, enabling access to administration tasks like creating/deleting queues and exchanges, purging queues or sending messages. It can also be used for non-AMQP brokers, but different implementations may not support all commands.
You can write commands directly in the arguments to celery amqp, or just start with no arguments to start it in shell-mode:
$ celery amqp
-> connecting to amqp://guest@localhost:5672/.
-> connected.
1>
Here 1> is the prompt. The number 1 is the number of commands you have executed so far. Type help for a list of commands available. It also supports auto-completion, so you can start typing a command and then hit the tab key to show a list of possible matches.
Let’s create a queue you can send messages to:
$ celery amqp
1> exchange.declare testexchange direct
ok.
2> queue.declare testqueue
ok. queue:testqueue messages:0 consumers:0.
3> queue.bind testqueue testexchange testkey
ok.
This created the direct exchange testexchange, and a queue named testqueue. The queue is bound to the exchange using the routing key testkey.
From now on all messages sent to the exchange testexchange with routing key testkey will be moved to this queue. You can send a message by using the basic.publish command:
4> basic.publish 'This is a message!' testexchange testkey
ok.
Now that the message is sent you can retrieve it again. You can use the basic.get command here, which polls for new messages on the queue (this is alright for maintenance tasks; for services you'd want to use basic.consume instead).
Pop a message off the queue:
5> basic.get testqueue
{'body': 'This is a message!',
'delivery_info': {'delivery_tag': 1,
'exchange': u'testexchange',
'message_count': 0,
'redelivered': False,
'routing_key': u'testkey'},
'properties': {}}
AMQP uses acknowledgment to signify that a message has been received and processed successfully. If the message has not been acknowledged and the consumer channel is closed, the message will be delivered to another consumer.
Note the delivery tag listed in the structure above; within a connection channel, every received message has a unique delivery tag. This tag is used to acknowledge the message. Also note that delivery tags are not unique across connections, so in another client the delivery tag 1 might point to a different message than in this channel.
You can acknowledge the message you received using basic.ack:
6> basic.ack 1
ok.
To clean up after our test session you should delete the entities you created:
7> queue.delete testqueue
ok. 0 messages deleted.
8> exchange.delete testexchange
ok.
In Celery, available queues are defined by the CELERY_QUEUES setting.
Here’s an example queue configuration with three queues; one for video, one for images, and one default queue for everything else:
from kombu import Exchange, Queue

default_exchange = Exchange('default', type='direct')
media_exchange = Exchange('media', type='direct')

CELERY_QUEUES = (
    Queue('default', default_exchange, routing_key='default'),
    Queue('videos', media_exchange, routing_key='media.video'),
    Queue('images', media_exchange, routing_key='media.image'),
)
CELERY_DEFAULT_QUEUE = 'default'
CELERY_DEFAULT_EXCHANGE = 'default'
CELERY_DEFAULT_ROUTING_KEY = 'default'
Here, the CELERY_DEFAULT_QUEUE will be used to route tasks that don’t have an explicit route.
The default exchange, exchange type and routing key will be used as the default routing values for tasks, and as the default values for entries in CELERY_QUEUES.
The destination for a task is decided by the following (in order):
1. The routers defined in CELERY_ROUTES.
2. The routing arguments to Task.apply_async().
3. Routing-related attributes defined on the Task itself.
It is considered best practice not to hard-code these settings, but rather leave them as configuration options by using Routers; this is the most flexible approach, but sensible defaults can still be set as task attributes.
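For example, a default queue could be set directly on the task definition; routers and apply_async() arguments can still override it (a sketch, the app and task names are illustrative):

from celery import Celery

app = Celery('myapp', broker='amqp://')

@app.task(queue='feed_tasks')   # routing-related attribute on the task itself
def import_feed(url):
    print('importing feed %r' % (url,))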
A router is a class that decides the routing options for a task.
All you need to define a new router is to create a class with a route_for_task method:
class MyRouter(object):

    def route_for_task(self, task, args=None, kwargs=None):
        if task == 'myapp.tasks.compress_video':
            return {'exchange': 'video',
                    'exchange_type': 'topic',
                    'routing_key': 'video.compress'}
        return None
If you return the queue key, it will expand with the defined settings of that queue in CELERY_QUEUES:
{'queue': 'video', 'routing_key': 'video.compress'}
becomes –>
{'queue': 'video',
'exchange': 'video',
'exchange_type': 'topic',
'routing_key': 'video.compress'}
You install router classes by adding them to the CELERY_ROUTES setting:
CELERY_ROUTES = (MyRouter(), )
Router classes can also be added by name:
CELERY_ROUTES = ('myapp.routers.MyRouter', )
For simple task name -> route mappings like the router example above, you can simply drop a dict into CELERY_ROUTES to get the same behavior:
CELERY_ROUTES = ({'myapp.tasks.compress_video': {
                      'queue': 'video',
                      'routing_key': 'video.compress',
                  }},)
The routers will then be traversed in order; traversal stops at the first router returning a true value, and that is used as the final route for the task.
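For example, CELERY_ROUTES can combine a router instance with a plain mapping; the first entry that returns a route for a task wins (a sketch, reusing MyRouter from above):

CELERY_ROUTES = (
    MyRouter(),                                              # consulted first
    {'feeds.tasks.import_feed': {'queue': 'feed_tasks'}},    # used only if MyRouter returns None
)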
Celery can also support broadcast routing. Here is an example exchange broadcast_tasks that delivers copies of tasks to all workers connected to it:
from kombu.common import Broadcast
CELERY_QUEUES = (Broadcast('broadcast_tasks'), )
CELERY_ROUTES = {'tasks.reload_cache': {'queue': 'broadcast_tasks'}}
Now the tasks.reload_cache task will be sent to every worker consuming from this queue.
Broadcast & Results
Note that Celery result backends do not define what happens if two tasks have the same task_id. If the same task is distributed to more than one worker, then the state history may not be preserved.
It is a good idea to set the task.ignore_result attribute in this case.
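A minimal sketch, reusing the broadcast task from above (assuming an app instance is already defined):

@app.task(ignore_result=True)   # no result will be stored for this task
def reload_cache():
    pass   # reload the cache here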