Spider Middleware
The spider middleware is a framework of hooks into Scrapy’s spider processing mechanism where you can plug custom functionality to process the responses that are sent to spiders and to process the requests and items that spiders generate.
Activating a spider middleware
To activate a spider middleware component, add it to the
SPIDER_MIDDLEWARES setting, which is a dict whose keys are the
middleware class path and their values are the middleware orders.
Here’s an example:
SPIDER_MIDDLEWARES = {
    "myproject.middlewares.CustomSpiderMiddleware": 543,
}
The SPIDER_MIDDLEWARES setting is merged with the
SPIDER_MIDDLEWARES_BASE setting defined in Scrapy (and not meant to
be overridden) and then sorted by order to get the final sorted list of enabled
middlewares: the first middleware is the one closer to the engine and the last
is the one closer to the spider. In other words,
the process_spider_input()
method of each middleware will be invoked in increasing
middleware order (100, 200, 300, …), and the
process_spider_output() method
of each middleware will be invoked in decreasing order.
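For example, with the hypothetical pair of middlewares below, process_spider_input() runs on FirstMiddleware before SecondMiddleware, while process_spider_output() runs in the reverse order:

SPIDER_MIDDLEWARES = {
    "myproject.middlewares.FirstMiddleware": 100,   # closer to the engine
    "myproject.middlewares.SecondMiddleware": 900,  # closer to the spider
}
# process_spider_input():  FirstMiddleware -> SecondMiddleware -> spider
# process_spider_output(): spider -> SecondMiddleware -> FirstMiddleware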
To decide which order to assign to your middleware see the
SPIDER_MIDDLEWARES_BASE setting and pick a value according to where
you want to insert the middleware. The order does matter because each
middleware performs a different action and your middleware could depend on some
previous (or subsequent) middleware being applied.
If you want to disable a builtin middleware (the ones defined in
SPIDER_MIDDLEWARES_BASE, and enabled by default) you must define it
in your project SPIDER_MIDDLEWARES setting and assign None as its
value. For example, if you want to disable the referer middleware:
SPIDER_MIDDLEWARES = {
    "scrapy.spidermiddlewares.referer.RefererMiddleware": None,
    "myproject.middlewares.CustomRefererSpiderMiddleware": 700,
}
Finally, keep in mind that some middlewares may need to be enabled through a particular setting. See each middleware documentation for more info.
Writing your own spider middleware
Each spider middleware is a component that defines one or more of these methods:
- class scrapy.spidermiddlewares.SpiderMiddleware
- async process_start(start: AsyncIterator[Any], /) → AsyncIterator[Any]
Iterate over the output of start() or that of the process_start() method of an earlier spider middleware, overriding it. For example:

async def process_start(self, start):
    async for item_or_request in start:
        yield item_or_request

You may yield the same type of objects as start().

To write spider middlewares that work on Scrapy versions lower than 2.13, define also a synchronous process_start_requests() method that returns an iterable. For example:

def process_start_requests(self, start, spider):
    yield from start
- process_spider_input(response, spider)
This method is called for each response that goes through the spider middleware and into the spider, for processing.
process_spider_input() should return None or raise an exception.

If it returns None, Scrapy will continue processing this response, executing all other middlewares until, finally, the response is handed to the spider for processing.

If it raises an exception, Scrapy won’t bother calling any other spider middleware’s process_spider_input() and will call the request errback if there is one, otherwise it will start the process_spider_exception() chain. The output of the errback is chained back in the other direction for process_spider_output() to process it, or process_spider_exception() if it raised an exception.

- Parameters:
response (Response object) – the response being processed

spider (Spider object) – the spider for which this response is intended
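For illustration, here is a minimal sketch (the middleware class and the size limit are hypothetical) that rejects oversized responses from process_spider_input() by raising an exception, handing control to the request errback or the process_spider_exception() chain:

class MaxBodySizeSpiderMiddleware:
    MAX_BODY_SIZE = 1_000_000  # arbitrary limit chosen for this example

    def process_spider_input(self, response, spider):
        if len(response.body) > self.MAX_BODY_SIZE:
            # Raising skips the remaining process_spider_input() calls
            # and routes the error to the errback or exception chain.
            raise ValueError(f"Response too large: {response.url}")
        return None  # continue processing this response normally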
- process_spider_output(response, result, spider)
This method is called with the results returned from the Spider, after it has processed the response.
process_spider_output() must return an iterable of Request objects and item objects.

Changed in version 2.7: This method may be defined as an asynchronous generator, in which case result is an asynchronous iterable.

Consider defining this method as an asynchronous generator, which will be a requirement in a future version of Scrapy. However, if you plan on sharing your spider middleware with other people, consider either enforcing Scrapy 2.7 as a minimum requirement of your spider middleware, or making your spider middleware universal so that it works with Scrapy versions earlier than Scrapy 2.7.
- Parameters:
response (Response object) – the response which generated this output from the spider

result (an iterable of Request objects and item objects) – the result returned by the spider

spider (Spider object) – the spider whose result is being processed
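As an illustration, here is a minimal sketch (hypothetical middleware; the blocked domain is an arbitrary assumption) of this method written as an asynchronous generator, dropping requests to one domain and passing everything else through:

from scrapy import Request

class DropDomainSpiderMiddleware:
    BLOCKED_DOMAIN = "ads.example.com"  # assumption for illustration

    async def process_spider_output(self, response, result, spider):
        async for item_or_request in result:
            if isinstance(item_or_request, Request) and self.BLOCKED_DOMAIN in item_or_request.url:
                continue  # drop requests to the blocked domain
            yield item_or_request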
- async process_spider_output_async(response, result, spider)
Added in version 2.7.
If defined, this method must be an asynchronous generator, which will be called instead of process_spider_output() if result is an asynchronous iterable.
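For example, a minimal sketch of a universal middleware (hypothetical name) that pairs both methods, so it works whether result is a plain iterable or an asynchronous one:

class UniversalSpiderMiddleware:
    def process_spider_output(self, response, result, spider):
        for item_or_request in result:
            yield item_or_request  # synchronous path

    async def process_spider_output_async(self, response, result, spider):
        async for item_or_request in result:
            yield item_or_request  # asynchronous path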
- process_spider_exception(response, exception, spider)
This method is called when a spider or a process_spider_output() method (from a previous spider middleware) raises an exception.

process_spider_exception() should return either None or an iterable of Request or item objects.

If it returns None, Scrapy will continue processing this exception, executing any other process_spider_exception() in the following middleware components, until no middleware components are left and the exception reaches the engine (where it’s logged and discarded).

If it returns an iterable, the process_spider_output() pipeline kicks in, starting from the next spider middleware, and no other process_spider_exception() will be called.

- Parameters:
response (Response object) – the response being processed when the exception was raised

exception (Exception object) – the exception raised

spider (Spider object) – the spider which raised the exception
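For example, a minimal sketch (hypothetical middleware; the handled exception type is arbitrary) that recovers from one exception type by returning an empty iterable, which resumes the process_spider_output() chain:

class RecoverSpiderMiddleware:
    def process_spider_exception(self, response, exception, spider):
        if isinstance(exception, KeyError):
            spider.logger.warning("Recovered from KeyError on %s", response.url)
            return []  # resume the process_spider_output() pipeline
        return None  # defer to later middlewares and, finally, the engine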
Base class for custom spider middlewares
Scrapy provides a base class for custom spider middlewares. It’s not required to use it but it can help with simplifying middleware implementations and reducing the amount of boilerplate code in universal middlewares.
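As a sketch, assuming the BaseSpiderMiddleware class available in recent Scrapy versions and its get_processed_request()/get_processed_item() hooks (check your Scrapy version’s reference for the exact API), a filtering middleware might look like this:

from scrapy.spidermiddlewares.base import BaseSpiderMiddleware  # assumed import path

class NoAdsSpiderMiddleware(BaseSpiderMiddleware):
    # Returning None from these hooks drops the object; returning it
    # (possibly modified) keeps it in the output stream.
    def get_processed_request(self, request, response):
        if "ads" in request.url:  # arbitrary filtering rule for illustration
            return None
        return request

    def get_processed_item(self, item, response):
        return item  # pass items through unchanged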
Built-in spider middleware reference
This page describes all spider middleware components that come with Scrapy. For information on how to use them and how to write your own spider middleware, see the spider middleware usage guide.
For a list of the components enabled by default (and their orders) see the
SPIDER_MIDDLEWARES_BASE setting.
DepthMiddleware
- class scrapy.spidermiddlewares.depth.DepthMiddleware
DepthMiddleware is used for tracking the depth of each Request inside the site being scraped. It works by setting request.meta['depth'] = 0 whenever there is no value previously set (usually just the first Request) and incrementing it by 1 otherwise.

It can be used to limit the maximum depth to scrape, control Request priority based on their depth, and things like that.

The DepthMiddleware can be configured through the following settings (see the settings documentation for more info):

DEPTH_LIMIT - The maximum depth that will be allowed to crawl for any site. If zero, no limit will be imposed.
DEPTH_STATS_VERBOSE - Whether to collect the number of requests for each depth.
DEPTH_PRIORITY - Whether to prioritize the requests based on their depth.
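For example, a minimal settings.py sketch combining these options (the values are arbitrary):

DEPTH_LIMIT = 3             # ignore requests more than 3 links deep
DEPTH_STATS_VERBOSE = True  # collect request counts per depth
DEPTH_PRIORITY = 1          # positive values deprioritize deeper requests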
HttpErrorMiddleware
- class scrapy.spidermiddlewares.httperror.HttpErrorMiddleware
Filter out unsuccessful (erroneous) HTTP responses so that spiders don’t have to deal with them, which (most of the time) imposes an overhead, consumes more resources, and makes the spider logic more complex.
According to the HTTP standard, successful responses are those whose status codes are in the 200-300 range.
If you still want to process response codes outside that range, you can
specify which response codes the spider is able to handle using the
handle_httpstatus_list spider attribute or
HTTPERROR_ALLOWED_CODES setting.
For example, if you want your spider to handle 404 responses you can do this:
from scrapy.spiders import CrawlSpider

class MySpider(CrawlSpider):
    handle_httpstatus_list = [404]
The handle_httpstatus_list key of Request.meta can also be used to specify which response codes to
allow on a per-request basis. You can also set the meta key handle_httpstatus_all
to True if you want to allow any response code for a request, and False to
disable the effects of the handle_httpstatus_all key.
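For instance, a minimal spider sketch (URLs are placeholders) allowing 404 responses for a single request:

import scrapy

class MySpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com"]

    def parse(self, response):
        # allow this specific request to receive 404 responses
        yield scrapy.Request(
            "https://example.com/maybe-missing",
            meta={"handle_httpstatus_list": [404]},
        )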
Keep in mind, however, that it’s usually a bad idea to handle non-200 responses, unless you really know what you’re doing.
For more information see: HTTP Status Code Definitions.
HttpErrorMiddleware settings
HTTPERROR_ALLOWED_CODES
Default: []
Pass all responses with non-200 status codes contained in this list.
HTTPERROR_ALLOW_ALL
Default: False
Pass all responses, regardless of their status code.
RefererMiddleware
- class scrapy.spidermiddlewares.referer.RefererMiddleware
Populates the Request Referer header, based on the URL of the Response which generated it.
RefererMiddleware settings
REFERER_ENABLED
Default: True
Whether to enable referer middleware.
REFERRER_POLICY
Default: 'scrapy.spidermiddlewares.referer.DefaultReferrerPolicy'
Referrer Policy to apply when populating Request “Referer” header.
Note
You can also set the Referrer Policy per request,
using the special "referrer_policy" Request.meta key,
with the same acceptable values as for the REFERRER_POLICY setting.
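For example, a minimal sketch (placeholder URL) setting the policy for a single request:

# inside a spider callback:
yield scrapy.Request(
    "https://external.example/page",
    meta={"referrer_policy": "same-origin"},  # same values as REFERRER_POLICY
)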
Acceptable values for REFERRER_POLICY
- either a path to a scrapy.spidermiddlewares.referer.ReferrerPolicy subclass — a custom policy or one of the built-in ones (see classes below),
- or one or more comma-separated standard W3C-defined string values,
- or the special "scrapy-default".
| String value | Class name (as a string) |
|---|---|
| "scrapy-default" (default) | scrapy.spidermiddlewares.referer.DefaultReferrerPolicy |
| "no-referrer" | scrapy.spidermiddlewares.referer.NoReferrerPolicy |
| "no-referrer-when-downgrade" | scrapy.spidermiddlewares.referer.NoReferrerWhenDowngradePolicy |
| "same-origin" | scrapy.spidermiddlewares.referer.SameOriginPolicy |
| "origin" | scrapy.spidermiddlewares.referer.OriginPolicy |
| "strict-origin" | scrapy.spidermiddlewares.referer.StrictOriginPolicy |
| "origin-when-cross-origin" | scrapy.spidermiddlewares.referer.OriginWhenCrossOriginPolicy |
| "strict-origin-when-cross-origin" | scrapy.spidermiddlewares.referer.StrictOriginWhenCrossOriginPolicy |
| "unsafe-url" | scrapy.spidermiddlewares.referer.UnsafeUrlPolicy |
Warning
Scrapy’s default referrer policy — just like “no-referrer-when-downgrade”,
the W3C-recommended value for browsers — will send a non-empty
“Referer” header from any http(s):// to any https:// URL,
even if the domain is different.
“same-origin” may be a better choice if you want to remove referrer information for cross-domain requests.
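For example, to opt into it project-wide in settings.py:

REFERRER_POLICY = "same-origin"  # full Referer same-origin, none cross-origin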
Note
“no-referrer-when-downgrade” policy is the W3C-recommended default, and is used by major web browsers.
However, it is NOT Scrapy’s default referrer policy (see DefaultReferrerPolicy).
Warning
“unsafe-url” policy is NOT recommended.
StartSpiderMiddleware
UrlLengthMiddleware
- class scrapy.spidermiddlewares.urllength.UrlLengthMiddleware
Filters out requests with URLs longer than URLLENGTH_LIMIT.

The UrlLengthMiddleware can be configured through the following settings (see the settings documentation for more info):

URLLENGTH_LIMIT - The maximum URL length to allow for crawled URLs.
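For example, a minimal settings.py sketch (the value is an arbitrary illustration):

URLLENGTH_LIMIT = 2083  # drop requests whose URLs exceed 2083 characters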