Here’s a list of all available Scrapy settings, in alphabetical order, along with their default values and the scope where they apply.
The scope, where available, shows where the setting is being used, if it’s tied to any particular component. In that case the module of that component will be shown, typically an extension, middleware or pipeline. It also means that the component must be enabled in order for the setting to have any effect.
AWS_ACCESS_KEY_ID
Default: None
The AWS access key used by code that requires access to Amazon Web Services, such as the S3 feed storage backend.
AWS_SECRET_ACCESS_KEY
Default: None
The AWS secret key used by code that requires access to Amazon Web Services, such as the S3 feed storage backend.
BOT_NAME
Default: 'scrapybot'
The name of the bot implemented by this Scrapy project (also known as the project name). This will be used to construct the User-Agent by default, and also for logging.
It’s automatically populated with your project name when you create your project with the startproject command.
Change your BOT_NAME, or set a specific USER_AGENT, to suit the site you are scraping.
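For example, a minimal settings.py sketch (the user-agent string below is only an illustration; pick one suited to your target site):

# settings.py
BOT_NAME = 'mybot'  # used in logging and in the default User-Agent

# Illustrative browser-style user agent; overrides the BOT_NAME-based default
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; rv:34.0) Gecko/20100101 Firefox/34.0'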
CONCURRENT_ITEMS
Default: 100
Maximum number of concurrent items (per response) to process in parallel in the Item Processor (also known as the Item Pipeline).
Keep your CONCURRENT_ITEMS value flexible and tune it per project rather than relying on the default.
CONCURRENT_REQUESTS
Default: 16
The maximum number of concurrent (i.e. simultaneous) requests that will be performed by the Scrapy downloader.
Adjust CONCURRENT_REQUESTS for each website: a site can detect your bot when many simultaneous requests arrive from the same IP and User-Agent.
CONCURRENT_REQUESTS_PER_DOMAIN
Default: 8
The maximum number of concurrent (i.e. simultaneous) requests that will be performed to any single domain.
CONCURRENT_REQUESTS_PER_IP
Default: 0
The maximum number of concurrent (i.e. simultaneous) requests that will be performed to any single IP. If non-zero, the CONCURRENT_REQUESTS_PER_DOMAIN setting is ignored, and this one is used instead. In other words, concurrency limits will be applied per IP, not per domain.
This setting also affects DOWNLOAD_DELAY: if CONCURRENT_REQUESTS_PER_IP is non-zero, download delay is enforced per IP, not per domain.
Choose how many concurrent requests to send to a server according to the website's security parameters. Too many concurrent requests to a server can get your bot banned.
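As a rough sketch, a conservative configuration for a sensitive site might look like this (the numbers are illustrative, not recommendations):

# settings.py - illustrative, conservative concurrency limits
CONCURRENT_REQUESTS = 8             # global cap for the downloader
CONCURRENT_REQUESTS_PER_DOMAIN = 4  # cap per domain
CONCURRENT_REQUESTS_PER_IP = 2      # non-zero, so this overrides the per-domain cap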
DEFAULT_ITEM_CLASS
Default: 'scrapy.item.Item'
The default class that will be used for instantiating items in the Scrapy shell.
DEFAULT_REQUEST_HEADERS
Default:
{
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}
The default headers used for Scrapy HTTP Requests. They're populated in the DefaultHeadersMiddleware.
You can copy the headers a real browser sends to the website here, to make your bot look more like a human visitor.
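For instance, a sketch that mirrors what a typical browser sends (all header values here are illustrative):

# settings.py - illustrative browser-like headers
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'DNT': '1',  # illustrative; copy whatever the real browser sends
}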
DEPTH_LIMIT
Default: 0
The maximum depth that will be allowed to crawl for any site. If zero, no limit will be imposed.
If you are using Rules in your crawler, it is advisable to set a DEPTH_LIMIT.
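A minimal sketch of a rule-based crawler capped at three levels (spider name, module, and domain are hypothetical; the import paths match the scrapy.contrib layout used throughout this post):

# settings.py
DEPTH_LIMIT = 3  # requests deeper than 3 links from the start URLs are dropped

# myspider.py (hypothetical)
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):
    name = 'myspider'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']
    rules = (Rule(SgmlLinkExtractor(), callback='parse_item', follow=True),)

    def parse_item(self, response):
        self.log('Visited %s' % response.url)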
DEPTH_PRIORITY
Default: 0
An integer that is used to adjust the request priority based on its depth.
If zero, no priority adjustment is made from depth.
DEPTH_STATS
Default: True
Whether to collect maximum depth stats.
DEPTH_STATS_VERBOSE
Default: False
Whether to collect verbose depth stats. If this is enabled, the number of requests for each depth is collected in the stats.
DNSCACHE_ENABLED
Default: True
Whether to enable DNS in-memory cache.
DOWNLOADER_DEBUG
Default: False
Whether to enable the Downloader debugging mode.
DOWNLOADER_MIDDLEWARES
Default: {}
A dict containing the downloader middlewares enabled in your project, and their orders. For more info see Activating a downloader middleware.
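For example, to enable a middleware of your own (the class path is hypothetical), assign it an order that places it where you want relative to the DOWNLOADER_MIDDLEWARES_BASE orders listed below:

# settings.py
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomProxyMiddleware': 760,  # hypothetical; just after HttpProxyMiddleware (750)
}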
DOWNLOADER_MIDDLEWARES_BASE
Default:
{
    'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
    'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
    'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
    'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
    'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
    'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
    'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
}
A dict containing the downloader middlewares enabled by default in Scrapy, and their orders. You should never modify this setting in your project; modify DOWNLOADER_MIDDLEWARES instead.
DOWNLOADER_STATS
Default: True
Whether to enable downloader stats collection.
DOWNLOAD_DELAY
Default: 0
The amount of time (in secs) that the downloader should wait before downloading consecutive pages from the same website. This can be used to throttle the crawling speed to avoid hitting servers too hard. Decimal numbers are supported. Example:
DOWNLOAD_DELAY = 0.25 # 250 ms of delay
When CONCURRENT_REQUESTS_PER_IP is non-zero, delays are enforced per IP address instead of per domain.
You can also change this setting per spider by setting the download_delay spider attribute.
Keep your DOWNLOAD_DELAY variable, e.g. by picking it from a random range, rather than using a fixed value.
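A sketch of both approaches; RANDOMIZE_DOWNLOAD_DELAY (described below) already varies the delay for you, and the per-spider attribute here is only an illustration:

# settings.py
DOWNLOAD_DELAY = 2               # base delay of 2 seconds
RANDOMIZE_DOWNLOAD_DELAY = True  # actual wait becomes 0.5 * 2s to 1.5 * 2s

# or per spider, picking a delay from a range at startup (hypothetical):
import random
from scrapy.spider import BaseSpider

class MySpider(BaseSpider):
    name = 'myspider'
    download_delay = random.uniform(1, 3)  # seconds, chosen once per run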
DOWNLOAD_HANDLERS
Default: {}
A dict containing the request downloader handlers enabled in your project. See DOWNLOAD_HANDLERS_BASE for example format.
DOWNLOAD_HANDLERS_BASE
Default:
{
    'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',
    'http': 'scrapy.core.downloader.handlers.http.HttpDownloadHandler',
    'https': 'scrapy.core.downloader.handlers.http.HttpDownloadHandler',
    's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',
}
A dict containing the request download handlers enabled by default in Scrapy. You should never modify this setting in your project; modify DOWNLOAD_HANDLERS instead.
DOWNLOAD_TIMEOUT
Default: 180
The amount of time (in secs) that the downloader will wait before timing out.
DUPEFILTER_CLASS
Default: 'scrapy.dupefilter.RFPDupeFilter'
The class used to detect and filter duplicate requests.
The default (RFPDupeFilter) filters based on request fingerprint using the scrapy.utils.request.request_fingerprint function.
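As an illustration, a custom filter can subclass the default; the class and module below are hypothetical, but RFPDupeFilter and its request_seen method are the real extension points here:

# myproject/dupefilters.py (hypothetical module)
from scrapy.dupefilter import RFPDupeFilter

class CountingDupeFilter(RFPDupeFilter):
    """Counts how many duplicate requests were filtered out."""

    def __init__(self, path=None):
        super(CountingDupeFilter, self).__init__(path)
        self.duplicates = 0

    def request_seen(self, request):
        seen = super(CountingDupeFilter, self).request_seen(request)
        if seen:
            self.duplicates += 1
        return seen

# settings.py
DUPEFILTER_CLASS = 'myproject.dupefilters.CountingDupeFilter'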
EDITOR
Default: depends on the environment
The editor to use for editing spiders with the edit command. It defaults to the EDITOR environment variable, if set. Otherwise, it defaults to vi (on Unix systems) or the IDLE editor (on Windows).
EXTENSIONS
Default: {}
A dict containing the extensions enabled in your project, and their orders.
EXTENSIONS_BASE
Default:
{
    'scrapy.contrib.corestats.CoreStats': 0,
    'scrapy.webservice.WebService': 0,
    'scrapy.telnet.TelnetConsole': 0,
    'scrapy.contrib.memusage.MemoryUsage': 0,
    'scrapy.contrib.memdebug.MemoryDebugger': 0,
    'scrapy.contrib.closespider.CloseSpider': 0,
    'scrapy.contrib.feedexport.FeedExporter': 0,
    'scrapy.contrib.logstats.LogStats': 0,
    'scrapy.contrib.spiderstate.SpiderState': 0,
    'scrapy.contrib.throttle.AutoThrottle': 0,
}
A dict containing the extensions available by default in Scrapy, and their orders.
ITEM_PIPELINES
Default: {}
A dict containing the item pipelines to use, and their orders. The dict is empty by default. Order values are arbitrary, but it's customary to define them in the 0-1000 range.
Lists are supported in ITEM_PIPELINES for backwards compatibility, but they are deprecated.
Example:
ITEM_PIPELINES = {
    'mybot.pipeline.validate.ValidateMyItem': 300,
    'mybot.pipeline.validate.StoreMyItem': 800,
}
ITEM_PIPELINES_BASE
Default: {}
A dict containing the pipelines enabled by default in Scrapy. You should never modify this setting in your project; modify ITEM_PIPELINES instead.
LOG_ENABLED
Default: True
Whether to enable logging.
LOG_ENCODING
Default: 'utf-8'
The encoding to use for logging.
LOG_FILE
Default: None
File name to use for logging output. If None, standard error will be used.
LOG_LEVEL
Default: 'DEBUG'
Minimum level to log. Available levels are: CRITICAL, ERROR, WARNING, INFO, DEBUG.
LOG_STDOUT
Default: False
If True, all standard output (and error) of your process will be redirected to the log. For example if you print 'hello' it will appear in the Scrapy log.
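Putting the logging settings together, a typical sketch for a long-running crawl (the file name is illustrative):

# settings.py
LOG_ENABLED = True
LOG_LEVEL = 'INFO'       # drop DEBUG noise on production runs
LOG_FILE = 'crawl.log'   # hypothetical path; None sends log output to stderr
LOG_STDOUT = True        # print output also ends up in the log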
MEMDEBUG_ENABLED
Default: False
Whether to enable memory debugging.
MEMDEBUG_NOTIFY
Default: []
When memory debugging is enabled a memory report will be sent to the specified addresses if this setting is not empty; otherwise the report will be written to the log.
Example:
MEMDEBUG_NOTIFY = ['user@example.com']
MEMUSAGE_ENABLED
Default: False
Scope: scrapy.contrib.memusage
Whether to enable the memory usage extension, which will shut down the Scrapy process when it exceeds a memory limit and also notify by email when that happens.
MEMUSAGE_LIMIT_MB
Default: 0
Scope: scrapy.contrib.memusage
The maximum amount of memory to allow (in megabytes) before shutting down Scrapy (if MEMUSAGE_ENABLED is True). If zero, no check will be performed.
MEMUSAGE_NOTIFY_MAIL
Default: False
Scope: scrapy.contrib.memusage
A list of emails to notify if the memory limit has been reached.
Example:
MEMUSAGE_NOTIFY_MAIL = ['user@example.com']
MEMUSAGE_REPORT
Default: False
Scope: scrapy.contrib.memusage
Whether to send a memory usage report after each spider has been closed.
MEMUSAGE_WARNING_MB
Default: 0
Scope: scrapy.contrib.memusage
The maximum amount of memory to allow (in megabytes) before sending a warning email notifying about it. If zero, no warning will be produced.
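The memory-usage settings are usually configured together; a sketch with illustrative limits and a hypothetical recipient address:

# settings.py
MEMUSAGE_ENABLED = True                      # enable the memusage extension
MEMUSAGE_WARNING_MB = 1536                   # email a warning at 1.5 GB
MEMUSAGE_LIMIT_MB = 2048                     # shut down Scrapy at 2 GB
MEMUSAGE_NOTIFY_MAIL = ['ops@example.com']   # hypothetical recipient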
NEWSPIDER_MODULE
Default: ''
Module where to create new spiders using the genspider command.
Example:
NEWSPIDER_MODULE = 'mybot.spiders_dev'
RANDOMIZE_DOWNLOAD_DELAY
Default: True
If enabled, Scrapy will wait a random amount of time (between 0.5 and 1.5 * DOWNLOAD_DELAY) while fetching requests from the same website.
This randomization decreases the chance of the crawler being detected (and subsequently blocked) by sites which analyze requests looking for statistically significant similarities in the time between their requests.
The randomization policy is the same one used by the wget --random-wait option.
If DOWNLOAD_DELAY is zero (default) this option has no effect.
REDIRECT_MAX_TIMES
Default: 20
Defines the maximum number of times a request can be redirected. After this maximum, the request's response is returned as is. We used the Firefox default value for the same task.
REDIRECT_MAX_METAREFRESH_DELAY
Default: 100
Some sites use meta-refresh to redirect to a session-expired page, so we restrict automatic redirection to a maximum delay (in seconds).
REDIRECT_PRIORITY_ADJUST
Default: +2
Adjust redirect request priority relative to the original request. A positive priority adjustment means higher priority.
ROBOTSTXT_OBEY
Default: False
Scope: scrapy.contrib.downloadermiddleware.robotstxt
If enabled, Scrapy will respect robots.txt policies. For more information see RobotsTxtMiddleware.
SCHEDULER
Default: 'scrapy.core.scheduler.Scheduler'
The scheduler to use for crawling.
SPIDER_CONTRACTS
Default: {}
A dict containing the Scrapy contracts enabled in your project, used for testing spiders. For more info see Spiders Contracts.
SPIDER_CONTRACTS_BASE
Default:
{
    'scrapy.contracts.default.UrlContract': 1,
    'scrapy.contracts.default.ReturnsContract': 2,
    'scrapy.contracts.default.ScrapesContract': 3,
}
A dict containing the Scrapy contracts enabled by default in Scrapy. You should never modify this setting in your project; modify SPIDER_CONTRACTS instead.
SPIDER_MIDDLEWARES
Default: {}
A dict containing the spider middlewares enabled in your project, and their orders. For more info see Activating a spider middleware.
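Activation works the same way as for downloader middlewares; a hypothetical entry, ordered against the SPIDER_MIDDLEWARES_BASE values below:

# settings.py
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.CustomFilterMiddleware': 550,  # hypothetical; between OffsiteMiddleware (500) and RefererMiddleware (700)
}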
SPIDER_MIDDLEWARES_BASE
Default:
{
    'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
    'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
    'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
    'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
    'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
}
A dict containing the spider middlewares enabled by default in Scrapy, and their orders. You should never modify this setting in your project; modify SPIDER_MIDDLEWARES instead.
SPIDER_MODULES
Default: []
A list of modules where Scrapy will look for spiders.
Example:
SPIDER_MODULES = ['mybot.spiders_prod', 'mybot.spiders_dev']
STATS_CLASS
Default: 'scrapy.statscol.MemoryStatsCollector'
The class to use for collecting stats, which must implement the Stats Collector API.
STATS_DUMP
Default: True
Dump the Scrapy stats (to the Scrapy log) once the spider finishes.
For more info see: Stats Collection.
STATSMAILER_RCPTS
Default: [] (empty list)
Send Scrapy stats after spiders finish scraping. See StatsMailer for more info.
TELNETCONSOLE_ENABLED
Default: True
A boolean which specifies if the telnet console will be enabled (provided its extension is also enabled).
TELNETCONSOLE_PORT
Default: [6023, 6073]
The port range to use for the telnet console. If set to None or 0, a dynamically assigned port is used. For more info see Telnet Console.
TEMPLATES_DIR
Default: templates dir inside scrapy module
The directory where to look for templates when creating new projects with the startproject command.
URLLENGTH_LIMIT
Default: 2083
Scope: contrib.spidermiddleware.urllength
The maximum URL length to allow for crawled URLs. For more information about the default value for this setting see: http://www.boutell.com/newfaq/misc/urllength.html