Scrapy error: 'Pipeline' object has no attribute 'exporter'
I made a scraper and am using this tutorial to export items through a pipeline. When I run scrapy crawl [myspider], I see the items flashing by in my terminal, but after each one I get the error 'PostPipeline' object has no attribute 'exporter'.
My Spider
class FokSpider1(CrawlSpider):
    name = 'fok'
    allowed_domains = ['fok.nl']
    start_urls = ['http://forum.fok.nl/?token=77c1f767bc31859fee1ffe041343fa48&allowcookies=ACCEPTEER+ALLE+COOKIES']

    rules = (
        # My rules, left out to save space
    )

    def __init__(self, *args, **kwargs):
        self.driver = webdriver.Chrome()
        super(FokSpider1, self).__init__(*args, **kwargs)

    def parse_topic(self, response):
        posts = response.xpath("//div[contains(@class, 'post userid')]")
        for i, post in enumerate(posts):
            l = ItemLoader(selector=post, item=ForumTopic(), response=response)
            l.add_xpath('subforum_title', "//*[@id='pageWrapper']/div[4]/div/h2/a/text()")
            l.add_xpath('topic_title', "//*[@id='pageWrapper']/div[4]/h1/span/text()")
            l.add_xpath('unique_post_id', ".//@data-postid")
            l.add_xpath('post_rank', ".//@data-teller")
            l.add_xpath('author', ".//@data-member")
            l.add_xpath('timestamp', ".//span[contains(@class, 'post_time')]/a[1]/text()")
            l.add_xpath('content', ".//div[contains(@id, '_bot')]/div[contains(@class, 'postmain_right')]/text()")
            yield l.load_item()
settings.py
ITEM_PIPELINES = {
    'scrapy_spider.pipelines.PostPipeline': 300,
}
I don't think the rest of settings.py is relevant?
pipelines.py
from scrapy.exceptions import DropItem
from scrapy import signals
from scrapy.exporters import XmlItemExporter
class PostPipeline(object):
    def __init__(self):
        self.ids_seen = set()
        self.files = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        file = open('fokSpider1.xml' % spider.name, 'w+b')
        self.files[spider] = file
        self.exporter = XmlItemExporter(file)
        self.exporter.start_exporting()

    def spider_closed(self, spider):
        self.exporter.finish_exporting()
        file = self.files.pop(spider)
        file.close()

    def process_item(self, item, spider):
        if item['unique_post_id'] in self.ids_seen:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.ids_seen.add(item['unique_post_id'])
            self.exporter.export_item(item)
        return item
Additional info
The error is raised in the process_item method:
2017-12-18 17:24:00 [scrapy.core.scraper] ERROR: Error processing {'author': u'HaverMoutKoekje',
'content': u'Here is the content',
'post_rank': u'7',
'subforum_title': u'televisie',
'timestamp': u'vrijdag 8 december 2017 @ 21:59',
'unique_post_id': u'175586521'}
Traceback (most recent call last):
File "/anaconda/lib/python2.7/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/Users/my.name/scrapy/scrapy_spider/scrapy_spider/pipelines.py", line 40, in process_item
self.exporter.export_item(item)
AttributeError: 'PostPipeline' object has no attribute 'exporter'
When I run scrapy crawl [myspider] -o somefile.xml, the file is created, but it has no content.
The same error was reported in another question, but there are no answers there...
Any help is greatly appreciated!
UPDATE: While this doesn't really solve the problem, for now I can at least export data using this simpler pipeline:
from scrapy.exceptions import DropItem

class PostPipeline(object):
    def __init__(self):
        self.ids_seen = set()
        self.files = {}

    def process_item(self, item, spider):
        if item['unique_post_id'] in self.ids_seen:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.ids_seen.add(item['unique_post_id'])
        return item
using the command scrapy crawl [myspider] -o somefile.xml. I still have no idea why the earlier approach, taken straight from the tutorials, did not work.
Solution 1:[1]

file = open('fokSpider1.xml' % spider.name, 'w+b')

This line is the problem. The format string contains no %s placeholder, so applying the % operator raises a TypeError ("not all arguments converted during string formatting"), and spider_opened aborts before self.exporter is ever assigned. You don't see that error because Scrapy dispatches signals in a catch-and-log fashion: exceptions raised inside signal handlers are logged rather than propagated. The crawl therefore keeps running, and the failure only surfaces later as the AttributeError in process_item. The intended filename was presumably '%s.xml' % spider.name.
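This failure mode can be reproduced without Scrapy. The sketch below uses a toy Pipeline class (a stand-in, not Scrapy's), and the try/except around spider_opened only mimics how Scrapy's signal dispatcher catches and logs handler exceptions instead of propagating them:

```python
import logging

class Pipeline(object):
    def spider_opened(self):
        # Same bug as in the question: the template has no %s,
        # so %-formatting raises TypeError here...
        path = 'fokSpider1.xml' % 'fok'
        self.exporter = object()  # ...and this line is never reached

    def process_item(self, item):
        return self.exporter  # fails: the attribute was never set

p = Pipeline()

# Roughly what Scrapy's signal dispatcher does with a failing handler:
try:
    p.spider_opened()
except Exception:
    logging.exception("swallowed by the signal dispatcher")

# The crawl continues, and the failure only surfaces per item:
try:
    p.process_item({})
except AttributeError as e:
    print(e)  # 'Pipeline' object has no attribute 'exporter'
```

With the template fixed to '%s.xml' % 'fok', spider_opened completes and the attribute exists when process_item runs.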
Solution 2:[2]

Creating the exporter in __init__ changes the picture: the attribute then exists before any item is processed, no matter what happens in the signal handlers. You can try something like this:

class PostPipeline(object):
    def __init__(self):
        self.ids_seen = set()
        self.file = open('fokSpider1.xml', 'w+b')
        self.exporter = XmlItemExporter(self.file)

Note that for the XML output to be well-formed you still need the start_exporting() and finish_exporting() calls from the original pipeline.
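Solution 2's point can be checked without Scrapy at all. In this sketch, DummyExporter is a hypothetical stand-in for XmlItemExporter and an in-memory buffer replaces the file on disk; the only claim being demonstrated is that an exporter created eagerly in __init__ is guaranteed to exist by the time process_item runs:

```python
import io

class DummyExporter(object):
    """Hypothetical stand-in for scrapy.exporters.XmlItemExporter."""
    def __init__(self, file):
        self.file = file

    def export_item(self, item):
        self.file.write(repr(item).encode('utf-8'))

class PostPipeline(object):
    def __init__(self):
        self.ids_seen = set()
        self.file = io.BytesIO()  # the real pipeline opens a file on disk
        self.exporter = DummyExporter(self.file)  # created eagerly, never missing

    def process_item(self, item, spider=None):
        if item['unique_post_id'] in self.ids_seen:
            # the real pipeline raises scrapy.exceptions.DropItem here
            raise ValueError("Duplicate item found: %s" % item)
        else:
            self.ids_seen.add(item['unique_post_id'])
            self.exporter.export_item(item)  # attribute is always present
        return item

pipeline = PostPipeline()
pipeline.process_item({'unique_post_id': '175586521'})
print(pipeline.file.getvalue() != b'')  # True
```

The trade-off is that the filename can no longer depend on spider.name, since no spider is available in __init__; the from_crawler/spider_opened approach remains the way to get per-spider filenames once the formatting bug is fixed.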
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Zhang Fan |
| Solution 2 | dp0d |