Scrapy returns ValueError: SelectorList is not supported

I think the problem occurs when I try to enter each spell URL with response.follow in the loop, but I don't know why: it passes all ~500 links through the loop fine, yet only returns ValueError: SelectorList is not supported.

    def spell_parse(self, response):

        def extract_xpath(self, query):
            return response.xpath(query).get(Default = '')

    def parse(self, response):

        def extract_xpath(self, query):
            return response.xpath(query).get()

        for spell in response.xpath('//tr'):

            spell_def = spell.xpath('./td/a/@href')
            yield response.follow(spell_def, callback = self.spell_parse)

            yield {

                'Spell name': spell.xpath('./td[1]//text()').extract(),
                'School': spell.xpath('./td[2]//text()').extract(),
                'Casting time': spell.xpath('./td[3]//text()').extract(),
                'Range': spell.xpath('./td[4]//text()').extract(),
                'Duration': spell.xpath('./td[5]//text()').extract(),
                'Components': spell.xpath('./td[6]//text()').extract(),
                'Definition': extract_xpath('/p[4]//text()').extract(),
                'Levels': extract_xpath('/p[5]//text()').extract(),
            }


Solution 1:[1]

You need to call get() on spell_def = spell.xpath('./td/a/@href'), i.e. spell_def = spell.xpath('./td/a/@href').get(). Without it, you pass a SelectorList to response.follow() instead of the actual href string, which is what raises the ValueError.
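A minimal sketch of the corrected spider, assuming Scrapy 2.x (the spider name and the nested extract_xpath helpers from the question are dropped; response.xpath(...).get(default='') can be called directly):

```python
import scrapy


class SpellSpider(scrapy.Spider):
    # Hypothetical spider name, for illustration only.
    name = "spells"

    def parse(self, response):
        for spell in response.xpath('//tr'):
            # .get() extracts the href string from the SelectorList;
            # passing the SelectorList itself to follow() raises the ValueError.
            spell_def = spell.xpath('./td/a/@href').get()
            if spell_def:
                yield response.follow(spell_def, callback=self.spell_parse)

    def spell_parse(self, response):
        # Note the lowercase keyword: get(default='').
        # get(Default='') from the question is a TypeError.
        yield {
            'Definition': response.xpath('//p[4]//text()').get(default=''),
            'Levels': response.xpath('//p[5]//text()').get(default=''),
        }
```

On Scrapy 2.0+ you can also use response.follow_all(), which accepts a SelectorList directly: yield from response.follow_all(spell.xpath('./td/a/@href'), callback=self.spell_parse).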

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Sev