My Scrapy spider works just fine on my local Windows machine, but when I try to run it on my AWS Linux server I get this traceback:
Traceback (most recent call last):
  File "run<spider_name>.py", line 12, in <module>
    spider_name).split())
  File "/usr/lib/python2.7/site-packages/scrapy/cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/python2.7/site-packages/scrapy/cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/python2.7/site-packages/scrapy/cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/usr/lib/python2.7/site-packages/scrapy/crawler.py", line 162, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "/usr/lib/python2.7/site-packages/scrapy/crawler.py", line 190, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "/usr/lib/python2.7/site-packages/scrapy/crawler.py", line 194, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "/usr/lib/python2.7/site-packages/scrapy/spiderloader.py", line 51, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: <spider_name>'
Why does this happen, and how can I get it running on my Linux server?
Answer
It suddenly started working; I had confused myself. I fixed it by reinstalling all the dependencies with pip install -r requirements.txt. I had added scrapy-splash to requirements.txt but forgotten to install it on the server, so the spider module failed to import and Scrapy never registered the spider, which is why it reported "Spider not found" instead of an import error.
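For anyone hitting the same symptom, a toy sketch of the mechanism (this is not Scrapy's actual code, and the module names are made up): when a spider module's own imports fail, that spider simply never makes it into the loader's registry, so asking for it by name raises KeyError rather than ImportError.

```python
import importlib

def build_registry(module_names):
    """Toy spider discovery: modules that fail to import (e.g. because
    they do `import scrapy_splash` and it isn't installed) contribute
    nothing to the registry."""
    registry = {}
    for modname in module_names:
        try:
            importlib.import_module(modname)
        except ImportError:
            continue  # broken module is silently skipped
        registry[modname] = True
    return registry

def load(registry, spider_name):
    """Mirrors the error path in scrapy/spiderloader.py shown in the
    traceback above: an unknown name surfaces as 'Spider not found'."""
    try:
        return registry[spider_name]
    except KeyError:
        raise KeyError("Spider not found: {}".format(spider_name))

# "json" imports fine; the fake module does not, so only "json" registers:
reg = build_registry(["json", "definitely_not_installed_xyz"])
print(sorted(reg))  # ['json']
```

So if you see "Spider not found" after a deploy, check that every dependency your spider files import is actually installed (pip install -r requirements.txt), then confirm with scrapy list.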