Web scraping - problem crawling paginated pages with Python

 傲慢的寒风呼啸_539 posted on 2022-10-28 13:02

My approach is to first build the list of all the page URLs, then request each page, parse the content with BeautifulSoup, and finally write the results into a CSV file. However, it throws an error. Why is that? Is there something wrong with my approach? Any help would be appreciated. My code is below:

# -*- coding:utf-8 -*-
import requests
from bs4 import BeautifulSoup
import csv

user_agent = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
url = 'http://finance.qq.com'

def get_url(url):
    links = []
    page_number = 1
    while page_number <=36:
        link = url+'/c/gdyw_'+str(page_number)+'.htm'
        links.append(link)
        page_number = page_number + 1
    return links

all_link = get_url(url)

def get_data(all_link):
    response = requests.get(all_link)
    soup = BeautifulSoup(response.text,'lxml')
    soup = soup.find('p',{'id':'listZone'}).findAll('a')
    return soup

def main():
    with open("test.csv", "w") as f:
        f.write("url\t titile\n")
        for item in get_data(all_link):
            f.write("{}\t{}\n".format(url + item.get("href"), item.get_text()))

if __name__ == "__main__":
    main()

Error output:

Traceback (most recent call last):
  File "D:/Python34/write_csv.py", line 33, in <module>
    main()
  File "D:/Python34/write_csv.py", line 29, in main
    for item in get_data(all_link):
  File "D:/Python34/write_csv.py", line 21, in get_data
    response = requests.get(all_link)
  File "D:\Python34\lib\site-packages\requests\api.py", line 71, in get
    return request('get', url, params=params, **kwargs)
  File "D:\Python34\lib\site-packages\requests\api.py", line 57, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 475, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 579, in send
    adapter = self.get_adapter(url=request.url)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 653, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)

3 Answers
  • The URL you are requesting probably doesn't start with a scheme like http://. Print the URL and check.
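
    For example, a quick check (a minimal sketch reusing the get_url function and url variable from the question):

    # print every generated link to confirm each one starts with http://
    for link in get_url(url):
        print(link)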

    Answered 2022-11-12 01:45
  • Does it fail immediately, or only after some links have already been processed? Print the index and the current URL after handling each link.
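
    Something like this (a minimal sketch, reusing the all_link list from the question):

    import requests

    # print the index and URL before each request so the failing link shows up in the output
    for i, uri in enumerate(all_link):
        print(i, uri)
        response = requests.get(uri)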

    Answered 2022-11-12 01:45
  • You can't call requests.get on a list directly.

    http://docs.python-requests.o...

    url – URL for the new Request object.

    You should use a for loop and request them one at a time.


    update:

    I tweaked your program for you: it runs on Python 3 at least. I tried Python 2 but ran into a unicode issue and didn't bother fixing it.

    def get_data(all_link):
        # request each page one at a time instead of passing the whole list to requests.get
        for uri in all_link:
            response = requests.get(uri)
            soup = BeautifulSoup(response.text,'lxml')
            soup = soup.find('p',{'id':'listZone'}).findAll('a')
            # yield each <a> tag so main() can iterate over them directly
            for small_soup in soup:
                yield small_soup
    

    Replace your original get_data with this rewritten version.
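
    With get_data turned into a generator, the consuming side can stay almost as you had it. A minimal sketch (assuming the same tab-separated layout as in the question, with encoding="utf-8" added for Python 3):

    def main():
        # write one "url<TAB>title" line per <a> tag yielded by the generator
        with open("test.csv", "w", encoding="utf-8") as f:
            f.write("url\ttitle\n")
            for item in get_data(all_link):
                f.write("{}\t{}\n".format(url + item.get("href"), item.get_text()))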

    Answered 2022-11-12 01:45