When I try to open http://www.comicbookdb.com/browse.php from Python (it works fine in my browser), I get an empty response:
>>> import urllib.request
>>> content = urllib.request.urlopen('http://www.comicbookdb.com/browse.php')
>>> print(content.read())
b''
The same happens when setting a User-Agent:
>>> opener = urllib.request.build_opener()
>>> opener.addheaders = [('User-agent', 'Mozilla/5.0')]
>>> content = opener.open('http://www.comicbookdb.com/browse.php')
>>> print(content.read())
b''
Or when I use httplib2:
>>> import httplib2
>>> h = httplib2.Http('.cache')
>>> response, content = h.request('http://www.comicbookdb.com/browse.php')
>>> print(content)
b''
>>> print(response)
{'cache-control': 'no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'content-location': 'http://www.comicbookdb.com/browse.php', 'expires': 'Thu, 19 Nov 1981 08:52:00 GMT', 'content-length': '0', 'set-cookie': 'PHPSESSID=590f5997a91712b7134c2cb3291304a8; path=/', 'date': 'Wed, 25 Dec 2013 15:12:30 GMT', 'server': 'Apache', 'pragma': 'no-cache', 'content-type': 'text/html', 'status': '200'}
Or when I try to download it with cURL:
C:\>curl -v http://www.comicbookdb.com/browse.php
* About to connect() to www.comicbookdb.com port 80
*   Trying 208.76.81.137... * connected
* Connected to www.comicbookdb.com (208.76.81.137) port 80
> GET /browse.php HTTP/1.1
User-Agent: curl/7.13.1 (i586-pc-mingw32msvc) libcurl/7.13.1 zlib/1.2.2
Host: www.comicbookdb.com
Pragma: no-cache
Accept: */*

< HTTP/1.1 200 OK
< Date: Wed, 25 Dec 2013 15:20:06 GMT
< Server: Apache
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< Set-Cookie: PHPSESSID=0a46f2d390639da7eb223ad47380b394; path=/
< Content-Length: 0
< Content-Type: text/html
* Connection #0 to host www.comicbookdb.com left intact
* Closing connection #0
Opening the URL in a browser or downloading it with Wget works fine, however:
C:\>wget http://www.comicbookdb.com/browse.php
--16:16:26--  http://www.comicbookdb.com/browse.php
           => `browse.php'
Resolving www.comicbookdb.com... 208.76.81.137
Connecting to www.comicbookdb.com[208.76.81.137]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

    [ <=> ] 40,687        48.75K/s

16:16:27 (48.75 KB/s) - `browse.php' saved [40687]
As does downloading a different file from the same server:
>>> content = urllib.request.urlopen('http://www.comicbookdb.com/index.php')
>>> print(content.read(100))
b'

So why doesn't the other URL work?
It seems the server requires a Connection: keep-alive header, which curl (and, I expect, the other failing clients as well) does not send by default.
With curl the following command works and shows a non-empty response:
curl -v -H 'Connection: keep-alive' http://www.comicbookdb.com/browse.php
In Python, you can use the following code:
import httplib2
h = httplib2.Http('.cache')
response, content = h.request('http://www.comicbookdb.com/browse.php', headers={'Connection': 'keep-alive'})
print(content)
print(response)
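If you want to avoid the third-party httplib2 dependency, note that urllib.request cannot be used for this fix: its HTTP handler overwrites the Connection header with `close` on every request it sends. A minimal sketch using only the standard library's http.client, which passes custom headers through unchanged (the host and path are the ones from the question; the site's behaviour may have changed since):

```python
import http.client

def fetch(host, path):
    """GET a page while sending Connection: keep-alive explicitly.

    http.client sends the headers dict as given, unlike urllib.request,
    which forces 'Connection: close' and so cannot send this header.
    """
    conn = http.client.HTTPConnection(host, timeout=10)
    try:
        conn.request('GET', path, headers={'Connection': 'keep-alive'})
        resp = conn.getresponse()
        # Read the body before closing; the status and body length tell
        # us whether the server returned real content this time.
        return resp.status, resp.read()
    finally:
        conn.close()
```

Calling `fetch('www.comicbookdb.com', '/browse.php')` should then return a 200 status with a non-empty body, matching the curl result above.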