ImportError: No module named urllib2


468

Here is my code:

import urllib2.request

response = urllib2.urlopen("http://www.google.com")
html = response.read()
print(html)

Any help?


4
I see you've edited your question yet again, so I've edited my answer yet again in response: your current problem is that you're saying urllib.urlopen("http://www.google.com/") instead of just urlopen("http://www.google.com/")
— Eli Courtwright, 2010

Answers:


631

As stated in the urllib2 documentation:

The urllib2 module has been split across several modules in Python 3 named urllib.request and urllib.error. The 2to3 tool will automatically adapt imports when converting your sources to Python 3.

So instead you should say:

from urllib.request import urlopen
html = urlopen("http://www.google.com/").read()
print(html)

The code example in your current edit is wrong, because you are saying urllib.urlopen("http://www.google.com/") instead of just urlopen("http://www.google.com/")
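Since the quoted documentation also mentions urllib.error, here is a minimal sketch (my addition, not part of the original answer) of catching request failures with the Python 3 names; the URL is just the one from the question:

from urllib.request import urlopen
from urllib.error import HTTPError, URLError

try:
    html = urlopen("http://www.google.com/").read()
    print(html)
except HTTPError as e:
    # The server answered with an error status (e.g. 404, 500)
    print("HTTP error:", e.code)
except URLError as e:
    # The request never reached a server (DNS failure, refused connection, ...)
    print("URL error:", e.reason)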


1
Still getting an error, see my edit. Edit: still getting errors when using from urllib.request

7
@Sergio: it's urllib.request, not urllib2.request. The urllib and urllib2 modules from Python 2.x have been merged into the urllib module in Python 3.
— Eli Courtwright

1
This works for me, thanks Eli. However, I get a timeout error when trying to access the URL. I can't ping google.com either; it seems my network is behind a proxy.
Vaibhav

Ooh, backwards compatibility!
user2589273

104

For a script that works under both Python 2 (tested with 2.7.3 and 2.6.8) and Python 3 (tested with 3.2.3 and 3.3.2+), try:

#! /usr/bin/env python

try:
    # For Python 3.0 and later
    from urllib.request import urlopen
except ImportError:
    # Fall back to Python 2's urllib2
    from urllib2 import urlopen

html = urlopen("http://www.google.com/")
print(html.read())
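The same fallback pattern can be extended to other names that moved in Python 3; a hedged sketch for urlencode (assuming the script only needs to build a query string):

try:
    # Python 3: query-string helpers live in urllib.parse
    from urllib.parse import urlencode
except ImportError:
    # Python 2: urlencode lives directly in urllib
    from urllib import urlencode

print(urlencode({"q": "python urllib2"}))  # q=python+urllib2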

65

The above didn't work for me in 3.3. Try this instead (YMMV, etc.):

import urllib.request
url = "http://www.google.com/"
request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
print (response.read().decode('utf-8'))
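Since this variant already builds an explicit Request object, a related sketch (my addition; the User-Agent value is arbitrary) shows how request headers can be attached, which some servers insist on:

import urllib.request

url = "http://www.google.com/"
# Request accepts a headers dict alongside the URL
request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(request)
print(response.read().decode('utf-8'))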

24

Some tab completion to show the contents of the package in Python 2 vs. Python 3 (a short usage sketch follows the listings).

In Python 2:

In [1]: import urllib

In [2]: urllib.
urllib.ContentTooShortError      urllib.ftpwrapper                urllib.socket                    urllib.test1
urllib.FancyURLopener            urllib.getproxies                urllib.splitattr                 urllib.thishost
urllib.MAXFTPCACHE               urllib.getproxies_environment    urllib.splithost                 urllib.time
urllib.URLopener                 urllib.i                         urllib.splitnport                urllib.toBytes
urllib.addbase                   urllib.localhost                 urllib.splitpasswd               urllib.unquote
urllib.addclosehook              urllib.noheaders                 urllib.splitport                 urllib.unquote_plus
urllib.addinfo                   urllib.os                        urllib.splitquery                urllib.unwrap
urllib.addinfourl                urllib.pathname2url              urllib.splittag                  urllib.url2pathname
urllib.always_safe               urllib.proxy_bypass              urllib.splittype                 urllib.urlcleanup
urllib.base64                    urllib.proxy_bypass_environment  urllib.splituser                 urllib.urlencode
urllib.basejoin                  urllib.quote                     urllib.splitvalue                urllib.urlopen
urllib.c                         urllib.quote_plus                urllib.ssl                       urllib.urlretrieve
urllib.ftpcache                  urllib.re                        urllib.string                    
urllib.ftperrors                 urllib.reporthook                urllib.sys  

In Python 3:

In [2]: import urllib.
urllib.error        urllib.parse        urllib.request      urllib.response     urllib.robotparser

In [2]: import urllib.error.
urllib.error.ContentTooShortError  urllib.error.HTTPError             urllib.error.URLError

In [2]: import urllib.parse.
urllib.parse.parse_qs          urllib.parse.quote_plus        urllib.parse.urldefrag         urllib.parse.urlsplit
urllib.parse.parse_qsl         urllib.parse.unquote           urllib.parse.urlencode         urllib.parse.urlunparse
urllib.parse.quote             urllib.parse.unquote_plus      urllib.parse.urljoin           urllib.parse.urlunsplit
urllib.parse.quote_from_bytes  urllib.parse.unquote_to_bytes  urllib.parse.urlparse

In [2]: import urllib.request.
urllib.request.AbstractBasicAuthHandler         urllib.request.HTTPSHandler
urllib.request.AbstractDigestAuthHandler        urllib.request.OpenerDirector
urllib.request.BaseHandler                      urllib.request.ProxyBasicAuthHandler
urllib.request.CacheFTPHandler                  urllib.request.ProxyDigestAuthHandler
urllib.request.DataHandler                      urllib.request.ProxyHandler
urllib.request.FTPHandler                       urllib.request.Request
urllib.request.FancyURLopener                   urllib.request.URLopener
urllib.request.FileHandler                      urllib.request.UnknownHandler
urllib.request.HTTPBasicAuthHandler             urllib.request.build_opener
urllib.request.HTTPCookieProcessor              urllib.request.getproxies
urllib.request.HTTPDefaultErrorHandler          urllib.request.install_opener
urllib.request.HTTPDigestAuthHandler            urllib.request.pathname2url
urllib.request.HTTPErrorProcessor               urllib.request.url2pathname
urllib.request.HTTPHandler                      urllib.request.urlcleanup
urllib.request.HTTPPasswordMgr                  urllib.request.urlopen
urllib.request.HTTPPasswordMgrWithDefaultRealm  urllib.request.urlretrieve
urllib.request.HTTPRedirectHandler     


In [2]: import urllib.response.
urllib.response.addbase       urllib.response.addclosehook  urllib.response.addinfo       urllib.response.addinfourl
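As the listings show, the quoting and URL-handling helpers now live in urllib.parse; a small usage sketch (the values are chosen only for illustration):

from urllib.parse import urlparse, urlencode, quote_plus

parts = urlparse("http://www.google.com/search?q=python")
print(parts.netloc, parts.path, parts.query)  # www.google.com /search q=python

print(urlencode({"q": "urllib2 vs urllib"}))  # q=urllib2+vs+urllib
print(quote_plus("a b&c"))                    # a+b%26c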

21

Python 3:

import urllib.request

wp = urllib.request.urlopen("http://google.com")
pw = wp.read()
print(pw)

Python 2:

import urllib
import sys

wp = urllib.urlopen("http://google.com")
for line in wp:
    sys.stdout.write(line)

I have tested both snippets on their respective versions.


8

The simplest of all solutions:

In Python 3.x:

import urllib.request
url = "https://api.github.com/users?since=100"
request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
data_content = response.read()
print(data_content)
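Because that GitHub endpoint returns JSON, one might go one step further and decode it; a hedged sketch (assuming a UTF-8 JSON response, which the GitHub API provides):

import json
import urllib.request

url = "https://api.github.com/users?since=100"
with urllib.request.urlopen(url) as response:
    users = json.loads(response.read().decode('utf-8'))

# Each element is a dict describing one user account
print(len(users), users[0]["login"])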

6

In Python 3, to get text output:

import io
import urllib.request

response = urllib.request.urlopen("http://google.com")
# Wrap the binary response so it can be read as decoded text
text = io.TextIOWrapper(response)
print(text.read())

5

This worked for me in Python 3:

import urllib.request
htmlfile = urllib.request.urlopen("http://google.com")
htmltext = htmlfile.read()
print(htmltext)
Licensed under cc by-sa 3.0 with attribution required.