【Posted】: 2018-01-31 12:24:42
【Problem description】:
I am trying to scrape a website with the following code.
import http.client
from bs4 import BeautifulSoup
import urllib.request
from lxml.html import fromstring
from http.client import HTTPConnection #as _HTTPConnection, HTTPException
base_url = "https://apct.gov.in/apportal/Search/ViewAPVATDealers.aspx"
page = urllib.request.urlopen(base_url)
soup = BeautifulSoup(page, "html.parser")
path = fromstring(soup.decode('utf-8'))
header = {
"Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
"Accept-Encoding":"gzip, deflate, br",
"Accept-Language":"en-US,en;q=0.9",
"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36",
}
url = "https://apct.gov.in/apportal/Search/ViewAPVATDealers.aspx"
form_data={}
form_data["__EVENTTARGET"] = ""
form_data["__EVENTARGUMENT"] = ""
form_data["__LASTFOCUS"] = ""
form_data["__VIEWSTATE"] = path.xpath('//*[@id="__VIEWSTATE"]/@value')
form_data["__EVENTVALIDATION"] = path.xpath('//*[@id="__EVENTVALIDATION"]/@value')
form_data["ctl00$ContentPlaceHolder1$dropact"] = "LT"
form_data["ctl00$ContentPlaceHolder1$Ddl_Divisions"] = "GUNTUR"
form_data["ctl00$ContentPlaceHolder1$Ddl_Circles"] = "All Circles"
form_data["ctl00$ContentPlaceHolder1$ddlbusines"] = "Agent"
conn = http.client.HTTPConnection('apct.gov.in')
url_params = urllib.parse.urlencode(header)
# 1 #
# conn.request("POST", url, url_params, header)
# response = conn.getresponse()
# print(response.status, response.reason)
# data = response.read()
# print(data)
# conn.close()
# 2 #
# r = requests.post(url,form_data,url_params)
# #import pdb; pdb.set_trace()
# print(r.status_code, r.reason)
When I run the first commented section to retrieve the response, it returns 403 Forbidden; when I run the second commented section, it returns an Internal Server Error.
Can anyone spot an error in any of these lines that would cause this? I know that with such limited information it is hard to identify the mistake, but this is my last resort.
Thanks in advance.
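For reference, two likely culprits in the snippet above can be demonstrated offline on a small HTML fragment. This is only a sketch of the suspected bugs, using a made-up `__VIEWSTATE` value, not output from the real site:

```python
# Demonstrates two suspected bugs in the posted code, offline.
from lxml.html import fromstring
import urllib.parse

# A tiny stand-in for the real page; the value is made up.
html = '<form><input type="hidden" id="__VIEWSTATE" value="dDwtMTIzNDU2Nzg5Ow==" /></form>'
tree = fromstring(html)

# Bug 1: xpath() returns a *list*, so the posted code stores a list
# (stringified as "['dDw...']") as the form value instead of the string.
raw = tree.xpath('//*[@id="__VIEWSTATE"]/@value')
print(type(raw).__name__)          # list, not str
viewstate = raw[0] if raw else ""  # unwrap the first match
print(viewstate)

# Bug 2: the posted code urlencodes the *headers* dict and sends that as
# the POST body; the body should be the urlencoded form_data instead.
body = urllib.parse.urlencode({"__VIEWSTATE": viewstate})
print(body)
```

Sending a stringified list (or the encoded headers) as the body is exactly the kind of malformed request that an ASP.NET endpoint tends to answer with 403 or 500.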
【Problem discussion】:
-
403 means the server refused your request because you are not authenticated. An Internal Server Error occurs when the server hits an error (usually an exception) and returns HTTP status code 500. -
Thank you for the reply, but I already know these things and I still cannot spot my mistake here.
-
Those are not your (direct) errors. Note the word server in both cases.
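The usual way to write this flow with requests is sketched below: GET the page once inside a Session (so cookies carry over), harvest the ASP.NET hidden fields, then POST the form data back to the same URL. Field names are copied from the question; `__VIEWSTATEGENERATOR` is an assumption (many ASP.NET pages require it too), and there is no guarantee this particular site accepts the request:

```python
# A minimal sketch, assuming the page serves the standard ASP.NET hidden
# fields; not a verified working scraper for this site.
import requests
from lxml.html import fromstring

URL = "https://apct.gov.in/apportal/Search/ViewAPVATDealers.aspx"
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/63.0.3239.132 Safari/537.36",
}

def search_dealers():
    with requests.Session() as session:
        # GET first so the session picks up cookies and the per-request
        # hidden fields, then POST to the same URL.
        page = session.get(URL, headers=HEADERS)
        tree = fromstring(page.text)

        def hidden(name):
            # xpath() returns a list; unwrap the first match.
            values = tree.xpath('//*[@id="%s"]/@value' % name)
            return values[0] if values else ""

        form_data = {
            "__EVENTTARGET": "",
            "__EVENTARGUMENT": "",
            "__LASTFOCUS": "",
            "__VIEWSTATE": hidden("__VIEWSTATE"),
            "__VIEWSTATEGENERATOR": hidden("__VIEWSTATEGENERATOR"),
            "__EVENTVALIDATION": hidden("__EVENTVALIDATION"),
            "ctl00$ContentPlaceHolder1$dropact": "LT",
            "ctl00$ContentPlaceHolder1$Ddl_Divisions": "GUNTUR",
            "ctl00$ContentPlaceHolder1$Ddl_Circles": "All Circles",
            "ctl00$ContentPlaceHolder1$ddlbusines": "Agent",
        }
        # data= form-encodes the dict as the POST body; the headers stay
        # in headers=, not in the body.
        return session.post(URL, data=form_data, headers=HEADERS)

# Usage (hits the live site, so it is left commented out):
# response = search_dealers()
# print(response.status_code, response.reason)
```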
Tags: python web-scraping python-requests httprequest lxml