【Question Title】: Web scraping a table and passing it to a Pandas dataframe
【Posted】: 2019-07-24 00:39:21
【Description】:

I want to scrape a table of silver prices from the web. I can read the text data, but I cannot get it into a pandas DataFrame.

Specifically, I first tried writing the content to a txt file and then reading that file into a pandas DataFrame. That is where I hit an exception.

Is there a way to pass the data directly to a pandas DataFrame, without saving it to a text file first?

My code is below:

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://www.usagold.com/reference/prices/silverhistory.php'
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get(url, headers=headers)

soup = BeautifulSoup(response.content, 'html.parser')
tables = soup.find_all('table', rules='all')
table = tables[0]

# write each table row's cells to a text file, one row per line
with open('silver_prices.txt', 'w') as r:
    for row in table.find_all('tr'):
        for cell in row.find_all('td'):
            r.write(cell.text)
        r.write('\n')

pd.read_csv('silver_prices.txt')
---------------------------------------------------------------------------
UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input-113-9cad0b29d24e> in <module>()
----> 1 pd.read_csv('silver_prices.txt')

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
    700                     skip_blank_lines=skip_blank_lines)
    701 
--> 702         return _read(filepath_or_buffer, kwds)
    703 
    704     parser_f.__name__ = name

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
    427 
    428     # Create the parser.
--> 429     parser = TextFileReader(filepath_or_buffer, **kwds)
    430 
    431     if chunksize or iterator:

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
    893             self.options['has_index_names'] = kwds['has_index_names']
    894 
--> 895         self._make_engine(self.engine)
    896 
    897     def close(self):

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
   1120     def _make_engine(self, engine='c'):
   1121         if engine == 'c':
-> 1122             self._engine = CParserWrapper(self.f, **self.options)
   1123         else:
   1124             if engine == 'python':

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
   1851         kwds['usecols'] = self.usecols
   1852 
-> 1853         self._reader = parsers.TextReader(src, **kwds)
   1854         self.unnamed_cols = self._reader.unnamed_cols
   1855 

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._get_header()

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 0: invalid start byte
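For what it's worth, the 0xa0 byte pandas chokes on is a non-breaking space (HTML `&nbsp;`): `open()` without an `encoding` argument uses the platform default (often cp1252 on Windows), which writes that character as a single 0xa0 byte, while `read_csv` then tries to decode the file as UTF-8, where a lone 0xa0 is invalid. If you do go through a file, writing and reading with an explicit encoding, and with a real delimiter, avoids both problems. A minimal sketch with made-up prices:

```python
import pandas as pd

# Hypothetical rows standing in for the scraped cells; '\xa0' mimics the
# non-breaking space (&nbsp;) that pages like this often contain.
rows = [['2019-07-23', '\xa016.35'],
        ['2019-07-22', '\xa016.40']]

# Write with an explicit encoding and a comma delimiter, stripping the
# non-breaking spaces, so read_csv has something well-formed to parse.
with open('silver_prices.txt', 'w', encoding='utf-8') as f:
    f.write('date,price\n')
    for row in rows:
        f.write(','.join(cell.replace('\xa0', '') for cell in row) + '\n')

df = pd.read_csv('silver_prices.txt', encoding='utf-8')
print(df)
```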

【Comments】:

  • Editing this question to hide the URL adds no value, since it is already mentioned in the answer below and is available in the edit history anyway. In general, once a question has answers it is best to make only minor edits at that point (the gold standard is making sure the answers still read as logical responses to the new version of the question; if an edit makes the old answers confusing, it is not a good edit).

Tags: python-3.x pandas web-scraping beautifulsoup


【Solution 1】:

You can use StringIO to avoid saving to a file:

from bs4 import BeautifulSoup
from StringIO import StringIO
import pandas as pd
import requests

url = 'https://www.usagold.com/reference/prices/silverhistory.php'

headers = {'User-Agent': 'Mozilla/5.0'}

response = requests.get(url, headers=headers)

soup = BeautifulSoup(response.content, 'html.parser')
tables = soup.find_all('table', rules = 'all')

table = tables[0]

df = pd.read_html(StringIO(table), skiprows=2, flavor='bs4')[0]
print(df.head())

Most of this answer is borrowed from the answer to an earlier question, Pandas read_html results in TypeError, which may be a duplicate.

Edit

The above works on Python 2, but on Python 3 `io.StringIO` only accepts a `str`, not a BeautifulSoup `Tag`. You don't need StringIO anyway; just cast `table` to a string:

from bs4 import BeautifulSoup
import pandas as pd
import requests

url = 'https://www.usagold.com/reference/prices/silverhistory.php'

headers = {'User-Agent': 'Mozilla/5.0'}

response = requests.get(url, headers=headers)

soup = BeautifulSoup(response.content, 'html.parser')
tables = soup.find_all('table', rules = 'all')

table = str(tables[0]) #cast table to string

df = pd.read_html(table, skiprows=2, flavor='bs4')[0]
print(df.head())
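Worth noting: StringIO did not disappear in Python 3; it moved to the `io` module, so the file-free approach works there too once the `Tag` is cast to a string first. A minimal sketch, where the inline HTML is a made-up stand-in for the scraped table:

```python
from io import StringIO  # StringIO's home in Python 3
import pandas as pd

# Made-up stand-in for the scraped <table> markup.
html = ('<table><tr><th>Date</th><th>Price</th></tr>'
        '<tr><td>2019-07-23</td><td>16.35</td></tr></table>')

# read_html accepts a file-like object, as long as it wraps a str.
df = pd.read_html(StringIO(html))[0]
print(df)
```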

【Discussion】:

  • When I run the last line (df = ...) I get an exception: `TypeError: initial_value must be str or None, not Tag`, raised by `df = pd.read_html(StringIO(table), skiprows=2, flavor='bs4')[0]`.
  • Sorry, I was on Python 2. I've edited the answer and it now works for me on Python 3; you don't even need StringIO.