【Posted on】: 2019-07-24 00:39:21
【Problem description】:
I want to scrape a table of silver prices from the web. I can read the text data, but I can't get it into a pandas DataFrame.
Specifically, I first tried writing the contents to a txt file and then reading that file into a pandas DataFrame. That's where I hit the exception below.
Is there a way to pass the data directly to a pandas DataFrame without saving it to a text file first?
My code is as follows:
import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://www.usagold.com/reference/prices/silverhistory.php'
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'html.parser')

tables = soup.find_all('table', rules='all')
table = tables[0]

with open('silver_prices.txt', 'w') as r:
    for row in table.find_all('tr'):
        for cell in row.find_all('td'):
            r.write(cell.text)
        r.write('\n')

pd.read_csv('silver_prices.txt')
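For reference, the rows can be collected in memory and passed straight to a DataFrame, skipping the file entirely. A minimal sketch of that idea, using a small hypothetical HTML snippet as a stand-in for the scraped table (the real code would use the `table` tag already found above):

```python
import pandas as pd
from bs4 import BeautifulSoup

# Hypothetical snippet standing in for the table scraped from the page.
html = """
<table rules="all">
  <tr><td>Date</td><td>Closing Price</td></tr>
  <tr><td>2019-01-02</td><td>15.57</td></tr>
  <tr><td>2019-01-03</td><td>15.70</td></tr>
</table>
"""

soup = BeautifulSoup(html, 'html.parser')
table = soup.find('table', rules='all')

# Collect each <tr> as a list of cell strings instead of writing to disk.
rows = [[cell.get_text(strip=True) for cell in tr.find_all('td')]
        for tr in table.find_all('tr')]

# Treat the first row as the header and the rest as data.
df = pd.DataFrame(rows[1:], columns=rows[0])
print(df)
```

This avoids the write/read round trip, so no encoding mismatch can occur.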
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-113-9cad0b29d24e> in <module>()
----> 1 pd.read_csv('silver_prices.txt')
C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
700 skip_blank_lines=skip_blank_lines)
701
--> 702 return _read(filepath_or_buffer, kwds)
703
704 parser_f.__name__ = name
C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
427
428 # Create the parser.
--> 429 parser = TextFileReader(filepath_or_buffer, **kwds)
430
431 if chunksize or iterator:
C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
893 self.options['has_index_names'] = kwds['has_index_names']
894
--> 895 self._make_engine(self.engine)
896
897 def close(self):
C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
1120 def _make_engine(self, engine='c'):
1121 if engine == 'c':
-> 1122 self._engine = CParserWrapper(self.f, **self.options)
1123 else:
1124 if engine == 'python':
C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
1851 kwds['usecols'] = self.usecols
1852
-> 1853 self._reader = parsers.TextReader(src, **kwds)
1854 self.unnamed_cols = self._reader.unnamed_cols
1855
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._get_header()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 0: invalid start byte
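The 0xa0 byte points at the likely cause: the page's table cells contain non-breaking spaces (U+00A0). With no `encoding` argument, `open()` on Windows writes them in the locale's default codec (typically cp1252, an assumption here), and `read_csv` then fails trying to decode that byte as UTF-8. A minimal sketch of the mismatch:

```python
# U+00A0 (non-breaking space) encoded with cp1252 becomes the single
# byte 0xa0, which is not a valid UTF-8 start byte.
raw = '\xa0'.encode('cp1252')
print(raw)  # b'\xa0'

try:
    raw.decode('utf-8')
except UnicodeDecodeError as e:
    print(e)  # 'utf-8' codec can't decode byte 0xa0 in position 0 ...

# Passing an explicit encoding on both sides avoids the mismatch, e.g.
# open('silver_prices.txt', 'w', encoding='utf-8') when writing and
# pd.read_csv('silver_prices.txt', encoding='utf-8') when reading.
```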
【Comments】:
- Editing this question to hide the URL adds no value, since the URL is already mentioned in the answer below (and is available in the edit history anyway). In general, once a question has answers, it's best to keep further edits minimal (the gold standard is ensuring the answers still read as logical responses to the new version of the question — if an edit makes the old answers confusing, it is not a good edit).
Tags: python-3.x pandas web-scraping beautifulsoup