【Question Title】: Read scraped data vertically out of a table instead of horizontally (Python)
【Posted】: 2019-07-22 10:20:24
【Question Description】:

I am writing a web scraper with Python and BeautifulSoup to fetch data from a table on a web page. The table is linked in the code (`url01`).

I would like to know whether it is possible to read the data out of the table vertically (column by column) instead of horizontally (row by row).

Here is my code:

import requests
import json
from bs4 import BeautifulSoup
from itertools import islice

#URL declaration
url01 = 'https://www.statistik.at/web_de/statistiken/wirtschaft/preise/baukostenindex/030979.html'

#BeautifulSoup4
response = requests.get(url01, timeout=5)
content = BeautifulSoup(response.content, 'html.parser')

#deletes all the empty tags
empty_tags = content.find_all(lambda tag: not tag.contents)
[empty_tag.extract() for empty_tag in empty_tags]

#Find all td in class body in div table table-hover
data = content.find_all('td')
#print (data)

numbers = [d.text.encode('utf-8') for d in data]
#print (numbers)

#join the bytes reprs into one string, e.g. "b'108,6'b'110,8'..."
str1 = ''.join(str(e) for e in numbers)
#print (str1)

#split on the 'b' prefix of each bytes repr to recover the values
str_splt = str1.split('b')
#print (str_splt)

#Split list into several sublists
length_to_split = [45, 45, 45, 110, 110, 110, 188, 188, 188, 253, 253, 253, 383, 383, 383]
Input = iter(str_splt)
Output = [list(islice(Input, elem))
          for elem in length_to_split]
print (Output[3])


#Python dictionary
dataDict = {
    '2015 Lohn': None,
    '2015 Sonstiges': None,
    '2015 Insgesamt': None,
    'Insgesamt': None
    }

dataDict['Insgesamt'] = str_splt
#print (dataDict)

#save dictionary in json file
with open('indexData.json', 'w') as f:
    json.dump(dataDict, f)

When I execute the program and print out my first sublist, this is the result. It has the desired length (45), but the values were read horizontally out of the table, which makes it useless:

['', "'108,6'", "'110,8'", "'109,8'", "'122,1'", "'114,3'", "'118,0'", "'140,6'", "'131,9'", "'136,0'", "'162,0'", "'166,3'", "'165,2'", "'261,9'", "'189,8'", "'222,5'", "'108,6'", "'111,4'", "'110,1'", "'122,1'", "'115,0'", "'118,4'", "'140,6'", "'132,6'", "'136,4'", "'162,0'", "'167,2'", "'165,7'", "'261,9'", "'190,8'", "'223,1'", "'105,2'", "'111,9'", "'108,9'", "'118,2'", "'115,5'", "'117,1'", "'136,2'", "'133,2'", "'134,9'", "'157,0'", "'168,0'", "'163,9'", "'253,7'", "'191,7'"]
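Since the flat list above is a row-major flattening of the table, one way to read it column-wise (a minimal sketch of stride slicing, not from the original post; the sample values are taken from the output above) is to take every n-th element, where n is the number of columns:

```python
# Minimal sketch: recovering columns from a row-major flat list.
# A 3x3 excerpt of the table above, flattened row by row.
flat = ['108,6', '110,8', '109,8',
        '122,1', '114,3', '118,0',
        '140,6', '131,9', '136,0']
n_cols = 3

# Column c is every n_cols-th element starting at offset c.
columns = [flat[c::n_cols] for c in range(n_cols)]
print(columns[0])  # ['108,6', '122,1', '140,6']
```

This only works if every row has exactly `n_cols` cells, so empty cells must not be stripped out beforehand.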

【Discussion】:

Tags: python web-scraping beautifulsoup


【Solution 1】:

One possible solution without pandas. The function `get_column()` returns a column as a tuple; indexing starts at 0:

import requests
import json
from bs4 import BeautifulSoup
from itertools import islice

#URL declaration
url01 = 'https://www.statistik.at/web_de/statistiken/wirtschaft/preise/baukostenindex/030979.html'

#BeautifulSoup4
response = requests.get(url01, timeout=5)
content = BeautifulSoup(response.content, 'html.parser')

rows = []
for tr in content.select('tr')[:-1]: # [:-1] because we don't want the last info row
    data = [td.get_text(strip=True) for td in tr.select('td')]
    if data:
        rows.append(data)

def get_column(rows, col_num):
    return [*zip(*rows)][col_num]

print('2015 Lohn:')
print(get_column(rows, 0))

print('2015 Sonstiges:')
print(get_column(rows, 1))

print('2015 Insgesamt:')
print(get_column(rows, 2))

Output:

2015 Lohn:
('108,6', '108,6', '105,2', '105,2', '105,2', '105,2', '104,4', '105,2', '105,2', '105,2', '105,2', '105,2', '105,2', '105,2', '105,2', '102,9', '102,9', '102,9', '102,9', '102,6', '102,9', '102,9', '102,9', '102,9', '102,9', '102,9', '102,9', '102,9', '101,9', '101,9', '101,9', '101,9', '101,5', '101,9', '101,9', '101,9', '101,9', '101,9', '101,9', '101,9', '101,9', '100,8', '100,8', '100,8', '100,8', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '')
2015 Sonstiges:
('110,8', '111,4', '111,9', '111,0', '111,6', '112,4', '112,6', '113,1', '114,6', '114,8', '114,3', '113,8', '113,0', '113,3', '112,7', '111,4', '110,5', '109,9', '110,0', '106,3', '108,9', '108,9', '108,3', '107,3', '105,7', '105,0', '105,2', '106,1', '106,5', '105,1', '104,3', '104,1', '97,7', '101,6', '99,6', '99,1', '98,5', '98,5', '98,3', '98,9', '98,5', '96,2', '94,1', '93,9', '94,9', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '')
2015 Insgesamt:
('109,8', '110,1', '108,9', '108,4', '108,7', '109,1', '108,9', '109,5', '110,4', '110,4', '110,2', '109,9', '109,5', '109,6', '109,3', '107,6', '107,1', '106,8', '106,8', '104,6', '106,2', '106,2', '105,9', '105,4', '104,5', '104,1', '104,2', '104,7', '104,4', '103,6', '103,2', '103,1', '99,4', '101,7', '100,6', '100,4', '100,0', '100,0', '99,9', '100,2', '100,0', '98,2', '97,1', '97,0', '97,6', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '')
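The `get_column` helper relies on `zip(*rows)`: unpacking the rows as separate arguments to `zip` groups the i-th element of every row together, i.e. it transposes rows into columns. A small self-contained illustration (with made-up values, not the real table data):

```python
# zip(*rows) unpacks each row as a separate argument to zip,
# which then groups the i-th elements together -- a transpose.
rows = [['a1', 'b1', 'c1'],
        ['a2', 'b2', 'c2'],
        ['a3', 'b3', 'c3']]

columns = [*zip(*rows)]   # list of column tuples
print(columns[1])         # ('b1', 'b2', 'b3')
```

Note that `zip` truncates to the shortest row, so ragged rows silently lose trailing cells.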

【Discussion】:

【Solution 2】:

Use the pandas library:

• pd.read_html() - produces a list of DataFrames (the HTML source may contain several tables); pick the one you need by index.
• df.to_csv() - saves the data to a csv file.

import pandas as pd

#read the table data from the html page
table = pd.read_html("https://www.statistik.at/web_de/statistiken/wirtschaft/preise/baukostenindex/030979.html")
#save the first table into a csv file
table[0].to_csv("indexData.csv")
    
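Since the question asks for column-wise access, it is worth noting that the DataFrame returned by `pd.read_html` already gives vertical access through column selection. A sketch with a locally built DataFrame standing in for `table[0]` (the column names are illustrative, and the values are taken from Solution 1's output):

```python
import pandas as pd

# Stand-in for table[0] from pd.read_html -- column names are illustrative.
df = pd.DataFrame({
    '2015 Lohn':      ['108,6', '108,6', '105,2'],
    '2015 Sonstiges': ['110,8', '111,4', '111,9'],
})

# Selecting a column reads the table vertically.
print(df['2015 Lohn'].tolist())  # ['108,6', '108,6', '105,2']
```

`df.iloc[:, 0]` selects the same column by position instead of by name.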

【Discussion】:
