【Question Title】: How to scrape multiple webpages without overwriting the results?
【Posted】: 2019-09-25 23:19:22
【Description】:

I have just started scraping and am trying to scrape multiple webpages from Transfermarkt without overwriting the previous ones.

I know this has been asked before, but I have not been able to solve the problem.

from bs4 import BeautifulSoup as bs
import requests
import re
import pandas as pd
import itertools

headers = {'User-Agent' : 'Mozilla/5.0'}
df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']
urls = ['https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1', 'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1']

for url in urls:
    r = requests.get(url,  headers = headers)
    soup = bs(r.content, 'html.parser')


    position_number = [item.text for item in soup.select('.items .rn_nummer')]
    position_description = [item.text for item in soup.select('.items td:not([class])')]
    name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
    dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
    nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
    height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
    foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
    joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
    signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                   for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
    contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]

df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)
print(df)

df.to_csv(r'Uljanas-MacBook-Air-2:~ uljanadufour$\bayern-munich123.csv')

It would also be helpful to be able to tell the webpages apart after scraping.

Any help would be greatly appreciated.

【Question comments】:

  • Why not append a timestamp to the CSV filename, to tell the runs apart and avoid overwriting on each execution, i.e. "bayern-munich123_"+TS+".csv", where the TS variable holds today's date? (A minimal sketch of this appears after these comments.)
  • Others seem to have interpreted "without overwriting" as you not wanting to overwrite your output CSV with another CSV. Judging from your code, your problem is more that, as your program loops through urls, each scrape's pandas DataFrame is overwritten by the one built on the next iteration. Could you clarify your intent for us?
  • My apologies. My intention is to produce one DataFrame per URL (not overwritten by the previous one), so that I get an output CSV for each DataFrame.
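
A minimal sketch of the timestamp-suffix idea from the first comment (the use of datetime and the exact filename pattern are illustrative assumptions, and df is taken to be the DataFrame built by the code above):

    from datetime import datetime

    # Build a date string such as "2019-09-25" and append it to the base
    # filename, so each run writes its own CSV instead of overwriting it.
    TS = datetime.now().strftime("%Y-%m-%d")
    df.to_csv("bayern-munich123_" + TS + ".csv")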

Tags: python web-scraping beautifulsoup xml-parsing html-parsing


【Solution 1】:

Two possible approaches:

  1. You can add a timestamp to the filename so that a different CSV file is created each time the script is run:

    from datetime import datetime
    
    timestamp = datetime.now().strftime("%Y-%m-%d %H.%M.%S")
    df.to_csv(rf'Uljanas-MacBook-Air-2:~ uljanadufour$\{timestamp}  bayern-munich123.csv')
    

    This will give you files in the following format:

    "2019-05-08 10.39.05  bayern-munich123.csv"
    

    Because the format runs year, month, day first, your files will automatically sort in chronological order.

  2. Alternatively, you can use append mode to add to the existing CSV file:

    df.to_csv(r'Uljanas-MacBook-Air-2:~ uljanadufour$\bayern-munich123.csv', mode='a')
    
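    Note that with mode='a' pandas writes the header row again on every call, so the column names end up repeated inside the data. A small sketch of one common workaround (the os.path.exists check is my own assumption, not part of the original answer):

    import os

    path = 'bayern-munich123.csv'
    # Only write the header row if the file does not exist yet.
    df.to_csv(path, mode='a', header=not os.path.exists(path))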

Finally, your current code only saves the last URL. If you want to save each URL to a different file, you need to indent the last two lines so that they sit inside the loop. You can add a number to the filename to tell the URLs apart, e.g. 1 and 2 below. Python's enumerate() function can be used to give each URL its number:

from datetime import datetime
from bs4 import BeautifulSoup as bs
import requests
import re
import pandas as pd
import itertools


headers = {'User-Agent' : 'Mozilla/5.0'}
df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']

urls = [
    'https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1', 
    'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1'
]

for index, url in enumerate(urls, start=1):
    r = requests.get(url,  headers=headers)
    soup = bs(r.content, 'html.parser')

    position_number = [item.text for item in soup.select('.items .rn_nummer')]
    position_description = [item.text for item in soup.select('.items td:not([class])')]
    name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
    dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
    nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
    height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
    foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
    joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
    signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                   for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
    contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]

    df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)

    timestamp = datetime.now().strftime("%Y-%m-%d %H.%M.%S")
    df.to_csv(rf'{timestamp}  bayern-munich123_{index}.csv')    

This will give you filenames such as:

"2019-05-08 11.44.38  bayern-munich123_1.csv"

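If you would rather tell the files apart by club than by a running number, you can also pull the club slug out of the URL inside the same loop (a sketch only; splitting the URL path this way is an assumption based on the Transfermarkt URLs shown above):

    from urllib.parse import urlparse

    # '.../fc-bayern-munich-u17/kader/verein/21058/...' -> 'fc-bayern-munich-u17'
    club = urlparse(url).path.strip('/').split('/')[0]
    df.to_csv(rf'{timestamp}  {club}.csv')
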
【Comments】:

【Solution 2】:

The code above fetches the data for each URL, parses it without ever putting it into a DataFrame, and then moves on to the next URL. Because your call to pd.DataFrame() happens outside the loop, you end up building a DataFrame only from the page data of the last URL in urls.

You need to create a DataFrame outside the for loop and then append the incoming data for each URL to it.

    from bs4 import BeautifulSoup as bs
    import requests
    import re
    import pandas as pd
    import itertools
    
    headers = {'User-Agent' : 'Mozilla/5.0'}
    df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']
    urls = ['https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1', 'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1']
    
    #### Add this before for-loop. ####
    # Create empty dataframe with expected column names.
    df_full = pd.DataFrame(columns = df_headers)
    
    for url in urls:
        r = requests.get(url,  headers = headers)
        soup = bs(r.content, 'html.parser')
    
    
        position_number = [item.text for item in soup.select('.items .rn_nummer')]
        position_description = [item.text for item in soup.select('.items td:not([class])')]
        name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
        dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
        nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
        height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
        foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
        joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
        signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                       for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
        contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]
    
    
        #### Add this to for-loop. ####
    
        # Create a dataframe for page data.
        df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)
    
        # Add page URL to index of page data.
        df.index = [url] * len(df)
    
        # Append page data to full data.
        df_full = df_full.append(df)
    
    print(df_full)
    

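One caveat: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0. On newer pandas versions the same pattern can be written by collecting the per-page frames in a list and concatenating them once after the loop. A sketch of that variant, keeping the names used above:

    # Collect one DataFrame per page, then combine them after the loop.
    frames = []
    for url in urls:
        # ... same request, parsing and list-building code as above ...
        df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality,
                                   height, foot, joined, signed_from, contract_until)), columns=df_headers)
        df.index = [url] * len(df)   # keep the source URL so pages can be told apart
        frames.append(df)

    df_full = pd.concat(frames)
    print(df_full)
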
【Comments】:

  • Thanks for the explanation, it makes complete sense now.