【Question Title】: Scraping reviews from tripadvisor
【Posted】: 2018-05-31 01:50:00
【Question】:

Suppose I am scraping reviews from this url:

https://www.tripadvisor.com/Hotel_Review-g562819-d289642-Reviews-Hotel_Caserio-Playa_del_Ingles_Maspalomas_Gran_Canaria_Canary_Islands.html

It does not include the further pages that contain the rest of the reviews I want to scrape. How can I scrape the reviews from all of the following pages?

I used the code below, but it still only shows the reviews from the first page!

from bs4 import BeautifulSoup
import requests

URL_BASE = "https://www.tripadvisor.com/Hotel_Review-g562819-d289642-Reviews-Hotel_Caserio-Playa_del_Ingles_Maspalomas_Gran_Canaria_Canary_Islands.html"
MAX_PAGES = 30
counter = 0

for i in range(1, MAX_PAGES):

    if i > 1:
        url = "%spage/%d/" % (URL_BASE, i)
    else:
        url = URL_BASE

    req = requests.get(url)
    statusCode = req.status_code
    if statusCode == 200:

        html = BeautifulSoup(req.text, "html.parser")
        resultsoup = html.find_all('P', {'class': 'partial_entry'})

    else:
        break

for review in resultsoup:
    review_list = review.get_text()
    print(review_list)

【Comments】:

  • What have you tried?
  • The url you are creating for page numbers above 1 does not seem to work...
  • There was a solution for reviews on this page a few days ago - it may have been for scrapy or python-requests, I don't remember. This page uses JavaScript to load its data, and BS does not run JS. You may need Selenium to control a web browser that will load the page and run the JS (a rough sketch follows these comments). Or use DevTools in Chrome/Firefox (tab Network->XHR) to find the url that the JS uses to get the data.
  • A solution for scrapy - scrapy-tripadvisor-reviews - you can read its code to create a solution for requests + beautifulsoup.
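
For reference, a rough sketch of the Selenium idea from the comment above (untested against the live page; it reuses the partial_entry class from the question's code and assumes geckodriver is installed for Firefox):

from selenium import webdriver
from selenium.webdriver.common.by import By

URL = "https://www.tripadvisor.com/Hotel_Review-g562819-d289642-Reviews-Hotel_Caserio-Playa_del_Ingles_Maspalomas_Gran_Canaria_Canary_Islands.html"

driver = webdriver.Firefox()  # a real browser, so the page's JavaScript runs
driver.get(URL)

# the same class the question's code searches for, now present in the rendered DOM
for review in driver.find_elements(By.CLASS_NAME, "partial_entry"):
    print(review.text)

driver.quit()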

Tags: python python-3.x python-2.7 beautifulsoup


【Solution 1】:

Based on the example for scrapy:

add to the url (somewhere before the .html)

  • -or5 to get the second page,
  • -or10 to get the third page,

and so on.

You can even skip the words (they are only there for SEO) and use just

https://www.tripadvisor.com/g562819-d289642-or5.html
https://www.tripadvisor.com/g562819-d289642-or10.html

to get the next pages with reviews.
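
As a quick sanity check, a minimal sketch that builds these offset urls directly (5 reviews per page, so the offset grows in steps of 5):

# urls for the 2nd, 3rd and 4th pages of reviews
URL_TEMPLATE = "https://www.tripadvisor.com/g562819-d289642-or{}.html"

for offset in range(5, 20, 5):
    print(URL_TEMPLATE.format(offset))

The full solution below derives the same template from the start url with url.replace('.html', '-or{}.html'):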

from bs4 import BeautifulSoup
import requests
import re
#import webbrowser

def get_soup(url):

    headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0'}

    r = s.get(url, headers=headers)

    #with open('temp.html', 'wb') as f:
    #    f.write(r.content)
    #    webbrowser.open('temp.html')

    if r.status_code != 200:
        print('status code:', r.status_code)
    else:
        return BeautifulSoup(r.text, 'html.parser')

def parse(url, response):

    if not response:
        print('no response:', url)
        return

    # get number of reviews
    num_reviews = response.find('span', class_='reviews_header_count').text
    num_reviews = num_reviews[1:-1] # remove `( )`
    num_reviews = num_reviews.replace(',', '') # remove `,`
    num_reviews = int(num_reviews)
    print('num_reviews:', num_reviews, type(num_reviews))

    # create template for urls to pages with reviews
    url = url.replace('.html', '-or{}.html')
    print('template:', url)

    # load pages with reviews
    for offset in range(0, num_reviews, 5):
        print('url:', url.format(offset))
        url_ = url.format(offset)
        parse_reviews(url_, get_soup(url_))
        return # for test only - to stop after first page

def parse_reviews(url, response):
    print('review:', url)

    if not response:
        print('no response:', url)
        return

    # get every review
    for idx, review in enumerate(response.find_all('div', class_='review-container')):
        item = {
            'hotel_name': response.find('h1', class_='heading_title').text,
            'review_title': review.find('span', class_='noQuotes').text,
            'review_body': review.find('p', class_='partial_entry').text,
            'review_date': review.find('span', class_='relativeDate')['title'],#.text,#[idx],
            'num_reviews_reviewer': review.find('span', class_='badgetext').text,
            'reviewer_name': review.find('span', class_='scrname').text,
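            # the class attribute is like ['ui_bubble_rating', 'bubble_45'] -> strip the 'bubble_' prefix to get '45'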
            'bubble_rating': review.select_one('div.reviewItemInline span.ui_bubble_rating')['class'][1][7:],
        }

        results.append(item) # <--- add to global list

        #~ yield item
        for key,val in item.items():
            print(key, ':', val)
        print('----')
        #return # for test only - to stop after first review


# --- main ---

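# get_soup() uses this shared Session, so cookies and the connection are reused between requests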
s = requests.Session()

start_urls = [
    'https://www.tripadvisor.com/Hotel_Review-g562819-d289642-Reviews-Hotel_Caserio-Playa_del_Ingles_Maspalomas_Gran_Canaria_Canary_Islands.html',
    #'https://www.tripadvisor.com/Hotel_Review-g60795-d102542-Reviews-Courtyard_Philadelphia_Airport-Philadelphia_Pennsylvania.html',
    #'https://www.tripadvisor.com/Hotel_Review-g60795-d122332-Reviews-The_Ritz_Carlton_Philadelphia-Philadelphia_Pennsylvania.html',
]

results = [] # <--- global list for items

for url in start_urls:
    parse(url, get_soup(url))

import pandas as pd

df = pd.DataFrame(results) # <--- convert list to DataFrame
df.to_csv('output.csv')    # <--- save in file

【Discussion】:

  • The code you provided above shows how to get all the pages perfectly! How can I export or save these results to CSV?
  • Python has the modules csv and pandas for working with tables. Keep all items in a global list and, after parsing all pages, save it with csv or pandas (a sketch with csv follows these comments). Or you can write with csv while you parse the reviews.
  • BTW: if you use scrapy, it can save to CSV, XML or JSON automatically.
  • @Lachie I added code that saves to a file using pandas.
  • @Lachie I will also put this code on GitHub - python-examples/scraping
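
For completeness, a minimal sketch of the csv-module alternative mentioned in the comments above; it assumes the global results list of dicts that parse_reviews() fills:

import csv

# results is the global list of dicts filled by parse_reviews()
with open('output.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=results[0].keys())
    writer.writeheader()
    writer.writerows(results)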