[Question title]: Scraping Amazon reviews using Beautiful Soup
[Posted]: 2020-07-30 22:00:44
[Question description]:

I need to scrape some information from this Amazon page:

https://www.amazon.com/dp/B07Q6H83VY/ref=sspa_dk_detail_6?pd_rd_i=B07Q6H83VY&pd_rd_w=n4cqh&pf_rd_p=48d372c1-f7e1-4b8b-9d02-4bd86f5158c5&pd_rd_wg=8d6Pd&pf_rd_r=AES6X22PPPPREK5DD60G&pd_rd_r=2a4ff4e6-f8ce-4d62-8106-cffd53838b9e&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEyTTZUQzQ0Q05TOVZJJmVuY3J5cHRlZElkPUEwMDU2MjE0Q05HOUFSMkdQTkhPJmVuY3J5cHRlZEFkSWQ9QTA4NTIyNzAxOVZYM1dISEVBUk1DJndpZGdldE5hbWU9c3BfZGV0YWlsJmFjdGlvbj1jbGlja1JlZGlyZWN0JmRvTm90TG9nQ2xpY2s9dHJ1ZQ&th=1

Specifically, these are the fields I'm interested in:

Author | Star | Date | Title | Review

For example:

Gi
1.0 out of 5 stars Unacceptable Launch State for PS4

Reviewed in the United States on September 14, 2019

Platform: PlayStation 4 | Edition: Super Deluxe | Verified Purchase

I'm a huge fan of this franchise. Own all of the games, for both PS4 and PC. Waited a very long time for this game and I'm speechless. You can find many reviews of the gameplay and other aspects of the game, but I'll focus on my initial thoughts and will update accordingly. First and foremost, the performance on the PS4 Slim is terrible. Frames per second is unacceptable for a split screen configuration, where scrolling between screens and reviewing the map and fighting a screen full of NPCs is horrendous. Take 2 / Gearbox couldn't even get the scaling correct with the menus, loot menus, and any text (aside from subtitles) and it's similar to reading 8 pt font on a 65 inch screen. There is no vertical split screen and no other options to improve performance. Missions are uneventful and no concise storyline that enables campaign mode truly enjoyable. In many aspects, you'd wish this game was more linear than it is, but it's storyline isn't inspiring at all. Only after a few hours of gameplay, we decided it's not worth our time until the developers make significant improvements with performance. I wish we could refund this garbage.

Since I've never done this before, I'd like to know whether I can do this with Scrapy/BeautifulSoup/Selenium, or whether I need an API, given that the information comes from:

Author under <span class="a-profile-name">Gi</span>

Rating <span class="a-icon-alt">1.0 out of 5 stars</span>

Review <div data-hook="review-collapsed" aria-expanded="false" class="a-expander-content a-expander-partial-collapse-content" style="padding-bottom: 19px;"> ...TEXT...</div>
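Given the three snippets above, BeautifulSoup alone is enough to pull these fields out of a page you already have in hand. A minimal sketch, where the inline HTML is a stand-in for a downloaded review page (fetching live Amazon pages usually requires realistic request headers, which is a separate problem):

```python
from bs4 import BeautifulSoup

# Stand-in for a saved review page; the selectors mirror the snippets above.
html = """
<div class="review">
  <span class="a-profile-name">Gi</span>
  <span class="a-icon-alt">1.0 out of 5 stars</span>
  <div data-hook="review-collapsed">I'm a huge fan of this franchise.</div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
for review in soup.select("div.review"):
    author = review.select_one("span.a-profile-name").get_text(strip=True)
    rating = review.select_one("span.a-icon-alt").get_text(strip=True)
    text = review.select_one("div[data-hook=review-collapsed]").get_text(strip=True)
    print(author, rating, text)
```

The same `select`/`select_one` calls work unchanged on a full page containing many `div.review` blocks.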

[Comments]:

Tags: python web-scraping beautifulsoup


[Solution 1]:

Scrapy would be a good fit for this task; a very simple spider can collect all the required information.

    import scrapy


    class TestSpider(scrapy.Spider):
        name = 'test'
        start_urls = ['https://www.amazon.com/dp/B07Q6H83VY']

        def parse(self, response):
            # Each review sits in its own <div class="review"> block
            for row in response.css('div.review'):
                item = {}

                item['author'] = row.css('span.a-profile-name::text').extract_first()

                # 'X.X out of 5 stars' -> integer star count; the ',' -> '.'
                # swap handles localized pages that use a comma as the
                # decimal separator
                rating = row.css('i.review-rating > span::text').extract_first().strip().split(' ')[0]
                item['rating'] = int(float(rating.replace(',', '.')))

                item['title'] = row.css('span.review-title > span::text').extract_first()
                item['created_date'] = row.css('span.review-date::text').extract_first().strip()

                # The review body can span several text nodes; drop empty
                # fragments and re-join the rest
                review_content = row.css('div.reviewText ::text').extract()
                review_content = [rc.strip() for rc in review_content if rc.strip()]
                item['content'] = ', '.join(review_content)

                yield item


Sample output:

    {
            "author": "Jhona Diaz",
            "rating": 4,
            "title": "Recomendable solo si eres fan ya que si está algo caro",
            "created_date": "Reviewed in Mexico on November 23, 2019",
            "content": "Buena calidad y pues muy completo"
        },
        {
            "author": "MANUEL MENDOZA OLVERA",
            "rating": 5,
            "title": "Perfecto Estado",
            "created_date": "Reviewed in Mexico on September 28, 2019",
            "content": "excelente, la edición es de caja  metálica y llegó intacta"
        },
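
The rating line in the spider above condenses a label like "1.0 out of 5 stars" down to an integer star count. Isolated as a small helper, the same transformation looks like this (the function name is illustrative, not from the answer):

```python
def parse_rating(raw: str) -> int:
    """Turn Amazon's rating label into an integer star count.

    Accepts both '.' and ',' as the decimal separator, since
    localized pages render labels like '4,0 von 5 Sternen'.
    """
    first_token = raw.strip().split(' ')[0]        # '1.0' or '4,0'
    return int(float(first_token.replace(',', '.')))

print(parse_rating('1.0 out of 5 stars'))   # -> 1
print(parse_rating('4,0 von 5 Sternen'))    # -> 4
```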
    

[Discussion]:

• Thanks @Roman. How can I actually get these results? I tried test=TestSpider(scrapy.Spider) followed by print(test.item), but it doesn't work.
• Install scrapy > create a scrapy project (scrapy startproject project_name) > create a spider (scrapy genspider spider_name domain) > run scrapy (scrapy crawl spider_name -o result.json). You can replace the generated spider's code with the code I provided in the answer. More details are in the official scrapy tutorial: docs.scrapy.org/en/latest/intro/tutorial.html
[Solution 2]:

First, run pip install selenium.

Second, download PhantomJS, a headless browser that lets you scrape JavaScript-driven websites, from https://phantomjs.org/download.html

    from selenium import webdriver

    # Path below points to the PhantomJS binary downloaded in step 2
    driver = webdriver.PhantomJS(executable_path='C:\\Users\\nayef\\Desktop\\New folder\\phantomjs-2.1.1-windows\\bin\\phantomjs')
    driver.get('https://www.amazon.com/dp/B07Q6H83VY')

    p_element = driver.find_element_by_id('deliveryMessageMirId')
    print(p_element.text)

    # result:
    # Arrives: Friday, Aug 7 Details
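
One caveat: PhantomJS is no longer maintained, and its WebDriver support was removed in Selenium 4, so the snippet above only runs on older Selenium releases. A sketch of the same fetch using headless Chrome instead (assumes Chrome and a matching chromedriver are installed locally; deliveryMessageMirId is the same element id targeted above):

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless=new")  # run Chrome without a visible window

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://www.amazon.com/dp/B07Q6H83VY")
    # Same element the PhantomJS version targets
    element = driver.find_element(By.ID, "deliveryMessageMirId")
    print(element.text)
finally:
    driver.quit()
```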

[Discussion]:
