[Question Title]: Web Scraping with rvest and R
[Posted]: 2023-09-25 15:50:01
[Question]:

I am trying to scrape a specific fund's total assets (in this case ADAFX) from http://www.morningstar.com/funds/xnas/adafx/quote.html. But the result is always character(0) (empty); what am I doing wrong?

I have used rvest before with mixed results, so I figured it was time to get expert help from this community of trusted gurus (that's you).

library(rvest)
Symbol.i <- "ADAFX"
url <- paste("http://www.morningstar.com/funds/xnas/", Symbol.i, "/quote.html", sep = "")
NetAssets.i <- tryCatch(url %>%
                          read_html() %>%
                          html_nodes(xpath = '//*[@id="gr_total_asset_wrap"]/span/span') %>%
                          html_text(),
                        error = function(e) NA)

Thanks in advance, cheers,

Aaron Soderstrom

[Question Comments]:

  • You know there is a Morningstar API, right? See here for an example.
  • Thanks, I am aware of the API, but I am trying to build a custom fund screener with fields the API does not include.

Tags: r web-scraping rvest


[Solution 1]:

It is a dynamic page that loads the data for its various sections via XHR requests, so you have to look in the Developer Tools Network tab to find the URL of the content you are targeting.

library(httr)
library(rvest)

res <- GET(url = "http://quotes.morningstar.com/fundq/c-header",
           query = list(
             t="XNAS:ADAFX",
             region="usa",
             culture="en-US",
             version="RET",
             test="QuoteiFrame"
           )
)

content(res) %>%
  html_nodes("span[vkey='TotalAssets']") %>%
  html_text() %>%
  trimws()
## [1] "20.6  mil"

[Discussion]:

  • Looks like I have my homework cut out for me, thank you for giving me a start!
  • Follow-up question: when running the code in a loop, it throws Error in UseMethod("content", x) : no applicable method for 'content' applied to an object of class "response". Any idea what is going on? Once this happens, even your original code throws the error, and I have to restart my R session to fix it.
  • Would need to see the loop. Probably a candidate for a follow-up SO question.
[Solution 2]:

Here is the csv file it calls.

library(httr)
library(rvest)
library(tm)
library(plyr)
library(dplyr)

MF.List <- read.csv("C:/Users/Aaron/Documents/Investment Committee/Screener/Filtered Funds.csv")
Category.list <- read.csv("C:/Users/Aaron/Documents/Investment Committee/Screener/Category.csv")
Category.list <- na.omit(Category.list)

Category.name <- "Financial"
MF.Category.List <- filter(MF.List, Category == Category.name)

morningstar.scrape <- list()

for(i in 1:nrow(MF.Category.List)){

  Symbol.i <- as.character(MF.Category.List[i, "Symbol"])
  res <- GET(url = "http://quotes.morningstar.com/fundq/c-header",
             query = list(
               t=paste("XNAS:",Symbol.i,sep=""),
               region="usa",
               culture="en-US",
               version="RET",
               test="QuoteiFrame"
             )
  )

  TTM.Yield <- tryCatch(content(res) %>%
                          html_nodes("span[vkey='ttmYield']") %>%
                          html_text() %>%
                          trimws(),
                        error = function(e) NA)

  Load <- tryCatch(content(res) %>%
                     html_nodes("span[vkey='Load']") %>%
                     html_text() %>%
                     trimws(),
                   error = function(e) NA)

  Total.Assets <- tryCatch(content(res) %>%
                             html_nodes("span[vkey='TotalAssets']") %>%
                             html_text() %>%
                             trimws(),
                           error = function(e) NA)

  Expense.Ratio <- tryCatch(content(res) %>%
                              html_nodes("span[vkey='ExpenseRatio']") %>%
                              html_text() %>%
                              trimws(),
                            error = function(e) NA)

  Fee.Level <- tryCatch(content(res) %>%
                          html_nodes("span[vkey='FeeLevel']") %>%
                          html_text() %>%
                          trimws(),
                        error = function(e) NA)

  Turnover <- tryCatch(content(res) %>%
                         html_nodes("span[vkey='Turnover']") %>%
                         html_text() %>%
                         trimws(),
                       error = function(e) NA)

  Status <- tryCatch(content(res) %>%
                       html_nodes("span[vkey='Status']") %>%
                       html_text() %>%
                       trimws(),
                     error = function(e) NA)

  Min.Investment <- tryCatch(content(res) %>%
                               html_nodes("span[vkey='MinInvestment']") %>%
                               html_text() %>%
                               trimws(),
                             error = function(e) NA)

  Yield.30day <- tryCatch(content(res) %>%
                            html_nodes("span[vkey='Yield']") %>%
                            html_text() %>%
                            trimws(),
                          error = function(e) NA)

  Investment.Style <- tryCatch(content(res) %>%
                                 html_nodes("span[vkey='InvestmentStyle']") %>%
                                 html_text() %>%
                                 trimws(),
                               error = function(e) NA)

  Bond.Style <- tryCatch(content(res) %>%
                           html_nodes("span[vkey='BondStyle']") %>%
                           html_text() %>%
                           trimws(),
                         error = function(e) NA)

  x.frame <- c(Symbol = as.character(Symbol.i), TTM.Yield = as.character(TTM.Yield),
               Load = as.character(Load), Total.Assets = as.character(Total.Assets),
               Expense.Ratio = as.character(Expense.Ratio), Fee.Level = as.character(Fee.Level),
               Turnover = as.character(Turnover), Status = as.character(Status),
               Min.Investment = as.character(Min.Investment), Yield.30day = as.character(Yield.30day),
               Investment.Style = as.character(Investment.Style), Bond.Style = as.character(Bond.Style))

  morningstar.scrape[[i]] = x.frame
  x.frame = NULL
}

MS.scrape <- do.call(rbind, morningstar.scrape)
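One pitfall worth noting with `tryCatch`: an assignment inside the error handler (e.g. `error = function(e) X <- NA`) only creates a variable local to the handler, so the outer variable is never set and a failed request can leave it undefined or holding the previous iteration's value. The reliable idiom is to assign the value returned by `tryCatch` itself, as in `X <- tryCatch(expr, error = function(e) NA)`. A minimal, network-free sketch:

```r
# Assigning the value returned by tryCatch captures the handler's
# fallback; assigning inside the handler would not reach this scope.
safe_value <- tryCatch(stop("request failed"), error = function(e) NA)
stopifnot(is.na(safe_value))   # the NA returned by the handler was captured
```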

[Discussion]:

  • It looks like adding library(tm) was causing the problem. I removed the tm package from the code and used grepl instead of filter, and now the loop works fine. So the issue is that those two packages don't play well together; as to why, I don't know.
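As an aside (not from the original thread): tm also exports a generic called `content()`, so attaching tm after httr masks `httr::content`; `content(res)` then dispatches to tm's generic, which has no method for class "response" — exactly the UseMethod error reported above. Namespace-qualifying the call as `httr::content(res)` is immune to load order. A minimal, network-free sketch of the same S3 masking effect (the stand-in generic and `response` object are illustrative, not httr internals):

```r
# A second generic named content() masks any previously attached one,
# just as tm::content masks httr::content.
content <- function(x) UseMethod("content")
content.character <- function(x) x   # a method for character vectors only

res <- structure(list(), class = "response")  # stand-in for an httr response

# Dispatch on class "response" now fails, mirroring the loop error:
err <- tryCatch(content(res), error = function(e) conditionMessage(e))
stopifnot(grepl("no applicable method", err))
```

In the real code, calling `httr::content(res)` sidesteps the mask regardless of which package was attached last.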
[Solution 3]:

Working code,

I wrapped the web scraping in a function and removed library(tm).

library(httr)
library(rvest)


    get.morningstar <- function(Symbol.i,htmlnode){
      res <- GET(url = "http://quotes.morningstar.com/fundq/c-header",
                 query = list(
                   t=paste("XNAS:",Symbol.i,sep=""),
                   region="usa",
                   culture="en-US",
                   version="RET",
                   test="QuoteiFrame"
                 )
      )

      x <- content(res) %>%
        html_nodes(htmlnode) %>%
        html_text() %>%
        trimws()

      return(x)
    }



    MF.List <- read.csv("C:/Users/Aaron/Documents/Bitrix24/Investment Committee/Screener/Filtered Funds.csv")
    Category.list <- read.csv("C:/Users/Aaron/Documents/Bitrix24/Investment Committee/Screener/Category.csv")
    Category.list <- na.omit(Category.list)

    Category.name <- "Small Growth"
    MF.Category.List <- MF.List[grepl(Category.name,MF.List$Category), ]
    morningstar.scrape <- list()

    for(i in 1:nrow(MF.Category.List)){
      Symbol.i <- as.character(MF.Category.List[i, "Symbol"])
      # Assign the tryCatch result so a failed request yields NA instead of
      # silently keeping the previous symbol's value:
      Total.Assets <- tryCatch(get.morningstar(Symbol.i, "span[vkey='TotalAssets']"),
                               error = function(e) NA)
      print(Total.Assets)
    }

[Discussion]:
