【Question Title】: How can I web-scrape in R without failing on missing (null) species pages?
【Posted】: 2020-04-29 06:57:35
【Question】:

I need to extract information about species, and I wrote the code below. However, I have a problem with some missing species pages. How can I avoid this problem?

# Load the required packages
Q <- c("rvest", "stringr", "tidyverse", "jsonlite")
lapply(Q, require, character.only = TRUE)

# These URLs were obtained by pagination (code omitted to keep the example short)
sp1 <- as.matrix(c("https://www.gulfbase.org/species/Acanthilia-intermedia",
                   "https://www.gulfbase.org/species/Achelous-floridanus",
                   "https://www.gulfbase.org/species/Achelous-ordwayi",
                   "https://www.gulfbase.org/species/Achelous-spinicarpus",
                   "https://www.gulfbase.org/species/Achelous-spinimanus",
                   "https://www.gulfbase.org/species/Agolambrus-agonus",
                   "https://www.gulfbase.org/species/Agononida-longipes",
                   "https://www.gulfbase.org/species/Amphithrax-aculeatus",
                   "https://www.gulfbase.org/species/Anasimus-latus"))

GiveMeData<-function(url){ 
  sp1<-read_html(url)

  sp1selmax<-"#block-beaker-content > article > div > main > section.node--full__main > div.node--full__figures > div.figures--joined > div:nth-child(1)"
  Mindepth<-html_node(sp1,sp1selmax)
  mintext<-html_text(Mindepth)
  mintext

  sp1selmax<-"#block-beaker-content > article > div > main > section.node--full__main > div.node--full__figures > div.figures--joined > div:nth-child(2)"
  Maxdepth<-html_node(sp1,sp1selmax)
  maxtext<-html_text(Maxdepth)
  maxtext

  sp1seldist<-"#block-beaker-content > article > div > main > section.node--full__main > div.node--full__figures > div:nth-child(2) > div:nth-child(2) > div"
  Distr<-html_node(sp1,sp1seldist)
  distext<-html_text(Distr)
  distext

  sp1habitat<-"#block-beaker-content > article > div > main > section.node--full__main > div.node--full__figures > div:nth-child(3) > ul"
  Habit<-html_node(sp1,sp1habitat)
  habtext<-html_text(Habit)
  habtext

  sp1habitat2<-"#block-beaker-content > article > div > main > section.node--full__main > div.node--full__figures > div.field > ul > li"
  Habit2<-html_node(sp1,sp1habitat2)
  habtext2<-html_text(Habit2)
  habtext2

  sp1ref<-"#block-beaker-content > article > div > main > section.node--full__related"
  Ref<-html_node(sp1,sp1ref)
  reftext<-html_text(Ref)
  reftext

  mintext<-gsub("\n                  \n      Min Depth\n      \n                            \n                      ","",mintext)
  mintext<-gsub(" meters\n                  \n                    ","",mintext)
  maxtext<-gsub("\n                  \n      Max Depth\n      \n                            \n                      ","",maxtext)
  maxtext<-gsub(" meters\n                  \n","",maxtext)
  habtext<-gsub("\n",",",habtext)
  habtext<-gsub("\\s","",habtext)
  reftext<-gsub("\n\n",";",reftext)
  reftext<-gsub("\\s","",reftext)


  # Assemble a labelled two-row matrix and return it explicitly
  Info <- rbind(Info = c("Min", "Max", "Distribution", "Habitat", "MicroHabitat", "References"),
                Data = c(mintext, maxtext, distext, habtext, habtext2, reftext))
  Info
}

# 'pag' is the full paginated URL list built in the omitted pagination step
doit <- lapply(pag[1:10], GiveMeData)

The problem is with the missing species. I tried a small loop, but it does not work.
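For context on where the error actually comes from: when a page exists but a particular field is absent, `html_node()` returns a missing node and `html_text()` simply yields `NA`, so those cases do not stop the script. It is `read_html()` on a URL that does not resolve that throws the error. A minimal guard (a sketch, not the asker's code; `safe_read` is an illustrative name) would wrap only the download step:

```r
library(rvest)

# Illustrative helper: return NULL instead of erroring when a page is missing
safe_read <- function(url) {
  tryCatch(read_html(url), error = function(e) NULL)
}

page <- safe_read("https://www.gulfbase.org/species/Acanthilia-intermedia")
if (!is.null(page)) {
  # ...proceed with the html_node()/html_text() calls from GiveMeData...
}
```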

【Question Comments】:

Tags: r loops pagination data-science rvest


【Solution 1】:

I suppose there are ways to improve the GiveMeData function itself, but using the function as it is, we can wrap it in tryCatch to skip the websites that return an error.

    output <- lapply(c(sp1), function(x) tryCatch(GiveMeData(x), error = function(e){}))
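Returning `NULL` (rather than an empty error handler) and labelling the results afterwards makes it easy to see which species failed. A variant of the same idea (the variable names are illustrative; `c(sp1)` flattens the matrix back to a character vector):

```r
# Run the scraper, keeping NULL for pages that error out
output <- lapply(c(sp1), function(x) {
  tryCatch(GiveMeData(x), error = function(e) NULL)
})
names(output) <- c(sp1)                   # label each result by its URL
failed <- names(Filter(is.null, output))  # URLs that could not be scraped
output <- Filter(Negate(is.null), output) # keep only the successful species
```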
    

【Discussion】:
