【Posted】: 2023-03-10 08:51:01
【Question】:
I have a character vector containing several URLs, and I want to download the content from each one. To avoid writing out hundreds of commands, I would like to automate the process with an lapply loop. However, my command returns an error. Is it possible to scrape from multiple URLs this way?
Current approach
Long method: works, but I want to automate it
urls <- c("https://en.wikipedia.org/wiki/Belarus", "https://en.wikipedia.org/wiki/Russia", "https://en.wikipedia.org/wiki/England")
library(rvest)
library(httr) # required for user_agent command
uastring <- "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
session <- html_session("https://en.wikipedia.org/wiki/Main_Page", user_agent(uastring))
session2 <- jump_to(session, "https://en.wikipedia.org/wiki/Belarus")
session3 <- jump_to(session, "https://en.wikipedia.org/wiki/Russia")
writeBin(session2$response$content, "test1.txt")
writeBin(session3$response$content, "test2.txt")
Automated/looped: does not work.
urls <- c("https://en.wikipedia.org/wiki/Belarus", "https://en.wikipedia.org/wiki/Russia", "https://en.wikipedia.org/wiki/England")
library(rvest)
library(httr) # required for user_agent command
uastring <- "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
session <- html_session("https://en.wikipedia.org/wiki/Main_Page", user_agent(uastring))
lapply(urls, . %>% jump_to(session))
Error: is.session(x) is not TRUE
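The error comes from the magrittr shorthand: `. %>% jump_to(session)` builds `function(x) jump_to(x, session)`, so each url is passed in the *session* slot and rvest's internal `is.session(x)` check fails. A minimal sketch of the argument-order problem, using a hypothetical stub in place of `jump_to()` so it runs without a network connection:

```r
# Stub with the same argument order as rvest's jump_to(session, url);
# jump_to itself checks is.session() on its first argument.
jump_to_stub <- function(session, url) {
  stopifnot(identical(session, "SESSION"))  # fails if a url lands here instead
  paste(session, url)
}

# Broken form: lapply(urls, . %>% jump_to(session)) expands to
# function(x) jump_to(x, session), putting each url where the session belongs.

# Fixed form: an explicit anonymous function keeps the session first.
results <- lapply(c("u1", "u2"), function(u) jump_to_stub("SESSION", u))
```

The same fix applies verbatim with the real `jump_to()`: `lapply(urls, function(u) jump_to(session, u))`.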
Summary
I would like to automate the two steps below, jump_to() and writeBin(), as shown in this code:
session2 <- jump_to(session, "https://en.wikipedia.org/wiki/Belarus")
session3 <- jump_to(session, "https://en.wikipedia.org/wiki/Russia")
writeBin(session2$response$content, "test1.txt")
writeBin(session3$response$content, "test2.txt")
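For what it's worth, a sketch of folding both calls into one loop over indices, reusing the same `html_session()`/`jump_to()` calls as the question (note these functions were renamed `session()` and `session_jump_to()` in rvest 1.0, so the old names may warn or be missing depending on the installed version):

```r
library(rvest)
library(httr)  # for user_agent()

urls <- c("https://en.wikipedia.org/wiki/Belarus",
          "https://en.wikipedia.org/wiki/Russia",
          "https://en.wikipedia.org/wiki/England")

uastring <- "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
session <- html_session("https://en.wikipedia.org/wiki/Main_Page", user_agent(uastring))

# Loop over indices so each download gets a distinct file name
# (test1.txt, test2.txt, ...), mirroring the manual version above.
invisible(lapply(seq_along(urls), function(i) {
  s <- jump_to(session, urls[i])
  writeBin(s$response$content, paste0("test", i, ".txt"))
}))
```

Looping over `seq_along(urls)` rather than `urls` themselves is just one way to pair each url with an output file name; `Map()` over urls and a vector of file names would work equally well.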
【Discussion】:
Tags: r web-scraping rcurl rvest httr