This has nothing to do with using the wrong selector. The site you're scraping does something really interesting on first visit: when you hit the page it sets a cookie and then reloads the page (one of the dumbest ways I've seen to force a "session").
You'd never actually see this, even in the Network tab of your browser's developer tools, unless you use a proxy server to capture the web requests. You can see it, though, by looking at what the initial read_html() call returns (it's just javascript + a redirect).
Neither read_html() nor httr::GET() can help you with this directly, since the cookie is set via javascript.
But! All hope is not lost, and there's no need for silly third-party requirements like Selenium or Splash (I'm shocked the resident experts haven't suggested them yet, since that seems to be the default response these days).
Let's get the cookie (make sure this is a FRESH, RESTARTED, NEW R session, since libcurl (which curl uses, which httr::GET() uses, which read_html() ultimately uses) maintains cookies. We'll rely on that feature to keep scraping pages, but if anything goes wrong you may need to start the session over):
library(xml2)
library(httr)
library(rvest)
library(janitor)
# Get access cookie
httr::GET(
url = "http://www.rca.gov.rw/wemis/registration/all.php",
query = list(
start = "0",
status = "approved"
)
) -> res
ckie <- httr::content(res, as = "text", encoding = "UTF-8")
ckie <- unlist(strsplit(ckie, "\r\n"))
ckie <- grep("cookie", ckie, value = TRUE)
ckie <- gsub("^document.cookie = '_accessKey2=|'$", "", ckie)
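To make that cleanup concrete, here's the same extraction run against a fabricated response body (the _accessKey2 value here is invented for illustration; the real one will differ):

```r
# A made-up example of the kind of javascript+redirect body the first
# request returns -- the cookie value is invented for illustration
fake_body <- "<script>\r\ndocument.cookie = '_accessKey2=abc123def456'\r\nlocation.reload()\r\n</script>"

x <- unlist(strsplit(fake_body, "\r\n"))                 # split into lines
x <- grep("cookie", x, value = TRUE)                     # keep the cookie-setting line
x <- gsub("^document.cookie = '_accessKey2=|'$", "", x)  # strip the js wrapper
x
## [1] "abc123def456"
```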
Now, we'll set that cookie and pick up our PHP session cookie, both of which will persist from here on:
httr::GET(
url = "http://www.rca.gov.rw/wemis/registration/all.php",
httr::set_cookies(`_accessKey2` = ckie),
query = list(
start = "0",
status = "approved"
)
) -> res
Now, there are 400+ pages, so we'll cache the raw HTML in case you mess up the scrape and need to re-parse the pages. That way you can iterate over the files instead of hitting the site again. We'll create a directory for them:
dir.create("rca-temp-scrape-dir")
Now, create the pagination start numbers:
pgs <- seq(0L, 8920L, 20)
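As a sanity check, that call yields 447 start offsets, which lines up with the "400+ pages" above:

```r
pgs <- seq(0L, 8920L, 20)   # same call as above, shown for self-containment

head(pgs)     # first few offsets: 0 20 40 60 80 100
length(pgs)   # 447 page starts in total
```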
Then, iterate over them. NOTE: I didn't need all 400+ pages, so I just did 10. Remove the [1:10] to get them all. Also: keep the sleep in there unless you like hurting others, since you aren't paying for the CPU/bandwidth and the site is likely pretty fragile.
lapply(pgs[1:10], function(pg) {
Sys.sleep(5) # Please don't hammer servers you don't pay for
httr::GET(
url = "http://www.rca.gov.rw/wemis/registration/all.php",
query = list(
start = pg,
status = "approved"
)
) -> res
# YOU SHOULD USE httr FUNCTIONS TO CHECK FOR STATUS
# SINCE THERE CAN BE HTTR ERRORS THAT YOU MAY NEED TO
# HANDLE TO AVOID CRASHING THE ITERATION
out <- httr::content(res, as = "text", encoding = "UTF-8")
# THIS CACHES THE RAW HTML SO YOU CAN RE-SCRAPE IT FROM DISK IF NECESSARY
writeLines(out, file.path("rca-temp-scrape-dir", sprintf("rca-page-%s.html", pg)))
out <- xml2::read_html(out)
out <- rvest::html_node(out, "table.primary")
out <- rvest::html_table(out, header = TRUE, trim = TRUE)
janitor::clean_names(out) # makes better column names
}) -> recs
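To flesh out the all-caps status-check comment in the loop: one way to guard it is to wrap the fetch in tryCatch() so one bad page returns NULL instead of crashing the whole iteration. This is a sketch; safe_fetch() and its fetcher argument are my own illustration, not part of the original answer's loop:

```r
# Sketch: guard a single page fetch so HTTP or network errors don't kill
# the lapply() -- `fetcher` is a stand-in for the httr::GET() call above
safe_fetch <- function(pg, fetcher) {
  tryCatch({
    res <- fetcher(pg)
    httr::stop_for_status(res)  # raise an R error on HTTP 4xx/5xx
    res
  }, error = function(e) {
    message("page ", pg, " failed: ", conditionMessage(e))
    NULL  # drop these later with Filter(Negate(is.null), recs)
  })
}
```

Returning NULL per failed page keeps the rest of the iteration alive; just filter the NULLs out of recs before the rbind step.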
Finally, we'll combine those 10 data frames into one:
recs <- do.call(rbind.data.frame, recs)
str(recs)
## 'data.frame': 200 obs. of 9 variables:
## $ s_no : num 1 2 3 4 5 6 7 8 9 10 ...
## $ code : chr "BUG0416" "RBV0494" "GAS0575" "RSZ0375" ...
## $ name : chr "URUMURI RWA NGERUKA" "BADUKANA IBAKWE NYAKIRIBA" "UBUDASA COOPERATIVE" "KODUKB" ...
## $ certificate: chr "RCA/0734/2018" "RCA/0733/2018" "RCA/0732/2018" "RCA/0731/2018" ...
## $ reg_date : chr "10.12.2018" "-" "10.12.2018" "07.12.2018" ...
## $ province : chr "East" "West" "Mvk" "West" ...
## $ district : chr "Bugesera" "Rubavu" "Gasabo" "Rusizi" ...
## $ sector : chr "Ngeruka" "Nyakiliba" "Remera" "Bweyeye" ...
## $ activity : chr "ubuhinzi (Ibigori, Ibishyimbo)" "ubuhinzi (Imboga)" "transformation (Amasabuni)" "ubworozi (Amafi)" ...
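Since the raw HTML is cached, a botched parse never has to touch the site again. Here's a sketch of that re-parse pass (it assumes the files written by the loop above are still sitting in rca-temp-scrape-dir):

```r
# Walk the cached pages on disk instead of re-fetching them
html_files <- list.files("rca-temp-scrape-dir", pattern = "\\.html$", full.names = TRUE)

recs <- lapply(html_files, function(f) {
  out <- xml2::read_html(f)  # parse the cached page
  out <- rvest::html_node(out, "table.primary")
  out <- rvest::html_table(out, header = TRUE, trim = TRUE)
  janitor::clean_names(out)  # same cleanup as the live loop
})

recs <- do.call(rbind.data.frame, recs)
```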
If you're a tidyverse user, you can do this instead:
purrr::map_df(pgs[1:10], ~{
Sys.sleep(5)
httr::GET(
url = "http://www.rca.gov.rw/wemis/registration/all.php",
httr::set_cookies(`_accessKey2` = ckie),
query = list(
start = .x,
status = "approved"
)
) -> res
out <- httr::content(res, as = "text", encoding = "UTF-8")
writeLines(out, file.path("rca-temp-scrape-dir", sprintf("rca-page-%s.html", .x)))
out <- xml2::read_html(out)
out <- rvest::html_node(out, "table.primary")
out <- rvest::html_table(out, header = TRUE, trim = TRUE)
janitor::clean_names(out)
}) -> recs
instead of the lapply/do.call/rbind.data.frame approach.