The problem is that when you call xml_find_all(df_xml, '//feed/entry/author'), the search doesn't find the nodes you're looking for, because they are all inside an xml namespace.
uri <- "https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page=1/xml"
my_xml <- read_xml(uri)
xml_find_all(my_xml, "//feed")
#> {xml_nodeset (0)}
You can find out which namespaces are used in the document like this:
xml_ns(my_xml)
#> d1 <-> http://www.w3.org/2005/Atom
#> im <-> http://itunes.apple.com/rss
So if you specify the namespace to use in your xpath, you will get the node you're looking for, like this:
xml_find_all(my_xml, "//d1:feed")
#> {xml_nodeset (1)}
#> [1] <feed xmlns:im="http://itunes.apple.com/rss" xmlns="http://www.w3.org/2005/Atom ...
This is obviously a bit annoying, since you have to prefix all the tags in your xpath with d1:, and your document is structured in such a way that you could manage without namespaces, so it would be nicer to just ignore them.
I find the easiest way to do that is to use read_html instead of read_xml, since among other things it automatically strips namespaces and is more tolerant of errors. However, if you prefer, you can call the xml_ns_strip function after reading with read_xml.
So your three options for dealing with namespaces in this document are:

- prefix all tag names with d1:
- use xml_ns_strip after read_xml
- use read_html
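To make the difference between these options concrete, here is a minimal self-contained sketch using a toy document that declares the same namespaces as the feed (no network access needed; the toy content is invented for illustration):

```r
library(xml2)

# A tiny document with the same default namespace setup as the feed
toy <- '<feed xmlns="http://www.w3.org/2005/Atom"
              xmlns:im="http://itunes.apple.com/rss">
          <entry><author><name>Someone</name></author></entry>
        </feed>'

doc <- read_xml(toy)
xml_find_all(doc, "//feed")                    # empty: the default namespace hides the node
xml_find_all(doc, "//d1:feed")                 # option 1: use the auto-assigned d1 prefix
xml_ns_strip(doc)                              # option 2: strip namespaces in place
xml_find_all(doc, "//feed/entry/author/name")  # now the plain xpath matches
```

read_html (option 3) gets you to the same place as option 2, since it discards namespaces as it reads.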
This code loops over all the pages of the xml and gives you a character vector of all 365 reviews. You will find that each page of the xml has 100 content tags; that is because each entry tag contains two content tags. One holds the raw text of the review, and the other holds the same content as an html string. The loop therefore discards the html strings in favour of the raw text:
library("tidyverse")
library("xml2")
base <- "https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page="
reviews <- author <- review_date <- character()
max_pages <- 100
for (i in seq(max_pages)) {
  cat("Trying", paste0(base, i, "/xml"), "\n")
  my_xml <- paste0(base, i, "/xml") %>% read_xml() %>% xml_ns_strip()
  next_reviews <- xml_find_all(my_xml, xpath = '//feed/entry/content') %>%
    xml_text() %>%
    subset(seq_along(.) %% 2 == 1)    # keep the raw-text copy, drop the html copy
  if (length(next_reviews) == 0) break  # an empty page means we have them all
  reviews <- c(reviews, next_reviews)
  author <- c(author, xml_text(xml_find_all(my_xml, xpath = '//feed/entry/author/name')))
  review_date <- c(review_date, xml_text(xml_find_all(my_xml, xpath = '//feed/entry/updated')))
}
result <- tibble(review_date, author, reviews)
#> Trying https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page=1/xml
#> Trying https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page=2/xml
#> Trying https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page=3/xml
#> Trying https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page=4/xml
#> Trying https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page=5/xml
#> Trying https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page=6/xml
#> Trying https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page=7/xml
#> Trying https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page=8/xml
#> Trying https://itunes.apple.com/gb/rss/customerreviews/id=1388411277/page=9/xml
Now result will contain a tibble with the three fields of interest:
result
#> # A tibble: 367 x 3
#> review_date author reviews
#> <chr> <chr> <chr>
#> 1 2020-05-05T02:38:35~ **stace** "Really good and useful app. Nice to be able to g~
#> 2 2020-05-05T01:51:49~ fire-hazza~ "Not for Scotland or Wales cmon man"
#> 3 2020-05-04T23:45:59~ Adz-Coco "Unable to register due to NHS number. My number ~
#> 4 2020-05-04T23:34:50~ Matthew ba~ "Probably spent about £5 developing this applicat~
#> 5 2020-05-04T16:40:17~ Jenny19385~ "Why it is so complicated to sign up an account? ~
#> 6 2020-05-04T14:39:54~ Sienna hea~ "Thankyou NHS for this excellent app I feel a lot~
#> 7 2020-05-04T13:09:45~ Raresole "A great app that lets me book appointments and a~
#> 8 2020-05-04T12:28:56~ chanters934 "Unable to login. App doesn’t recognise the code ~
#> 9 2020-05-04T11:26:44~ Ad_T "Unfortunately my surgery must not be participati~
#> 10 2020-05-04T08:25:17~ tonyproctor "It’s a good app although would be better with a ~
#> # ... with 357 more rows
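As a possible next step (not part of the scraping itself), the review_date column holds ISO-8601-looking strings, so you may want to parse it into proper date-times. This is a sketch that assumes the timestamps follow the pattern visible in the output above; note that strptime-style parsing stops once the format is consumed, so the trailing UTC offset is silently ignored here:

```r
library(dplyr)

result <- result %>%
  mutate(review_date = as.POSIXct(review_date,
                                  format = "%Y-%m-%dT%H:%M:%S",
                                  tz = "UTC"))  # offset suffix is dropped, not applied
```

If you need the offsets honoured, a dedicated parser such as lubridate's ymd_hms would be a safer choice.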