【Question Title】: data.table equivalent of dplyr::filter_at
【Posted】: 2018-02-07 16:19:10
【Question】:

Consider the data:

library(data.table)
library(magrittr)

vec1 <- c("Iron", "Copper")

vec2 <- c("Defective", "Passed", "Error")

set.seed(123)
a1 <- sample(x = vec1, size = 20, replace = T)
b1 <- sample(x = vec2, size = 20, replace = T)

set.seed(1234)
a2 <- sample(x = vec1, size = 20, replace = T)
b2 <- sample(x = vec2, size = 20, replace = T)

DT <- data.table(
  c(1:20), a1, b1, a2, b2
) %>% .[order(V1)]

names(DT) <- c("id", "prod_name_1", "test_1", "prod_name_2", "test_2")

I need to filter the rows where test_1 OR test_2 has the value "Passed". So if neither of those columns has that value, the row is dropped. With dplyr we can use the filter_at() verb:

> # dplyr solution...
> 
> cols <- grep(x = names(DT), pattern = "test", value = T, ignore.case = T)
> 
> 
> DT %>% 
+   dplyr::filter_at(.vars = grep(x = names(DT), pattern = "test", value = T, ignore.case = T), 
+                    dplyr::any_vars(. == "Passed")) -> DT.2
> 
> DT.2
  id prod_name_1 test_1 prod_name_2    test_2
1  3        Iron Passed      Copper Defective
2  5      Copper Passed      Copper Defective
3  7      Copper Passed        Iron    Passed
4  8      Copper Passed        Iron     Error
5 11      Copper  Error      Copper    Passed
6 14      Copper  Error      Copper    Passed
7 16      Copper Passed      Copper     Error

Cool. Does data.table have a similar way to perform this operation?

This is the closest I've gotten:

> lapply(seq_along(cols), function(x){
+   
+   setkeyv(DT, cols[[x]])
+   
+   DT["Passed"]
+   
+ }) %>% 
+   do.call(rbind,.) %>% 
+   unique -> DT.3
> 
> DT.3
   id prod_name_1 test_1 prod_name_2    test_2
1:  3        Iron Passed      Copper Defective
2:  5      Copper Passed      Copper Defective
3:  8      Copper Passed        Iron     Error
4: 16      Copper Passed      Copper     Error
5:  7      Copper Passed        Iron    Passed
6: 11      Copper  Error      Copper    Passed
7: 14      Copper  Error      Copper    Passed
> 
> identical(data.table(DT.2)[order(id)], DT.3[order(id)])
[1] TRUE

Do you have a more elegant solution? Ideally one wrapped in a verb like dplyr::filter_at().

【Comments】:

  • For example DT[rowSums(DT[, ..cols] == "Passed") > 0], where cols contains the columns of interest
  • You should use set.seed in examples where randomization changes the desired output (not sure whether that's the case here)
  • Thanks @Frank. I've edited the question with set.seed for reproducibility.
  • @docendodiscimus. Thanks, that actually works great, and it's very fast!
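
As a minimal sketch of the rowSums approach from the comments above (assuming DT and the "test" columns from the question): the `..cols` prefix looks up the `cols` character vector in the calling scope, the comparison yields a logical matrix, and rowSums counts the "Passed" hits per row.

```r
library(data.table)

# columns of interest, as in the question
cols <- grep("test", names(DT), value = TRUE, ignore.case = TRUE)

# keep rows with at least one "Passed" across the test columns
DT[rowSums(DT[, ..cols] == "Passed") > 0]
```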

Tags: r dplyr data.table


【Solution 1】:

We can specify 'cols' in .SDcols, loop over the subset of the data.table (.SD) with lapply to compare the values against "Passed", Reduce the resulting list to a single logical vector with `|`, and use it to subset the rows:

res2 <- DT[DT[,  Reduce(`|`, lapply(.SD, `==`, "Passed")), .SDcols = cols]]

Comparing with the dplyr output from the OP's post (res1 is the dplyr result, DT.2 above):

identical(as.data.table(res1), res2)
#[1] TRUE
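
To illustrate the Reduce step in isolation (with made-up logical vectors, not the question's data): lapply produces one logical vector per column, and Reduce with `|` ORs them element-wise into the single row filter.

```r
# one logical vector per test column (illustrative values)
x <- list(c(TRUE, FALSE, FALSE),   # e.g. test_1 == "Passed"
          c(FALSE, FALSE, TRUE))   # e.g. test_2 == "Passed"

# element-wise OR across the list
Reduce(`|`, x)
# [1]  TRUE FALSE  TRUE
```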

【Discussion】:

    【Solution 2】:

    I would reshape the data...

    # store the data in long form...
    
    m = melt(DT, id = "id", 
      meas = patterns("prod_name", "test"), 
      value.name = c("prod_name", "test"), variable.name = "prod_num")
    
    setorder(m, id, prod_num)      
    
    # store binary test variable as logical...
    
    testmap = data.table(
      old = c("Defective", "Passed", "Error"), 
      new = c(FALSE, TRUE, NA))
    m[testmap, on=.(test = old), passed := i.new]
    
    m[, test := NULL]
    

    So the data now looks like

        id prod_num prod_name passed
     1:  1        1      Iron     NA
     2:  1        2      Iron  FALSE
     3:  2        1    Copper     NA
     4:  2        2    Copper  FALSE
     5:  3        1      Iron   TRUE
     6:  3        2    Copper  FALSE
     7:  4        1    Copper     NA
     8:  4        2    Copper  FALSE
     9:  5        1    Copper   TRUE
    10:  5        2    Copper  FALSE
    11:  6        1      Iron     NA
    12:  6        2    Copper     NA
    13:  7        1    Copper   TRUE
    14:  7        2      Iron   TRUE
    15:  8        1    Copper   TRUE
    16:  8        2      Iron     NA
    17:  9        1    Copper  FALSE
    18:  9        2    Copper     NA
    19: 10        1      Iron  FALSE
    20: 10        2    Copper  FALSE
    21: 11        1    Copper     NA
    22: 11        2    Copper   TRUE
    23: 12        1      Iron     NA
    24: 12        2    Copper  FALSE
    25: 13        1    Copper     NA
    26: 13        2      Iron  FALSE
    27: 14        1    Copper     NA
    28: 14        2    Copper   TRUE
    29: 15        1      Iron  FALSE
    30: 15        2      Iron  FALSE
    31: 16        1    Copper   TRUE
    32: 16        2    Copper     NA
    33: 17        1      Iron     NA
    34: 17        2      Iron  FALSE
    35: 18        1      Iron  FALSE
    36: 18        2      Iron  FALSE
    37: 19        1      Iron  FALSE
    38: 19        2      Iron     NA
    39: 20        1    Copper  FALSE
    40: 20        2      Iron     NA
        id prod_num prod_name passed
    

    Then you can filter to the ids that have a passing product, for example...

    res = m[, if(isTRUE(any(passed))) .SD, by=id]
    
        id prod_num prod_name passed
     1:  3        1      Iron   TRUE
     2:  3        2    Copper  FALSE
     3:  5        1    Copper   TRUE
     4:  5        2    Copper  FALSE
     5:  7        1    Copper   TRUE
     6:  7        2      Iron   TRUE
     7:  8        1    Copper   TRUE
     8:  8        2      Iron     NA
     9: 11        1    Copper     NA
    10: 11        2    Copper   TRUE
    11: 14        1    Copper     NA
    12: 14        2    Copper   TRUE
    13: 16        1    Copper   TRUE
    14: 16        2    Copper     NA
    
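
A note on why the filter uses isTRUE(any(...)) rather than plain any(...): any() propagates NA when no TRUE is present, and isTRUE() maps that NA to FALSE, so ids whose tests are all FALSE/NA are dropped cleanly.

```r
any(c(NA, FALSE))          # NA  -- would break a plain any() filter
isTRUE(any(c(NA, FALSE)))  # FALSE
isTRUE(any(c(NA, TRUE)))   # TRUE
```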

    For easier browsing...

    dcast(res, id ~ prod_num, value.var = c("prod_name", "passed"))
    
       id prod_name_1 prod_name_2 passed_1 passed_2
    1:  3        Iron      Copper     TRUE    FALSE
    2:  5      Copper      Copper     TRUE    FALSE
    3:  7      Copper        Iron     TRUE     TRUE
    4:  8      Copper        Iron     TRUE       NA
    5: 11      Copper      Copper       NA     TRUE
    6: 14      Copper      Copper       NA     TRUE
    7: 16      Copper      Copper     TRUE       NA
    

    【Discussion】:

    • Thanks, I'll accept @akrun's reply, but I appreciate your answer. I'm not sure your solution is the more readable and concise answer to my question, but I like your approach for other situations. Thanks again
    • @JdM Yes, the idea is to format it so that analysis becomes easier. For readability while browsing the data, you can run dcast(res, id ~ prod_num, value.var = c("prod_name", "passed")), where res is the final result of this answer, but storing the data in that wide form causes a lot of problems. Further reading if you're interested: jstatsoft.org/article/view/v059i10