Title: Efficiently dealing with repeated values within a by group using data.table
Posted: 2020-01-28 05:25:03
Question:

What is the preferred way to get a single value from a column (variable) whose value is repeated within a by group (i.e. the same value in every row of the group)? Should I use variable[1], or should I include the variable in the by statement and use .BY$variable? Assume I want the returned value to include variable as a column.

The tests below make clear that adding extra variables to the by statement slows things down, even if we discount the cost of keying the new variable (or use trickery to tell data.table that no additional keying is needed). Why do extra, already-keyed by variables slow things down?

I suppose I had hoped that including already-keyed by variables would be a convenient syntactic trick for including those variables in the returned data.table without naming them explicitly in the j statement, but it seems this is inadvisable because the extra variables carry some overhead even when they are already keyed. So my question is: what causes this overhead?

Some example data:

library(data.table)
n <- 1e8
y <- data.table(sample(1:5,n,replace=TRUE),rnorm(n),rnorm(n))
y[,sumV2:=sum(V2),keyby=V1]

Timings show that the approach using variable[1] (here, sumV2[1]) is faster.

x <- copy(y)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),keyby=V1])
system.time(x[, list(out=sum(V3*V2)/.BY$sumV2),keyby=list(V1,sumV2)])

I suppose this is not surprising, since data.table has no way of knowing that the groups defined by setkey(V1) and setkey(V1,sumV2) are actually identical.

What does surprise me is that even when the data.table is keyed on setkey(V1,sumV2) (and we ignore entirely the time needed to set the new key), using sumV2[1] is still faster. Why is that?

x <- copy(y)
setkey(x,V1,sumV2)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),by=V1])
system.time(x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)])

Also, the time it takes to run setkey(x,V1,sumV2) is non-negligible. Is there any way to trick data.table into skipping the actual re-keying of x by telling it that the key has not substantively changed?

x <- copy(y)
system.time(setkey(x,V1,sumV2))

Answering my own question: it seems we can skip the sort when setting the key by simply assigning the "sorted" attribute. Is this allowed? Will it break things?

x <- copy(y)
system.time({
  setattr(x, "sorted", c("V1","sumV2"))
  x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)]
})
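Before plonking the "sorted" attribute it seems prudent to verify that the rows really are ordered by the candidate key columns. This is my own sketch (the helper `set_sorted_if_valid` is hypothetical, not part of data.table's API), checking sortedness with base R's order() before calling setattr:

```r
library(data.table)
x <- data.table(V1 = c(1L, 1L, 2L, 2L), sumV2 = c(0.5, 0.5, 1.2, 1.2))

# Hypothetical helper: only assign the "sorted" attribute when the rows
# are genuinely in increasing order by `cols`, so later grouped queries
# that trust the key remain correct.
set_sorted_if_valid <- function(DT, cols) {
  ord <- do.call(order, unname(as.list(DT[, ..cols])))
  if (identical(ord, seq_len(nrow(DT)))) {
    setattr(DT, "sorted", cols)  # by reference, no copy
    TRUE
  } else {
    FALSE
  }
}

set_sorted_if_valid(x, c("V1", "sumV2"))  # TRUE; key(x) is now V1, sumV2
```

The order() check costs a sort, of course, so it forfeits some of the speed gain; it is only worthwhile as a one-time safety check while developing.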

I don't know whether this is bad practice or likely to break things. But the setattr trickery is considerably faster than explicit keying:

x <- copy(y)
system.time({
  setkey(x,V1,sumV2)
  x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)]
})

But even the setattr trick combined with sumV2 in the by statement is still not as fast as leaving sumV2 out of the by statement entirely:

x <- copy(y)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),keyby=V1])

It seems to me that setting the key via the attribute and using sumV2 as a length-1 by variable within each group ought to be faster than keying on V1 alone and using sumV2[1]. If sumV2 is not specified as a by variable, the entire vector of repeated sumV2 values has to be materialized for each group before being subset down to sumV2[1]. Compare that with sumV2 as a by variable, where each group sees only a length-1 vector for sumV2. Obviously my reasoning here is incorrect. Can anyone explain why? Why is sumV2[1] the fastest option, even compared with making sumV2 a by variable after the setattr trick?

As an aside, I was surprised to learn that using attr<- was no slower than setattr (both instantaneous, implying no copy at all). This runs contrary to my understanding that base-R foo<- replacement functions copy their data.

x <- copy(y)
system.time(setattr(x, "sorted", c("V1","sumV2")))
x <- copy(y)
system.time(attr(x,"sorted") <- c("V1","sumV2"))
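One way to check whether attr<- copied the column data is to compare column addresses before and after, using data.table's exported address() helper (a sketch of my own, not from the original post). On modern R (>= 3.1), replacing an attribute shallow-copies only the list shell of the data.table; the column vectors themselves are not duplicated, which is why it appears instantaneous:

```r
library(data.table)
x <- data.table(a = sort(rnorm(10)))

before <- address(x$a)      # memory address of the column vector
attr(x, "sorted") <- "a"    # base-R replacement function
after  <- address(x$a)

identical(before, after)    # TRUE: only the list shell was copied, not the column
```

So attr<- is not truly copy-free like setattr, but the copy is shallow and O(number of columns), not O(number of rows).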

Relevant sessionInfo() for this question:

data.table version 1.12.2
R version 3.5.3

Comments:

  • I'm not sure I follow. Since you create sumV2 with := and keyby, x is already keyed by V1 after x[,sumV2:=sum(V2),keyby=V1], and data.table knows it. You can confirm this by calling key(x) before calling setkey, or by setting verbose=TRUE in the setkey call (in your last example).
  • Your example doesn't need a by clause at all. V3*V2/sumV2 can run directly as a vectorized operation.
  • @Frank -- oops, I forgot to wrap V3*V2 in sum. The idea here is a weighted average of the values of V3, weighted by V2, within the categories defined by V1. I've edited the question to fix this.
  • I'll add that this is something of a straw man, since the fastest approach probably wouldn't create sumV2 as a column of x at all, but would instead use a single composite j statement in which sumV2 is created as a temporary variable inside j and used to compute the weighted average. But one can imagine wanting both x with the repeated values of sumV2 and a separate short data.table containing the resulting weighted averages.
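The composite-j approach that comment describes might look like this (my own sketch of the commenter's suggestion; the group sum is computed once as a temporary inside j and reused, so sumV2 never becomes a column of x):

```r
library(data.table)
n <- 1e5  # smaller than the post's 1e8, just for illustration
x <- data.table(V1 = sample(1:5, n, replace = TRUE),
                V2 = rnorm(n), V3 = rnorm(n))

# Single by statement: compute the group's sum of weights once, reuse it
# for the weighted average, and return both as columns of the result.
outDT <- x[, {
  s <- sum(V2)
  list(sumV2 = s, out = sum(V3 * V2) / s)
}, keyby = V1]
```

Note that a braced j like this defeats GForce, so for many groups the answer below splits the work into GForce-friendly pieces instead.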

Tags: r data.table


Answer 1:

OK, so I don't have a great technical answer, but I think I've figured this out conceptually with the help of options(datatable.verbose=TRUE).

Create the data:

library(data.table)
n <- 1e8

y_unkeyed_5groups <- data.table(sample(1:5,n,replace=TRUE),rnorm(n),rnorm(n))
y_unkeyed_5groups[,sumV2:=sum(V2),keyby=V1]
y_unkeyed_10000groups <- data.table(sample(1:10000,n,replace=TRUE),rnorm(n),rnorm(n))
y_unkeyed_10000groups[,sumV2:=sum(V2),keyby=V1]

The slow way:

x <- copy(y)
system.time({
  setattr(x, "sorted", c("V1","sumV2"))
  x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)]
})
# Detected that j uses these columns: V3,V2 
# Finding groups using uniqlist on key ... 1.050s elapsed (1.050s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sum(V3 * V2)/.BY$sumV2)'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# memcpy contiguous groups took 0.305s for 6 groups
# eval(j) took 0.254s for 6 calls
# 0.560s elapsed (0.510s cpu) 
# user  system elapsed 
# 1.81    0.09    1.72 

The fast way:

x <- copy(y)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),keyby=V1])
# Detected that j uses these columns: V3,V2,sumV2 
# Finding groups using uniqlist on key ... 0.060s elapsed (0.070s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sum(V3 * V2)/sumV2[1], sumV2[1])'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# memcpy contiguous groups took 0.328s for 6 groups
# eval(j) took 0.291s for 6 calls
# 0.610s elapsed (0.580s cpu) 
# user  system elapsed 
# 1.08    0.08    0.82 

The "finding groups" part is what accounts for the difference. My guess is that setting a key really is just sorting (I should have guessed as much from how the attribute is named!) and doesn't actually do anything to mark where groups begin and end. So even though data.table knows sumV2 is sorted, it doesn't know the values are all identical within a group, and it still has to find the start and end positions of the groups in sumV2.

My guess is that it would be technically possible to write data.table so that keying not only sorts but also stores the start/end rows of each group of the keyed variables, but that this could potentially take up a lot of memory for data.tables with many groups.
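Conceptually, what the "finding groups" step (uniqlist) has to do on already-sorted key columns can be sketched in plain R: a group starts at every row whose key differs from the previous row's key, and adding a second key column means a second vector must be scanned. This is my own illustration, not data.table's actual C code:

```r
# A conceptual sketch of group-boundary detection on sorted key columns.
# Each extra key column is another vector that must be compared row by row.
group_starts <- function(...) {
  keys <- list(...)
  n <- length(keys[[1L]])
  changed <- rep(FALSE, n)
  for (k in keys) changed <- changed | c(TRUE, k[-1L] != k[-n])
  which(changed)
}

V1    <- c(1, 1, 1, 2, 2, 3)
sumV2 <- c(5, 5, 5, 9, 9, 2)   # constant within each V1 group
group_starts(V1)               # 1 4 6
group_starts(V1, sumV2)        # same boundaries, but both columns were scanned
```

Since sumV2 is constant within V1 groups, the boundaries are identical either way; the extra scan of sumV2 is pure overhead, which matches the timing difference above.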

Knowing this, it would seem the advice is: rather than repeating the same by statement over and over, do everything you need in a single by statement. That is probably good advice in general, but it holds less well with a small number of groups. See the counter-example below.

I rewrote this in what I believed would be the fastest possible way using data.table (a single by statement, taking advantage of GForce):

library(data.table)
n <- 1e8
y_unkeyed_5groups <- data.table(sample(1:5,n, replace=TRUE),rnorm(n),rnorm(n))
y_unkeyed_10000groups <- data.table(sample(1:10000,n, replace=TRUE),rnorm(n),rnorm(n))

x <- copy(y_unkeyed_5groups)
system.time({
  x[, product:=V3*V2]
  outDT <- x[,list(sumV2=sum(V2),sumProduct=sum(product)),keyby=V1]
  outDT[,`:=`(out=sumProduct/sumV2,sumProduct=NULL) ]
  setkey(x,V1)
  x[outDT,sumV2:=i.sumV2]
  x[,product:=NULL]
  outDT
})

# Detected that j uses these columns: V3,V2 
# Assigning to all 100000000 rows
# Direct plonk of unnamed RHS, no copy.
# Detected that j uses these columns: V2,product 
# Finding groups using forderv ... 0.350s elapsed (0.810s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sum(V2), sum(product))'
# GForce optimized j to 'list(gsum(V2), gsum(product))'
# Making each group and running j (GForce TRUE) ... 1.610s elapsed (4.550s cpu) 
# Detected that j uses these columns: sumProduct,sumV2 
# Assigning to all 5 rows
# RHS for item 1 has been duplicated because NAMED is 3, but then is being plonked. length(values)==2; length(cols)==2)
# forder took 0.98 sec
# reorder took 3.35 sec
# Starting bmerge ...done in 0.000s elapsed (0.000s cpu) 
# Detected that j uses these columns: sumV2 
# Assigning to 100000000 row subset of 100000000 rows
# Detected that j uses these columns: product 
# Assigning to all 100000000 rows
# user  system elapsed 
# 11.00    1.75    5.33 


x2 <- copy(y_unkeyed_5groups)
system.time({
  x2[,sumV2:=sum(V2),keyby=V1]
  outDT2 <- x2[, list(sumV2=sumV2[1],out=sum(V3*V2)/sumV2[1]),keyby=V1]
})
# Detected that j uses these columns: V2 
# Finding groups using forderv ... 0.310s elapsed (0.700s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'sum(V2)'
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# collecting discontiguous groups took 0.714s for 5 groups
# eval(j) took 0.079s for 5 calls
# 1.210s elapsed (1.160s cpu) 
# setkey() after the := with keyby= ... forder took 1.03 sec
# reorder took 3.21 sec
# 1.600s elapsed (3.700s cpu) 
# Detected that j uses these columns: sumV2,V3,V2 
# Finding groups using uniqlist on key ... 0.070s elapsed (0.070s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sumV2[1], sum(V3 * V2)/sumV2[1])'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# memcpy contiguous groups took 0.347s for 5 groups
# eval(j) took 0.265s for 5 calls
# 0.630s elapsed (0.620s cpu) 
# user  system elapsed 
# 6.57    0.98    3.99 

all.equal(x,x2)
# TRUE
all.equal(outDT,outDT2)
# TRUE

Well, as it turns out, the efficiency gained by not repeating by statements and by using GForce doesn't matter much when there are only 5 groups. With more groups it does make a difference (although I haven't written this in a way that separates the benefit of using only one by statement without GForce from the benefit of using GForce with multiple by statements):

x <- copy(y_unkeyed_10000groups)
system.time({
  x[, product:=V3*V2]
  outDT <- x[,list(sumV2=sum(V2),sumProduct=sum(product)),keyby=V1]
  outDT[,`:=`(out=sumProduct/sumV2,sumProduct=NULL) ]
  setkey(x,V1)
  x[outDT,sumV2:=i.sumV2]
  x[,product:=NULL]
  outDT
})
# 
# Detected that j uses these columns: V3,V2 
# Assigning to all 100000000 rows
# Direct plonk of unnamed RHS, no copy.
# Detected that j uses these columns: V2,product 
# Finding groups using forderv ... 0.740s elapsed (1.220s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sum(V2), sum(product))'
# GForce optimized j to 'list(gsum(V2), gsum(product))'
# Making each group and running j (GForce TRUE) ... 0.810s elapsed (2.390s cpu) 
# Detected that j uses these columns: sumProduct,sumV2 
# Assigning to all 10000 rows
# RHS for item 1 has been duplicated because NAMED is 3, but then is being plonked. length(values)==2; length(cols)==2)
# forder took 1.97 sec
# reorder took 11.95 sec
# Starting bmerge ...done in 0.000s elapsed (0.000s cpu) 
# Detected that j uses these columns: sumV2 
# Assigning to 100000000 row subset of 100000000 rows
# Detected that j uses these columns: product 
# Assigning to all 100000000 rows
# user  system elapsed 
# 18.37    2.30    7.31 

x2 <- copy(y_unkeyed_10000groups)
system.time({
  x2[,sumV2:=sum(V2),keyby=V1]
  outDT2 <- x2[, list(sumV2=sumV2[1],out=sum(V3*V2)/sumV2[1]),keyby=V1]
})

# Detected that j uses these columns: V2 
# Finding groups using forderv ... 0.770s elapsed (1.490s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'sum(V2)'
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# collecting discontiguous groups took 1.792s for 10000 groups
# eval(j) took 0.111s for 10000 calls
# 3.960s elapsed (3.890s cpu) 
# setkey() after the := with keyby= ... forder took 1.62 sec
# reorder took 13.69 sec
# 4.660s elapsed (14.4s cpu) 
# Detected that j uses these columns: sumV2,V3,V2 
# Finding groups using uniqlist on key ... 0.070s elapsed (0.070s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sumV2[1], sum(V3 * V2)/sumV2[1])'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# memcpy contiguous groups took 0.395s for 10000 groups
# eval(j) took 0.284s for 10000 calls
# 0.690s elapsed (0.650s cpu) 
# user  system elapsed 
# 20.49    1.67   10.19 

all.equal(x,x2)
# TRUE
all.equal(outDT,outDT2)
# TRUE

More generally, data.table is extremely fast, but to extract the fastest, most efficient computation and make full use of the underlying C code you need to pay close attention to data.table's internal workings. I recently learned about the GForce optimization in data.table: specific forms of j statement (involving simple functions like mean and sum) are parsed and executed directly in C when there is a by statement.
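A quick way to see GForce kick in is to run a single query with verbose=TRUE; when j is a simple list of optimizable calls, data.table reports that it rewrote j to the internal gsum/gmean forms (a sketch; the exact wording of the verbose output varies across versions):

```r
library(data.table)
DT <- data.table(g = sample(1:3, 1e5, replace = TRUE), v = rnorm(1e5))

# With verbose = TRUE, the console shows a line along the lines of:
#   GForce optimized j to 'list(gsum(v))'
# A braced j or a subset like v[1] would instead print "GForce is on,
# left j unchanged" and fall back to evaluating j per group in R.
res <- DT[, list(s = sum(v)), by = g, verbose = TRUE]
```

This is why the answer above splits the computation into plain sum() calls first and combines them afterwards, rather than doing everything in one braced j.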

Discussion:
