【Question Title】: Calculate average based on condition
【Posted】: 2018-04-24 11:19:46
【Description】:

I have a table:

Country ClaimId ClaimItem   ClaimAmt
IN      C1      1           100
IN      C1      2           200
US      C2      1           100
US      C2      2           100
US      C2      3           100
US      C3      1           100
US      C3      2           100
UK      C4      1           100
UK      C4      2           200
UK      C1      1           100
UK      C1      2           200

Here I want to compute, for each country, the average of the per-ClaimId totals, so that my expected table looks like:

Country ClaimId ClaimItem   ClaimAmt  Avg
IN      C1      1           100       300
IN      C1      2           200       300
US      C2      1           100       250
US      C2      2           100       250
US      C2      3           100       250
US      C3      1           100       250
US      C3      2           100       250
UK      C4      1           100       300
UK      C4      2           200       300
UK      C1      1           100       300
UK      C1      2           200       300

Any ideas on how to achieve the expected table? Thanks.

Here is the sample data:

> dput(claims)
structure(list(Country = structure(c(1L, 1L, 3L, 3L, 3L, 3L, 
3L, 2L, 2L, 2L, 2L), .Label = c("IN", "UK", "US"), class = "factor"), 
    ClaimId = structure(c(1L, 1L, 2L, 2L, 2L, 3L, 3L, 4L, 4L, 
    1L, 1L), .Label = c("C1", "C2", "C3", "C4"), class = "factor"), 
    ClaimItem = c(1L, 2L, 1L, 2L, 3L, 1L, 2L, 1L, 2L, 1L, 2L), 
    ClaimAmt = c(100L, 200L, 100L, 100L, 100L, 100L, 100L, 100L, 
    200L, 100L, 200L)), .Names = c("Country", "ClaimId", "ClaimItem", 
"ClaimAmt"), class = "data.frame", row.names = c(NA, -11L))

【Comments】:

  • Why is the average of 100 and 200 equal to 300? (ClaimId == 'C1')
  • Rui Barradas, my mistake, I have edited the post.
  • I am looking for the average per country, across ClaimIds.
  • @Parfait I reopened it.
  • @Deepesh You have to mention that you want to sum per (Country, ClaimId) and then average those sums per country.

Tags: r dataframe average


【Solution 1】:

Here is a data.table solution:

claims <- 
structure(list(Country = structure(c(1L, 1L, 3L, 3L, 3L, 3L, 3L, 2L, 2L, 2L, 2L), 
  .Label = c("IN", "UK", "US"), class = "factor"), 
ClaimId = structure(c(1L, 1L, 2L, 2L, 2L, 3L, 3L, 4L, 4L, 1L, 1L), 
 .Label = c("C1", "C2", "C3", "C4"), class = "factor"), 
ClaimItem = c(1L, 2L, 1L, 2L, 3L, 1L, 2L, 1L, 2L, 1L, 2L), 
ClaimAmt = c(100L, 200L, 100L, 100L, 100L, 100L, 100L, 100L, 200L, 100L, 200L)), 
 .Names = c("Country", "ClaimId", "ClaimItem", "ClaimAmt"), 
class = "data.frame", row.names = c(NA, -11L))

library("data.table")
setDT(claims)
claims[, Avg:=sum(ClaimAmt)/uniqueN(ClaimId), Country][]

# > claims[, Avg:=sum(ClaimAmt)/uniqueN(ClaimId), Country][]
#     Country ClaimId ClaimItem ClaimAmt Avg
#  1:      IN      C1         1      100 300
#  2:      IN      C1         2      200 300
#  3:      US      C2         1      100 250
#  4:      US      C2         2      100 250
#  5:      US      C2         3      100 250
#  6:      US      C3         1      100 250
#  7:      US      C3         2      100 250
#  8:      UK      C4         1      100 300
#  9:      UK      C4         2      200 300
# 10:      UK      C1         1      100 300
# 11:      UK      C1         2      200 300
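The one-liner above divides `sum(ClaimAmt)` per country by the number of distinct ClaimIds in that country. The same result can be reached in two explicit steps, matching the comment on the question: first sum per (Country, ClaimId), then average those sums per country. A minimal sketch (the intermediate column name `ClaimTotal` is my own, not from the original answer):

```r
library(data.table)

claims <- data.table(
  Country  = c("IN","IN","US","US","US","US","US","UK","UK","UK","UK"),
  ClaimId  = c("C1","C1","C2","C2","C2","C3","C3","C4","C4","C1","C1"),
  ClaimAmt = c(100, 200, 100, 100, 100, 100, 100, 100, 200, 100, 200)
)

# Step 1: total claimed amount per (Country, ClaimId)
totals <- claims[, .(ClaimTotal = sum(ClaimAmt)), by = .(Country, ClaimId)]

# Step 2: average those per-claim totals within each country
avgs <- totals[, .(Avg = mean(ClaimTotal)), by = Country]
avgs
# Avg is 300 for IN, 250 for US, 300 for UK
```

Joining `avgs` back onto `claims` by `Country` reproduces the `Avg` column from the one-liner.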

【Discussion】:

【Solution 2】:

Consider the ratio of two base R `ave` calls: the sum of ClaimAmt by Country, divided by the number of unique ClaimIds, also by Country:

claims$Avg <- with(claims, ave(ClaimAmt, Country, FUN = sum) /
                     ave(as.integer(ClaimId), Country,
                         FUN = function(g) length(unique(g))))
claims

#    Country ClaimId ClaimItem ClaimAmt Avg
# 1       IN      C1         1      100 300
# 2       IN      C1         2      200 300
# 3       US      C2         1      100 250
# 4       US      C2         2      100 250
# 5       US      C2         3      100 250
# 6       US      C3         1      100 250
# 7       US      C3         2      100 250
# 8       UK      C4         1      100 300
# 9       UK      C4         2      200 300
# 10      UK      C1         1      100 300
# 11      UK      C1         2      200 300
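For completeness, the same sum-then-average logic can be written as a grouped mutate with dplyr; this is an editor's sketch, not part of the original answers:

```r
library(dplyr)

claims <- data.frame(
  Country   = c("IN","IN","US","US","US","US","US","UK","UK","UK","UK"),
  ClaimId   = c("C1","C1","C2","C2","C2","C3","C3","C4","C4","C1","C1"),
  ClaimItem = c(1, 2, 1, 2, 3, 1, 2, 1, 2, 1, 2),
  ClaimAmt  = c(100, 200, 100, 100, 100, 100, 100, 100, 200, 100, 200)
)

# Per country: total amount divided by the number of distinct claims
claims <- claims %>%
  group_by(Country) %>%
  mutate(Avg = sum(ClaimAmt) / n_distinct(ClaimId)) %>%
  ungroup()
```

`n_distinct(ClaimId)` plays the same role as data.table's `uniqueN` and the `length(unique(g))` in the `ave` answer.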

【Discussion】:
