Counting unique/distinct values by group in a data frame


76

Suppose I have the following data frame:

> myvec
    name order_no
1    Amy       12
2   Jack       14
3   Jack       16
4   Dave       11
5    Amy       12
6   Jack       16
7    Tom       19
8  Larry       22
9    Tom       19
10  Dave       11
11  Jack       17
12   Tom       20
13   Amy       23
14  Jack       16

I want to count the number of distinct order_no values for each name. It should produce the following result:

name    number_of_distinct_orders
Amy     2
Jack    3
Dave    1
Tom     2
Larry   1

How can I accomplish this?


Is there a way to do the same thing using SQL within R?
user3581800 '16

2
@user3581800 With the sqldf package you can do sqldf("SELECT name,COUNT(distinct(order_no)) FROM myvec GROUP BY name")
jogo

Answers:


31

This should do the trick:

library(plyr)
ddply(myvec, ~name, summarise, number_of_distinct_orders = length(unique(order_no)))

This requires the plyr package.


11
This is outdated and should not be the accepted answer: plyr has been retired since 2014. dplyr or data.table are the packages to use nowadays.
smci

@smci Agreed. This is a problem with many R-related answers, especially when they predate what we do today with the tidyverse.
Aren Cambre

79

A data.table approach:

library(data.table)
DT <- data.table(myvec)

DT[, .(number_of_distinct_orders = length(unique(order_no))), by = name]

data.table v1.9.5 and later has a built-in uniqueN function:

DT[, .(number_of_distinct_orders = uniqueN(order_no)), by = name]

And how would you get the number of unique values for every column, e.g. sapply(mydata, function(x) length(unique(x)))?
skan

1
@skan dt[, lapply(.SD, uniqueN)]. See rawgit.com/wiki/Rdatatable/data.table/vignettes/… and search for .SD.
seanv507 '17
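
For reference, here is a minimal self-contained sketch of seanv507's suggestion; the tiny example table is made up for illustration:

library(data.table)

DT <- data.table(name     = c("Amy", "Jack", "Jack", "Dave", "Amy"),
                 order_no = c(12, 14, 16, 11, 12))

# .SD is the Subset of Data: without by, it covers every column of the table
DT[, lapply(.SD, uniqueN)]             # unique count for each column
DT[, lapply(.SD, uniqueN), by = name]  # per group; the by column is excluded from .SD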

I tried uniqueN and it is much slower than length + unique. Here are my timings:

system.time({ dt[, .(fecha = uniqueN(fecha)), by = id_pedido] })
#   user  system elapsed
#  69.91   48.36  112.10
system.time({ dt[, .(fecha = length(unique(fecha))), by = id_pedido] })
#   user  system elapsed
#   9.92    0.13    9.81

jormaga
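
A minimal sketch for reproducing jormaga's comparison on synthetic data; the sizes and the id_pedido/fecha columns below are invented for illustration, not jormaga's actual data:

library(data.table)
library(microbenchmark)

set.seed(42)
# many small groups -- the case where per-group overhead shows
dt <- data.table(id_pedido = sample(1e5, 1e6, replace = TRUE),
                 fecha     = sample(365, 1e6, replace = TRUE))

microbenchmark(
  uniqueN       = dt[, .(fecha = uniqueN(fecha)), by = id_pedido],
  length_unique = dt[, .(fecha = length(unique(fecha))), by = id_pedido],
  times = 5
)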

66

With dplyr you could use n_distinct to "count the number of unique values":

library(dplyr)
myvec %>%
  group_by(name) %>%
  summarise(n_distinct(order_no))

I don't recommend this. If you want to use dplyr, use length(unique(order_no)) instead of n_distinct(order_no), because n_distinct is really slow.
jormaga

@jormaga This is a known issue; see "Performance degradation when summarising many groups", which also shows your workaround: "We know about this and we have a plan, although it won't happen for 1.0.0."
Henrik
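
For completeness, jormaga's suggested workaround as a full pipeline, assuming the myvec data frame from the question:

library(dplyr)

# same result as n_distinct(), but avoids its per-group overhead
myvec %>%
  group_by(name) %>%
  summarise(number_of_distinct_orders = length(unique(order_no)))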

43

Here is a simple solution with the aggregate function:

aggregate(order_no ~ name, myvec, function(x) length(unique(x)))

15

Below is a benchmark of @David Arenburg's solution, along with some of the solutions posted here (@mnel, @Sven Hohenstein, @Henrik):

library(dplyr)
library(data.table)
library(microbenchmark)
library(tidyr)
library(ggplot2)

df <- mtcars
DT <- as.data.table(df)
DT_32k <- rbindlist(replicate(1e3, mtcars, simplify = FALSE))
df_32k <- as.data.frame(DT_32k)
DT_32M <- rbindlist(replicate(1e6, mtcars, simplify = FALSE))
df_32M <- as.data.frame(DT_32M)
bench <- microbenchmark(
  base_32 = aggregate(hp ~ cyl, df, function(x) length(unique(x))),
  base_32k = aggregate(hp ~ cyl, df_32k, function(x) length(unique(x))),
  base_32M = aggregate(hp ~ cyl, df_32M, function(x) length(unique(x))),
  dplyr_32 = summarise(group_by(df, cyl), count = n_distinct(hp)),
  dplyr_32k = summarise(group_by(df_32k, cyl), count = n_distinct(hp)),
  dplyr_32M = summarise(group_by(df_32M, cyl), count = n_distinct(hp)),
  data.table_32 = DT[, .(count = uniqueN(hp)), by = cyl],
  data.table_32k = DT_32k[, .(count = uniqueN(hp)), by = cyl],
  data.table_32M = DT_32M[, .(count = uniqueN(hp)), by = cyl],
  times = 10
)

The results:

print(bench)

# Unit: microseconds
#            expr          min           lq         mean       median           uq          max neval  cld
#         base_32      816.153     1064.817 1.231248e+03 1.134542e+03     1263.152     2430.191    10 a   
#        base_32k    38045.080    38618.383 3.976884e+04 3.962228e+04    40399.740    42825.633    10 a   
#        base_32M 35065417.492 35143502.958 3.565601e+07 3.534793e+07 35802258.435 37015121.086    10    d
#        dplyr_32     2211.131     2292.499 1.211404e+04 2.370046e+03     2656.419    99510.280    10 a   
#       dplyr_32k     3796.442     4033.207 4.434725e+03 4.159054e+03     4857.402     5514.646    10 a   
#       dplyr_32M  1536183.034  1541187.073 1.580769e+06 1.565711e+06  1600732.034  1733709.195    10  b  
#   data.table_32      403.163      413.253 5.156662e+02 5.197515e+02      619.093      628.430    10 a   
#  data.table_32k     2208.477     2374.454 2.494886e+03 2.448170e+03     2557.604     3085.508    10 a   
#  data.table_32M  2011155.330  2033037.689 2.074020e+06 2.052079e+06  2078231.776  2189809.835    10   c 

The plot:

as_tibble(bench) %>% 
  group_by(expr) %>% 
  summarise(time = median(time)) %>% 
  separate(expr, c("framework", "nrow"), "_", remove = FALSE) %>% 
  mutate(nrow = recode(nrow, "32" = 32, "32k" = 32e3, "32M" = 32e6),
         time = time / 1e3) %>% 
  ggplot(aes(nrow, time, col = framework)) +
  geom_line() +
  scale_x_log10() +
  scale_y_log10() + ylab("microseconds")

(Figure: benchmark plot, aggregate vs. dplyr vs. data.table)

Session info:

sessionInfo()
# R version 3.4.1 (2017-06-30)
# Platform: x86_64-pc-linux-gnu (64-bit)
# Running under: Linux Mint 18
# 
# Matrix products: default
# BLAS: /usr/lib/atlas-base/atlas/libblas.so.3.0
# LAPACK: /usr/lib/atlas-base/atlas/liblapack.so.3.0
# 
# locale:
# [1] LC_CTYPE=fr_FR.UTF-8       LC_NUMERIC=C               LC_TIME=fr_FR.UTF-8       
# [4] LC_COLLATE=fr_FR.UTF-8     LC_MONETARY=fr_FR.UTF-8    LC_MESSAGES=fr_FR.UTF-8   
# [7] LC_PAPER=fr_FR.UTF-8       LC_NAME=C                  LC_ADDRESS=C              
# [10] LC_TELEPHONE=C             LC_MEASUREMENT=fr_FR.UTF-8 LC_IDENTIFICATION=C       
# 
# attached base packages:
# [1] stats     graphics  grDevices utils     datasets  methods   base     
# 
# other attached packages:
# [1] ggplot2_2.2.1          tidyr_0.6.3            bindrcpp_0.2           stringr_1.2.0         
# [5] microbenchmark_1.4-2.1 data.table_1.10.4      dplyr_0.7.1           
# 
# loaded via a namespace (and not attached):
# [1] Rcpp_0.12.11     compiler_3.4.1   plyr_1.8.4       bindr_0.1        tools_3.4.1      digest_0.6.12   
# [7] tibble_1.3.3     gtable_0.2.0     lattice_0.20-35  pkgconfig_2.0.1  rlang_0.1.1      Matrix_1.2-10   
# [13] mvtnorm_1.0-6    grid_3.4.1       glue_1.1.1       R6_2.2.2         survival_2.41-3  multcomp_1.4-6  
# [19] TH.data_1.0-8    magrittr_1.5     scales_0.4.1     codetools_0.2-15 splines_3.4.1    MASS_7.3-47     
# [25] assertthat_0.2.0 colorspace_1.3-2 labeling_0.3     sandwich_2.3-4   stringi_1.1.5    lazyeval_0.2.0  
# [31] munsell_0.4.3    zoo_1.8-0 

9

Here is a solution with sqldf:

library("sqldf")

myvec <- read.table(header=TRUE, text=
"   name order_no
1    Amy       12
2   Jack       14
3   Jack       16
4   Dave       11
5    Amy       12
6   Jack       16
7    Tom       19
8  Larry       22
9    Tom       19
10  Dave       11
11  Jack       17
12   Tom       20
13   Amy       23
14  Jack       16")
sqldf("SELECT name,COUNT(distinct(order_no)) as number_of_distinct_orders FROM myvec GROUP BY name")
#    name number_of_distinct_orders
# 1   Amy                         2
# 2  Dave                         1
# 3  Jack                         3
# 4 Larry                         1
# 5   Tom                         2

8

You can simply use the built-in R functions tapply and length:

tapply(myvec$order_no, myvec$name, FUN = function(x) length(unique(x)))

The function should probably be length(unique()), as in @Sven's answer. Apart from that, this also demonstrates the correct use of tapply.
Roman Luštrik '12

Sorry, I did not notice the "distinct" caveat. Roman is correct: using FUN = function(x) length(unique(x)) will work.
Jeffrey Evans

3

This also works, but it is not as eloquent as the plyr solution:

x <- sapply(split(myvec, myvec$name),  function(x) length(unique(x[, 2]))) 
data.frame(names=names(x), number_of_distinct_orders=x, row.names = NULL)

Sorry, but this gives the wrong results: Amy: 3, Dave: 2, Jack: 5, Larry: 1, Tom: 3
Mehper C. Palavuzlar '12

@Mehper Sorry about that, I misread what you were after. See my edited solution; I still prefer the plyr solution.
Tyler Rinker

2
my.1 <- table(myvec)
my.1[my.1 != 0] <- 1
rowSums(my.1)

Maybe wrap rowSums(my.1) in stack, i.e. stack(rowSums(my.1))[2:1], to get a data frame back.
Jaap

You can make it a one-liner: rowSums(table(myvec) != 0)
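
Putting the two comments above together gives a one-liner that returns a data frame (a sketch, assuming the myvec from the question):

# table() counts name/order_no pairs, != 0 marks which pairs occur,
# rowSums() counts distinct orders per name, stack() turns the named
# vector into a data frame, and [2:1] puts the name column first
stack(rowSums(table(myvec) != 0))[2:1]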

0

Using table:

library(magrittr)
myvec %>% unique %>% '['(1) %>% table %>% as.data.frame %>%
  setNames(c("name","number_of_distinct_orders"))

#    name number_of_distinct_orders
# 1   Amy                         2
# 2  Dave                         1
# 3  Jack                         3
# 4 Larry                         1
# 5   Tom                         2

0

A few years late... I had a similar requirement and ended up writing my own solution. Applying it here:

x <- data.frame(
  Name    = c("Amy", "Jack", "Jack", "Dave", "Amy", "Jack", "Tom", "Larry", "Tom", "Dave", "Jack", "Tom", "Amy", "Jack"),
  OrderNo = c(12, 14, 16, 11, 12, 16, 19, 22, 19, 11, 17, 20, 23, 16)
)

table(sub("~.*", "", unique(paste(x$Name, x$OrderNo, sep = "~", collapse = NULL))))

  Amy  Dave  Jack Larry   Tom
    2     1     3     1     2