Inspired partly by this and this Stack Overflow question, I wanted to find the fastest way to create a new column as a combination of others using dplyr.
First, let’s create some example data:
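The original snippet is not reproduced here; the following is a minimal sketch of such data, where the seed, the number of rows, and the column names A–F are assumptions made for illustration:

```r
library(dplyr)

# A data frame with 10,000 rows and six binary (0/1) columns named A-F
set.seed(123)
n <- 10000
df <- as_tibble(
  matrix(rbinom(6 * n, size = 1, prob = 0.5), ncol = 6,
         dimnames = list(NULL, LETTERS[1:6]))
)
```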
We have a data frame with 6 binary columns, and we want to create another one which is the sum of these columns. The most straightforward way is to use mutate() directly:
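A sketch of the direct approach, using the column names from the example data above:

```r
# Vectorized sum: every column name is spelled out by hand
df %>%
  mutate(sum = A + B + C + D + E + F)
```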
This is probably going to be very fast, since it takes full advantage of R’s vectorized operations. The downside is that if we want to sum up, say, 20 columns, we have to write out the names of all of them.
The second approach is to use tidy data principles to transform the previous data frame into long form and then perform the operation by group:
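One possible implementation, assuming tidyr’s gather() and an id column added so that each original row becomes its own group:

```r
library(tidyr)

df %>%
  mutate(id = row_number()) %>%      # keep track of the original rows
  gather("key", "value", A:F) %>%    # long form: one row per (id, column) pair
  group_by(id) %>%
  summarise(sum = sum(value))        # sum within each original row
```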
The downside of this approach is that we have as many groups as rows in the original data frame, and grouped operations are usually not very efficient when the number of groups is very large. Of course, depending on the meaning of the columns “A”, “B”, etc., the data frame df may not be a tidy dataset, and it is always a good idea to transform such data using tidy data principles. However, it may also already be in tidy form.
The next possibility is to iterate over the rows of the original data, summing them up. Here we can use the functions apply() or rowSums() from base R and pmap() from the purrr package.
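Sketches of the three row-wise versions (the select() calls assume the six columns are named A through F, as in the example data):

```r
library(purrr)

# apply(): loop over rows; the data frame is coerced to a matrix first
df %>% mutate(sum = apply(select(., A:F), 1, sum))

# rowSums(): base R, specialized for row-wise sums
df %>% mutate(sum = rowSums(select(., A:F)))

# pmap_dbl(): call sum() once per row, returning a double vector
df %>% mutate(sum = pmap_dbl(select(., A:F), sum))
```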
These functions perform the same operation but differ in many aspects:
apply() coerces the data frame into a matrix, so care needs to be taken with non-numeric columns.
rowSums() can only be used if we want to compute the sum (or, with rowMeans(), the mean), but not for other operations.
pmap() has variants that let you specify the type of the output (pmap_dbl(), pmap_lgl()) and are thus safer: if the output cannot be coerced to the given type, an exception is thrown.
Finally, we have the reduce() function from the purrr package (see this chapter from “Advanced R” by Hadley Wickham to learn more). This function lets us take full advantage of R’s vectorized operations and write the operation very concisely, whether it be 6 or 20 columns.
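A sketch of this approach over the selected columns:

```r
# reduce() folds `+` over the list of column vectors, so the sum stays
# vectorized no matter how many columns are involved
df %>% mutate(sum = reduce(select(., A:F), `+`))
```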
We can measure the running time of every snippet of code using the microbenchmark package:
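One way to set the benchmark up (the expression labels and times = 100 are assumptions):

```r
library(microbenchmark)

mb <- microbenchmark(
  mutate   = df %>% mutate(sum = A + B + C + D + E + F),
  gather   = df %>% mutate(id = row_number()) %>%
    gather("key", "value", A:F) %>%
    group_by(id) %>% summarise(sum = sum(value)),
  apply    = df %>% mutate(sum = apply(select(., A:F), 1, sum)),
  rowSums  = df %>% mutate(sum = rowSums(select(., A:F))),
  pmap_dbl = df %>% mutate(sum = pmap_dbl(select(., A:F), sum)),
  reduce   = df %>% mutate(sum = reduce(select(., A:F), `+`)),
  times    = 100
)
mb
```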
The results are mostly as expected. The vectorized code is the fastest, but it is not very concise. The reduce() function is also very fast and can be used with any number of columns. The slowest is the gather() approach, and it should probably be avoided unless you already need to tidy your data.
Two things were really surprising:
rowSums() is much faster than apply() and almost as fast as reduce(). As mentioned before, it can only be used when computing the sum or the mean.
apply() is twice as fast as pmap_dbl(), probably because of the extra checks performed by pmap(). However, I would have expected them to be much closer.
We end this post with a violin plot of the results:
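The microbenchmark package provides an autoplot() method (via ggplot2) that draws this kind of plot directly from the object created above:

```r
library(ggplot2)

# Violin plot of the timing distributions for each expression
autoplot(mb)
```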