Usage and Algorithm Explained

Introduction

We can distinguish several types of deduplication when thinking about a table. Across one column, we could use duplicated() (or unique()) to check which records are duplicated (or to remove them). Across one row, we could iterate over rows and use the same duplicated() (or unique()), or gather multiple columns into one (by changing the table from wide format to long) and then perform deduplication across that one, long column.
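
As a minimal base-R sketch of these two simpler cases (the tiny tables below are made up for illustration):

tel <- c(111, 222, 222, 333)
duplicated(tel) # FALSE FALSE TRUE FALSE - the third value repeats the second
unique(tel)     # 111 222 333

# wide to long, then deduplicate the single, long column
wide <- data.frame(tel_1 = c(111, 222), tel_2 = c(222, 333))
long <- unlist(wide, use.names = FALSE) # 111 222 222 333
unique(long)                            # 111 222 333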

Finally, we can think of deduplication as a process of finding duplicated data across different columns and rows, using a transitive relation:

tel_1 tel_2 name
111 222 name1
222 666 name2
444 666 name3
555 555 name4

In the example above, we could treat the records in rows 1, 2 and 3 as one and the same record. Why? Rows 1 and 2 share a phone number (222) and rows 2 and 3 share a phone number (666); since row 1 is a duplicate of row 2 and row 2 is a duplicate of row 3, row 1 is also a duplicate of row 3. The result of this specific deduplication can be seen by calling (after installing the package):

example("dedupe_wide", "dedupewider") # look at the first example

This method was used in CATI surveys (especially on business databases) to minimize the chance that interviewers would independently call the same respondent and thus irritate them. There is a chance that the same person, suitable to participate in the survey, works in more than one company and that these companies exist as separate records in the database (sometimes simply as separate companies, sometimes as branches). When trying to find a participant in company X, an interviewer can be switched to company C to speak with employee E, and a second interviewer, when calling company Y, can also be switched to company C to speak with employee E. If some data in the database (like phone numbers) can be used to collapse companies X, Y and C into one record, the chance of this inconvenience is much lower.

Usage

To attach the package, use: library(dedupewider). We will show the usage of the dedupe_wide() function using the table presented below:

tel_1 tel_2 tel_3 tel_4 tel_5 name nace
111 222 NA NA NA name1 01.19
222 666 NA NA NA name2 01.64
444 666 NA NA NA name3 55.90
555 555 555 555 555 name4 09.10

We already know that rows 1, 2 and 3 will be collapsed into one. Let’s see the default behavior of the function:

library(dedupewider)

initial_table <- data.frame(tel_1 = c(111, 222, 444, 555),
                            tel_2 = c(222, 666, 666, 555),
                            tel_3 = c(NA, NA, NA, 555),
                            tel_4 = c(NA, NA, NA, 555),
                            tel_5 = c(NA, NA, NA, 555),
                            name = paste0("name", 1:4),
                            nace = c("01.19", "01.64", "55.90", "09.10"))

table_deduplicated <- dedupe_wide(initial_table, cols_dedupe = paste0("tel_", 1:5),
                                  cols_expand = "name")
table_deduplicated
tel_1....1 tel_1....2 tel_1....3 tel_1....4 name....1 name....2 name....3 nace
111 222 444 666 name1 name2 name3 01.19
555 NA NA NA name4 NA NA 09.10

We can see that:

- rows 1, 2 and 3 were collapsed into one row,
- the unique phone numbers from these rows were spread across new columns (and the repeated 555 in row 4 was reduced to a single value),
- the name column, passed to cols_expand, was expanded into new columns as well, so no name was lost,
- the nace column, which was not passed to cols_expand, kept only one value per collapsed row (here the value from row 1).

Of course, columns passed to cols_expand are treated as separate columns, so data is not mixed:

table_deduplicated <- dedupe_wide(initial_table, cols_dedupe = paste0("tel_", 1:5),
                                  cols_expand = c("name", "nace"))
table_deduplicated
tel_1....1 tel_1....2 tel_1....3 tel_1....4 name....1 name....2 name....3 nace....1 nace....2 nace....3
111 222 444 666 name1 name2 name3 01.19 01.64 55.90
555 NA NA NA name4 NA NA 09.10 NA NA

Sometimes the user may decide that a fixed number of new columns will be enough (for example, no more than 2 phone numbers per company will be needed). The max_new_cols argument can be used to set the maximum number of new columns to create; it limits the number of columns produced from both cols_dedupe and cols_expand:

table_deduplicated <- dedupe_wide(initial_table, cols_dedupe = paste0("tel_", 1:5),
                                  cols_expand = c("name", "nace"),
                                  max_new_cols = 2)
table_deduplicated
tel_1....1 tel_1....2 name....1 name....2 nace....1 nace....2
111 222 name1 name2 01.19 01.64
555 NA name4 NA 09.10 NA

The functions duplicated() and unique() treat missing data as duplicated data, e.g.:

duplicated(c(NA, NA)) # returns FALSE TRUE
unique(c(NA, NA)) # returns NA (vector of length 1)

But dedupe_wide() does not treat missing data as duplicated data. For example, in the case of this table:

tel_1 tel_2 name
111 222 name5
222 111 name6
NA NA name7
NA NA name8

Rows 1 and 2 will be collapsed into one, but the remaining rows stay unchanged:

tel_1....1 tel_1....2 name....1 name....2
111 222 name5 name6
NA NA name7 NA
NA NA name8 NA
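
This behaviour can be reproduced with a small sketch (the object name na_table is arbitrary):

na_table <- data.frame(tel_1 = c(111, 222, NA, NA),
                       tel_2 = c(222, 111, NA, NA),
                       name = paste0("name", 5:8))

dedupe_wide(na_table, cols_dedupe = c("tel_1", "tel_2"),
            cols_expand = "name")
# rows 1 and 2 are collapsed; the all-NA rows 3 and 4 stay separate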

It is also important to note that this function is implemented using data.table, so parallel computing is possible. To enable it, the user can call data.table::setDTthreads() before using dedupe_wide():

data.table::setDTthreads()
dedupe_wide(initial_table, cols_dedupe = paste0("tel_", 1:5),
            cols_expand = "name")
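
The number of threads can also be set explicitly, e.g.:

data.table::setDTthreads(threads = 2) # restrict data.table to two threads
dedupe_wide(initial_table, cols_dedupe = paste0("tel_", 1:5),
            cols_expand = "name")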

Algorithm

The aim of this section is to show the algorithm used to find duplicated records - it can be used for cross-checking or as an inspiration to implement a more efficient solution if needed. The table below will be our starting point:

tel_1 tel_2 tel_3 tel_4 name
111 777 999 102 name1
222 666 NA NA name2
333 NA NA NA name3
444 666 333 NA name4
333 NA NA NA name5
333 NA NA NA name6
666 NA NA NA name7
777 NA NA NA name8
333 NA NA NA name9
333 777 101 103 name10
333 888 102 NA name11
333 777 777 103 name12
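
To follow the steps below in R, the table can be recreated like this (the name algo_table is used only in these sketches):

algo_table <- data.frame(
  tel_1 = c(111, 222, 333, 444, 333, 333, 666, 777, 333, 333, 333, 333),
  tel_2 = c(777, 666, NA, 666, NA, NA, NA, NA, NA, 777, 888, 777),
  tel_3 = c(999, NA, NA, 333, NA, NA, NA, NA, NA, 101, 102, 777),
  tel_4 = c(102, NA, NA, NA, NA, NA, NA, NA, NA, 103, NA, 103),
  name = paste0("name", 1:12)
)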

We can split the algorithm into two main steps - finding duplicates and cascading them.

Find duplicates

First, we need to gather all columns which will be used for deduplication into one column, keeping row indexes so that we know which values belong to the same row (only the first 6 rows are shown below):

....idx value
1 111
2 222
3 333
4 444
5 333
6 333
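
A sketch of this gathering step with data.table, continuing from algo_table above (row_idx is just the name used in these sketches; the package itself uses an internal index column like the ....idx shown above):

library(data.table)

dt <- as.data.table(algo_table)
dt[, row_idx := .I] # remember the original row number

long <- melt(dt[, .(row_idx, tel_1, tel_2, tel_3, tel_4)],
             id.vars = "row_idx", value.name = "value", na.rm = TRUE)
head(long[, .(row_idx, value)]) # matches the first 6 rows shown above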

Now we can group by value and collect the row indexes into column V1 (values that appear in only one row cannot indicate duplicates, so they are omitted):

V1 main_index
3, 5, 6, 9, 10, 11, 12, 4 3
7, 2, 4 2
8, 1, 10, 12 1
11, 1 1
10, 12 10

We also want to have a main index, which will be used as a replacement. It is the lowest index in each group, so that later we can easily keep the original order of rows.

We can see that not all indexes are connected yet as they should be, e.g. index 7 is not connected with index 3, but it should be (because both 7 and 3 are connected with 4).
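
Continuing the sketch, the grouping and the choice of main index could look like this (values appearing in only one row are dropped):

groups <- long[, .(V1 = list(unique(row_idx)),
                   main_index = min(row_idx)), by = value]
groups <- groups[lengths(V1) > 1] # keep only values shared by several rows
groups[, .(V1, main_index)]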

Cascade

To fix this, we will - step by step, for each index - replace the rest indexes by the main index. First, let’s make pairs consisting of a main index and one of the rest indexes:

main_index rest_indexes
10 12
3 12
1 12
3 11
1 11
3 10
1 10
3 9
1 8
2 7
3 6
3 5
3 4
2 4

We have also sorted them in descending order, first by rest_indexes, then by main_index, and removed duplicated pairs.
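
These pairs can be built from the groups computed in the previous sketch, for example:

pairs <- rbindlist(Map(
  function(idx, main) data.table(main_index = main,
                                 rest_indexes = setdiff(idx, main)),
  groups$V1, groups$main_index
))
pairs <- unique(pairs)[order(-rest_indexes, -main_index)]
pairs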

We can now group by rest_indexes and, within each group, replace each subsequent rest_index by the main_index of the previous pair, e.g. for 12 we will have:

main_index rest_indexes
10 12
3 10
1 3
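
To make that operation explicit, here is a tiny sketch for the group of rest_index 12 (not the package's actual code):

grp <- data.frame(main_index = c(10, 3, 1), rest_indexes = c(12, 12, 12))
grp$rest_indexes <- c(grp$rest_indexes[1], head(grp$main_index, -1))
grp # gives the pairs (10, 12), (3, 10), (1, 3) shown above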

After iterating over each index (each iteration needs to end by reordering the table - again sorting in descending order, first by rest_indexes, then by main_index) and removing duplicated pairs as well as multiple occurrences of the same rest_index, we will get:

main_index rest_indexes
10 12
3 11
3 10
3 9
1 8
2 7
3 6
3 5
3 4
2 3
1 2

The last thing to do is to replace each rest_index by its main_index in our main table (the table passed to dedupe_wide()), starting from the top of our pairs.
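
Finally, a sketch of this replacement using the cascaded pairs above: a vector of group labels (one per row of algo_table) is updated pair by pair, and because every occurrence of a rest_index is replaced, earlier replacements cascade further down the chain:

final_pairs <- data.frame(
  main_index   = c(10, 3, 3, 3, 1, 2, 3, 3, 3, 2, 1),
  rest_indexes = c(12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2)
)

group_of <- seq_len(12) # initially every row is its own record
for (i in seq_len(nrow(final_pairs))) {
  group_of[group_of == final_pairs$rest_indexes[i]] <- final_pairs$main_index[i]
}
group_of # here every index cascades down to 1: all 12 rows form one record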