Python – The fastest way to perform cosine similarity on 10 million pairs of 1×20 vectors


The fastest way to perform cosine similarity on 10 million pairs of 1×20 vectors

I have a 2-column pandas df with 2.7 million rows; each cell holds a normalized vector of length 20.

I want to compute the cosine similarity between the two columns row by row: column1 – row1 with column2 – row1, column1 – row2 with column2 – row2… and so on through all 2.7 million rows.

I’ve tried looping, but it’s very slow. What is the fastest method?

This is what I use now:

from scipy import spatial

# Row-by-row loop: cosine similarity = 1 - cosine distance
for index, row in tempdf.iterrows():
    x = 1 - spatial.distance.cosine(tempdf['unit_vector'][index],
                                    tempdf['ave_unit_vector'][index])
    print(index, x)

Data:

tempdf['unit_vector']
Out[185]: 
0          [0.7071067811865475, 0.7071067811865475, 0.0, ...
1          [0.634997029655247, 0.634997029655247, 0.43995...
2          [0.5233710392524532, 0.5233710392524532, 0.552...
3          [0.4792468085399227, 0.4792468085399227, 0.505...
4          [0.4937468195427678, 0.4937468195427678, 0.492...
5          [0.49444897739151283, 0.49444897739151283, 0.5...
6          [0.49548793862403173, 0.49548793862403173, 0.4...
7          [0.5027211862475275, 0.5027211862475275, 0.495...
8          [0.5136216906905179, 0.5136216906905179, 0.489...
9          [0.5035958124287837, 0.5035958124287837, 0.508...
10         [0.5037995208120967, 0.5037995208120967, 0.493...

tempdf['ave_unit_vector']
Out[186]: 
0          [0.5024525269125278, 0.5024525269125278, 0.494...
1          [0.5010905514059507, 0.5010905514059507, 0.499...
2          [0.4993456468410199, 0.4993456468410199, 0.501...
3          [0.5005492367626839, 0.5005492367626839, 0.498...
4          [0.4999384715200533, 0.4999384715200533, 0.501...
5          [0.49836832120891517, 0.49836832120891517, 0.5...
6          [0.49842376222388335, 0.49842376222388335, 0.5...
7          [0.4984869391887457, 0.4984869391887457, 0.500...
8          [0.4990867844970344, 0.4990867844970344, 0.499...
9          [0.49977780370532715, 0.49977780370532715, 0.4...
10         [0.5003161478128204, 0.5003161478128204, 0.499...

This is not the same dataset, but the following builds a usable test df with vector columns "B" and "C":

import pandas as pd

df = pd.DataFrame(list(range(0, 1000)), columns=['A'])

# Build five shifted copies of column A
for i in range(0, 5):
    df['New_{}'.format(i)] = df['A'].shift(i).tolist()

cols = len(df.columns)
start_col = cols - 6

# Pack the six numeric columns into one list-valued column per row
df['B'] = df.iloc[:, start_col:cols].values.tolist()
df['C'] = df['B'] * 2
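A quick sanity check of the result, assuming the df built above. Note that multiplying a list-valued Series by 2 repeats each list rather than scaling its values:

# Each cell in 'B' is a 6-element list; 'C' repeats it,
# since list * 2 concatenates the list with itself.
print(len(df['B'].iloc[0]))  # 6
print(len(df['C'].iloc[0]))  # 12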

Solution

This is the fastest method I’ve tried. It cut the calculation time from more than 30 minutes with the loop to about 5 seconds:

import numpy as np

# Multiply the paired vectors element-wise, row by row
tempdf['vector_mult'] = np.multiply(tempdf['unit_vector'], tempdf['ave_unit_vector'])
# Sum each product row: for unit vectors, this dot product is the cosine similarity
tempdf['cosinesim'] = tempdf['vector_mult'].apply(lambda x: sum(x))

This works because my vectors are already unit vectors: cosine similarity is u · v / (|u| |v|), and for unit vectors the denominator is 1, so the row-wise dot product alone gives the similarity.

The first line multiplies the vectors in the two columns row by row; the second sums each product vector, again row by row, yielding the dot product. The tricky part is that the pre-built aggregation functions don’t operate row by row on list-valued columns out of the box: they want to sum down each column and then compute a single result, hence the apply with a per-row sum.
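If the list columns can be stacked into plain 2-D arrays, an even faster variant should be possible. The sketch below is my own, not part of the original answer (the names u and v are mine, and it assumes every row holds a length-20 list): it converts each column to a float array once, then computes all the dot products in a single NumPy call.

import numpy as np

# Stack the list-valued columns into 2.7M x 20 float arrays (one-time cost);
# assumes every row holds a list of the same length.
u = np.vstack(tempdf['unit_vector'].to_numpy())
v = np.vstack(tempdf['ave_unit_vector'].to_numpy())

# Row-wise dot products in one compiled call; for unit vectors
# this is exactly the cosine similarity.
tempdf['cosinesim'] = np.einsum('ij,ij->i', u, v)

After the one-time stacking cost, the multiply-and-sum runs entirely in compiled code instead of making a Python-level call per row.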
