A project requirement led me to explore methods of quantifying the similarity between two or more objects.

The objects were two images of humans performing poses. The objective was to quantify how similar the poses in the images were.

Trying to solve this problem led me into the world of mathematics, and I’m not a fan of mathematics.

While researching, I stumbled upon a similarity measurement called cosine similarity.

## What Is Cosine Similarity?

Cosine similarity is a measurement that quantifies the similarity between two or more vectors. It’s the cosine of the angle between vectors, which are typically non-zero and within an inner product space.

I was intrigued by its simplicity and, more importantly, by how straightforward it was to understand.

In this article, I will explain the basics of cosine similarity. I’ll also provide several applications and domains where cosine similarity is leveraged, and finally, there will be a code snippet of the algorithm in Swift.

## Cosine Similarity Explained

Cosine similarity is defined mathematically as the dot product of the vectors divided by the product of their Euclidean norms, or magnitudes: `cos(θ) = (A · B) / (||A|| ||B||)`.

Cosine similarity is a commonly used similarity measure that can be found in libraries and tools such as MATLAB, scikit-learn and TensorFlow.

Below is a quick implementation of the cosine similarity logic in Swift.

```swift
import Foundation

func cosineSimilarity(A: [CGFloat], B: [CGFloat]) -> CGFloat {
    // Dot product: sum of the element-wise products of A and B.
    let dotProduct = zip(A, B).map(*).reduce(0, +)
    // Euclidean norm (magnitude) of each vector.
    let magnitudeOfA = A.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magnitudeOfB = B.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dotProduct / (magnitudeOfA * magnitudeOfB)
}
```

Cosine similarity is the cosine of the angle between the two non-zero vectors A and B. In general it ranges from -1 to 1, but for vectors with no negative components, such as those in this article's examples, it stays between 0 and 1.

If the angle between the two vectors is 90 degrees, the cosine similarity is 0: the two vectors are orthogonal, or perpendicular, to each other.
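To make this concrete, here is a minimal sketch in Swift (using plain `Double` arrays for simplicity) showing that two perpendicular vectors score 0:

```swift
// Cosine similarity: dot(A, B) / (||A|| * ||B||)
func cosine(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dot / (magA * magB)
}

// [1, 0] and [0, 1] are perpendicular (90 degrees apart),
// so their dot product, and therefore the similarity, is 0.
let similarity = cosine([1, 0], [0, 1])
print(similarity) // 0.0
```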

## What Is a Good Cosine Similarity Score?

For vectors with non-negative components, cosine similarity is bound between 0 and 1 (in general, it ranges from -1 to 1). The closer the value is to 0, the closer the two vectors are to being orthogonal, or perpendicular, to each other. The closer the value is to 1, the smaller the angle and the more similar the vectors.

As the cosine similarity measurement gets closer to 1, the angle between the two vectors A and B gets smaller. The images below depict this more clearly.

[Figure: Two vectors with 98 percent similarity based on the cosine of the angle between the vectors. | Image: Richmond Alake]

[Figure: Two vectors with 55 percent similarity based on the cosine of the angle between the vectors. | Image: Richmond Alake]


## Cosine Similarity Applications

Cosine similarity has its place in several applications and algorithms.

From the world of computer vision to data mining, measuring the similarity between two vectors represented in a higher-dimensional space has many uses.

Let’s go through a couple of scenarios and applications where the cosine similarity measure is leveraged.

### 1. Document Similarity

Any scenario that requires identifying the similarity between pairs of documents is a good use case for cosine similarity as a quantification of the similarity between two objects.

To quantify the similarity between two documents, you first need to convert the words or phrases within each document or sentence into a vectorized representation.

The vector representations of the documents can then be used within the cosine similarity formula to obtain a quantification of similarity.

In the scenario described above, a cosine similarity of 1 implies that the two documents are exactly alike, while a cosine similarity of 0 indicates that there are no similarities between the two documents.

### Cosine Similarity Example

• Document 1: Deep Learning can be hard
• Document 2: Deep Learning can be simple

#### Step 1: Obtain a Vectorized Representation of the Texts

|            | deep | learning | can | be | hard | simple |
|------------|------|----------|-----|----|------|--------|
| Document 1 | 1    | 1        | 1   | 1  | 1    | 0      |
| Document 2 | 1    | 1        | 1   | 1  | 0    | 1      |

• Document 1: [1, 1, 1, 1, 1, 0] let’s refer to this as A
• Document 2: [1, 1, 1, 1, 0, 1] let’s refer to this as B

Above we have two vectors (A and B) in a six-dimensional vector space.
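That vectorization step can be sketched in Swift as a simple bag-of-words over the combined vocabulary. The lowercase-and-split tokenizer here is a naive assumption for illustration; real text pipelines do considerably more:

```swift
// The two example documents from above.
let doc1 = "Deep Learning can be hard"
let doc2 = "Deep Learning can be simple"

// Naive tokenizer: lowercase and split on spaces.
func tokens(_ text: String) -> [String] {
    text.lowercased().split(separator: " ").map(String.init)
}

// Vocabulary in first-seen order across both documents:
// ["deep", "learning", "can", "be", "hard", "simple"]
var vocabulary: [String] = []
for word in tokens(doc1) + tokens(doc2) where !vocabulary.contains(word) {
    vocabulary.append(word)
}

// Mark 1 if the word appears in the document, 0 otherwise.
func vectorize(_ text: String) -> [Double] {
    let words = Set(tokens(text))
    return vocabulary.map { words.contains($0) ? 1.0 : 0.0 }
}

let A = vectorize(doc1) // [1, 1, 1, 1, 1, 0]
let B = vectorize(doc2) // [1, 1, 1, 1, 0, 1]
```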

#### Step 2: Find the Cosine Similarity

`cosine similarity (CS) = (A . B) / (||A|| ||B||)`

• Calculate the dot product between A and B: `(1×1) + (1×1) + (1×1) + (1×1) + (1×0) + (0×1) = 4`.
• Calculate the magnitude of vector A: `√(1² + 1² + 1² + 1² + 1² + 0²) = √5 ≈ 2.2360679775`.
• Calculate the magnitude of vector B: `√(1² + 1² + 1² + 1² + 0² + 1²) = √5 ≈ 2.2360679775`.
• Calculate the cosine similarity: `4 / (2.2360679775 × 2.2360679775) = 4 / 5 = 0.80` (80 percent similarity between the sentences in both documents).
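The arithmetic above can be reproduced directly in Swift. This is a standalone sketch using plain `Double` arrays; `cosine` is the same formula given earlier in the article:

```swift
// CS(A, B) = (A . B) / (||A|| ||B||)
func cosine(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dot / (magA * magB)
}

let A: [Double] = [1, 1, 1, 1, 1, 0]
let B: [Double] = [1, 1, 1, 1, 0, 1]
// dot product = 4, both magnitudes = √5 ≈ 2.236, so 4 / 5 = 0.8
let similarity = cosine(A, B)
print(similarity) // ≈ 0.8
```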

Let’s explore another application where cosine similarity can be utilized to determine a similarity measurement between two objects.

### 2. Pose Matching

Pose matching involves comparing two poses, each represented by the key points of its joint locations.

Pose estimation is the process of deriving the position and orientation of the vital body parts and joints of a body from an image or sequence of images. It is a computer vision task typically solved using deep learning approaches such as convolutional pose machines, stacked hourglasses and PoseNet.

In a scenario where there is a requirement to quantify the similarity between two poses in Image A and Image B, here is the process that would be taken:

1. Identify the pose information and derive the location key points (joints) in Image A.
2. Identify the x,y location of all the respective joints required for comparison. Deep learning solutions to pose estimation usually provide the location of the joints within a particular pose, along with a confidence score for each estimate.
3. Repeat steps one and two for Image B.
4. Place all x,y positions of Image A in a vector.
5. Place all x,y positions of Image B in a vector.
6. Ensure the order of the x,y positions of each joint is the same in both vectors.
7. Perform cosine similarity using both vectors to obtain a number between 0 and 1.
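The steps above can be sketched in Swift. The `Keypoint` type, the toy coordinates and the fixed joint order are assumptions for illustration; a real pose estimator would supply these values:

```swift
// Hypothetical keypoint type: the (x, y) position of one joint.
struct Keypoint {
    let x: Double
    let y: Double
}

// Flatten keypoints into a single vector [x1, y1, x2, y2, ...].
// The joint order must be identical for both poses (step 6).
func poseVector(_ joints: [Keypoint]) -> [Double] {
    joints.flatMap { [$0.x, $0.y] }
}

// Cosine similarity: dot(A, B) / (||A|| * ||B||)
func cosine(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dot / (magA * magB)
}

// Toy example: two similar poses with three joints each.
let poseA = [Keypoint(x: 10, y: 20), Keypoint(x: 30, y: 40), Keypoint(x: 50, y: 60)]
let poseB = [Keypoint(x: 12, y: 22), Keypoint(x: 28, y: 41), Keypoint(x: 51, y: 58)]
let score = cosine(poseVector(poseA), poseVector(poseB))
// score is close to 1 for near-identical poses
```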


There are other application domains where you might find cosine similarity in use, such as recommendation systems, plagiarism detectors and data mining. It can even be used as a loss function when training neural networks.

The logic behind cosine similarity is easy to understand and can be implemented in most modern programming languages.

One thing I learned from exploring these little segments of math was that most topics are often deemed complex since there aren’t enough resources to teach the topic adequately. If you have the appropriate learning resources and patience, then it’s possible to grasp a large number of mathematics topics.
