Introduction

During the 5th project of the Master of Data Science, we had to segment customers based on their behaviour. In order to classify the type of articles they are interested in, a clustering was performed on the articles. To merge similar products together, some keywords were removed from the descriptions (for example the quantity or the colour). This allows us to consider that a white lamp and a red lamp are the same article.
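
As an illustration, here is a minimal sketch of that kind of description normalisation; the keyword lists are hypothetical and not the ones actually used in P5.

import re

# Hypothetical keyword lists (illustrative only, not the real P5 lists)
COLOURS = {"WHITE", "RED", "BLUE", "PINK", "GREEN", "PURPLE"}
QUANTITY = re.compile(r"\bSET/?\s*\d+\b|\b\d+\b")

def merge_key(description):
    # strip quantities, then drop colour words so variants share the same key
    text = QUANTITY.sub(" ", description.upper())
    return " ".join(w for w in text.split() if w not in COLOURS)

print(merge_key("WHITE METAL LANTERN"))  # -> METAL LANTERN
print(merge_key("RED METAL LANTERN"))    # -> METAL LANTERN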

This clustering was poor: nearly all articles ended up in a single bucket, and the silhouette score was very low due to overlapping clusters (you can see the result in the notebook at this link).

In project 6, I discovered text-processing principles and, a few days ago, an article about a new dimensionality-reduction algorithm called UMAP (Arxiv). In this notebook, we are going to try this new algorithm, compare it to t-SNE, and improve the clustering of articles.

I won't redo all the work done in P5, only this part.

In [66]:
import pandas as pd
import nltk
import numpy as np
import time

import scipy
import scipy.sparse
import matplotlib.pyplot as plt

from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.metrics import silhouette_score
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
import joblib   # formerly imported from sklearn.externals, which is deprecated

import umap   # installed from GitHub but also available via pip

%matplotlib inline
In [2]:
df = pd.read_excel("F:\\Nicolas\\PycharmProjects\\machine_learning\\datasets\\Online_Retail.xlsx")

Cleanup

As I restart from the original dataset, I'll redo the cleaning required to obtain clean articles.

In [3]:
df = df[~df["InvoiceNo"].astype(str).str.startswith("C")]
In [4]:
df = df[df["Description"].str.startswith("?") == False]
df = df[df["Description"].str.isupper() == True]
df = df[df["Description"].str.contains("LOST") == False]
df = df[df["CustomerID"].notnull()]
In [20]:
df = df["Description"].drop_duplicates().reset_index(drop=True)
In [21]:
df.to_csv("F:\\Nicolas\\PycharmProjects\\machine_learning\\datasets\\Online_Retail_clean.csv")

Now we have only unique items, about 3,850 of them. We will now pre-process them in order to do the clustering afterwards.

Pre-processing articles

In [7]:
df = pd.read_csv("F:\\Nicolas\\PycharmProjects\\machine_learning\\datasets\\Online_Retail_clean.csv", index_col=0)
In [22]:
df.head()
Out[22]:
0     WHITE HANGING HEART T-LIGHT HOLDER
1                    WHITE METAL LANTERN
2         CREAM CUPID HEARTS COAT HANGER
3    KNITTED UNION FLAG HOT WATER BOTTLE
4         RED WOOLLY HOTTIE WHITE HEART.
Name: Description, dtype: object
In [23]:
X = df #.values

You can take a look at the tokenisation with nltk below, but since we will build the TF-IDF matrix, all this processing will be done with scikit-learn.

In [90]:
# X = X.str.lower()
# X = X.apply(nltk.word_tokenize)
# X = X.apply(lambda tokens : [stemmer.stem(token) for token in tokens if token not in stopword])
# X = X.apply(lambda tokens : [token for token in tokens if token.isalpha()])
# df["Token"] = X
In [25]:
stemmer = nltk.stem.porter.PorterStemmer()
stopword = nltk.corpus.stopwords.words('english')
In [26]:
def stem_and_filter(doc):
    # tokenise with the scikit-learn analyzer (built below), stem each token
    # and keep only purely alphabetic tokens
    tokens = [stemmer.stem(w) for w in analyzer(doc)]
    return [token for token in tokens if token.isalpha()]
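
As a quick check of what this function produces on a raw description (the analyzer it relies on is built in the next cell; the output shown is only indicative):

analyzer = CountVectorizer().build_analyzer()   # default analyzer: lowercases and tokenises
print(stem_and_filter("SET/6 RED SPOTTY PAPER PLATES"))
# roughly: ['set', 'red', 'spotti', 'paper', 'plate']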

TF Matrix

For comparison, we also generate the plain TF (term-count) matrix; it will only serve as a baseline against the TF-IDF matrix in the clustering step.

In [27]:
analyzer = CountVectorizer().build_analyzer()   # default analyzer: lowercases and tokenises
# Note: when a callable analyzer is passed, lowercase and stop_words are ignored by scikit-learn;
# lowercasing is already handled by the default analyzer built above.
CV = CountVectorizer(lowercase=True, stop_words="english", analyzer=stem_and_filter)
TF_matrix = CV.fit_transform(X)
In [28]:
joblib.dump(CV, 'CountVectorizer.pkl') 
Out[28]:
['CountVectorizer.pkl']
In [29]:
TF_matrix
Out[29]:
<3856x1688 sparse matrix of type '<class 'numpy.int64'>'
	with 16214 stored elements in Compressed Sparse Row format>
In [30]:
scipy.sparse.save_npz("TF_matrix.npz", TF_matrix)

TF-IDF Matrix

The code is the same for the TF-IDF matrix, but we will ignore words appearing in more than 90% of items (in case there are stop words that are not in the "english" list).

In [31]:
analyzer = TfidfVectorizer().build_analyzer()
CV = TfidfVectorizer(lowercase=True, stop_words="english", analyzer=stem_and_filter, min_df=0.00, max_df=0.9)
TF_IDF_matrix = CV.fit_transform(X)
In [32]:
joblib.dump(CV, 'TfidfVectorizer.pkl') 
Out[32]:
['TfidfVectorizer.pkl']
In [33]:
TF_IDF_matrix
Out[33]:
<3856x1688 sparse matrix of type '<class 'numpy.float64'>'
	with 16214 stored elements in Compressed Sparse Row format>
In [35]:
scipy.sparse.save_npz("TF_IDF_matrix.npz", TF_IDF_matrix)

What we can see is that the 90% document-frequency threshold removed no words (the vocabulary still has 1688 terms). This is a good sign, because it means that no word is overly common. Now, let's do the clustering.
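
A small sanity check of that claim, not part of the original pipeline: refit the vectorizer without the cut-off and compare vocabulary sizes.

# Vocabulary size with and without the 90 % document-frequency cut-off;
# both should report the same 1688 terms as the matrix shapes above.
cv_no_cutoff = TfidfVectorizer(analyzer=stem_and_filter)
cv_no_cutoff.fit(X)
print(len(CV.vocabulary_), "terms with max_df=0.9")
print(len(cv_no_cutoff.vocabulary_), "terms without any cut-off")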

Clustering

In this phase, we are going to run KMeans clustering on both the TF and the TF-IDF matrices to compare results. If we apply the clustering directly on those matrices, we will have issues: they are very sparse and the distance computations will be unreliable. What we can do instead is perform an LSA (truncated SVD) to reduce the data to a dense matrix of dimension 100.

In [36]:
TF_matrix = scipy.sparse.load_npz("TF_matrix.npz").astype(np.uint8)
TF_IDF_matrix = scipy.sparse.load_npz("TF_IDF_matrix.npz")

Let's now apply the LSA and normalize every row afterward.

In [38]:
svd = TruncatedSVD(n_components = 100)
normalizer = Normalizer(copy=False)

TF_embedded = svd.fit_transform(TF_matrix)
TF_embedded = normalizer.fit_transform(TF_embedded)

TF_IDF_embedded = svd.fit_transform(TF_IDF_matrix)
TF_IDF_embedded = normalizer.fit_transform(TF_IDF_embedded)
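
As a side check (not in the original run), we can look at how much of the variance the 100 LSA components retain; svd was last fitted on the TF-IDF matrix in the cell above.

# Fraction of the TF-IDF variance kept by the 100 LSA components
print("Explained variance of the LSA step: {:.1%}".format(svd.explained_variance_ratio_.sum()))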

Now, let's compute silhouette_score for both matrices

In [39]:
n_cluster_opti_tf = 0
n_cluster_opti_tf_idf = 0
score_tf = []
score_tfidf = []

max_silhouette = -1
for n_clusters in range(2, 100, 2):
    kmeans = KMeans(init='k-means++', n_clusters = n_clusters, n_init=10)
    kmeans.fit(TF_embedded)
    clusters = kmeans.predict(TF_embedded)
    silhouette_avg = silhouette_score(TF_embedded, clusters)
#     print("N clusters =", n_clusters, "Silhouette Score :", silhouette_avg)
    score_tf.append(silhouette_avg)
    if silhouette_avg > max_silhouette:
        max_silhouette = silhouette_avg    # keep track of the best score
        n_cluster_opti_tf = n_clusters     # and of the corresponding number of clusters

max_silhouette = -1
for n_clusters in range(2, 100, 2):
    kmeans = KMeans(init='k-means++', n_clusters = n_clusters, n_init=10)
    kmeans.fit(TF_IDF_embedded)
    clusters = kmeans.predict(TF_IDF_embedded)
    silhouette_avg = silhouette_score(TF_IDF_embedded, clusters)
#     print("N clusters =", n_clusters, "Silhouette Score :", silhouette_avg)
    score_tfidf.append(silhouette_avg)
    if silhouette_avg > max_silhouette:
        max_silhouette = silhouette_avg
        n_cluster_opti_tf_idf = n_clusters
In [44]:
x = list(range(2, 100, 2))
plt.figure(figsize=(20,8))
plt.plot(x, score_tf, label="TF matrix")
plt.plot(x, score_tfidf, label="TF-IDF matrix")
plt.legend()
plt.show()

Starting at around 60 clusters, we get a decent silhouette score. We can also see that, from 60 clusters onwards, the score is slightly better with the TF-IDF matrix; if we absolutely needed fewer clusters, we should use the TF matrix instead. Now let's do the final clustering with 60 clusters and the TF-IDF matrix.

In [55]:
n_clusters = 60
In [46]:
kmeans = KMeans(init='k-means++', n_clusters = n_clusters, n_init=30)
kmeans.fit(TF_IDF_embedded)
clusters = kmeans.predict(TF_IDF_embedded)
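
To get a quick textual summary of what each cluster captures, one possible sketch (not part of the original notebook) is to project the KMeans centroids back into the TF-IDF term space through the SVD and list the strongest terms per cluster.

# Map the centroids from the 100-d LSA space back to the TF-IDF term space,
# then print the most influential terms of the first few clusters.
terms = CV.get_feature_names()   # get_feature_names_out() on recent scikit-learn versions
original_space_centroids = svd.inverse_transform(kmeans.cluster_centers_)
order = original_space_centroids.argsort()[:, ::-1]
for cluster_id in range(5):
    top_terms = [terms[i] for i in order[cluster_id, :5]]
    print("Cluster {:2d}: {}".format(cluster_id, ", ".join(top_terms)))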

Result

Let's look at the content of the clusters and at their size distribution.

In [59]:
selected_cluster = 18
for obj, cluster in zip(X, clusters):
    if cluster == selected_cluster:
        print(obj)
PAPER CHAIN KIT 50'S CHRISTMAS 
SET/6 RED SPOTTY PAPER PLATES
PAPER CHAIN KIT RETROSPOT
PAPER CHAIN KIT VINTAGE CHRISTMAS
SET/6 RED SPOTTY PAPER CUPS
CUPCAKE LACE PAPER SET 6
PAPER BUNTING WHITE LACE
PINK HEARTS PAPER GARLAND
CALENDAR PAPER CUT DESIGN
WHITE BELL HONEYCOMB PAPER 
BLUE PAPER PARASOL 
PAPER CHAIN KIT LONDON
PAPER BUNTING COLOURED LACE
LARGE WHITE HONEYCOMB PAPER BELL  
GOLD PRINT PAPER BAG
PAPER BUNTING RETROSPOT
CHINESE DRAGON PAPER LANTERNS
PAPER CHAIN KIT EMPIRE
PAPER BUNTING VINTAGE PAISLEY
MAGIC TREE -PAPER FLOWERS
SET/20 STRAWBERRY PAPER NAPKINS 
YULETIDE IMAGES S/6 PAPER BOXES
PINK  HONEYCOMB PAPER BALL 
SET/6 COLLAGE PAPER PLATES
MULTICOLOUR HONEYCOMB FAN
SANDALWOOD FAN
WHITE BELL HONEYCOMB PAPER GARLAND 
PAPER POCKET TRAVELING FAN 
SET/6 GREEN SPRING PAPER CUPS
PAPER CHAIN KIT SKULLS 
PURPLE PAPER PARASOL
TROPICAL  HONEYCOMB PAPER GARLAND 
WHITE HONEYCOMB PAPER GARLAND 
3D HEARTS  HONEYCOMB PAPER GARLAND
MULTICOLOUR HONEYCOMB PAPER GARLAND
PINK  HONEYCOMB PAPER FAN
MAGIC SHEEP WOOL GROWING FROM PAPER
PINK PAPER PARASOL 
CONGRATULATIONS BUNTING
SET/6 COLLAGE PAPER CUPS
SET/6 FRUIT SALAD PAPER CUPS
SET/6 FRUIT SALAD  PAPER PLATES
SET 3 SONG BIRD PAPER EGGS ASSORTED
SET 3 PAPER VINTAGE CHICK PAPER EGG
RED PAPER PARASOL
SET/6 POSIES PAPER PLATES
RED DAISY PAPER LAMPSHADE
SET/6 POSIES PAPER CUPS
PAPER LANTERN 7 POINT SNOW STAR
PAPER LANTERN 9 POINT SNOW STAR
PAPER LANTERN 9 POINT SNOW STAR 
SPOTTY BUNTING
PAPER LANTERN 9 POINT HOLLY STAR 40
PAPER LANTERN 5 POINT STAR MOON 30
PAPER LANTERN 6 POINT SNOW STAR
BUNTING , SPOTTY 
PAPER LANTERN 9 POINT HOLLY STAR L
PAPER LANTERN 9 POINT DELUXE STAR
PAPER LANTERN 5 POINT SEQUIN STAR
PAPER LANTERN 9 POINT HOLLY STAR S
PAPER LANTERN 9 POINT HOLLY STAR 23
PAPER LANTERN 5 POINT STUDDED STAR
SET 6 PAPER TABLE LANTERN HEARTS 
SET 6 PAPER TABLE LANTERN STARS 
FRENCH CARRIAGE LANTERN
PAPER LANTERN 5 POINT STAR MOON 
PAPER BUNTING VINTAGE PARTY
PAPER BUNTING PAISLEY PARK
PAPER CRAFT , LITTLE BIRDIE
In [57]:
plt.hist(clusters, bins=range(n_clusters + 1))   # one bin per cluster label
plt.show()

We can see that most of the clusters now contain a comparable number of items, except clusters 8, 34 and 59. If we look at their content, it is pretty similar across all of them.
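
A quick way to quantify that imbalance (a small check, not in the original run) is to count the items per cluster and look at the most populated ones.

# Number of items per cluster: the few oversized clusters stand out at the top
sizes = pd.Series(clusters).value_counts()
print(sizes.head(10))
print("median cluster size:", sizes.median())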

Visualisation

Here, we will compare the UMAP reduction model against t-SNE. UMAP has a lot of parameters we can play with:

==>> Uniform Manifold Approximation and Projection <<==

Finds a low dimensional embedding of the data that approximates
an underlying manifold.

Parameters
----------
n_neighbors: float (optional, default 15)
    The size of local neighborhood (in terms of number of neighboring
    sample points) used for manifold approximation. Larger values
    result in more global views of the manifold, while smaller
    values result in more local data being preserved. In general
    values should be in the range 2 to 100.

n_components: int (optional, default 2)
    The dimension of the space to embed into. This defaults to 2 to
    provide easy visualization, but can reasonably be set to any
    integer value in the range 2 to 100.

metric: string or function (optional, default 'euclidean')
    The metric to use to compute distances in high dimensional space.
    If a string is passed it must match a valid predefined metric. If
    a general metric is required a function that takes two 1d arrays and
    returns a float can be provided. For performance purposes it is
    required that this be a numba jit'd function. Valid string metrics
    include:
        * euclidean
        * manhattan
        * chebyshev
        * minkowski
        * canberra
        * braycurtis
        * mahalanobis
        * wminkowski
        * seuclidean
        * cosine
        * correlation
        * haversine
        * hamming
        * jaccard
        * dice
        * russelrao
        * kulsinski
        * rogerstanimoto
        * sokalmichener
        * sokalsneath
        * yule
    Metrics that take arguments (such as minkowski, mahalanobis etc.)
    can have arguments passed via the metric_kwds dictionary. At this
    time care must be taken and dictionary elements must be ordered
    appropriately; this will hopefully be fixed in the future.

n_edge_samples: int (optional, default None)
    The number of edge/1-simplex samples to be used in optimizing the
    low dimensional embedding. Larger values result in more accurate
    embeddings. If None is specified a value will be selected based on
    the size of the input dataset (typically around dataset_size * 10**4).

alpha: float (optional, default 1.0)
    The initial learning rate for the embedding optimization.

init: string (optional, default 'spectral')
    How to initialize the low dimensional embedding. Options are:
        * 'spectral': use a spectral embedding of the fuzzy 1-skeleton
        * 'random': assign initial embedding positions at random.
        * A numpy array of initial embedding positions.

min_dist: float (optional, default 0.1)
    The effective minimum distance between embedded points. Smaller values
    will result in a more clustered/clumped embedding where nearby points
    on the manifold are drawn closer together, while larger values will
    result on a more even dispersal of points. The value should be set
    relative to the ``spread`` value, which determines the scale at which
    embedded points will be spread out.

spread: float (optional, default 1.0)
    The effective scale of embedded points. In combination with ``min_dist``
    this determines how clustered/clumped the embedded points are.

set_op_mix_ratio: float (optional, default 1.0)
    Interpolate between (fuzzy) union and intersection as the set operation
    used to combine local fuzzy simplicial sets to obtain a global fuzzy
    simplicial sets. Both fuzzy set operations use the product t-norm.
    The value of this parameter should be between 0.0 and 1.0; a value of
    1.0 will use a pure fuzzy union, while 0.0 will use a pure fuzzy
    intersection.

local_connectivity: int (optional, default 1)
    The local connectivity required -- i.e. the number of nearest
    neighbors that should be assumed to be connected at a local level.
    The higher this value the more connected the manifold becomes
    locally. In practice this should be not more than the local intrinsic
    dimension of the manifold.

gamma: float (optional, default 1.0)
    Weighting applied to negative samples in low dimensional embedding
    optimization. Values higher than one will result in greater weight
    being given to negative samples.

bandwidth: float (optional, default 1.0)
    The effective bandwidth of the kernel if we view the algorithm as
    similar to Laplacian eigenmaps. Larger values induce more
    connectivity and a more global view of the data, smaller values
    concentrate more locally.

a: float (optional, default None)
    More specific parameters controlling the embedding. If None these
    values are set automatically as determined by ``min_dist`` and
    ``spread``.
b: float (optional, default None)
    More specific parameters controlling the embedding. If None these
    values are set automatically as determined by ``min_dist`` and
    ``spread``.

random_state: int, RandomState instance or None, optional (default: None)
    If int, random_state is the seed used by the random number generator;
    If RandomState instance, random_state is the random number generator;
    If None, the random number generator is the RandomState instance used
    by `np.random`.

metric_kwds: dict (optional, default {})
    Arguments to pass on to the metric, such as the ``p`` value for
    Minkowski distance.

angular_rp_forest: bool (optional, default False)
    Whether to use an angular random projection forest to initialise
    the approximate nearest neighbor search. This can be faster, but is
    mostly on useful for metric that use an angular style distance such
    as cosine, correlation etc. In the case of those metrics angular forests
    will be chosen automatically.

verbose: bool (optional, default False)
    Controls verbosity of logging.

I've tried a few settings and got the best visual result with the euclidean distance.

In [67]:
start = time.time()
model = umap.UMAP(metric="euclidean")
u = model.fit_transform(TF_IDF_embedded)
print(time.time()-start)
6.844224214553833
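
For reference, the kind of loop used to try different settings could look like the sketch below; the metric values are illustrative, not an exhaustive list of what was tested.

# Illustrative sweep over a few UMAP metrics (parameters are assumptions,
# not the exact settings tried above)
metrics_to_try = ["euclidean", "cosine", "manhattan"]
fig, axes = plt.subplots(1, len(metrics_to_try), figsize=(20, 6))
for ax, metric in zip(axes, metrics_to_try):
    embedding = umap.UMAP(metric=metric, n_neighbors=15, min_dist=0.1).fit_transform(TF_IDF_embedded)
    ax.scatter(embedding[:, 0], embedding[:, 1], c=clusters, s=5)
    ax.set_title(metric)
plt.show()
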
In [68]:
# start = time.time()
# tsne = TSNE(n_components=2)
# u2 = tsne.fit_transform(TF_IDF_embedded)
# print(time.time()-start)
# (this cell was run once and then commented out to avoid the ~87 s recomputation;
#  the resulting embedding u2 is reused in the comparison plot below)
86.86106824874878
In [69]:
selection = 61   # keep every cluster (labels go from 0 to 59)
In [72]:
plt.figure(figsize=(20,10))

a1 = u[clusters<selection, :]
a2 = u2[clusters<selection, :]
b = clusters[clusters<selection]

plt.subplot(1, 2, 1)
plt.scatter(a1[:,0], a1[:,1], c=b)
plt.xlim(-7, 7)
plt.ylim(-7, 7)
plt.title("UMAP", fontsize="25")

plt.subplot(1, 2, 2)
plt.scatter(a2[:,0], a2[:,1], c=b)
plt.title("TSNE", fontsize="25")

plt.show()

Conclusion

As a conclusion, we have greatly improved the clustering of articles based on what was learned during project 6. The clustering is now balanced and the clusters are clearly better separated.

Regarding UMAP, this new algorithm provides excellent results if you look at the paper. Nevertheless, in our case the visual result is not quite as good as t-SNE's, but the algorithm is clearly faster (6.8 s vs 86.9 s for t-SNE). The fact that we can also play with different distance metrics allows us to represent the elements in a more appropriate space.