【Question Title】: sklearn agglomerative clustering linkage matrix
【Posted】: 2015-01-07 05:08:48
【Question】:

I am trying to plot a complete-linkage dendrogram with scipy.cluster.hierarchy.dendrogram, and I have found that scipy.cluster.hierarchy.linkage is slower than sklearn.AgglomerativeClustering.

However, sklearn.AgglomerativeClustering does not return the distances between clusters or the number of original observations, both of which scipy.cluster.hierarchy.dendrogram needs. Is there a way to get them?
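
For reference, what scipy.cluster.hierarchy.dendrogram consumes is a SciPy linkage matrix: one row per merge, in the form [idx1, idx2, distance, sample_count]. A minimal sketch on throwaway random data:

import numpy as np
from scipy.cluster.hierarchy import linkage

# each row: [idx1, idx2, distance, sample_count]
Z = linkage(np.random.rand(5, 2), method='complete')
print(Z)  # shape (n_samples - 1, 4)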

【Question Comments】:

Tags: python scikit-learn cluster-analysis dendrogram


【Solution 1】:

I think sklearn's official example for AgglomerativeClustering will help.

Plot Hierarchical Clustering Dendrogram:

import numpy as np

from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram
from sklearn.datasets import load_iris
from sklearn.cluster import AgglomerativeClustering


def plot_dendrogram(model, **kwargs):
    # Create linkage matrix and then plot the dendrogram

    # create the counts of samples under each node
    counts = np.zeros(model.children_.shape[0])
    n_samples = len(model.labels_)
    for i, merge in enumerate(model.children_):
        current_count = 0
        for child_idx in merge:
            if child_idx < n_samples:
                current_count += 1  # leaf node
            else:
                current_count += counts[child_idx - n_samples]
        counts[i] = current_count

    linkage_matrix = np.column_stack([model.children_, model.distances_,
                                      counts]).astype(float)

    # Plot the corresponding dendrogram
    dendrogram(linkage_matrix, **kwargs)


iris = load_iris()
X = iris.data

# setting distance_threshold=0 ensures we compute the full tree.
model = AgglomerativeClustering(distance_threshold=0, n_clusters=None)

model = model.fit(X)
plt.title('Hierarchical Clustering Dendrogram')
# plot the top three levels of the dendrogram
plot_dendrogram(model, truncate_mode='level', p=3)
plt.xlabel("Number of points in node (or index of point if no parenthesis).")
plt.show()

N.B. This solution relies on the distances_ attribute, which is only set when AgglomerativeClustering is called with the distance_threshold parameter (in scikit-learn >= 0.24 you can alternatively request it with compute_distances=True).
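
A minimal sketch of that newer alternative, assuming scikit-learn >= 0.24 (where the compute_distances parameter was added):

from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import load_iris

X = load_iris().data

# compute_distances=True populates distances_ even when n_clusters is fixed
model = AgglomerativeClustering(n_clusters=3, compute_distances=True).fit(X)
print(model.distances_[:5])  # one merge distance per row of children_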

【Discussion】:

【Solution 2】:

I ran into the same problem when setting n_clusters. I think the problem is that if you set n_clusters, the distances are not evaluated. If you set n_clusters = None and set a distance_threshold instead, it works with the code provided on sklearn. I understand that this probably won't help in your situation, but I hope a fix is underway.
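
A small repro of that behaviour, as a sketch assuming a recent scikit-learn and throwaway random data:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.random.rand(20, 2)

m1 = AgglomerativeClustering(n_clusters=3).fit(X)
print(hasattr(m1, "distances_"))  # False: distances are not evaluated

m2 = AgglomerativeClustering(n_clusters=None, distance_threshold=0).fit(X)
print(hasattr(m2, "distances_"))  # True: full tree, merge distances recorded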

【Discussion】:

    【Solution 3】:

    UPDATE: I recommend this solution instead - https://stackoverflow.com/a/47769506/1333621. If you found my attempt useful, please examine Arjun's solution and re-examine your vote.

    You need to generate a "linkage matrix" from the children_ array, where each row of the linkage matrix has the format [idx1, idx2, distance, sample_count].

    This is not meant to be a paste-and-run solution - I didn't keep track of what I needed to import - but it should be pretty clear anyway.

    Here is one way to generate the required structure Z and visualize the result.

    X is your n_samples x n_features input data.

    Cluster the data:

    import sklearn.cluster

    agg_cluster = sklearn.cluster.AgglomerativeClustering(n_clusters=n)  # n = your desired cluster count
    agg_labels = agg_cluster.fit_predict(X)
    

    Some empty data structures:

    Z = []
    # should really call this cluster dict
    node_dict = {}
    n_samples = len(X)
    leaf_count = n_samples  # leaf nodes are the original samples
    

    Write a recursive function that gathers all the leaf nodes associated with a given cluster and computes the distances and centroid positions:

    import numpy as np
    from sklearn import metrics

    def get_all_children(k, verbose=False):
        i, j = agg_cluster.children_[k]

        if k in node_dict:
            return node_dict[k]['children']

        if i < leaf_count:
            left = [i]
        else:
            # read the AgglomerativeClustering doc. to see why I select i-n_samples
            left = get_all_children(i - n_samples)

        if j < leaf_count:
            right = [j]
        else:
            right = get_all_children(j - n_samples)

        if verbose:
            print(k, i, j, left, right)
        left_pos = np.mean([X[ii] for ii in left], axis=0)
        right_pos = np.mean([X[ii] for ii in right], axis=0)

        # this assumes that agg_cluster used euclidean distances
        dist = metrics.pairwise_distances([left_pos, right_pos], metric='euclidean')[0, 1]

        all_children = [x for y in [left, right] for x in y]
        pos = np.mean([X[ii] for ii in all_children], axis=0)

        # store the results to speed up any additional or recursive evaluations
        node_dict[k] = {'top_child': [i, j], 'children': all_children,
                        'pos': pos, 'dist': dist, 'node_i': k + n_samples}
        return all_children
    

    Populate node_dict and generate Z, i.e. the distance and sample count for each node:

    for k, x in enumerate(agg_cluster.children_):
        get_all_children(k, verbose=False)

    # Every row in the linkage matrix has the format [idx1, idx2, distance, sample_count].
    Z = [[v['top_child'][0], v['top_child'][1], v['dist'], len(v['children'])]
         for k, v in node_dict.items()]
    # create a version with log scaled distances for easier visualization
    Z_log = [[v['top_child'][0], v['top_child'][1], np.log(1.0 + v['dist']), len(v['children'])]
             for k, v in node_dict.items()]
    

    Plot it using a scipy dendrogram:

    import matplotlib.pyplot as plt
    from scipy.cluster import hierarchy

    plt.figure()
    dn = hierarchy.dendrogram(Z_log, p=4, truncate_mode='level')
    plt.show()
    

    I remain disappointed by how opaque this kind of visualization is, and wish I could interactively drill into the larger clusters and inspect the directional (non-scalar) distances between centroids :( - maybe a bokeh solution exists?

    References:

    http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html

    https://joernhees.de/blog/2015/08/26/scipy-hierarchical-clustering-and-dendrogram-tutorial/#Selecting-a-Distance-Cut-Off-aka-Determining-the-Number-of-Clusters

    【Discussion】:

    • Don't copy answers between questions. If the same answer genuinely applies to both questions, flag the newer question as a duplicate.
    • OK - flagged the newer question as a duplicate, and deleted my answer there - so this answer is no longer redundant.
    • Where do you define leaf_count in the first place?
    【Solution 4】:

    I wrote a script that does this without modifying sklearn and without recursive functions. Before using it, note the following:

    • The merge distance can sometimes be smaller than the merge distances of the children. I added three ways of handling those cases: take the max, do nothing ('actual'), or increase it with an l2 norm. The l2-norm logic has not been verified yet; check for yourself which suits you best (see the toy sketch below).
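
    A toy sketch of how the three modes differ, assuming hypothetical child merge distances c1Dist = 3 and c2Dist = 4 and a raw centroid distance d = 2 (i.e. a non-monotonic merge):

    d, c1Dist, c2Dist = 2.0, 3.0, 4.0
    d_actual = d                                  # 2.0: may shrink, so the dendrogram can show crossings
    d_max = max(d, c1Dist, c2Dist)                # 4.0: never decreases
    d_l2 = (d**2 + c1Dist**2 + c2Dist**2)**0.5    # ~5.39: strictly grows with every merge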

    Import the packages:

    from sklearn.cluster import AgglomerativeClustering
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import dendrogram
    

    A function to compute the weights and distances:

    def get_distances(X,model,mode='l2'):
        distances = []
        weights = []
        children=model.children_
        dims = (X.shape[1],1)
        distCache = {}
        weightCache = {}
        for childs in children:
            c1 = X[childs[0]].reshape(dims)
            c2 = X[childs[1]].reshape(dims)
            c1Dist = 0
            c1W = 1
            c2Dist = 0
            c2W = 1
            if childs[0] in distCache:
                c1Dist = distCache[childs[0]]
                c1W = weightCache[childs[0]]
            if childs[1] in distCache:
                c2Dist = distCache[childs[1]]
                c2W = weightCache[childs[1]]
            d = np.linalg.norm(c1-c2)
            cc = ((c1W*c1)+(c2W*c2))/(c1W+c2W)
    
            X = np.vstack((X,cc.T))
    
            newChild_id = X.shape[0]-1
    
            # How to deal with a higher level cluster merge with lower distance:
            if mode == 'l2':  # Increase the higher level cluster size using an l2 norm
                added_dist = (c1Dist**2 + c2Dist**2)**0.5
                dNew = (d**2 + added_dist**2)**0.5
            elif mode == 'max':  # If the previous clusters had a higher distance, use that one
                dNew = max(d, c1Dist, c2Dist)
            elif mode == 'actual':  # Plot the actual distance
                dNew = d
    
    
            wNew = (c1W + c2W)
            distCache[newChild_id] = dNew
            weightCache[newChild_id] = wNew
    
            distances.append(dNew)
            weights.append( wNew)
        return distances, weights
    

    Make sample data of 2 clusters, each with 2 subclusters:

    # Make 4 distributions, two of which form a bigger cluster
    X1_1 = np.random.randn(25,2)+[8,1.5]
    X1_2 = np.random.randn(25,2)+[8,-1.5]
    X2_1 = np.random.randn(25,2)-[8,3]
    X2_2 = np.random.randn(25,2)-[8,-3]
    
    # Merge the four distributions
    X = np.vstack([X1_1,X1_2,X2_1,X2_2])
    
    # Plot the clusters
    colors = ['r']*25 + ['b']*25 + ['g']*25 + ['y']*25
    plt.scatter(X[:,0],X[:,1],c=colors)
    

    Sample data: (scatter plot omitted)

    Fit the clustering model:

    model = AgglomerativeClustering(n_clusters=2,linkage="ward")
    model.fit(X)
    

    Call the function to find the distances, and pass them to the dendrogram:

    distance, weight = get_distances(X,model)
    linkage_matrix = np.column_stack([model.children_, distance, weight]).astype(float)
    plt.figure(figsize=(20,10))
    dendrogram(linkage_matrix)
    plt.show()
    

    Output dendrogram: (plot omitted)

    【Discussion】:

    【Solution 5】:

    It's possible, but it isn't pretty. It requires (at least) a small rewrite of AgglomerativeClustering.fit (source). The difficulty is that the method requires a number of imports, so it ends up looking a bit nasty. To add this feature:

    1. Insert the following line after line 748:

      kwargs['return_distance'] = True

    2. Replace line 752 with:

      self.children_, self.n_components_, self.n_leaves_, parents, self.distance = \

    This will give you a new attribute, distance, that you can easily call.

    A couple of things to note:

    1. When doing this, I ran into this issue about the check_array function on line 711. This can be fixed by using check_arrays (from sklearn.utils.validation import check_arrays). You can modify that line to X = check_arrays(X)[0]. This appears to be a bug (I still had this issue on the most recent version of scikit-learn).

    2. Depending on which version of sklearn.cluster.hierarchical.linkage_tree you have, you may also need to modify it to be the one provided in the source.

    To make things easier for everyone, here is the full code that you will need to use:

    from heapq import heapify, heappop, heappush, heappushpop
    import warnings
    import sys
    
    import numpy as np
    from scipy import sparse
    
    from sklearn.base import BaseEstimator, ClusterMixin
    from sklearn.externals.joblib import Memory
    from sklearn.externals import six
    from sklearn.utils.validation import check_arrays
    from sklearn.utils.sparsetools import connected_components
    from sklearn.metrics.pairwise import paired_distances, pairwise_distances
    from sklearn.cluster import _hierarchical
    from sklearn.cluster.hierarchical import ward_tree
    from sklearn.cluster._feature_agglomeration import AgglomerationTransform
    from sklearn.utils.fast_dict import IntFloatDict
    
    def _fix_connectivity(X, connectivity, n_components=None,
                          affinity="euclidean"):
        """
        Fixes the connectivity matrix
            - copies it
            - makes it symmetric
            - converts it to LIL if necessary
            - completes it if necessary
        """
        n_samples = X.shape[0]
        if (connectivity.shape[0] != n_samples or
            connectivity.shape[1] != n_samples):
            raise ValueError('Wrong shape for connectivity matrix: %s '
                             'when X is %s' % (connectivity.shape, X.shape))
    
        # Make the connectivity matrix symmetric:
        connectivity = connectivity + connectivity.T
    
        # Convert connectivity matrix to LIL
        if not sparse.isspmatrix_lil(connectivity):
            if not sparse.isspmatrix(connectivity):
                connectivity = sparse.lil_matrix(connectivity)
            else:
                connectivity = connectivity.tolil()
    
        # Compute the number of nodes
        n_components, labels = connected_components(connectivity)
    
        if n_components > 1:
            warnings.warn("the number of connected components of the "
                          "connectivity matrix is %d > 1. Completing it to avoid "
                          "stopping the tree early." % n_components,
                          stacklevel=2)
            # XXX: Can we do without completing the matrix?
            for i in range(n_components):
                idx_i = np.where(labels == i)[0]
                Xi = X[idx_i]
                for j in range(i):
                    idx_j = np.where(labels == j)[0]
                    Xj = X[idx_j]
                    D = pairwise_distances(Xi, Xj, metric=affinity)
                    ii, jj = np.where(D == np.min(D))
                    ii = ii[0]
                    jj = jj[0]
                    connectivity[idx_i[ii], idx_j[jj]] = True
                    connectivity[idx_j[jj], idx_i[ii]] = True
    
        return connectivity, n_components
    
    # average and complete linkage
    def linkage_tree(X, connectivity=None, n_components=None,
                     n_clusters=None, linkage='complete', affinity="euclidean",
                     return_distance=False):
        """Linkage agglomerative clustering based on a Feature matrix.
        The inertia matrix uses a Heapq-based representation.
        This is the structured version, that takes into account some topological
        structure between samples.
        Parameters
        ----------
        X : array, shape (n_samples, n_features)
            feature matrix representing n_samples samples to be clustered
        connectivity : sparse matrix (optional).
            connectivity matrix. Defines for each sample the neighboring samples
            following a given structure of the data. The matrix is assumed to
            be symmetric and only the upper triangular half is used.
            Default is None, i.e, the Ward algorithm is unstructured.
        n_components : int (optional)
            Number of connected components. If None the number of connected
            components is estimated from the connectivity matrix.
            NOTE: This parameter is now determined directly
            from the connectivity matrix and will be removed in 0.18
        n_clusters : int (optional)
            Stop early the construction of the tree at n_clusters. This is
            useful to decrease computation time if the number of clusters is
            not small compared to the number of samples. In this case, the
            complete tree is not computed, thus the 'children' output is of
            limited use, and the 'parents' output should rather be used.
            This option is valid only when specifying a connectivity matrix.
        linkage : {"average", "complete"}, optional, default: "complete"
            Which linkage criteria to use. The linkage criterion determines which
            distance to use between sets of observation.
                - average uses the average of the distances of each observation of
                  the two sets
                - complete or maximum linkage uses the maximum distances between
                  all observations of the two sets.
        affinity : string or callable, optional, default: "euclidean".
            which metric to use. Can be "euclidean", "manhattan", or any
            distance known to paired distance (see metric.pairwise)
        return_distance : bool, default False
            whether or not to return the distances between the clusters.
        Returns
        -------
        children : 2D array, shape (n_nodes-1, 2)
            The children of each non-leaf node. Values less than `n_samples`
            correspond to leaves of the tree which are the original samples.
            A node `i` greater than or equal to `n_samples` is a non-leaf
            node and has children `children_[i - n_samples]`. Alternatively
            at the i-th iteration, children[i][0] and children[i][1]
            are merged to form node `n_samples + i`
        n_components : int
            The number of connected components in the graph.
        n_leaves : int
            The number of leaves in the tree.
        parents : 1D array, shape (n_nodes, ) or None
            The parent of each node. Only returned when a connectivity matrix
            is specified, elsewhere 'None' is returned.
        distances : ndarray, shape (n_nodes-1,)
            Returned when return_distance is set to True.
            distances[i] refers to the distance between children[i][0] and
            children[i][1] when they are merged.
        See also
        --------
        ward_tree : hierarchical clustering with ward linkage
        """
        X = np.asarray(X)
        if X.ndim == 1:
            X = np.reshape(X, (-1, 1))
        n_samples, n_features = X.shape
    
        linkage_choices = {'complete': _hierarchical.max_merge,
                           'average': _hierarchical.average_merge,
                          }
        try:
            join_func = linkage_choices[linkage]
        except KeyError:
            raise ValueError(
                'Unknown linkage option, linkage should be one '
                'of %s, but %s was given' % (linkage_choices.keys(), linkage))
    
        if connectivity is None:
            from scipy.cluster import hierarchy  # imports PIL
    
            if n_clusters is not None:
                warnings.warn('Partial build of the tree is implemented '
                              'only for structured clustering (i.e. with '
                              'explicit connectivity). The algorithm '
                              'will build the full tree and only '
                              'retain the lower branches required '
                              'for the specified number of clusters',
                              stacklevel=2)
    
            if affinity == 'precomputed':
                # for the linkage function of hierarchy to work on precomputed
                # data, provide as first argument an ndarray of the shape returned
                # by pdist: it is a flat array containing the upper triangular of
                # the distance matrix.
                i, j = np.triu_indices(X.shape[0], k=1)
                X = X[i, j]
            elif affinity == 'l2':
                # Translate to something understood by scipy
                affinity = 'euclidean'
            elif affinity in ('l1', 'manhattan'):
                affinity = 'cityblock'
            elif callable(affinity):
                X = affinity(X)
                i, j = np.triu_indices(X.shape[0], k=1)
                X = X[i, j]
            out = hierarchy.linkage(X, method=linkage, metric=affinity)
            children_ = out[:, :2].astype(int)
    
            if return_distance:
                distances = out[:, 2]
                return children_, 1, n_samples, None, distances
            return children_, 1, n_samples, None
    
        if n_components is not None:
            warnings.warn(
                "n_components is now directly calculated from the connectivity "
                "matrix and will be removed in 0.18",
                DeprecationWarning)
        connectivity, n_components = _fix_connectivity(X, connectivity)
    
        connectivity = connectivity.tocoo()
        # Put the diagonal to zero
        diag_mask = (connectivity.row != connectivity.col)
        connectivity.row = connectivity.row[diag_mask]
        connectivity.col = connectivity.col[diag_mask]
        connectivity.data = connectivity.data[diag_mask]
        del diag_mask
    
        if affinity == 'precomputed':
            distances = X[connectivity.row, connectivity.col]
        else:
            # FIXME We compute all the distances, while we could have only computed
            # the "interesting" distances
            distances = paired_distances(X[connectivity.row],
                                         X[connectivity.col],
                                         metric=affinity)
        connectivity.data = distances
    
        if n_clusters is None:
            n_nodes = 2 * n_samples - 1
        else:
            assert n_clusters <= n_samples
            n_nodes = 2 * n_samples - n_clusters
    
        if return_distance:
            distances = np.empty(n_nodes - n_samples)
        # create inertia heap and connection matrix
        A = np.empty(n_nodes, dtype=object)
        inertia = list()
    
        # LIL seems to the best format to access the rows quickly,
        # without the numpy overhead of slicing CSR indices and data.
        connectivity = connectivity.tolil()
        # We are storing the graph in a list of IntFloatDict
        for ind, (data, row) in enumerate(zip(connectivity.data,
                                              connectivity.rows)):
            A[ind] = IntFloatDict(np.asarray(row, dtype=np.intp),
                                  np.asarray(data, dtype=np.float64))
            # We keep only the upper triangular for the heap
            # Generator expressions are faster than arrays on the following
            inertia.extend(_hierarchical.WeightedEdge(d, ind, r)
                           for r, d in zip(row, data) if r < ind)
        del connectivity
    
        heapify(inertia)
    
        # prepare the main fields
        parent = np.arange(n_nodes, dtype=np.intp)
        used_node = np.ones(n_nodes, dtype=np.intp)
        children = []
    
        # recursive merge loop
        for k in range(n_samples, n_nodes):
            # identify the merge
            while True:
                edge = heappop(inertia)
                if used_node[edge.a] and used_node[edge.b]:
                    break
            i = edge.a
            j = edge.b
    
            if return_distance:
                # store distances
                distances[k - n_samples] = edge.weight
    
            parent[i] = parent[j] = k
            children.append((i, j))
            # Keep track of the number of elements per cluster
            n_i = used_node[i]
            n_j = used_node[j]
            used_node[k] = n_i + n_j
            used_node[i] = used_node[j] = False
    
            # update the structure matrix A and the inertia matrix
            # a clever 'min', or 'max' operation between A[i] and A[j]
            coord_col = join_func(A[i], A[j], used_node, n_i, n_j)
            for l, d in coord_col:
                A[l].append(k, d)
                # Here we use the information from coord_col (containing the
                # distances) to update the heap
                heappush(inertia, _hierarchical.WeightedEdge(d, k, l))
            A[k] = coord_col
            # Clear A[i] and A[j] to save memory
            A[i] = A[j] = 0
    
        # Separate leaves in children (empty lists up to now)
        n_leaves = n_samples
    
        # # return numpy array for efficient caching
        children = np.array(children)[:, ::-1]
    
        if return_distance:
            return children, n_components, n_leaves, parent, distances
        return children, n_components, n_leaves, parent
    
    # Matching names to tree-building strategies
    def _complete_linkage(*args, **kwargs):
        kwargs['linkage'] = 'complete'  
        return linkage_tree(*args, **kwargs)
    
    def _average_linkage(*args, **kwargs):
        kwargs['linkage'] = 'average'
        return linkage_tree(*args, **kwargs)
    
    _TREE_BUILDERS = dict(
        ward=ward_tree,
        complete=_complete_linkage,
        average=_average_linkage,
        )
    
    def _hc_cut(n_clusters, children, n_leaves):
        """Function cutting the ward tree for a given number of clusters.
        Parameters
        ----------
        n_clusters : int or ndarray
            The number of clusters to form.
        children : list of pairs. Length of n_nodes
            The children of each non-leaf node. Values less than `n_samples` refer
            to leaves of the tree. A greater value `i` indicates a node with
            children `children[i - n_samples]`.
        n_leaves : int
            Number of leaves of the tree.
        Returns
        -------
        labels : array [n_samples]
            cluster labels for each point
        """
        if n_clusters > n_leaves:
            raise ValueError('Cannot extract more clusters than samples: '
                             '%s clusters were given for a tree with %s leaves.'
                             % (n_clusters, n_leaves))
        # In this function, we store nodes as a heap to avoid recomputing
        # the max of the nodes: the first element is always the smallest
        # We use negated indices as heaps work on smallest elements, and we
        # are interested in largest elements
        # children[-1] is the root of the tree
        nodes = [-(max(children[-1]) + 1)]
        for i in range(n_clusters - 1):
            # As we have a heap, nodes[0] is the smallest element
            these_children = children[-nodes[0] - n_leaves]
            # Insert the 2 children and remove the largest node
            heappush(nodes, -these_children[0])
            heappushpop(nodes, -these_children[1])
        label = np.zeros(n_leaves, dtype=np.intp)
        for i, node in enumerate(nodes):
            label[_hierarchical._hc_get_descendent(-node, children, n_leaves)] = i
        return label
    
    class AgglomerativeClustering(BaseEstimator, ClusterMixin):
        """
        Agglomerative Clustering
        Recursively merges the pair of clusters that minimally increases
        a given linkage distance.
        Parameters
        ----------
        n_clusters : int, default=2
            The number of clusters to find.
        connectivity : array-like or callable, optional
            Connectivity matrix. Defines for each sample the neighboring
            samples following a given structure of the data.
            This can be a connectivity matrix itself or a callable that transforms
            the data into a connectivity matrix, such as derived from
            kneighbors_graph. Default is None, i.e, the
            hierarchical clustering algorithm is unstructured.
        affinity : string or callable, default: "euclidean"
            Metric used to compute the linkage. Can be "euclidean", "l1", "l2",
            "manhattan", "cosine", or 'precomputed'.
            If linkage is "ward", only "euclidean" is accepted.
        memory : Instance of joblib.Memory or string (optional)
            Used to cache the output of the computation of the tree.
            By default, no caching is done. If a string is given, it is the
            path to the caching directory.
        n_components : int (optional)
            Number of connected components. If None the number of connected
            components is estimated from the connectivity matrix.
            NOTE: This parameter is now directly determined from the connectivity
            matrix and will be removed in 0.18
        compute_full_tree : bool or 'auto' (optional)
            Stop early the construction of the tree at n_clusters. This is
            useful to decrease computation time if the number of clusters is
            not small compared to the number of samples. This option is
            useful only when specifying a connectivity matrix. Note also that
            when varying the number of clusters and using caching, it may
            be advantageous to compute the full tree.
        linkage : {"ward", "complete", "average"}, optional, default: "ward"
            Which linkage criterion to use. The linkage criterion determines which
            distance to use between sets of observation. The algorithm will merge
            the pairs of cluster that minimize this criterion.
            - ward minimizes the variance of the clusters being merged.
            - average uses the average of the distances of each observation of
              the two sets.
            - complete or maximum linkage uses the maximum distances between
              all observations of the two sets.
        pooling_func : callable, default=np.mean
            This combines the values of agglomerated features into a single
            value, and should accept an array of shape [M, N] and the keyword
            argument ``axis=1``, and reduce it to an array of size [M].
        Attributes
        ----------
        labels_ : array [n_samples]
            cluster labels for each point
        n_leaves_ : int
            Number of leaves in the hierarchical tree.
        n_components_ : int
            The estimated number of connected components in the graph.
        children_ : array-like, shape (n_nodes-1, 2)
            The children of each non-leaf node. Values less than `n_samples`
            correspond to leaves of the tree which are the original samples.
            A node `i` greater than or equal to `n_samples` is a non-leaf
            node and has children `children_[i - n_samples]`. Alternatively
            at the i-th iteration, children[i][0] and children[i][1]
            are merged to form node `n_samples + i`
        """
    
        def __init__(self, n_clusters=2, affinity="euclidean",
                     memory=Memory(cachedir=None, verbose=0),
                     connectivity=None, n_components=None,
                     compute_full_tree='auto', linkage='ward',
                     pooling_func=np.mean):
            self.n_clusters = n_clusters
            self.memory = memory
            self.n_components = n_components
            self.connectivity = connectivity
            self.compute_full_tree = compute_full_tree
            self.linkage = linkage
            self.affinity = affinity
            self.pooling_func = pooling_func
    
        def fit(self, X, y=None):
            """Fit the hierarchical clustering on the data
            Parameters
            ----------
            X : array-like, shape = [n_samples, n_features]
                The samples a.k.a. observations.
            Returns
            -------
            self
            """
            X = check_arrays(X)[0]
            memory = self.memory
            if isinstance(memory, six.string_types):
                memory = Memory(cachedir=memory, verbose=0)
    
            if self.linkage == "ward" and self.affinity != "euclidean":
                raise ValueError("%s was provided as affinity. Ward can only "
                                 "work with euclidean distances." %
                                 (self.affinity, ))
    
            if self.linkage not in _TREE_BUILDERS:
                raise ValueError("Unknown linkage type %s."
                                 "Valid options are %s" % (self.linkage,
                                                           _TREE_BUILDERS.keys()))
            tree_builder = _TREE_BUILDERS[self.linkage]
    
            connectivity = self.connectivity
            if self.connectivity is not None:
                if callable(self.connectivity):
                    connectivity = self.connectivity(X)
                connectivity = check_arrays(
                    connectivity, accept_sparse=['csr', 'coo', 'lil'])
    
            n_samples = len(X)
            compute_full_tree = self.compute_full_tree
            if self.connectivity is None:
                compute_full_tree = True
            if compute_full_tree == 'auto':
                # Early stopping is likely to give a speed up only for
                # a large number of clusters. The actual threshold
                # implemented here is heuristic
                compute_full_tree = self.n_clusters < max(100, .02 * n_samples)
            n_clusters = self.n_clusters
            if compute_full_tree:
                n_clusters = None
    
            # Construct the tree
            kwargs = {}
            kwargs['return_distance'] = True
            if self.linkage != 'ward':
                kwargs['linkage'] = self.linkage
                kwargs['affinity'] = self.affinity
            self.children_, self.n_components_, self.n_leaves_, parents, \
                self.distance = memory.cache(tree_builder)(X, connectivity,
                                           n_components=self.n_components,
                                           n_clusters=n_clusters,
                                           **kwargs)
            # Cut the tree
            if compute_full_tree:
                self.labels_ = _hc_cut(self.n_clusters, self.children_,
                                       self.n_leaves_)
            else:
                labels = _hierarchical.hc_get_heads(parents, copy=False)
                # copy to avoid holding a reference on the original array
                labels = np.copy(labels[:n_samples])
                # Reassign cluster numbers
                self.labels_ = np.searchsorted(np.unique(labels), labels)
            return self
    

    Below is a simple example showing how to use the modified AgglomerativeClustering class:

    import numpy as np
    from AgglomerativeClustering import AgglomerativeClustering  # Make sure to use the new one!!!
    d = np.array(
        [
            [1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]
        ]
    )
    
    clustering = AgglomerativeClustering(n_clusters=2, compute_full_tree=True,
        affinity='euclidean', linkage='complete')
    clustering.fit(d)
    print(clustering.distance)
    

    The example gives the following output:

    [  5.19615242  10.39230485]
    

    This can then be compared against the scipy.cluster.hierarchy.linkage implementation:

    import numpy as np
    from scipy.cluster.hierarchy import linkage
    
    d = np.array(
            [
                [1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]
            ]
    )
    print(linkage(d, 'complete'))
    

    Output:

    [[  1.           2.           5.19615242   2.        ]
     [  0.           3.          10.39230485   3.        ]]
    

    Just for fun, I decided to follow up on your statement about performance:

    from AgglomerativeClustering import AgglomerativeClustering
    from scipy.cluster.hierarchy import linkage
    import numpy as np
    import time

    l = 1000; iters = 50
    d = [np.random.random(100) for _ in range(l)]

    t = time.time()
    for _ in range(iters):
        clustering = AgglomerativeClustering(n_clusters=l-1,
            affinity='euclidean', linkage='complete')
        clustering.fit(d)
    scikit_time = (time.time() - t) / iters
    print('scikit-learn Time: {0}s'.format(scikit_time))

    t = time.time()
    for _ in range(iters):
        linkage(d, 'complete')
    scipy_time = (time.time() - t) / iters
    print('SciPy Time: {0}s'.format(scipy_time))

    print('scikit-learn Speedup: {0}'.format(scipy_time / scikit_time))
    

    This gave me the following results:

    scikit-learn Time: 0.566560001373s
    SciPy Time: 0.497740001678s
    scikit-learn Speedup: 0.878530077083
    

    According to this, SciPy's implementation took 0.88x the execution time of the scikit-learn implementation, i.e. SciPy's implementation was about 1.14x faster. Some caveats:

    1. I modified the original scikit-learn implementation

    2. I only did a small number of iterations

    3. I only tested a small number of test cases (both cluster sizes and the number of items per dimension should be tested)

    4. I ran SciPy second, so it had the advantage of getting more cache hits on the source data

    5. The two methods are not doing exactly the same thing.

    Taking all of that into account, you should really evaluate which method performs better for your particular application. There are also functional reasons to go with one implementation over the other.

    【Discussion】:
