When you run with covariance_type="tied", the model assumes all components share a single common covariance matrix, so the code above does not hold. With covariance_type="tied" there will be just one covariance matrix under clf.covariances_. From the help page:
'full': each component has its own general covariance matrix.
'tied': all components share the same general covariance matrix.
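A quick way to see the difference is to compare the shape of clf.covariances_ under the two settings; a minimal sketch on the same iris data used below:

```python
from sklearn import datasets
from sklearn.mixture import GaussianMixture

iris = datasets.load_iris()

# "full": one covariance matrix per component
# -> shape (n_components, n_features, n_features)
full = GaussianMixture(n_components=3, covariance_type="full").fit(iris.data)
print(full.covariances_.shape)  # (3, 4, 4)

# "tied": a single covariance matrix shared by all components
# -> shape (n_features, n_features)
tied = GaussianMixture(n_components=3, covariance_type="tied").fit(iris.data)
print(tied.covariances_.shape)  # (4, 4)
```

With "tied", indexing clf.covariances_[i] would return the i-th row of the shared matrix rather than a per-component covariance matrix, which is why the loop below only makes sense for "full".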
With pomegranate, a covariance matrix is estimated for each component, so it compares well against GaussianMixture from sklearn run with covariance_type="full":
from sklearn import datasets
from sklearn.mixture import GaussianMixture
iris = datasets.load_iris()
clf = GaussianMixture(n_components=3, covariance_type="full", init_params='kmeans')
clf.fit(iris.data)
cov = []
means = []
for i in range(clf.n_components):
    cov.append(clf.covariances_[i])
    means.append(clf.means_[i])
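The loop is not strictly necessary: clf.means_ is already an array of shape (n_components, n_features) and, with covariance_type="full", clf.covariances_ has shape (n_components, n_features, n_features), so components can be indexed directly. A sketch:

```python
import numpy as np
from sklearn import datasets
from sklearn.mixture import GaussianMixture

iris = datasets.load_iris()
clf = GaussianMixture(n_components=3, covariance_type="full",
                      init_params="kmeans").fit(iris.data)

# Index the fitted parameter arrays directly instead of collecting
# them one component at a time.
mean0 = clf.means_[0]
cov0 = clf.covariances_[0]
print(np.allclose(cov0, cov0.T))  # each covariance matrix is symmetric -> True
```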
So for component (or cluster) 0:
means[0]
array([5.006, 3.428, 1.462, 0.246])
cov[0]
array([[0.121765, 0.097232, 0.016028, 0.010124],
[0.097232, 0.140817, 0.011464, 0.009112],
[0.016028, 0.011464, 0.029557, 0.005948],
[0.010124, 0.009112, 0.005948, 0.010885]])
Now with pomegranate:
from pomegranate import GeneralMixtureModel, MultivariateGaussianDistribution
mdl = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution,
                                       n_components=3, X=iris.data)
mdl = mdl.fit(iris.data)
The parameters are accessible under distributions, which is a list with one entry per component. For the first component you use distributions[0], for the second distributions[1], and so on:
mdl.distributions[0].parameters[0]
[5.005999999999999, 3.4280000000000004, 1.462, 0.24599999999999986]
np.round(mdl.distributions[0].parameters[1],6)
array([[0.121764, 0.097232, 0.016028, 0.010124],
[0.097232, 0.140816, 0.011464, 0.009112],
[0.016028, 0.011464, 0.029556, 0.005948],
[0.010124, 0.009112, 0.005948, 0.010884]])
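One caveat when comparing the two libraries: component order is arbitrary, and here component 0 just happened to line up in both fits. In general you should match components, e.g. by nearest means, before comparing covariances. A sketch of that matching step, using two sklearn fits with different seeds to stand in for the two libraries:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn import datasets
from sklearn.mixture import GaussianMixture

iris = datasets.load_iris()
a = GaussianMixture(n_components=3, covariance_type="full",
                    random_state=0).fit(iris.data)
b = GaussianMixture(n_components=3, covariance_type="full",
                    random_state=1).fit(iris.data)

# cost[i, j] = distance between mean i of fit `a` and mean j of fit `b`;
# the assignment pairs each component of `a` with its closest match in `b`.
cost = np.linalg.norm(a.means_[:, None, :] - b.means_[None, :, :], axis=-1)
row, col = linear_sum_assignment(cost)

for i, j in zip(row, col):
    print(f"component {i} of fit a <-> component {j} of fit b")
```

Only after this matching does an element-wise comparison of the covariance matrices (e.g. with np.allclose) say anything meaningful.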