import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Generate some data points.
centers = [[-5, 0], [0, 1.5], [5, -1]]
X, y = make_blobs(n_samples=1000, centers=centers, random_state=40)
print(X.shape)
print(y.shape)
print(y)

# Apply a linear transformation to the features.
transformation = [[0.4, 0.2], [-0.4, 1.2]]
X = np.dot(X, transformation)
print('______________')
print(X.shape)

clf = LogisticRegression().fit(X, y)
print(clf.coef_)
print(clf.intercept_)
The code above prints:
(1000, 2)
(1000,)
[0 0 0 2 1 1 0 0 0 1 2 2 2 1 1 1 1 2 0 2 ... 2 0 2 2 1 2 1 0 1 2]
______________
(1000, 2)
[[-4.  -1. ]
 [-0.0  0. ]
 [ 4.   0. ]]
[-1.  2. -1.]
A bold hypothesis:
The final coef_ is 3×2.
Looking at each row alone, it has 2 numbers: the coefficients for the 2 features.
There are 3 rows because there are 3 classes in total; a set of coefficients is learned for each class.
Hence the 3×2 shape.
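The hypothesis above can be checked directly: a minimal sketch, assuming scikit-learn's default multinomial formulation for LogisticRegression (the default with the lbfgs solver and more than two classes), where the per-class linear scores are X @ coef_.T + intercept_ and softmax over those scores gives the class probabilities.

```python
import numpy as np
from scipy.special import softmax
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Rebuild the same dataset as above.
centers = [[-5, 0], [0, 1.5], [5, -1]]
X, y = make_blobs(n_samples=1000, centers=centers, random_state=40)
X = np.dot(X, [[0.4, 0.2], [-0.4, 1.2]])

clf = LogisticRegression().fit(X, y)

# One row of coefficients per class, one column per feature.
assert clf.coef_.shape == (3, 2)
assert clf.intercept_.shape == (3,)

# Per-class linear scores, shape (n_samples, n_classes).
scores = X @ clf.coef_.T + clf.intercept_

# Softmax over the scores reproduces predict_proba, and taking the
# argmax over classes reproduces predict.
assert np.allclose(softmax(scores, axis=1), clf.predict_proba(X))
assert np.array_equal(clf.classes_[scores.argmax(axis=1)], clf.predict(X))
```

So each of the 3 rows of coef_ really is the weight vector for one class, and a sample is assigned to whichever class gives it the highest score.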
Reference: https://www.cnblogs.com/LCharles/p/12162077.html
