
Bag of Visual Words implemented in Python gives terrible accuracy

Stack Overflow user
Asked on 2018-07-04 08:15:29
2 answers · 9.7K views · 0 following · 4 votes

I am trying to implement a Bag of Visual Words classifier to classify a dataset I have. To make sure my implementation is correct, I tested it on just two classes from the Caltech dataset (Datasets/Caltech101/): elephant and electric_guitar. Since they are visually completely different, I believe a correctly implemented Bag of Visual Words (BOVW) classification should classify these images accurately.

From my understanding (correct me if I am wrong), a proper BOVW classification involves three steps:

  1. Detect SIFT 128-dimensional descriptors in the training images and cluster them with k-means.
  2. Run the SIFT descriptors of both the training and testing images through the k-means classifier (trained in step 1), and build a histogram of the resulting cluster assignments for each image.
  3. Use these histograms as feature vectors for SVM classification.

As explained above, I am trying to solve a very simple problem: classifying two very different classes. I read the training and testing file lists from text files, train a k-means classifier with the training images' SIFT descriptors, obtain the classification histograms for the training and testing images, and finally use them as feature vectors for classification.

The source code of my solution is below:

import cv2
import numpy as np
from sklearn import svm
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics import accuracy_score

#this function will get SIFT descriptors from training images and
#train a k-means classifier
def read_and_clusterize(file_images, num_cluster):

    sift_keypoints = []

    with open(file_images) as f:
        images_names = f.readlines()
        images_names = [a.strip() for a in images_names]

        for line in images_names:
            print(line)
            #read image
            image = cv2.imread(line, 1)
            #convert to grayscale
            image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            #SIFT extraction
            sift = cv2.xfeatures2d.SIFT_create()
            kp, descriptors = sift.detectAndCompute(image, None)
            #append the descriptors to a list of descriptors
            sift_keypoints.append(descriptors)

    #stack the descriptors of all images into a single array
    sift_keypoints = np.concatenate(sift_keypoints, axis=0)
    #with all descriptors detected, let's cluster them
    print("Training kmeans")    
    kmeans = MiniBatchKMeans(n_clusters=num_cluster, random_state=0).fit(sift_keypoints)
    #return the learned model
    return kmeans

#with the k-means model trained, this code generates the feature vectors
#by building a histogram of classified keypoints in the k-means classifier
def calculate_centroids_histogram(file_images, model):

    feature_vectors=[]
    class_vectors=[]

    with open(file_images) as f:
        images_names = f.readlines()
        images_names = [a.strip() for a in images_names]

        for line in images_names:
            print(line)
            #read image
            image = cv2.imread(line, 1)
            #convert to grayscale
            image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            #SIFT extraction
            sift = cv2.xfeatures2d.SIFT_create()
            kp, descriptors = sift.detectAndCompute(image, None)
            #classify all descriptors with the k-means model
            predict_kmeans = model.predict(descriptors)
            #calculate the histogram
            hist, bin_edges = np.histogram(predict_kmeans)
            #the histogram is the feature vector
            feature_vectors.append(hist)
            #define the class of the image (elephant or electric guitar)
            class_sample = define_class(line)
            class_vectors.append(class_sample)

    feature_vectors=np.asarray(feature_vectors)
    class_vectors=np.asarray(class_vectors)
    #return vectors and classes we want to classify
    return class_vectors, feature_vectors


def define_class(img_patchname):

    #print(img_patchname)
    print(img_patchname.split('/')[4])

    if img_patchname.split('/')[4]=="electric_guitar":
        class_image = 0

    if img_patchname.split('/')[4]=="elephant":
        class_image = 1

    return class_image

def main(train_images_list, test_images_list, num_clusters):
    #step 1: read the training images, detect SIFT keypoints and cluster them via k-means
    print("Step 1: Calculating Kmeans classifier")
    model = read_and_clusterize(train_images_list, num_clusters)

    print("Step 2: Extracting histograms of training and testing images")
    print("Training")
    [train_class, train_featvec] = calculate_centroids_histogram(train_images_list, model)
    print("Testing")
    [test_class, test_featvec] = calculate_centroids_histogram(test_images_list, model)

    #use the training vectors to train the classifier
    print("Step 3: Training the SVM classifier")
    clf = svm.SVC()
    clf.fit(train_featvec, train_class)

    print("Step 4: Testing the SVM classifier")  
    predict=clf.predict(test_featvec)

    score=accuracy_score(np.asarray(test_class), predict)

    file_object  = open("results.txt", "a")
    file_object.write("%f\n" % score)
    file_object.close()

    print("Accuracy:" +str(score))

if __name__ == "__main__":
    main("train.txt", "test.txt", 1000)
    main("train.txt", "test.txt", 2000)
    main("train.txt", "test.txt", 3000)
    main("train.txt", "test.txt", 4000)
    main("train.txt", "test.txt", 5000)

As you can see, I tried varying the number of clusters in the k-means classifier. However, no matter what I tried, the accuracy was always 53.62%, which is terrible considering how different the image classes are.

So, is there something wrong with my understanding or my implementation of BOVW? What am I getting wrong?


2 Answers

Stack Overflow user

Answered on 2018-07-04 09:01:20

The solution was simpler than I thought.

In this line:

  hist, bin_edges=np.histogram(predict_kmeans)

the number of bins defaults to numpy's standard bin count (I believe it is 10). By changing it to:

   hist, bin_edges=np.histogram(predict_kmeans, bins=num_clusters)

the accuracy improved from 53.62% to 78.26% using 1000 clusters, since the histogram is now a true 1000-dimensional vector.
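To illustrate why the default matters: a minimal sketch (with made-up cluster labels, not the asker's data) showing that `np.histogram` without `bins` always produces 10 bins spread over the data's min-max range, regardless of how many clusters k-means was trained with:

```python
import numpy as np

# Hypothetical cluster labels returned by kmeans.predict for one image;
# with k=1000 the labels can fall anywhere in 0..999.
labels = np.array([0, 3, 512, 999])

# Default: np.histogram uses 10 bins spanning min(labels)..max(labels),
# so the feature vector length is unrelated to the number of clusters,
# and the same bin mixes counts from many different visual words.
hist_default, _ = np.histogram(labels)

# Fixed: one bin per cluster, giving a true 1000-d BOVW feature vector.
hist_fixed, _ = np.histogram(labels, bins=1000, range=(0, 1000))

print(hist_default.shape)  # (10,)
print(hist_fixed.shape)    # (1000,)
```

Passing an explicit `range` as well (an extra precaution beyond the answer's fix) keeps the bin edges identical across images even when an image happens not to use the lowest or highest cluster ids.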

Votes: 4

Stack Overflow user

Answered on 2018-07-04 08:38:41

It looks like you are creating clusters and histograms per image. But for this to work, you have to aggregate the SIFT features of all images, cluster those into a common set of clusters, and then build the histograms against that common codebook. Also see https://github.com/shackenberg/Minimal-Bag-of-Visual-Words-Image-Classifier
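The pooling this answer describes can be sketched in a couple of lines (with dummy descriptor arrays standing in for real per-image SIFT output):

```python
import numpy as np

# Hypothetical per-image SIFT descriptor arrays (n_i x 128 each);
# real ones would come from sift.detectAndCompute on each image.
descs_img1 = np.ones((5, 128))
descs_img2 = np.ones((8, 128))

# Pool the descriptors of ALL training images into one array BEFORE
# running k-means, so the learned codebook is shared by every image.
all_descriptors = np.concatenate([descs_img1, descs_img2], axis=0)
print(all_descriptors.shape)  # (13, 128)
```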

Votes: 1
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/51168896
