Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I

This post is a continuation of the first part, where we started to learn the theory and practice of text feature extraction and vector space model representation. I really recommend you read the first part of the series in order to follow this second post.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

Introduction

In the first post, we learned how to use the term frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high-frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is present in almost the entire corpus of your documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is a measure of how important a word is to a document in a collection, and that's why tf-idf incorporates local and global parameters: it takes into consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve that problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more than another isn't 10 times more important than it, which is why tf-idf uses a logarithmic scale to do that.

But let's go back to our definition of \mathrm{tf}(t, d), which is actually the term count of the term t in the document d. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency \mathrm{tf}(t, d) of a document on the vector space is usually also normalized. Let's see how we normalize this vector.

Vector normalization

Suppose we are going to normalize the term-frequency vector \vec{v_{d_4}} that we have calculated in the first part of this tutorial. The document d4 from the first part of this tutorial had this textual representation:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

\vec{v_{d_4}} = (0, 2, 1, 0)

Normalizing the vector is the same as calculating the unit vector of the vector, and they are denoted using the "hat" notation: \hat{v}. The definition of the unit vector \hat{v} of a vector \vec{v} is:

\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}

Where \hat{v} is the unit vector, or the normalized vector, \vec{v} is the vector going to be normalized and \|\vec{v}\|_p is the norm (magnitude, length) of the vector \vec{v} in the L^p space (don't worry, I'm going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector: a vector whose length is 1.

The normalization process (Source: http://processing.org/learning/pvector/)

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the L^p spaces, also called Lebesgue spaces.

Lebesgue spaces

How long is this vector? (Source: http://processing.org/learning/pvector/)

Usually, the length of a vector \vec{u} = (u_1, u_2, u_3, \ldots, u_n) is calculated using the Euclidean norm (a norm is a function that assigns a strictly positive length or size to all vectors in a vector space), which is defined by:

(Source: http://processing.org/learning/pvector/)

\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}

But this isn't the only way to define length, and that's why you see (sometimes) a number p together with the norm notation, like in \|\vec{u}\|_p. That's because it can be generalized as:

\displaystyle \|\vec{u}\|_p = (\left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p)^\frac{1}{p}

and simplified as:

\displaystyle \|\vec{u}\|_p = (\sum\limits_{i=1}^{n}\left|\vec{u}_i\right|^p)^\frac{1}{p}

So when you read about an L2-norm, you're reading about the Euclidean norm, a norm with p = 2, the most common norm used to measure the length of a vector, typically called "magnitude"; actually, when you have an unqualified length measure (without the p number), you have the L2-norm (Euclidean norm).

When you read about an L1-norm, you're reading about the norm with p = 1, defined as:

\displaystyle \|\vec{u}\|_1 = (\left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)

which is nothing more than a simple sum of the components of the vector, also known as the Taxicab distance, also called Manhattan distance.

Taxicab geometry versus Euclidean distance: in taxicab geometry all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length 6 \times \sqrt{2} \approx 8.48 and is the unique shortest path.
Source: Wikipedia :: Taxicab Geometry

Note that you can also use any norm to normalize the vector, but we're going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of scikits.learn. You can also find papers comparing the performance of the two approaches among other methods to normalize the document vector; actually you can use any other method, but you have to be consistent: once you've used a norm, you have to use it for the whole process directly involving the norm (a unit vector that used an L1-norm isn't going to have length 1 if you later take its L2-norm).
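To make the difference between the two norms concrete, here is a minimal NumPy sketch (my own addition, not part of the original post) that computes the L1-norm and the L2-norm of a small vector:

import numpy as np

u = np.array([0.0, 2.0, 1.0, 0.0])

# L1-norm: sum of the absolute values of the components (Taxicab / Manhattan)
l1 = np.abs(u).sum()

# L2-norm: square root of the sum of the squared components (Euclidean)
l2 = np.sqrt((u ** 2).sum())

print(l1)  # 3.0
print(l2)  # 2.2360679... (the square root of 5)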

Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example, the process of using the L2-norm (we're using the right terms now) to normalize our vector \vec{v_{d_4}} = (0, 2, 1, 0) in order to get its unit vector \hat{v_{d_4}}. To do that, we'll simply plug it into the definition of the unit vector and evaluate it:

\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\  \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{\|\vec{v_{d_4}}\|_2} \\ \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\  \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)

And that is it! Our normalized vector \hat{v_{d_4}} now has an L2-norm of \|\hat{v_{d_4}}\|_2 = 1.0.

Note that here we normalized our term-frequency document vector, but later we're going to do that after the calculation of the tf-idf.
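If you want to check this result numerically, a short NumPy sketch (again my own addition, only to verify the numbers above) would be:

import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])

# Divide the vector by its L2-norm to obtain the unit vector
v_d4_hat = v_d4 / np.linalg.norm(v_d4, ord=2)

print(v_d4_hat)                         # [ 0.          0.89442719  0.4472136   0.        ]
print(np.linalg.norm(v_d4_hat, ord=2))  # 1.0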

The term frequency – inverse document frequency (tf-idf) weight

Now that you have understood how vector normalization works in theory and practice, let's continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of the tutorial):

Train document set:
d1: The sky is blue.
d2: The sun is bright.

Test document set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can then be defined as D = \{d_1, d_2, \ldots, d_n\} where n is the number of documents in your corpus, and in our case as D_{train} = \{d_1, d_2\} and D_{test} = \{d_3, d_4\}. The cardinality of our document space is defined by \left|{D_{train}}\right| = 2 and \left|{D_{test}}\right| = 2, since we have only 2 documents for training and testing, but they obviously don't need to have the same cardinality.

Let's see now how the idf (inverse document frequency) is defined:

\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}

where \left|\{d : t \in d\}\right| is the number of documents where the term t appears; when the term-frequency function satisfies \mathrm{tf}(t,d) \neq 0, we're only adding 1 into the formula to avoid zero-division.

The formula for the tf-idf is then:

\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)

and this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).

Now let’s calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first tutorial:

M_{train} =  \begin{bmatrix}  0 & 1 & 1 & 1\\  0 & 2 & 1 & 0  \end{bmatrix}

Since we have 4 features, we have to calculate \mathrm{idf}(t_1), \mathrm{idf}(t_2), \mathrm{idf}(t_3) and \mathrm{idf}(t_4):

\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718

\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0

These idf weights can be represented by a vector as:

\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)
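These four values can be reproduced with a couple of lines of NumPy. The sketch below is my own addition, just to check the numbers: it uses the same natural logarithm and the same document counts (over d3 and d4) as the calculations above, and it is not how scikit-learn computes its idf internally:

import numpy as np

num_docs = 2.0                       # |D|
df = np.array([0.0, 2.0, 2.0, 1.0])  # documents containing t1, t2, t3, t4 (counts used in the post)

# idf(t) = log(|D| / (1 + |{d : t in d}|))
idf = np.log(num_docs / (1.0 + df))

print(idf)  # [ 0.69314718 -0.40546511 -0.40546511  0.        ]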

Now that we have our matrix with the term frequencies (M_{train}) and the vector representing the idf for each feature of our matrix (\vec{idf_{train}}), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix M_{train} with the respective \vec{idf_{train}} vector dimension. To do that, we can create a square diagonal matrix M_{idf} with both the vertical and horizontal dimensions equal to the vector \vec{idf_{train}} dimension:

M_{idf} =   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix}

and then multiply it by the term frequency matrix, so the final result can be defined as:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf}

Please note that the matrix multiplication isn't commutative: the result of A \times B will be different from the result of B \times A, and this is why M_{idf} is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

\begin{bmatrix}   \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\   \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2)   \end{bmatrix}   \times   \begin{bmatrix}   \mathrm{idf}(t_1) & 0 & 0 & 0\\   0 & \mathrm{idf}(t_2) & 0 & 0\\   0 & 0 & \mathrm{idf}(t_3) & 0\\   0 & 0 & 0 & \mathrm{idf}(t_4)   \end{bmatrix}   \\ =   \begin{bmatrix}   \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\   \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4)   \end{bmatrix}

Let's see now a concrete example of this multiplication:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\   \begin{bmatrix}   0 & 1 & 1 & 1\\   0 & 2 & 1 & 0   \end{bmatrix}   \times   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix} \\   =   \begin{bmatrix}   0 & -0.40546511 & -0.40546511 & 0\\   0 & -0.81093022 & -0.40546511 & 0   \end{bmatrix}

And finally, we can apply our L2 normalization process to the M_{tf\mbox{-}idf} matrix. Please note that this normalization is "row-wise", because we're going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2} = \begin{bmatrix}   0 & -0.70710678 & -0.70710678 & 0\\   0 & -0.89442719 & -0.4472136 & 0   \end{bmatrix}

And that is our pretty normalized tf-idf weight of our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you'll see that they all have an L2-norm of 1.
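Putting these steps together by hand, the small NumPy sketch below (my own addition, only to verify the matrices above, not the scikit-learn implementation) reproduces the diagonal multiplication and the row-wise L2 normalization:

import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]], dtype=float)

idf = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])

# Build the square diagonal idf matrix and multiply it on the right
M_idf = np.diag(idf)
M_tfidf = np.dot(M_train, M_idf)

# Row-wise L2 normalization: each document vector becomes a unit vector
row_norms = np.linalg.norm(M_tfidf, axis=1, keepdims=True)
M_tfidf_normalized = M_tfidf / row_norms

print(M_tfidf_normalized)
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]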

The Python practice

Environment used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9.

Now, the section you were waiting for! In this section I'll use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module.

The first step is to create our training and testing document sets and compute the term frequency matrix:

from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)

print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]

Note that I've specified the norm as L2; this is optional (actually the default is the L2-norm), but I've added the parameter to make it explicit that it'll use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let's transform the freq_term_matrix to the tf-idf weight matrix:

tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]

And that is it, the tf_idf_matrix is actually our previous M_{tf\mbox{-}idf} matrix. You can accomplish the same effect by using the Vectorizer class of Scikit.learn, which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to learn how to use it for the text classification process.
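For readers on a recent scikit-learn release, the combined class is nowadays called TfidfVectorizer. The sketch below is my own addition and assumes a modern scikit-learn and Python 3; note that newer versions use a different (smoothed, non-negative) idf formula by default, so the values will not match the 0.9 output shown above:

from sklearn.feature_extraction.text import TfidfVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

# TfidfVectorizer = CountVectorizer + TfidfTransformer in a single class
vectorizer = TfidfVectorizer(norm="l2")
vectorizer.fit(train_set)

tf_idf_matrix = vectorizer.transform(test_set)
print(tf_idf_matrix.todense())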

I really hope you liked the post. I tried to make it as simple as possible, even for people without the required mathematical background in linear algebra, etc. In the next Machine Learning post I'm expecting to show how you can use the tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part II," in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Wikipedia :: tf-idf

The classic Vector Space Model

Sklearn text feature extraction code

Updates

13 Mar 2015 – Formatting, fixed image issues.
03 Oct 2011 – Added the info about the environment used for the Python examples.

103 thoughts on "Machine Learning :: Text feature extraction (tf-idf) – Part II"

  1. Wow!
    Perfect intro in tf-idf, thank you very much! Very interesting, I’ve wanted to study this field for a long time and you posts it is a real gift. It would be very interesting to read more about use-cases of the technique. And may be you’ll be interested, please, to shed some light on other methods of text corpus representation, if they exists?
    (Sorry for the bad English, I'm working on improving it, but there is still a lot of work to do)

  2. Excellent work Christian! I'm looking forward to reading your next posts on document classification, clustering and topics extraction with Naive Bayes, Stochastic Gradient Descent, Minibatch K-Means and Non-Negative Matrix Factorization.

    Also, the documentation of scikit-learn is really poor on the text feature extraction part (I am the main culprit…). Don’t hesitate to join the mailing list if you want to give a hand and improve upon the current situation.

    1. Great thanks Olivier. I really want to help sklearn, I just have to get some more time to do that, you guys have done a great work, I’m really impressed by the amount of algorithms already implemented in the lib, keep the good work !

  3. I like this tutorial better for the level of new concepts I am learning here.
    That said, which version of scikits.learn are you using?
    The latest as installed by easy_install seems to have a different module hierarchy (i.e. it doesn't find feature_extraction under sklearn). If you could mention the version you used, I will just try out these examples.

    1. Hello Anand, I’m glad you liked it. I’ve added the information about the environment used just before the section “Python practice”, I’m using the scikits.learn 0.9 (released a few weeks ago).

  4. Where is part 3? I have to submit an assignment on the vector space model in 4 days. Any hope of having it over the weekend?

  5. Thanks again for this complete and explicit tutorial, and I am waiting for the coming section.

  6. Thanks Christian! Very nice work with the vector space and sklearn. I just have one question: suppose that I have computed the 'tf_idf_matrix' and I would like to compute the pairwise cosine similarity (between each row). I was having issues with the sparse matrix format, can you please give an example on that? Also my matrix is pretty big, say 25k by 60k. Thanks a lot!

  7. Great post... I understood what tf-idf is and how to implement it with a concrete example. But I found 2 things that I'm not sure about:
    1 - You called the 2-dimensional matrix M_train, but it has the tf values of the documents d3 and d4, so you should have called that matrix M_test instead of M_train, since d3 and d4 are our test documents.
    2 - When you calculate the idf value for t2 (which is 'sun'), it should be log(2/4). Because the number of documents is 2, d3 has the word 'sun' 1 time and d4 has it 2 times. That makes 3, but we also add 1 to the value to get rid of the 0-division problem. That makes 4... Am I right or am I missing something?
    Thanks.

    1. You are correct: these are excellent blog articles, but the author REALLY has a duty/responsibility to go back and correct errors, like this (and others, e.g. Part 1; …): missing training underscores; setting the stop_words parameter; also on my computer, the vocabulary indexing is different.

      As much as we appreciate the effort (kudos to the author!), it is also a significant disservice to those who struggle past those (uncorrected) errors in the original material.

      1. re: my ‘you are correct comment’ (above), I should have added:

        “… noting also Frédérique Passot’s comment (below) regarding the denominator:

        '... what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 ("sun") is indeed 2+1 (2 documents have the term "sun", +1 to avoid a potential zero division error).'

    2. Khalid,
      This is a response to a very old question. However, I still want to respond to communicate what I understand from the article.
      Your question 2: “When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4)”
      My understanding: The denominator in log term should be (number of documents in which the term appears + 1) and not frequency of the term. The number of documents the term “Sun” appears is 2 (1 time in D3 and 2 times in D4 — totally it appears 3 times in two documents. 3 is frequency and 2 is number of documents). Hence the denominator is 2 + 1 = 3.

  8. Excellent post!
    I have some question. From the last tf-idf weight matrix, how can we get the importance of term respectively(e.g. which is the most important term?). How can we use this matrix to classify documents

  9. Thank you so much. You explained it in such a simple way. It was really useful. Thanks again a lot.

  10. I have the same doubt as Jack (the last comment). From the last tf-idf weight matrix, how can we get the importance of each term respectively (e.g. which is the most important term?). How can we use this matrix to classify documents?

  11. I have a question..
    After the tf-idf operation, we get a numpy array with values. Suppose we need to get the highest 50 values from the array. How can we do that?

    1. high value of f(idf) denotes that the particular vector(or Document) has high local strength and low global strength, in which case you can assume that the terms in it has high significance locally and cant be ignored. Comparing against funtion(tf) where only the term repeats high number of times are the ones given more importance,which most of the times is not a proper modelling technique.

  12. Hey,
    thanks for the code.. it was indeed very helpful!

    1. For document clustering, after calculating the inverted term frequency, should I use any association coefficient like the Jaccard coefficient and then apply a clustering algorithm like k-means, or should I apply k-means directly to the document vectors after calculating the inverted term frequency?

    2. How do you rate the inverted term frequency for calculating document vectors for text clustering?

    Thanks a ton for the forthcoming reply!

  13. @Khalid: what you’re pointing out in 1- got me confused too for a minute (M_train vs M_test). I think you are mistaken on your second point, though, because what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (“sun”) is indeed 2+1 (2 documents have the term “sun”, +1 to avoid a potential zero division error).

    I’d love to read the third installment of this series too! I’d be particularly interested in learning more about feature selection. Is there an idiomatic way to get a sorted list of the terms with the highest tf.idf scores? How would you identify those terms overall? How would you get the terms which are the most responsible for a high or low cosine similarity (row by row)?

    Thank you for the _great_ posts!

  14. Excellent article and a great introduction to the tf-idf normalization.

    You have a very clear and structured way of explaining these difficult concepts.

    Thanks!

      1. very good & informative tutorial.... please upload more tutorials related to the document clustering process.

  15. Can you provide any reference for doing cosine similarity using tfidf so we have the matrix of tf-idf how can we use that to calculate cosine. Thanks for fantastic article.

  16. Thanks so much for this and for explaining the whole tf-idf thing thoroughly.

  17. Please correct me if I'm wrong:
    after 'the term frequency we have calculated in the first tutorial:' it should be M_test, not M_train. Also, after 'These idf weights can be represented by a vector as:' it should be idf_test, not idf_train.

    By the way, great series; can you give a simple way on how to implement classification?

  18. Very nice post. Congrats!!

    Showing your results, I have a question:

    I read on Wikipedia:
    The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

    When I read it, I understand that if a word appears in all documents it is less important than a word that only appears in one document:

    However, in the results, 'sun' or 'bright' are more important than 'sky'.

    I'm not sure I fully understand it.

  19. Terrific! I was familiar with tf-idf before but I found your scikits examples helpful as I’m trying to learn that package.

  20. Excellent post! I stumbled upon this by chance while looking for more info on CountVectorizer, but I'm glad I read through both of your articles (part 1 and part 2).

    Bookmarking your blog now.

  21. Does not seem to fit_transform() as you describe..
    Any idea why?
    >>> ts
    ('The sky is blue', 'The sun is bright')
    >>> v7 = CountVectorizer()
    >>> v7.fit_transform(ts)
    <2×2 sparse matrix of type '’
    with 4 stored elements in COOrdinate format>
    >>> print v7.vocabulary_
    {u'is': 0, u'the': 1}

    1. Actually, there are two small errors in the first Python sample.
      1. CountVectorizer should be instantiated like so:
      count_vectorizer = CountVectorizer(stop_words='english')
      This will make sure 'is', 'the' etc are removed.

      2. To print the vocabulary, you have to add an underscore at the end.
      print 'Vocabulary:', count_vectorizer.vocabulary_

      Excellent tutorial, just minor things. Hope it helps others.

        1. Thanks Ash. Although the article was quite self-explanatory, your comment made the whole difference.

  22. Thank you for taking the time to write this article. Found it very useful.

  23. Thanks for the great explanation.

    I have a question about calculation of the idf(t#).
    In the first case, you wrote idf(t1) = log(2/1), because we don’t have such term in our collection, thus, we add 1 to the denominator. Now, in case t2, you wrote log(2/3), why the denominator is equal to 3 and not to 4 (=1+2+1)? In case t3, you write: log(2/3), thus the denominator is equal 3 (=1+1+1). I see here kind of inconsistency. Could you, please, explain, how did calculate the denominator value.

    Thanks.

    1. You got it wrong: for the denominator you don't take the sum of the term's occurrences in each document, you just count all the documents that have at least one appearance of that term.

  24. It would be nice if you could provide a way to know how to use tf-idf in document classification. I see the example (Python code), but it would be best if there were an algorithm, because not everyone can understand this language.

    Thanks

  25. Nice. An explanation helps put this thing into perspective. Is tf-idf a good way to do clustering (e.g. using Jaccard analysis or variance against the mean set from a known corpus)?

    Keep writing:)

  26. Hi Christian,

    It makes me feel so excited and lucky to have read this article. The clarity of your understanding reflects in the clarity of the document. It makes me regain my confidence in the field of machine learning.

    Thanks a ton for the beautiful explanation.

    Would like more from you.

    Thanks,
    Neethu

  27. Thank you for the good wrap-up. You mention some papers that compare the L1 and L2 norm; I plan to study that a little more in depth. Do you still know their names?

  28. How can I calculate the tf-idf for my own text file, which is located somewhere on my computer?

  29. Brilliant article.

    By far the easiest and most sound explanation of tf-tdf I’ve read. I really liked how you explained the mathematics behind it.

  30. Hi, great post! I'm using the TfidfVectorizer module in scikit-learn to produce a tf-idf matrix with norm=l2. I've been examining the output of TfidfVectorizer after fit_transform on a corpus, which I call tfidf_matrix. I summed the rows, but they don't sum to 1. The code is vect = TfidfVectorizer(use_idf=True, sublinear_tf=True, norm='l2'), tfidf_matrix = vect.fit_transform(data). When I run tfidf_matrix.sum(axis=1) the vectors are larger than 1. Maybe I'm looking at the wrong matrix or I misunderstand how the normalization works. I hope someone can clarify this point! Thanks

  31. Can I ask when you calculated the IDF, for example, log(2/1), did you use log to base 10 (e) or some other value? I’m getting different calculations!

  32. Great tutorial, just started a new job in ML and this explains things very clearly, as it should be.

  33. Excellent post....! Thanks a lot for this article.

    But I need more information: you showed the practical example using Python, can you provide it in the Java language..

  34. I'm a bit confused why tf-idf gives negative numbers in this case? How do we interpret them? Correct me if I'm wrong, but when a vector has a positive value, it means that the magnitude of that component determines how important that word is in that document. If it is negative, I don't know how to interpret it. If I were to take the dot product of a vector with all positive components and one with a negative component, it would mean that some components may contribute negatively to the dot product even though the vector places very high importance on a particular word.

  35. Hi,
    Thank you so much for this detailed explanation of this topic, it is really great. Anyway, could you give me a hint about what could be the source of the error I keep seeing:

    freq_term_matrix = count_vectorizer.transform(test_set)
    AttributeError: 'matrix' object has no attribute 'transform'

    Am I using a wrong version of sklearn?

  36. Awesome, simple and effective explanation. Please post more topics with such awesome explanations. Looking forward to upcoming articles.
    Thanks

  37. Thank you Chris, you are the only one on the web who was clear about the diagonal matrix.

  38. Great tutorial for tf-idf. Excellent work. Please add cosine similarity also :)

  39. I get the tf-idf calculation process. But what does this matrix mean, and how we can use the tf-idf matrix to calculate similarity confuses me. Can you explain how we can use the tf-idf matrix? Thanks

  40. Best explanation.. very helpful. Could you please tell me how to plot vectors for text classification in SVM.. I am working on tweet classification. I am really confused, please help me.

  41. Hello, I'm sorry if I'm wrong, but I don't understand how ||vd4||2 = 1.
    The values of d4 = (0.0, 0.89, 0.44, 0.0), so the normalization would be = sqrt(square(0.89) + square(0.44)) = sqrt(0.193) = 0.44
    so what did I miss? Please help me to understand.

  42. Hi, it's a great blog!
    If I need to do the bi-gram case, how can I use sklearn to accomplish it?

  43. This is really great. I love the way you teach. Very, very good.

  44. I don't get the same result when I execute the same script.
    print("IDF:", tfidf.idf_) : IDF: [ 2.09861229 1. 1.40546511 1. ]

    My Python version is: 3.5
    My scikit-learn version is: 0.18.1

    What do I need to change? What might be the possible error?

    Thanks,

    1. It can be many things, since you’re using a different Python interpreter version and also a different Scikit-Learn version, you should expect differences in the results since they may have changed default parameters, algorithms, rounding, etc.

  45. Perfect introduction!
    No hocus pocus. Clear and simple, as technology should be.
    Very helpful.
    Thank you very much.
    Keep posting!
    Obrigado

  46. Why is |D| = 2 in the idf equation? Shouldn't it be 4, since |D| represents the number of documents under consideration, and we have 2 from the test set and 2 from the train set?

  47. hey, hi Christian
    your post is really helpful to me to understand tf-idf from the basics. I'm working on a classification project where I'm using the vector space model, which results in determining the categories where my test document should be present. It's a part of machine learning. It would be great if you could suggest me something related to that. I'm stuck at this point.
    Thank you

  48. "See this example to know how to use it for the text classification process." This link doesn't work anymore. Could you please provide the relevant link for the example.

    Thanks

  49. There is certainly a great deal to learn about this subject. I really like all the points you made.

  50. You've brought up very wonderful details, appreciate it for the post.

  51. I know this web site provides quality based articles or
    reviews and additional data, is there any other web page which presents these kinds of
    information in quality?

  52. In the first example, idf(t1): log(2/1) calculated by calculator = 0.3010. Why do they get 0.69.. what's wrong, please?
