Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I

This post is a continuation of the first part, where we started to learn the theory and practice of text feature extraction and vector space model representation. I really recommend you read the first part of the post series in order to follow this second post.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

Introduction

In the first post, we learned how to use the term frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high-frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term that is present in almost your entire corpus of documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is how important a word is to a document in a collection, and that's why tf-idf incorporates local and global parameters: it takes into consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve the problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more than another isn't 10 times more important than it, which is why tf-idf uses the logarithmic scale to do that.

But let's go back to our definition of the \mathrm{tf}(t,d), which is actually the term count of the term t in the document d. Using this simple term frequency could lead us to problems like keyword spamming, which is when we have a term repeated in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency \mathrm{tf}(t,d) of a document on the vector space is usually also normalized. Let's see how we normalize this vector.

Vector normalization

Suppose we are going to normalize the term-frequency vector \vec{v_{d_4}} that we have calculated in the first part of this tutorial. The document d4 from the first part of this tutorial had this textual representation:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

\vec{v_{d_4}} = (0,2,1,0)

Normalizing the vector is the same as calculating the unit vector of the vector, and unit vectors are denoted using the "hat" notation: \hat{v}. The definition of the unit vector \hat{v} of a vector \vec{v} is:

\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}

Where the \hat{v} is the unit vector, or the normalized vector, the \vec{v} is the vector going to be normalized, and the \|\vec{v}\|_p is the norm (magnitude, length) of the vector \vec{v} in the L^p space (don't worry, I'm going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector; it is a vector whose length is 1.

The normalization process (Source: http://processing.org/learning/pvector/)

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the L^p spaces, also known as Lebesgue spaces.

Lebesgue spaces

How long is this vector? (Source: http://processing.org/learning/pvector/)

Usually, the length of a vector \vec{u} = (u_1, u_2, u_3, \ldots, u_n) is calculated using the Euclidean norm – a norm is a function that assigns a strictly positive length or size to all vectors in a vector space – which is defined as:


\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}

But this isn't the only way to define length, and that's why you see (sometimes) a number p together with the norm notation, like in \|\vec{u}\|_p. That's because it could be generalized as:

\displaystyle \|\vec{u}\|_p = ( \left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p )^\frac{1}{p}

and simplified as:

\displaystyle \|\vec{u}\|_p = (\sum\limits_{i=1}^{n}\left|\vec{u}_i\right|^p)^\frac{1}{p}

So when you read about a L2-norm, you're reading about the Euclidean norm, a norm with p = 2, the most common norm used to measure the length of a vector, typically called "magnitude"; actually, when you have an unqualified length measure (without the p number), you have the L2-norm (Euclidean norm).

When you read about a L1-norm, you're reading about the norm with p = 1, defined as:

\displaystyle \|\vec{u}\|_1 = ( \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)

Which is nothing more than a simple sum of the components of the vector, also known as the Taxicab distance, or Manhattan distance.

Taxicab geometry versus Euclidean distance: In taxicab geometry all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length 6 \times \sqrt{2} \approx 8.48 and is the unique shortest path.
Source: Wikipedia :: Taxicab Geometry
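
As a quick sanity check, here is a minimal NumPy sketch that evaluates the generalized L^p norm above for p = 1 and p = 2, using the term-frequency vector of the document d4:

import numpy as np

v = np.array([0.0, 2.0, 1.0, 0.0])  # term-frequency vector of d4

def lp_norm(vec, p):
    # Generalized L^p norm: (sum_i |v_i|^p)^(1/p)
    return np.sum(np.abs(vec) ** p) ** (1.0 / p)

print(lp_norm(v, 1))         # L1 (Taxicab/Manhattan) norm: 3.0
print(lp_norm(v, 2))         # L2 (Euclidean) norm: sqrt(5) = 2.2360679...
print(np.linalg.norm(v, 2))  # NumPy's built-in norm gives the same L2 result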

Note that you can also use any norm to normalize the vector, but we're going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of scikits.learn. You can also find papers comparing the performance of the two approaches among other methods to normalize the document vector; actually, you can use any other method, but you have to be consistent: once you've used a norm, you have to use it for the whole process directly involving the norm (a unit vector that used a L1-norm isn't going to have the length 1 if you take its L2-norm later).

Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example, the process of using the L2-norm (we'll use the right terms now) to normalize our vector \vec{v_{d_4}} = (0,2,1,0) in order to get its unit vector \hat{v_{d_4}}. To do that, we'll simply plug it into the definition of the unit vector and evaluate it:

\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\  \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{||\vec{v_{d_4}}||_2} \\ \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\  \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)

And that is it! Our normalized vector \hat{v_{d_4}} now has a L2-norm \|\hat{v_{d_4}}\|_2 = 1.0.

Note that here we have normalized our term frequency document vector, but later we’re going to do that after the calculation of the tf-idf.
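
If you want to reproduce this L2 normalization numerically, a minimal NumPy sketch of the calculation above would be:

import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])

# Divide the vector by its L2-norm to get the unit vector
v_d4_hat = v_d4 / np.linalg.norm(v_d4, 2)

print(v_d4_hat)                     # [ 0.          0.89442719  0.4472136   0.        ]
print(np.linalg.norm(v_d4_hat, 2))  # 1.0, as expected for a unit vector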

The term frequency – inverse document frequency (tf-idf) weight

Now that you have an understanding of how vector normalization works in theory and practice, let's continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of the tutorial):

Train Document Set:
d1: The sky is blue.
d2: The sun is bright.

Test Document Set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can be defined then as D = \{ d_1, d_2, \ldots, d_n \} where n is the number of documents in your corpus, and in our case as D_{train} = \{d_1, d_2\} and D_{test} = \{d_3, d_4\}. The cardinality of our document space is defined by \left|{D_{train}}\right| = 2 and \left|{D_{test}}\right| = 2, since we have only 2 documents for training and testing, but they obviously don't need to have the same cardinality.

Let’s see now, how idf (inverse document frequency) is then defined:

\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}

Where \left|\{d : t \in d\}\right| is the number of documents where the term t appears; when the term-frequency function satisfies \mathrm{tf}(t,d) \neq 0, we're only adding 1 into the formula to avoid zero-division.

The formula for the tf-idf is then:

\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)

and this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).

Now let’s calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first tutorial:

M_{train} =  \begin{bmatrix}  0 & 1 & 1 & 1\\  0 & 2 & 1 & 0  \end{bmatrix}

Since we have 4 features, we have to calculate \mathrm{idf}(t_1), \mathrm{idf}(t_2), \mathrm{idf}(t_3), \mathrm{idf}(t_4):

\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718

\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0

These idf weights can be represented by a vector as:

\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)
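
A minimal NumPy sketch of this idf calculation (an illustration of the formula above, not the scikits.learn implementation) using the term-frequency matrix shown earlier:

import numpy as np

# Term-frequency matrix shown above (one row per document, one column per term)
M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]])

n_docs = M_train.shape[0]                 # |D| = 2
df = (M_train > 0).sum(axis=0)            # number of documents containing each term
idf = np.log(float(n_docs) / (1.0 + df))  # idf(t) = log(|D| / (1 + |{d : t in d}|))

print(idf)  # [ 0.69314718 -0.40546511 -0.40546511  0.        ]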

Now that we have our matrix with the term frequency (M_{train}) and the vector representing the idf for each feature of our matrix (\vec{idf_{train}}), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix M_{train} with the respective \vec{idf_{train}} vector dimension. To do that, we can create a square diagonal matrix M_{idf} with both the vertical and horizontal dimensions equal to the vector \vec{idf_{train}} dimension:

M_{idf} =   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix}

and then multiply it by the term frequency matrix, so the final result can then be defined as:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf}

Please note that the matrix multiplication isn't commutative: the result of A \times B will be different from the result of B \times A, and this is why the M_{idf} is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

\begin{bmatrix}   \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\   \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2)   \end{bmatrix}   \times   \begin{bmatrix}   \mathrm{idf}(t_1) & 0 & 0 & 0\\   0 & \mathrm{idf}(t_2) & 0 & 0\\   0 & 0 & \mathrm{idf}(t_3) & 0\\   0 & 0 & 0 & \mathrm{idf}(t_4)   \end{bmatrix}   \\ =   \begin{bmatrix}   \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\   \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4)   \end{bmatrix}

Now let's see a concrete example of this multiplication:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\   \begin{bmatrix}   0 & 1 & 1 & 1\\   0 & 2 & 1 & 0   \end{bmatrix}   \times   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix} \\   =   \begin{bmatrix}   0 & -0.40546511 & -0.40546511 & 0\\   0 & -0.81093022 & -0.40546511 & 0   \end{bmatrix}

And finally, we can apply our L2 normalization process to the M_{tf\mbox{-}idf} matrix. Please note that this normalization is "row-wise" because we're going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2} = \begin{bmatrix}   0 & -0.70710678 & -0.70710678 & 0\\   0 & -0.89442719 & -0.4472136 & 0   \end{bmatrix}

And that is our nicely normalized tf-idf weight matrix for our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you'll see that they all have a L2-norm of 1.
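
A minimal NumPy sketch tying the matrices above together (again just an illustration of the algebra, not the scikits.learn code): it builds the diagonal idf matrix, multiplies it by the term-frequency matrix and then normalizes each row with the L2-norm:

import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]], dtype=float)
idf = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])

# M_tfidf = M_train x M_idf, where M_idf is the diagonal matrix built from the idf vector
M_tfidf = M_train.dot(np.diag(idf))

# Row-wise L2 normalization: each document (row) becomes a unit vector
row_norms = np.sqrt((M_tfidf ** 2).sum(axis=1)).reshape(-1, 1)
M_tfidf_normalized = M_tfidf / row_norms

print(M_tfidf_normalized)
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]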

Python practice

Environment used: Python v.2.7.2, NumPy v.1.6.1, SciPy v.0.9.0, Sklearn (Scikits.learn) v.0.9

And now, the section you were waiting for! In this section I'll use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module.

The first step is to create our training and testing document set and computing the term frequency matrix:

from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]

Note that I've specified the norm as L2. This is optional (actually the default is the L2-norm), but I've added the parameter to make it explicit to you that it's going to use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let's transform the freq_term_matrix to the tf-idf weight matrix:

tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]

And that is it, the tf_idf_matrix is actually our previous M_{tf\mbox{-}idf} matrix. You can accomplish the same effect by using the Vectorizer class of Scikit.learn, which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to know how to use it for the text classification process.
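
For reference, in recent scikit-learn releases the combined class is called TfidfVectorizer; a minimal sketch of the equivalent pipeline follows (note that the default tf-idf formula changed in later releases, so the resulting values won't exactly match the 0.9 output shown above):

from sklearn.feature_extraction.text import TfidfVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

# Fit the vocabulary and the idf weights on the train set, then transform the test set
vectorizer = TfidfVectorizer(norm="l2")
vectorizer.fit(train_set)
tf_idf_matrix = vectorizer.transform(test_set)

print(tf_idf_matrix.todense())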

I really hope you liked the post. I tried to make it as simple as possible, even for people without the required mathematical background in linear algebra, etc. In the next Machine Learning post I'm expecting to show how you can use tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part II," in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Wikipedia :: tf-idf

The classic Vector Space Model

Sklearn text feature extraction code

Updates

13 Mar 2015 – Formatting, fixed image issues.
03 Oct 2011 – Added the information about the environment used for the Python examples.

103 thoughts to “Machine Learning :: Text feature extraction (tf-idf) – Part II”

  1. Wow!
    Perfect intro in tf-idf, thank you very much! Very interesting, I’ve wanted to study this field for a long time and you posts it is a real gift. It would be very interesting to read more about use-cases of the technique. And may be you’ll be interested, please, to shed some light on other methods of text corpus representation, if they exists?
    (Sorry for the bad English, I'm trying to improve it, but there is still a lot of work to do)

  2. Excellent work Christian! I am looking forward to reading your next posts on document classification, clustering and topics extraction with Naive Bayes, Stochastic Gradient Descent, Minibatch-k-means and Non Negative Matrix factorization

    Also, the documentation of scikit-learn on the text feature extraction part (am I the culprit?) is really poor. If you want to give a hand and improve the current state of affairs, don't hesitate to join the mailing list.

    1. Thanks a lot Olivier. I really want to help sklearn, I just need to get some more time to do that; you guys are doing great work, I'm really impressed by the amount of algorithms already implemented in the lib, keep up the good work!

  3. I like this tutorial for the level of new concepts I am learning here.
    That said, which version of scikits-learn are you using?
    The latest as installed by easy_install seems to have a different module hierarchy (i.e doesn’t find feature_extraction in sklearn). If you could mention the version you used, i will just try out with those examples.

    1. Hello Anand, I’m glad you liked it. I’ve added the information about the environment used just before the section “Python practice”, I’m using the scikits.learn 0.9 (released a few weeks ago).

  4. Where’s part 3? I’ve got to submit an assignment on Vector Space Modelling in 4 days. Any hope of putting it up over the weekend?

  5. Thanks again for this complete and explicit tutorial and I am waiting for the coming section.

  6. Thanks Christian! a very nice work on vector space with sklearn. I just have one question, suppose I have computed the ‘tf_idf_matrix’, and I would like to compute the pair-wise cosine similarity (between each rows). I was having problem with the sparse matrix format, can you please give an example on that? Also my matrix is pretty big, say 25k by 60k. Thanks a lot!

  7. Great post… I understand what tf-idf and how to implement it with a concrete example. But I caught 2 things that I’m not sure about:
    1- You called the 2-dimensional matrix M_train, but it has the tf values of the documents D3 and D4, so you should have called that matrix M_test instead of M_train, since D3 and D4 are our test documents.
    2- When you calculate the idf value for t2 (which is ‘sun’), it should be log(2/4): the number of documents is 2, D3 has the word ‘sun’ 1 time and D4 has it 2 times, which makes 3, and we also add 1 to the value to avoid the zero-division problem, which makes 4… Am I right, or am I missing something?
    Thank you.

    1. You are correct: these are excellent blog articles, but the author REALLY has a duty/responsibility to go back and correct errors, like this (and others, e.g. Part 1; …): missing training underscores; setting the stop_words parameter; also on my computer, the vocabulary indexing is different.

      As much as we appreciate the effort (kudos to the author!), it is also a significant disservice to those who struggle past those (uncorrected) errors in the original material.

      1. Re: my “You are correct” comment (above), I should have added:

        “… also note Passot’s comment (below) regarding the denominator:

        ‘… what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (‘sun’) is indeed 2+1 (2 documents have the term ‘sun’, +1 to avoid a potential zero division error).’ “

    2. Khalid,
      This is a response to a very old question. However, I still want to respond to communicate what I understand from the article.
      Your question 2: “When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4)”
      My understanding: The denominator in the log term should be (the number of documents in which the term appears + 1) and not the frequency of the term. The term “Sun” appears in 2 documents (1 time in D3 and 2 times in D4 — in total it appears 3 times in two documents; 3 is the frequency and 2 is the number of documents). Hence the denominator is 2 + 1 = 3.

  8. Excellent post!
    I have some question. From the last tf-idf weight matrix, how can we get the importance of term respectively(e.g. which is the most important term?). How can we use this matrix to classify documents

  9. Thank You So Much. You explained it in such a simple way. It was really useful. Once again thanks a lot.

  10. I have same doubt as Jack(last comment). From the last tf-idf weight matrix, how can we get the importance of term respectively(e.g. which is the most important term?). How can we use this matrix to classify documents.

  11. I have a question..
    After the tf-idf operation, we get a numpy array with values. Suppose we need to get the highest 50 values from the array. How can we do that?

    1. A high tf-idf value indicates that a particular term has high local strength and low global strength in that vector (or document); in that case you can assume that the term has high local importance and can’t be ignored. Compare this against a pure tf approach, where only terms repeated a large number of times are given more importance, which most of the time isn’t a proper modelling technique.

  12. Hey ,
    Thanks for the code.. it was indeed very helpful!

    1. For document clustering, after calculating the inverted term frequency, should I use any associativity coefficient like Jaccard’s coefficient and then apply a clustering algorithm like k-means, or should I directly apply k-means to the document vectors after calculating the inverted term frequency?

    2. How do you rate inverted term frequency for calculating document vectors for document clustering?

    Thanks a ton for the forthcoming reply!

  13. @Khalid: what you’re pointing out in 1- got me confused too for a minute (M_train vs M_test). I think you are mistaken on your second point, though, because what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (“sun”) is indeed 2+1 (2 documents have the term “sun”, +1 to avoid a potential zero division error).

    I’m looking forward to reading the third installment of this series! I’d especially like to learn more about feature selection. Is there an idiomatic way to get a sorted list of the terms with the highest tf-idf scores? How would you identify those terms overall? How would you get the terms which are most responsible for a high or low cosine similarity (row by row)?

    Thank you for the _great_ posts!

  14. Excellent article and a great introduction to td-idf normalization.

    You have a very clear and structured way of explaining these difficult concepts.

    Thanks!

      1. very good & infomative tutorial…. please upload more tutorials related to documents clustering process.

  15. Can you provide any reference for doing cosine similarity using tfidf so we have the matrix of tf-idf how can we use that to calculate cosine. Thanks for fantastic article.

  16. Please correct me if I’m wrong:
    the matrix after “frequency we have calculated in the first tutorial:” should be Mtest, not Mtrain. Also, after “These idf weights can be represented by a vector as:” it should be idf_test, not idf_train.

    By the way, great series. Could you give a simple approach on how to implement classification?

  17. Excellent it really helped me get through the concept of VSM and tf-idf. Thanks Christian

  18. Very nice post. Congratulations!!

    Showing your results, I have a question:

    I read in the wikipedia:
    The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

    When I read it, I understand that if a word apperars in all documents is less important that a word that only appears in one document:

    However, in the results, the words “sun” or “bright” are more important than “sky”.

    I’m not sure of understand it completly.

  19. Hello,

    The explanation is awesome. I haven’t seen a better one yet. I have trouble reproducing the results. It might be because of some update of sklearn.
    Would it be possible for you to update the code?

    It seem that the formula for computing the tf-idf vector has changed a little bit. Is a typo or another formula. Below is the link to the source code.

    https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/text.py#L954

    Many thanks

  20. Terrific! I was familiar with tf-idf before but I found your scikits examples helpful as I’m trying to learn that package.

  21. Excellent post! I stumbled upon this by chance while looking for more information on CountVectorizer, but I’m glad I read through both of your articles (part 1 and part 2).

    Bookmarking your blog now

  22. Does not seem to fit_transform() as you describe..
    Any idea why ?
    >>> ts
    (‘The sky is blue’, ‘The sun is bright’)
    >>> v7 = CountVectorizer()
    >>> v7.fit_transform(ts)
    <2×2 sparse matrix of type '’
    with 4 stored elements in COOrdinate format>
    >>> print v7.vocabulary_
    {u’is’: 0, u’the’: 1}

    1. Actually, there are two small errors in the first Python sample.
      1. CountVectorizer should be instantiated like so:
      count_vectorizer = CountVectorizer(stop_words='english')
      This will ensure that words like ‘is’, ‘the’ etc. are removed.

      2. To print the vocabulary, you have to add an underscore at the end.
      print "Vocabulary:", count_vectorizer.vocabulary_

      Excellent tutorial, just small things. Hope it helps others.

        Thanks ash. Although the article was rather self-explanatory, your comment made the entire difference.

  23. Thanks for the great explanation.

    I have a question about the calculation of idf(t#).
    In the first case, you wrote idf(t1) = log(2/1), because we don’t have such term in our collection, thus, we add 1 to the denominator. Now, in case t2, you wrote log(2/3), why the denominator is equal to 3 and not to 4 (=1+2+1)? In case t3, you write: log(2/3), thus the denominator is equal 3 (=1+1+1). I see here kind of inconsistency. Could you, please, explain, how did calculate the denominator value.

    Thanks.

    1. You got it wrong: in the denominator you don’t put the sum of the occurrences of the term in each document, you just count all the documents that have at least one appearance of the term.

  24. It would be great if you could provide a way to know how to use tf-idf for document classification. I saw the example (Python code), but an algorithm would be best, since not everyone can understand this language.

    Thanks

  25. Nice. An explanation helps put things into perspective. Is tf-idf a good way to do clustering (e.g. use Jaccard analysis or variance against the average set from a known corpus)?

    Keep writing :)

  26. Hi Christian,

    I feel very excited and lucky to have read this article. The clarity of your understanding is reflected in the clarity of the document. It made me regain my confidence in the field of machine learning.

    Thanks a ton for the beautiful explanation.

    Looking for more from you.

    Thanks,

  27. Thank you for the good wrap up. You mention a number of papers which compare L1 and L2 norm, I plan to study that a bit more in depth. You still know their names?

  28. how can i calculate tf idf for my own text file which is located some where in my pc?

  29. Brilliant article.

    By far the easiest and most sound explanation of tf-tdf I’ve read. I really liked how you explained the mathematics behind it.

  30. Hi, great post! I am using the TfidfVectorizer module in scikit-learn to produce the tf-idf matrix with norm=l2. I have been examining the output of TfidfVectorizer after fit_transform on the corpus, which I called tfidf_matrix. I summed the rows but they do not sum to 1. The code is vect = TfidfVectorizer(use_idf=True, sublinear_tf=True, norm=’l2′); tfidf_matrix = vect.fit_transform(data). When I run tfidf_matrix.sum(axis=1) the values are larger than 1. Perhaps I am looking at the wrong matrix or I misunderstand how the normalisation works. I hope someone can clarify this point! Thanks

  31. Can I ask, when you calculated the IDF, for example log(2/1), did you use log to base 10, base e, or some other value? I’m getting different calculations!

  32. Great tutorial, just started a new job in ML and this explains things as clearly as it should be.

  33. Excellent post….!!! Thanks a lot for this article.

    But I need more information. You show the practical usage with Python; could you provide it in the JAVA language..

  34. I’m a little confused about why tf-idf gives negative numbers in this case? How do we interpret them? Correct me if I’m wrong, but when a vector has positive values, the magnitude of a component determines how important that word is in the document. If it is negative, I don’t know how to interpret it. If I were to take the dot product of a vector with all positive components and one with a negative component, it would mean that some components may contribute negatively to the dot product even though the vector places very high importance on a particular word.

  35. Hi,
    Thanks so much for this detailed explanation on this topic, really great. Anyway, could you give me a hint what could be the source of the error that I keep on seeing:

    freq_term_matrix = count_vectorizer.transform(TEST_SET)
    AttributeError: ‘matrix’ object has no attribute ‘transform’

    Am I using the wrong version of sklearn?

  36. Awesome, simple and effective explanation. Please post more topics with such awesome explanations. Looking forward to the upcoming articles.
    Thanks

  37. Thank you Chris, you are the only one on the web who was clear about the diagonal matrix.

  38. I understood the tf-idf calculation process. But what that matrix means and how we can use the tf-idf matrix to calculate the similarity confuses me. Can you explain how we can use the tf-idf matrix? Thanks

  39. best explanation.. Very helpful. Can you please tell me how to plot vectors in text classification in svm.. I am working on tweets classification. I am confused please help me.

  40. I learned so many things. Thanks Christian. Looking forward for your next tutorial.

  41. Hi, I’m sorry if i have mistaken but i could not understand how is ||Vd4||2 = 1.
    The values of d4 = (0.0, 0.89, 0.44, 0.0), so the normalization would be = sqrt(square(0.89) + square(0.44)) = sqrt(0.193) = 0.44
    So did I miss something? Please help me understand.

  42. Hi, it is a great blog!
    If I need to do bi-grams, how can I use sklearn to accomplish it?

  43. I am not getting same result, when i am executing the same script.
    print (“IDF:”, tfidf.idf_) : IDF: [ 2.09861229 1. 1.40546511 1. ]

    My Python version is: 3.5
    The scikit-learn version is: 0.18.1

    what do I need to change? what might be the possible error?

    thanks,

    1. It can be many things, since you’re using a different Python interpreter version and also a different Scikit-Learn version, you should expect differences in the results since they may have changed default parameters, algorithms, rounding, etc.

  44. Perfect introduction!
    No hocus pocus. Clear and simple, as technology should be.
    Very helpful.
    Thank you very much.
    Keep posting!
    Obrigado

  45. Why is |D| = 2, in the idf equation. Shouldn’t it be 4 since |D| denotes the number of documents considered, and we have 2 from test, 2 from train.

  46. hey , hii Christian
    your post is really helpful to me to understand tfd-idf from the basics. I’m working on a project of classification where I’m using vector space model which results in determining the categories where my test document should be present. its a part of machine learning . it would be great if you suggest me something related to that. I’m stuck at this point.
    Thanks

  47. “See this example to know how to use it for the text classification process.” The “this” link doesn’t work anymore. Could you please provide the correct link for the example?

    Thanks

  48. There is certainly a great deal to learn about this subject. I really like all the points you made.

  49. You have brought up very wonderful details, appreciate it for the post.

  50. I know this site provides quality based articles or reviews and other data; is there any other web page which presents this kind of information in quality?

  51. In the first example, idf(t1): the log(2/1) = 0.3010 on the calculator. Why did they obtain 0.69..? Please, what is wrong?
