Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I

This post is a continuation of the first part, where we started to learn the theory and practice of text feature extraction and vector space model representation. I really recommend you read the first part of the series before following this second one.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

Introduction

In the first post, we learned how to use term frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is present in almost the entire corpus of your documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is a measure of how important a word is to a document in a collection, and that’s why tf-idf incorporates local and global parameters: it takes into consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve the problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more often than another isn’t 10 times more important than it, which is why tf-idf uses the logarithmic scale to do that.

But let’s go back to our definition of \mathrm{tf}(t,d), which is actually the term count of the term t in the document d. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency \mathrm{tf}(t,d) of a document in the vector space is usually also normalized. Let’s see how we normalize this vector.

Vector normalization

Suppose we are going to normalize the term-frequency vector \vec{v_{d_4}} that we calculated in the first part of this tutorial. The document d4 from the first part of this tutorial has this textual representation:

d4: We can see the shining sun, the bright sun.

and the vector space representation using the non-normalized term frequency of that document is:

\vec{v_{d_4}} = (0,2,1,0)

Normalizing the vector is the same as calculating the unit vector of the vector, and unit vectors are denoted using the “hat” notation: \hat{v}. The definition of the unit vector \hat{v} of a vector \vec{v} is:

\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}

Where \hat{v} is the unit vector, or the normalized vector, \vec{v} is the vector going to be normalized, and \|\vec{v}\|_p is the norm (magnitude, length) of the vector \vec{v} in the L^p space (don’t worry, I’m going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector; it is a vector whose length is 1.

The normalization process (Source: http://processing.org/learning/pvector/)

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the L^p spaces, also called Lebesgue spaces.

Lebesgue spaces

How long is this vector? (Source: http://processing.org/learning/pvector/)

Usually, the length of a vector \vec{u} = (u_1, u_2, u_3, \ldots, u_n) is calculated using the Euclidean norm (a norm is a function that assigns a strictly positive length or size to all vectors in a vector space), which is defined by:

(Source: http://processing.org/learning/pvector/)

\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}

But this isn’t the only way to define length, and that’s why you sometimes see a number p together with the norm notation, like in \|\vec{u}\|_p. That’s because it can be generalized as:

\displaystyle \|\vec{u}\|_p = (\left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p)^\frac{1}{p}

and simplified as:

\displaystyle \|\vec{u}\|_p = (\sum\limits_{i=1}^{n} \left|\vec{u}_i\right|^p)^\frac{1}{p}

So when you read about an L2-norm, you’re reading about the Euclidean norm, a norm with p = 2, the most common norm used to measure the length of a vector, usually called “magnitude”; actually, when you have an unqualified length measure (without the p number), you have the L2-norm (Euclidean norm).

When you read about an L1-norm, you’re reading about the norm with p = 1, defined as:

\displaystyle \|\vec{u}\|_1 = ( \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)

Which is nothing more than a simple sum of the components of the vector, also known as the taxicab distance, also called Manhattan distance.

Taxicab geometry versus Euclidean distance: in taxicab geometry, all three depicted lines have the same length (12) for the same path. In Euclidean geometry, the green line has length 6 \times \sqrt{2} \approx 8.48, and is the unique shortest path.
Source: Wikipedia :: Taxicab Geometry

Note that you can also use any norm to normalize the vector, but we’re going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of scikits.learn. You can also find papers comparing the performance of the two approaches among other methods to normalize the document vector; actually you can use any other method, but you have to be consistent: once you’ve used a norm, you have to use it for the whole process directly involving the norm (a unit vector that used an L1-norm isn’t going to have length 1 if you take its L2-norm later).
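
As a quick, illustrative aside (not part of the original post), the L1 and L2 norms discussed above are easy to compute with NumPy; the ord parameter of numpy.linalg.norm selects the p of the norm:

import numpy as np

u = np.array([0.0, 2.0, 1.0, 0.0])

# L2 (Euclidean) norm: sqrt(0^2 + 2^2 + 1^2 + 0^2) = sqrt(5)
print(np.linalg.norm(u, ord=2))   # 2.2360679...

# L1 (taxicab / Manhattan) norm: |0| + |2| + |1| + |0| = 3
print(np.linalg.norm(u, ord=1))   # 3.0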

Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example, the process of using the L2-norm (we’ll use the right terms now) to normalize our vector \vec{v_{d_4}} = (0,2,1,0) in order to get its unit vector \hat{v_{d_4}}. To do that, we’ll simply plug it into the definition of the unit vector to evaluate it:

\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\  \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{||\vec{v_{d_4}}||_2} \\ \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\  \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)

And that is it! Our normalized vector \hat{v_{d_4}} now has an L2-norm of \|\hat{v_{d_4}}\|_2 = 1.0.
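
If you want to double-check the arithmetic, here is a minimal NumPy sketch of the same normalization (an addition of mine, not from the original post):

import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])
v_d4_hat = v_d4 / np.linalg.norm(v_d4, ord=2)   # divide the vector by its L2-norm

print(v_d4_hat)                    # [ 0.          0.89442719  0.4472136   0.        ]
print(np.linalg.norm(v_d4_hat))    # 1.0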

Note that here we normalized our term-frequency document vector, but later we’re going to do that after the calculation of the tf-idf.

The term frequency – inverse document frequency (tf-idf) weight

Now that you have understood how vector normalization works in theory and practice, let’s continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of the tutorial):

Train Document Set:
d1: The sky is blue.
d2: The sun is bright.
Test Document Set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can then be defined as D = \{ d_1, d_2, \ldots, d_n \} where n is the number of documents in your corpus, and in our case as D_{train} = \{d_1, d_2\} and D_{test} = \{d_3, d_4\}. The cardinality of our document spaces is defined as \left|{D_{train}}\right| = 2 and \left|{D_{test}}\right| = 2, since we have only 2 documents for training and 2 for testing, but they obviously don’t need to have the same cardinality.

Let’s see now, how idf (inverse document frequency) is then defined:

\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}

where \left|\{d : t \in d\}\right| is the number of documents where the term t appears; when the term-frequency function satisfies \mathrm{tf}(t,d) \neq 0, we only add 1 into the formula to avoid zero-division.
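
To make the definition concrete, here is a tiny Python sketch of the idf formula exactly as written above (natural logarithm, with the +1 in the denominator); the helper function and the document list are my own illustrative assumptions, not part of any library:

import math

def idf(term, documents):
    # idf(t) = log(|D| / (1 + |{d : t in d}|)), following the definition above
    df = sum(1 for doc in documents if term in doc)   # number of documents containing the term
    return math.log(len(documents) / float(1 + df))

# d3 and d4, the documents the transformer is fitted on later in this tutorial
docs = ["the sun in the sky is bright".split(),
        "we can see the shining sun the bright sun".split()]

print(idf("blue", docs))   # log(2/1) =  0.693...
print(idf("sun", docs))    # log(2/3) = -0.405...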

The formula for the tf-idf is then:

\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)

And this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).

Now let’s calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first tutorial:

M_{train} =  \begin{bmatrix}  0 & 1 & 1 & 1\\  0 & 2 & 1 & 0  \end{bmatrix}

Since we have 4 features, we have to calculate \mathrm{idf}(t_1), \mathrm{idf}(t_2), \mathrm{idf}(t_3) and \mathrm{idf}(t_4):

\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718

\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0

These idf weights can be represented by a vector as:

\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)
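
As a cross-check (my own sketch, not part of the original post), the same idf vector can be computed directly from the term-frequency matrix with NumPy, where the document frequency of a feature is simply the number of rows in which it is non-zero:

import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]])

n_docs = M_train.shape[0]          # |D| = 2
df = (M_train > 0).sum(axis=0)     # documents containing each term: [0, 2, 2, 1]
idf = np.log(n_docs / (1.0 + df))

print(idf)   # [ 0.69314718 -0.40546511 -0.40546511  0.        ]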

Now that we have our matrix with the term frequency (M_{train}) and the vector representing the idf for each feature of our matrix (\vec{idf_{train}}), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix M_{train} with the respective \vec{idf_{train}} vector dimension. To do that, we can create a square diagonal matrix called M_{idf} with both the vertical and horizontal dimensions equal to the vector \vec{idf_{train}} dimension:

M_{idf} =   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix}

and then multiply it to the term frequency matrix, so the final result can be defined then as:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf}

Please note that matrix multiplication is not commutative; the result of A \times B will be different from the result of B \times A, and this is why the M_{idf} is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

\begin{bmatrix}   \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\   \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2)   \end{bmatrix}   \times   \begin{bmatrix}   \mathrm{idf}(t_1) & 0 & 0 & 0\\   0 & \mathrm{idf}(t_2) & 0 & 0\\   0 & 0 & \mathrm{idf}(t_3) & 0\\   0 & 0 & 0 & \mathrm{idf}(t_4)   \end{bmatrix}   \\ =   \begin{bmatrix}   \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\   \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4)   \end{bmatrix}

Let’s see now a concrete example of this multiplication:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\   \begin{bmatrix}   0 & 1 & 1 & 1\\   0 & 2 & 1 & 0   \end{bmatrix}   \times   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix} \\   =   \begin{bmatrix}   0 & -0.40546511 & -0.40546511 & 0\\   0 & -0.81093022 & -0.40546511 & 0   \end{bmatrix}
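
The same multiplication is easy to reproduce with NumPy; a minimal sketch of mine (np.diag builds the square diagonal matrix M_idf from the idf vector):

import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]])
idf = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])

M_idf = np.diag(idf)                # square diagonal matrix with the idf weights
M_tfidf = np.dot(M_train, M_idf)    # each column of M_train scaled by its idf value

print(M_tfidf)
# [[ 0.         -0.40546511 -0.40546511  0.        ]
#  [ 0.         -0.81093022 -0.40546511  0.        ]]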

And finally, we can apply our L2 normalization process to the M_{tf\mbox{-}idf} matrix. Please note that this normalization is “row-wise” because we’re going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2} = \begin{bmatrix}   0 & -0.70710678 & -0.70710678 & 0\\   0 & -0.89442719 & -0.4472136 & 0   \end{bmatrix}

And there we have our normalized tf-idf weights of our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you’ll see that they all have an L2-norm of 1.
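
The row-wise normalization can be sketched like this (again an illustrative addition, dividing each row by its own L2-norm; sklearn.preprocessing.normalize(M_tfidf, norm="l2") should give the same result):

import numpy as np

M_tfidf = np.array([[0.0, -0.40546511, -0.40546511, 0.0],
                    [0.0, -0.81093022, -0.40546511, 0.0]])

row_norms = np.linalg.norm(M_tfidf, axis=1, keepdims=True)   # L2-norm of each row
print(M_tfidf / row_norms)
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]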

Python practice

Environment Used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9

Now the section you were waiting for! In this section I’ll use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module.

The first step is to create our training and testing document sets and compute the term frequency matrix:

from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]

Note that I’ve specified the norm as L2; this is optional (actually the default is the L2-norm), but I’ve added the parameter to make it explicit that it’s going to use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let’s transform the freq_term_matrix to the tf-idf weight matrix:

tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]

And that is it, the tf_idf_matrix is actually our previous M_{tf\mbox{-}idf} matrix. You can achieve the same effect by using the Vectorizer class of Scikit.learn, which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to learn how to use it for the text classification process.
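
For reference, in recent scikit-learn versions the combined class is called TfidfVectorizer instead of the old Vectorizer, and the default idf formula has been smoothed and shifted since the 0.9 release, so the exact numbers will differ from the ones above. A rough sketch of the modern equivalent (note that here both the vocabulary and the idf are learned from whatever is passed to fit()):

from sklearn.feature_extraction.text import TfidfVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

# TfidfVectorizer = CountVectorizer + TfidfTransformer in a single estimator
vectorizer = TfidfVectorizer(norm="l2")
vectorizer.fit(train_set)                       # vocabulary and idf both learned here
tfidf_matrix = vectorizer.transform(test_set)   # L2-normalized tf-idf rows for the test set

print(vectorizer.vocabulary_)
print(tfidf_matrix.todense())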

I really hope you liked the post, I tried to make it as simple as possible even for people without the required mathematical background of linear algebra, etc. In the next Machine Learning post I’m expecting to show how you can use tf-idf to calculate the cosine similarity.

If you liked it, feel free to leave comments, suggestions, corrections, etc.

Cite this article as: Christian S. Perone, “Machine Learning :: Text feature extraction (tf-idf) – Part II”, in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Wikipedia :: tf-idf

The classic Vector Space Model

Sklearn text feature extraction code

Updates

13 Mar 2015 - Formatting, fixed image issues.
03 Oct 2011 - Added the info about the environment used for the Python examples

103 thoughts to “Machine Learning :: Text feature extraction (tf-idf) – Part II”

  1. Wow!
    A perfect intro to tf-idf, thank you very much! Very interesting, I’ve wanted to study this field for a long time and your posts are a real gift. It would be very interesting to read more about the use cases of the technique. And perhaps you could, please, shed some light on other methods of text corpus representation, if they exist?
    (sorry for bad English, I’m working to improve it, but there is still a lot of job to do)

  2. Excellent work Christian! I am looking forward to reading your next posts on document classification, clustering and topics extraction with Naive Bayes, Stochastic Gradient Descent, Minibatch-k-Means and Non Negative Matrix factorization

    Also, the documentation of scikit-learn is really poor on the text feature extraction part (I am the main culprit…). Don’t hesitate to join the mailing list if you want to give a hand and improve upon the current situation.

    1. Great thanks Olivier. I really want to help sklearn, I just have to get some more time to do that, you guys have done a great work, I’m really impressed by the amount of algorithms already implemented in the lib, keep the good work !

  3. I like this tutorial better for the level of new concepts i am learning here.
    That said, which version of scikits-learn are you using?.
    The latest as installed by easy_install seems to have a different module hierarchy (i.e doesn’t find feature_extraction in sklearn). If you could mention the version you used, i will just try out with those examples.

    1. Hello Anand, I’m glad you liked it. I’ve added the information about the environment used just before the “Python practice” section; I’m using scikits.learn 0.9 (released a few weeks ago).

  4. Where’s part 3? I’ve got to submit an assignment on Vector Space Modelling in 4 days. Any hope of putting it up over the weekend?

  5. Thanks again for this complete and explicit tutorial and I am waiting for the coming section.

  6. Thanks Christian! A very nice work on vector space with sklearn. I just have one question, suppose I have computed the ‘tf_idf_matrix’, and I would like to compute the pair-wise cosine similarity (between each rows). I was having problem with the sparse matrix format, can you please give an example on that? Also my matrix is pretty big, say 25k by 60k. Thanks a lot!

  7. Great post... I understood what tf-idf is and how to implement it with a concrete example. But I found 2 things that I’m not sure about:
    1- You called the 2 dimensional matrix M_train, but it has the tf values of the D3 and D4 documents, so you should’ve called that matrix M_test instead of M_train. Because D3 and D4 are our test documents.
    2- When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4). Because number of the documents is 2. D3 has the word ‘sun’ 1 time, D4 has it 2 times. Which makes it 3 but we also add 1 to that value to get rid of divided by 0 problem. And this makes it 4… Am I right or am I missing something?
    Thanks.

    1. You are correct: these are excellent blog articles, but the author really has a responsibility/duty to go back and correct errors like this one (and others, e.g., Part 1; ...): the missing trailing underscore; setting the stop_words parameter; also, on my machine, the vocabulary indexes are different.

      As much as we appreciate the effort (kudos to the author!), it is also a significant disservice to those who struggle past the (uncorrected) errors in the original material.

      1. re: my ‘you are correct comment’ (above), I should have added:

        “… noting also Frédérique Passot’s comment (below) regarding the denominator:

        ‘… what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (‘sun’) is indeed 2+1 (2 documents have the term ‘sun’, +1 to avoid a potential zero division error).’ “

    2. Khalid,
      This is a response to a very old question. However, I still want to respond to communicate what I understand from the article.
      Your question 2: “When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4)”
      My understanding: the denominator in the count should be (the number of documents in which the term appears + 1), not the term frequency. The number of documents in which the term ‘sun’ appears is 2 (1 time in d3 and 2 times in d4; it appears 3 times in total across both documents, where 3 is the frequency and 2 is the document count). Therefore the denominator is 2 + 1 = 3.

  8. excellent post!
    I have some question. From the last tf-idf weight matrix, how can we get the importance of term respectively(e.g. which is the most important term?). How can we use this matrix to classify documents

  9. Thank you very much. You explained it in such a simple way. It was really useful. Thanks a lot again.

  10. I have same doubt as Jack(last comment). From the last tf-idf weight matrix, how can we get the importance of term respectively(e.g. which is the most important term?). How can we use this matrix to classify documents.

  11. I have a question..
    After the tf-idf operation, we get a numpy array with values. Suppose we need to get the top 50 values from the array. How can we do that?

    1. A high value of f(idf) denotes that the particular vector (or document) has high local strength and low global strength, in which case you can assume that the terms in it have high significance locally and can’t be ignored. Compare this against function(tf), where only the terms repeated a high number of times are given more importance, which most of the time is not a proper modelling technique.

  12. Hey,
    Thanx fr d code..was very helpful indeed !

    1.For document clustering,after calculating inverted term frequency, shud i use any associativity coefficient like Jaccards coefficient and then apply the clustering algo like k-means or shud i apply d k-means directly to the document vectors after calculating inverted term frequency ?

    2. How do u rate inverted term frequency for calcuating document vectors for document clustering ?

    Thanks a ton for the forthcoming reply!

  13. @Khalid: What you pointed out in 1- confused me for a minute too (M_train vs M_test). I think you misread your second point, though, because what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (‘sun’) is indeed 2 + 1 (2 documents have the term ‘sun’, +1 to avoid a potential zero division error).

    I’d love to read the third installment of this series too! I’d be particularly interested in learning more about feature selection. Is there an idiomatic way to get a sorted list of the terms with the highest tf.idf scores? How would you identify those terms overall? How would you get the terms which are the most responsible for a high or low cosine similarity (row by row)?

    Thanks for the _wonderful_ post!

  14. Excellent article and a great introduction to tf-idf normalization.

    You have a very clear and structured way of explaining these difficult concepts.

    Thank you!

      1. Very good and informative tutorial.... please upload more tutorials related to the document clustering process.

  15. Could you give any references for doing cosine similarity using tfidf? So we have the tfidf matrix, how can we use it to compute the cosine? Thanks for the amazing article.

  16. Please correct me if I’m wrong
    After the formula following “with the term frequency we have calculated in the first tutorial:” it should be M_test, not M_train. Also after “These idf weights can be represented by a vector as:” it should be idf_test, not idf_train.

    Btw great series, can you give an simple approach for how to implement classification?

  17. Very good post. Congrats!!

    Showing your results, I have a question:

    I read in the wikipedia:
    The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

    When I read that, I understood that if a word appears in all documents it is less important than a word that appears in only one document:

    However, in the results, the words “sun” or “bright” are more important than “sky”.

    I’m not sure I fully understand it.

  18. Hello,

    The explanation is awesome. I haven’t seen a better one yet. I have trouble reproducing the results. It might be because of some update of sklearn.
    Would it be possible for you to update the code?

    It seems that the formula for computing the tf-idf vector has changed a little bit. Is it a typo or another formula? Below is the link to the source code.

    https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/text.py#L954

    Many thanks

  19. Terrific! I was familiar with tf-idf before but I found your scikits examples helpful as I’m trying to learn that package.

  20. Excellent post! Stumbled on this by chance looking for more information on CountVectorizer, but I’m glad I read through both of your posts (part 1 and part 2).

    Bookmarking your blog now.

  21. fit_transform() doesn’t seem to work as you describe..
    Any idea why?
    >>> ts
    (‘The sky is blue’, ‘The sun is bright’)
    >>> v7 = CountVectorizer()
    >>> v7.fit_transform(ts)
    <2×2 sparse matrix of type '’
    with 4 stored elements in COOrdinate format>
    >>> print v7.vocabulary_
    {u'is': 0, u'the': 1}

    1. Actually, there are two small errors in the first Python sample.
      1. CountVectorizer should be instantiated like so:
      count_vectorizer = CountVectorizer(stop_words='english')
      This will make sure the ‘is’, ‘the’ etc are removed.

      2. To print the vocabulary, you have to add an underscore at the end.
      print "Vocabulary:", count_vectorizer.vocabulary_

      Excellent tutorial, just small things. hoep it helps others.

        1. Thanks Ash. Although the article is fairly self-explanatory, your comment made the whole difference.

  22. Thanks for the great explanation.

    I have a question about calculation of the idf(t#).
    In the first case, you wrote idf(t1) = log(2/1) because we don’t have that term in our collection, therefore we add 1 to the denominator. Now, in the case of t2, you wrote log(2/3), so the denominator equals 3 and not 4 (= 1 + 2 + 1)? In the case of t3, you wrote log(2/3), thus the denominator equals 3 (= 1 + 1 + 1). I see a kind of inconsistency here. Could you please explain how you computed the denominator value?

    Thanks.

    1. You got it wrong: in the denominator you don’t put the sum of the term’s occurrences in each document, you just count all the documents that have at least one appearance of the term.

  23. It would be good if you could provide a way to learn how to use tf-idf in document classification. I see that example (Python code), but an algorithm description would be best, because not all people can understand this language.

    Thanks

  24. Nice. An explanation helps put things into perspective. Is tf-idf a good way to do clustering (e.g. use Jaccard analysis or variance against the average set from a known corpus)?

    Keep writing:)

  25. Hi Christian,

    It makes me very excited and lucky to have read this article. The clarity of your understanding reflects in the clarity of the document. It makes me regain my confidence in the field of machine learning.

    Thanks a lot for the beautiful explanation.

    Would like to read more from you.

    Thanks,
    Neethu

  26. Thank you for the good wrap-up. You mention some papers that compare the L1 and L2 norms, which I plan to study a bit more in depth. Do you know their names?

  27. How can I calculate tf-idf for my own text files, which are located somewhere on my computer?

  28. Brilliant article.

    By far the easiest and most sound explanation of tf-idf I’ve read. I really liked how you explained the mathematics behind it.

  29. Hi, great post! I’m using the TfidVectorizer module in scikit learn to produce the tf-idf matrix with norm=l2. I’ve been examining the output of the TfidfVectorizer after fit_transform of the corpora which I called tfidf_matrix. I’ve summed the rows but they do not sum to 1. The code is vect = TfidfVectorizer(use_idf=True, sublunar_tf=True, norm=”l2). tfidf_matrix = vect.fit_transform(data). When I run tfidf_matrix.sum(axis=1) the vectors are larger than 1. Perhaps I’m looking at the wrong matrix or I misunderstand how normalisation works. I hope someone can clarify this point! Thanks

  30. Can I ask when you calculated the IDF, for example, log(2/1), did you use log to base 10 (e) or some other value? I’m getting different calculations!

  31. Great tutorial, I just started a new job in ML and this explains things very clearly, as it should be.

  32. Excellent post....!!! Thanks a lot for this article.

    But I need more information, As you show the practical with python, Can you provide it with JAVA language..

  33. I am a bit confused as to why tf-idf gives negative numbers in this case? How do we interpret them? Correct me if I am wrong, but when the vector has a positive value, it means that the magnitude of that component determines how important that word is in that document. If it is negative, I don’t know how to interpret it. If I were to take the dot product of a vector with all positive components and one with negative components, it would mean that some components may contribute negatively to the dot product even though one of the vectors has very high importance for a particular word.

  34. Hi,
    Thank you so much for this detailed explanation on this topic, really great. Anyway, could you give me a hint what could be the source of my error that I keep on seeing:

    freq_term_matrix = count_vectorizer.transform(test_set)
    AttributeError: ‘matrix’ object has no attribute ‘transform’

    Am I using a wrong version of sklearn?

  35. Awesome, simple and effective explanation. Please post more topics with such awesome explanations. Looking forward to upcoming articles.
    谢谢

  36. Thank you Chris, you are the only one on the web who was clear about the diagonal matrix.

  37. Great tutorial for Tf-Idf. Excellent work . Please add for cosine similarity also:)

  38. I understand the tf-idf calculation process. But what this matrix means and how we can use the tf-idf matrix to calculate similarity confuses me. Can you explain how we can use the tf-idf matrix? Thanks.

  39. The best explanation.. very helpful. Can you tell me how to plot vectors for text classification in SVM? I’m working on tweet classification. I’m confused, please help me.

  40. Hello, I’m sorry if I’m wrong, but I don’t understand how ||vd4||2 = 1.
    The value of d4 = (0.0, 0.89, 0.44, 0.0) so the normalization will be = sqrt( square(.89)+square(.44))=sqrt(.193) = .44
    so what did i missed ? please help me to understand .

  41. Hi, it is a great blog!
    If I need to do bi-gram cases, how can I use sklearn to finish it?

  42. I am not getting same result, when i am executing the same script.
    print (“IDF:”, tfidf.idf_) : IDF: [ 2.09861229 1. 1.40546511 1. ]

    My python version is: 3.5
    Scikit Learn version is: o.18.1

    What do I need to change? What could be the possible error?

    Thanks,

    1. It can be a lot of things; since you are using a different Python interpreter version and also a different Scikit-Learn version, you should expect differences in the results, since they may have changed default parameters, algorithms, rounding, etc.

  43. Perfect introduction!
    No hocus pocus. Clear and simple, as technology should be.
    Very helpful
    Thank you very much.
    Keep posting!
    Obrigado

  44. Why is |D| = 2, in the idf equation. Shouldn’t it be 4 since |D| denotes the number of documents considered, and we have 2 from test, 2 from train.

  45. hey , hii Christian
    Your article really helped me understand tf-idf from the basics. I’m working on a classification project where I’m using the vector space model, which results in determining the categories my test documents should belong to, as part of machine learning. It would be great if you could point me towards something related. I’m stuck at this point.
    Thank you!

  46. “See this example to know how to use it for the text classification process.” The “this” link does not work any more. Can you please provide a relevant link for the example?

    Thanks

  47. There is certainly a great deal to learn about this subject. I really like all the points you made.

  48. You have brought up very wonderful details, appreciate it for the post.

  49. I know this site provides quality based articles or
    reviews and additional data, is there any other web page which presents these kinds of
    quality information?

  50. In the first example, idf(t1), the log(2/1) = 0.3010 by the calculator. Why did they obtain 0.69? Please, what is wrong?
