# Machine Learning :: Text Feature Extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

### Introduction

In the first post, we learned how to use the term frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is present in almost the entire corpus of your documents?

To overcome this problem, the term frequency $\mathrm{tf}(t, d)$ of a document in the vector space is usually also normalized. Let's see how we normalize this vector.

### Vector normalization

Suppose we are going to normalize the term-frequency vector $\vec{v_{d_4}}$ that we calculated in the first part of this tutorial. The document $d_4$ from the first part of this tutorial has this textual representation:

d4: We can see the shining sun, the bright sun.

$\vec{v_{d_4}} = (0, 2, 1, 0)$

Normalizing the vector is the same as calculating the unit vector of the vector, and unit vectors are denoted using the "hat" notation: $\hat{v}$. The definition of the unit vector $\hat{v}$ of a vector $\vec{v}$ is:

$\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}$

Where $\hat{v}$ is the unit vector, or the normalized vector, $\vec{v}$ is the vector to be normalized, and $\|\vec{v}\|_p$ is the norm (magnitude, length) of the vector $\vec{v}$ in the $L^p$ space (don't worry, I'm going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector: a vector whose length is 1.

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the $L^p$ spaces, also called Lebesgue spaces.

### Lebesgue spaces

Usually, the length of a vector $\vec{u} = (u_1, u_2, u_3, \ldots, u_n)$ is calculated using the Euclidean norm (a norm is a function that assigns a strictly positive length or size to all vectors in a vector space), which is defined by:

$\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}$

But this isn't the only way to define length, and that's why you sometimes see a number $p$ together with the norm notation, like in $\|\vec{u}\|_p$. That's because it can be generalized as:

$\displaystyle \|\vec{u}\|_p = ( \left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p )^\frac{1}{p}$

$\displaystyle \|\vec{u}\|_p = ( \sum\limits_{i=1}^{n} \left|\vec{u}_i\right|^p )^\frac{1}{p}$

So when you read about an L2-norm, you're reading about the Euclidean norm, a norm with $p = 2$, the most common norm used to measure the length of a vector, typically called "magnitude"; actually, when you have an unqualified length measure (without the $p$ number), you have the L2-norm (Euclidean norm).

When you read about an L1-norm, you're reading about the norm with $p = 1$, defined as:

$\displaystyle \|\vec{u}\|_1 = ( \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right| )$

Which is nothing more than a simple sum of the components of the vector, also known as the taxicab distance (also called the Manhattan distance).

Taxicab geometry versus Euclidean distance: in taxicab geometry, all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length $6 \times \sqrt{2} \approx 8.48$, and is the unique shortest path.
Source: Wikipedia :: Taxicab geometry
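
As a quick sanity check, here is a minimal sketch computing both norms for the term-frequency vector of d4 with NumPy; it assumes a modern Python 3 / NumPy environment, unlike the Python 2.7 stack used in the practice section below:

```python
import numpy as np

v = np.array([0.0, 2.0, 1.0, 0.0])  # term-frequency vector of d4

# L1 norm (taxicab/Manhattan distance): sum of the absolute components
l1 = np.linalg.norm(v, ord=1)   # |0| + |2| + |1| + |0| = 3.0

# L2 norm (Euclidean): square root of the sum of the squared components
l2 = np.linalg.norm(v, ord=2)   # sqrt(0 + 4 + 1 + 0) = sqrt(5) ~ 2.23607

print(l1, l2)
```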

### Back to vector normalization

Now that we know how the $L^p$ norms work, let's normalize our term-frequency vector $\vec{v_{d_4}} = (0, 2, 1, 0)$ using the L2-norm:

$\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\ \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{\|\vec{v_{d_4}}\|_2} \\ \\ \\ \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\ \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\ \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)$

And that is it! Our normalized vector $\hat{v_{d_4}}$ now has an L2-norm of $\|\hat{v_{d_4}}\|_2 = 1.0$.
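
You can verify these numbers with a minimal NumPy sketch (same Python 3 assumption as above):

```python
import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])

# divide the vector by its own L2 norm to obtain the unit vector
v_hat = v_d4 / np.linalg.norm(v_d4, ord=2)

print(v_hat)                           # [0.  0.89442719  0.4472136  0. ]
print(np.linalg.norm(v_hat, ord=2))    # 1.0
```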

### The term frequency – inverse document frequency (tf-idf) weight

Now that you have understood how vector normalization works in theory and practice, let's continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of the tutorial):

Train document set:
d1: The sky is blue.
d2: The sun is bright.

Test document set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Let's see now how the idf (inverse document frequency) is defined:

$\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}$

Where $\left|D\right|$ is the number of documents in the document space and $\left|\{d : t \in d\}\right|$ is the number of documents where the term $t$ appears; the $1+$ in the denominator avoids a division by zero when a term doesn't appear in any document. The tf-idf weight is then defined as:

$\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)$

And this formula has an important consequence: a high tf-idf weight is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).

Now let’s calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first tutorial:

$M_{train} = \begin{bmatrix} 0 & 1 & 1 & 1\\ 0 & 2 & 1 & 0 \end{bmatrix}$

Since we have 4 features, we have to calculate$\mathrm{idf}(t_1)$,$\mathrm{idf}(t_2)$,$\mathrm{idf}(t_3)$,$\mathrm{idf}(t_4)$:

$\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718$

$\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511$

$\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511$

$\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0$

These idf weights can be represented by a vector as:

$\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)$

And to multiply each term frequency by its corresponding idf weight, we can place this vector on the diagonal of a square matrix:

$M_{idf} = \begin{bmatrix} 0.69314718 & 0 & 0 & 0\\ 0 & -0.40546511 & 0 & 0\\ 0 & 0 & -0.40546511 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}$
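
If you want to reproduce these idf weights numerically, a minimal NumPy sketch could look like this (assuming Python 3, with the document frequencies counted over the test set as above):

```python
import numpy as np

n_docs = 2                   # |D|: number of documents
df = np.array([0, 2, 2, 1])  # |{d : t in d}| for t1..t4 (blue, sun, bright, sky)

idf = np.log(n_docs / (1.0 + df))
print(idf)  # [ 0.69314718 -0.40546511 -0.40546511  0. ]

# place the idf weights on the diagonal of a square matrix
M_idf = np.diag(idf)
```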

$M_{tf\mbox{-}idf} = M_{train} \times M_{idf}$

Please note that matrix multiplication isn't commutative; the result of $A \times B$ will be different from the result of $B \times A$, and this is why the $M_{idf}$ is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

$\begin{bmatrix} \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\ \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2) \end{bmatrix} \times \begin{bmatrix} \mathrm{idf}(t_1) & 0 & 0 & 0\\ 0 & \mathrm{idf}(t_2) & 0 & 0\\ 0 & 0 & \mathrm{idf}(t_3) & 0\\ 0 & 0 & 0 & \mathrm{idf}(t_4) \end{bmatrix} \\ = \begin{bmatrix} \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\ \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4) \end{bmatrix}$

Let’s see now a concrete example of this multiplication:

$M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\ \begin{bmatrix} 0 & 1 & 1 & 1\\ 0 & 2 & 1 & 0 \end{bmatrix} \times \begin{bmatrix} 0.69314718 & 0 & 0 & 0\\ 0 & -0.40546511 & 0 & 0\\ 0 & 0 & -0.40546511 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} \\ = \begin{bmatrix} 0 & -0.40546511 & -0.40546511 & 0\\ 0 & -0.81093022 & -0.40546511 & 0 \end{bmatrix}$

And finally, we can apply our L2 normalization process to the $M_{tf\mbox{-}idf}$ matrix. Please note that this normalization is "row-wise", because we're going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

$M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2}$ $= \begin{bmatrix} 0 & -0.70710678 & -0.70710678 & 0\\ 0 & -0.89442719 & -0.4472136 & 0 \end{bmatrix}$

And that is our nicely normalized tf-idf weight matrix for our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you'll see that they all have an L2-norm of 1.
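
The whole weighting-plus-normalization pipeline fits in a few lines of NumPy; here's a sketch (under the same Python 3 assumption) that reproduces the final matrix:

```python
import numpy as np

M_train = np.array([[0., 1., 1., 1.],
                    [0., 2., 1., 0.]])  # term frequencies of d3 and d4
idf = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])

# multiply each tf by its corresponding idf weight (M_train x M_idf)
M_tfidf = M_train @ np.diag(idf)

# row-wise L2 normalization: each row divided by its own L2 norm
row_norms = np.linalg.norm(M_tfidf, axis=1, keepdims=True)
M_tfidf = M_tfidf / row_norms

print(M_tfidf)
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]
```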

### Python practice

Environment used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9.

Now the section you were waiting for! In this section I'll use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module.

```python
from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]
```

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

```python
from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0. ]
```

Note that I've specified the norm as L2. This is optional (actually the default is the L2-norm), but I've added the parameter to make it explicit to you that it's going to use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let's transform the freq_term_matrix to the tf-idf weight matrix:

```python
tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]
```

And that is it, the tf_idf_matrix is actually our previous $M_{tf\mbox{-}idf}$ matrix. You can achieve the same effect by using the Vectorizer class of Scikit.learn, which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to know how to use it for the text classification process.
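
For reference, here's a minimal sketch of that combined approach; note that in current scikit-learn releases the combined class is called TfidfVectorizer (the Vectorizer class of the 0.9 release used here was later renamed), and newer versions also changed the default idf formula (smoothing and a +1 term), so the exact values will differ from the ones calculated in this post:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

# CountVectorizer + TfidfTransformer rolled into a single object:
# fit() learns the vocabulary and the idf weights from the train set,
# transform() builds the L2-normalized tf-idf matrix for the test set
vectorizer = TfidfVectorizer(norm="l2")
vectorizer.fit(train_set)
tf_idf_matrix = vectorizer.transform(test_set)
print(tf_idf_matrix.todense())
```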

I really hope you liked the post. I tried to make it as simple as possible, even for people without the required mathematical background in linear algebra, etc. In the next Machine Learning post I'm expecting to show how you can use tf-idf to calculate the cosine similarity.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part II," in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

### References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Wikipedia :: tf-idf

Sklearn text feature extraction code

13 Mar 2015 – Formatting, fixed image issues.
03 Oct 2011 – Added information about the environment used in the Python examples.

## 103 thoughts on "Machine Learning :: Text feature extraction (tf-idf) – Part II"

1. Severtcev says:

Wow!
Perfect intro to tf-idf, thank you very much! Very interesting, I've wanted to study this field for a long time and your posts are a real gift. It would be very interesting to read more about the use-cases of the technique. And maybe, if you are interested, could you please shed some light on other methods of textual corpus representation, if they exist?
(sorry for bad English, I’m working to improve it, but there is still a lot of job to do)

2. Excellent work Christian! I am looking forward to reading your next posts on document classification, clustering and topics extraction with Naive Bayes, Stochastic Gradient Descent, Minibatch-k-Means and Non Negative Matrix factorization

Also, the documentation of scikit-learn on the text feature extraction part is really poor (am I the culprit?). Don't hesitate to join the mailing list if you want to give a hand and improve the current situation.

1. Thanks a lot Olivier. I really want to help sklearn, I just need to get some more time to do it; you guys have done great work, I'm really impressed by the amount of algorithms already implemented in the lib. Keep up the good work!

3. I like this tutorial better for the level of new concepts I am learning here.
That said, which version of scikits-learn are you using?
The latest installed via easy_install seems to have a different module hierarchy (i.e. it doesn't find feature_extraction under sklearn). If you could mention the version you used, I would just try these examples.

1. Hello Anand, I'm glad you liked it. I've added information about the environment used in a section just before the "Python practice" one; I'm using scikits.learn 0.9 (released a few weeks ago).

4. siamii says:

Where is part 3? I have to submit an assignment on the vector space model within 4 days. Any hope of putting it up over the weekend?

1. I’ve no date to publish it since I haven’t got any time to write it =(

5. Niu says:

Thanks again for this complete and clear tutorial, and I am waiting for the upcoming sections.

6. Jason Wu says:

Thanks Christian! Very nice work with vector spaces and sklearn. I just have one question: suppose I have computed the 'tf_idf_matrix' and I would like to compute the pairwise cosine similarity (between each row). I was having problems with the sparse matrix format; can you please give an example of that? Also, my matrix is pretty big, say 25k by 60k. Many thanks!

7. Khalid says:

Great post… I understand what tf-idf and how to implement it with a concrete example. But I caught 2 things that I’m not sure about:
1- You called the 2 dimensional matrix M_train, but it has the tf values of the D3 and D4 documents, so you should’ve called that matrix M_test instead of M_train. Because D3 and D4 are our test documents.
2- When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4). Because number of the documents is 2. D3 has the word ‘sun’ 1 time, D4 has it 2 times. Which makes it 3 but we also add 1 to that value to get rid of divided by 0 problem. And this makes it 4… Am I right or am I missing something?
Thank you.

1. Victoria says:

You are correct: these are excellent blog articles, but the author really has a responsibility/duty to go back and correct errors like this one (and others, e.g., Part 1; …): the absent trailing underscores; setting the stop_words parameter; also, on my computer, the vocabulary indices are different.

As much as we appreciate the effort (kudos to the author!), it is also a significant disservice to those who struggle past the (uncorrected) errors in the original material.

1. Victoria says:

Re: my "You are correct" comment (above), I should add:

"… also note Passot's comment (below) regarding the denominator:

‘… what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (‘sun’) is indeed 2+1 (2 documents have the term ‘sun’, +1 to avoid a potential zero division error).’ “

2. Yeshwant says:

Khalid,
This is a response to a very old question. However, I still want to respond to communicate what I understand from the article.
Your question 2: “When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4)”
My understanding: the denominator in the count should be (the number of documents in which the term appears + 1), not the term frequency. The number of documents in which the term 'sun' appears is 2 (1 time in d3 and 2 times in d4 – it appears 3 times in total across the two documents; 3 is the frequency and 2 is the document count). Therefore, the denominator is 2 + 1 = 3.

8. Arzu says:

Thanks… excellent post…

9. Jack says:

excellent post!
I have some questions. From the last tf-idf weight matrix, how can we get the importance of each term (e.g., which is the most important term?). How can we use this matrix to classify documents?

10. Thanuj says:

Thank You So Much. You explained it in such a simple way. It was really useful. Once again thanks a lot.

11. Thanuj says:

I have the same doubt as Jack (the last comment). From the last tf-idf weight matrix, how can we get the importance of each term (e.g., which is the most important term?). How can we use this matrix to classify documents?

12. Tintin says:

I have a question..
After the tf-idf operation, we get a numpy array with values. Suppose we need to get the highest 50 values from the array. How can we do that?

1. ashwin sudhini says:

A high tf-idf value indicates that the particular vector (or document) has high local strength and low global strength, in which case you can assume that the terms in it have high local importance and cannot be ignored. Compare against the function (tf) alone, where only terms repeated a large number of times are given more importance, which most of the time is not a correct modelling technique.

13. Vikram Bakhtiani says:

Hey ,
Thanx fr d code..was very helpful indeed !

1.For document clustering,after calculating inverted term frequency, shud i use any associativity coefficient like Jaccards coefficient and then apply the clustering algo like k-means or shud i apply d k-means directly to the document vectors after calculating inverted term frequency ?

2. How do u rate inverted term frequency for calcuating document vectors for document clustering ?

Thanks a ton fr the forthcoming reply!

14. @Khalid: What you pointed out in 1- confused me for a minute too (M_train vs M_test). I think you misread his second point, though, because what we are using is really the number of documents a term occurs in, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 ('sun') is indeed 2+1 (2 documents have the term 'sun', +1 to avoid a potential zero division error).

I'd love to read the third installment of this series too! I'd especially like to learn more about feature selection. Is there an idiomatic way to get a sorted list of the terms with the highest tf-idf scores? How would you identify those terms overall? How would you get the terms which are most responsible for a high or low cosine similarity (row by row)?

Thank you for the _wonderful_ post!

1. Bonnie Varghese says:

Shouldn't idf(t2) be log(2/4)?

15. Matthys Meintjes says:

Excellent article and a great introduction to td-idf normalization.

You have a very clear and structured way of explaining these difficult concepts.

Thanks!

1. Thanks for the feedback Matthys, I'm glad you liked the tutorial series.

1. PARAM says:

very good & informative tutorial…. please upload more tutorials related to the document clustering process.

16. Laurent says:

Excellent article ! Thank you Christian. You did a great job.

17. Gavin Igor says:

Could you give any reference for doing cosine similarity using tf-idf? So we have the tf-idf matrix; how can we use it to calculate the cosine similarity? Thanks for the amazing article.

18. lavender says:

Thanks so much for this and for explaining the whole tf-idf thing thoroughly.

19. Please correct me if I'm wrong:
the formula starting after "with the term frequency we have calculated in the first tutorial:" should be M_test, not M_train. Also, the one starting after "These idf weights can be represented by a vector as:" should be idf_test, not idf_train.

Btw great series, can you give a simple approach for how to implement classification?

20. Divya says:

Excellent, it really helped me get through the concepts of the VSM and tf-idf. Thanks Christian.

21. Sergio says:

Very good post. Congrats!!

Showing your results, I have a question:

The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

As I see it, I understand that if a word appears in all documents, it is less important than a word that appears in only one document:

However, in the results, the words "sun" or "bright" are more important than "sky".

I'm not sure I understand it completely.

22. Awesome! Explains TF-IDF very well. Waiting eagerly for your next post.

23. Awesome work with a clear explanation. Easy even for a layman to understand the subject..

24. Susan says:

Terrific! I was familiar with tf-idf before but I found your scikits examples helpful as I’m trying to learn that package.

25. Thank you for writing such a detailed post. I learned a lot.

26. Eugene Chinveeraphan says:

Excellent post! Stumbled on this by chance looking for more information on CountVectorizer, but I’m glad I read through both of your posts (part 1 and part 2).

1. Great thanks for the feedback Eugene, I’m really glad you liked the tutorial series.

27. me says:

fit_transform() doesn't seem to work as you describe..
Any idea why?
>>> ts
('The sky is blue', 'The sun is bright')
>>> v7 = CountVectorizer()
>>> v7.fit_transform(ts)
<2×2 sparse matrix of type '…'
with 4 stored elements in COOrdinate format>
>>> print v7.vocabulary_
{u'is': 0, u'the': 1}

1. ash says:

Actually, there are two small errors in the first Python sample.
1. CountVectorizer should be instantiated like so:
count_vectorizer = CountVectorizer(stop_words='english')
This will make sure the 'is', 'the' etc. are removed.

2. To print the vocabulary, you have to add an underscore at the end.
print "Vocabulary:", count_vectorizer.vocabulary_

Excellent tutorial, just small things. Hope it helps others.

1. Drogo says:

Thanks ash. Although the article was rather self-explanatory, your comment made the entire difference.

28. Kelvin John says:

29. I'm using scikit-learn v0.14. Is there any reason why running the exact same code would produce different results?

30. Karthik says:

Thanks for taking the time to write this article. I found it very useful.

31. Vijay says:

It's useful….. thank you for explaining TF-IDF very elaborately..

32. Mike says:

Thanks for the great explanation.

I have a question about the calculation of idf(t#).

In the first case, you wrote idf(t1) = log(2/1), because we don't have the term in our collection, and therefore we add 1 to the denominator. Now, in the case of t2, you wrote log(2/3), so the denominator is equal to 3 and not 4 (= 1+2+1)? In the case of t3, you wrote log(2/3), thus the denominator is equal to 3 (= 1+1+1). I see a kind of inconsistency here. Could you please explain how you calculated the denominator values?

Thanks.

1. Hello Mike, thanks for the feedback. You’re right, I just haven’t fixed it yet due to lack of time to review it and recalculate the values.

2. xpsycho says:

You got it wrong: in the denominator you don't put the sum of the term frequencies in each document, you just count all the documents that have at least one occurrence of the term.

3. MIK says:

Yes, I have the same question…

33. Huda says:

This is a good post

34. Huda says:

It would be good if you could provide a way to show how to use tf-idf in the classification of documents. I see the example (Python code), but an algorithm description would be best, because not all people can understand this language.

Thanks

35. Ganesh says:

Great post, really helped me understand the tf-idf concept!

36. Samuel Kahn says:

Nice article

37. Nice. An explanation helps put things into perspective. Is tf-idf a good way to do clustering (e.g. use Jaccard analysis or variance against the average set from a known corpus)?

Keep writing :)

38. Prem says:

Hi Christian,

It makes me very excited and lucky to have read this article. The clarity of your understanding reflects in the clarity of the document. It makes me regain my confidence in the field of machine learning.

Thanks a ton for the beautiful explanation.

Would like to read more from you.

Thanks,

1. Great thanks for the kind words, Prem! I'm really glad you liked the tutorial series.

39. esra'a OK says:

Thank you very, very much; very wonderful and useful.

40. Arne says:

Thank you for the good wrap up. You mention a number of papers which compare L1 and L2 norm, I plan to study that a bit more in depth. You still know their names?

41. seher says:

How can I calculate tf-idf for my own text file, which is located somewhere on my PC?

42. Shubham says:

Brilliant article.

By far the easiest and most sound explanation of tf-tdf I’ve read. I really liked how you explained the mathematics behind it.

43. mehrab says:

Superb article for newbies.

1. Dayananda says:

Excellent material. Excellent!!!

44. Derrick says:

Hi, great post! I'm using the TfidfVectorizer module in scikit-learn to produce the tf-idf matrix with norm=l2. I've been examining the output of the TfidfVectorizer after fit_transform of the corpora, which I called tfidf_matrix. I've summed the rows but they do not sum to 1. The code is vect = TfidfVectorizer(use_idf=True, sublinear_tf=True, norm="l2"). tfidf_matrix = vect.fit_transform(data). When I run tfidf_matrix.sum(axis=1) the values are larger than 1. Perhaps I'm looking at the wrong matrix or I misunderstand how normalisation works. I hope someone can clarify this point! Thanks

45. Chris says:

Can I ask, when you calculated the IDF, for example log(2/1), did you use log to base 10, base e, or some other value? I'm getting different calculations!

46. Gonzalo G says:

Great tutorial, I just started a new job in ML and this explains things very clearly, as it should be.

47. Harsimranpal says:

Excellent post….! Thank you very much for this article.

But I need more information. As you showed the practical part with Python, can you provide it in the JAVA language?

48. 塞巴斯蒂安 says:

I am a little confused about why tf-idf gives negative numbers in this case. How do we interpret them? Correct me if I am wrong, but when a vector has a positive value, it means that the magnitude of that component determines how important that word is in that document. If it is negative, I don't know how to interpret it. If I were to take the dot product of a vector with all positive components and one with negative components, it would mean that some components may contribute negatively to the dot product even though one of the vectors has very high importance for a particular word.

49. Hi,
Thanks so much for this detailed explanation of this topic, really great. Anyway, could you give me a hint about what could be the source of this error that I keep on seeing:

freq_term_matrix = count_vectorizer.transform(test_set)
AttributeError: ‘matrix’ object has no attribute ‘transform’

Am I using a wrong version of sklearn?

50. Mohit Gupta says:

Thanks

51. Alexandro says:

Thanks Chris, you are the only one on the web who is clear about the diagonal matrix.

52. ishpreet says:

Great tf-idf tutorial. Excellent work. Please add cosine similarity too :)

53. sherlockatsz says:

I understood the tf-idf calculation process. But what that matrix means, and how we can use the tf-idf matrix to calculate the similarity, confuses me. Can you explain how we can use the tf-idf matrix? Thanks

54. lightningstrike says:

55. Anonymous says:

Thanks, good post, I got it.

56. Anonymous says:

Thank you so much for such an amazing detailed explanation!

57. Akanksha Pande says:

Best explanation.. Very helpful. Can you please tell me how to plot the vectors in text classification with SVM? I am working on tweet classification. I am confused, please help me.

58. Kaushik says:

I learned so many things. Thanks Christian. Looking forward for your next tutorial.

59. MHR says:

Hi, I'm sorry if I am mistaken, but I could not understand how ||Vd4||2 = 1.
The value of d4 = (0.0, 0.89, 0.44, 0.0), so the normalization will be = sqrt(square(.89) + square(.44)) = sqrt(.193) = .44
So am I missing something? Please help me understand.

60. Cuiqing Li says:

Hi, it is a great blog!
If I need to do bi-gram cases, how can I use sklearn to finish it?

61. Ali says:

This is very great. I love the way you teach. Very, very awesome.

62. Ritesh says:

I am not getting the same results when I execute the same script.
print (“IDF:”, tfidf.idf_) : IDF: [ 2.09861229 1. 1.40546511 1. ]

My Python version: 3.5
My Scikit-learn version: 0.18.1

What do I need to change? What could be the possible error?

thanks,

1. It could be many things; since you're using a different Python interpreter version and also a different Scikit-learn version, you should expect differences in the results, since they may have changed default parameters, algorithms, rounding, etc.

1. Ravithej Chikkala says:

I am also getting: IDF: [2.09861229 1. 1.40546511 1. ]

63. Victor says:

Perfect introduction!
No hocus pocus. Clear and simple, as technology should be.
Thank you very much.
Please keep posting!

64. Hitesh Nankani says:

Why is |D| = 2, in the idf equation. Shouldn’t it be 4 since |D| denotes the number of documents considered, and we have 2 from test, 2 from train.

65. Le Van Thien says:

This post is interesting. I like this post…

66. Bren says:

clear cut and to the point explanations….great

67. Shipika Singh says:

hey, hi Christian
Your article really helped me understand tf-idf from the basics. I'm working on a classification project where I'm using the vector space model, which results in determining the categories where my test documents should be present. It's a part of machine learning. It would be great if you could suggest something related to this. I'm stuck at this point.
Thanks

68. Eshwar says:

"See this example to know how to use it for the text classification process." This link does not work any more. Can you please provide a relevant link for the example?

Thanks

69. amanda says:

Such a great explanation! Thanks!


76. Ruse says:

In the first example, idf(t1): the log(2/1) = 0.3010 on the calculator. Why did you obtain 0.69..? What is wrong, please?
