Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I

Introduction

In the first post, we learned how to use the term-frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high-frequency terms. The basic intuition is that a term which occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is present in almost the entire corpus of your documents?

But let's go back to our definition of $\mathrm{tf}(t,d)$, which is actually the term count of the term $t$ in the document $d$. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency $\mathrm{tf}(t,d)$ of a document in the vector space is usually also normalized. Let's see how we normalize this vector.

Vector normalization

Suppose we are going to normalize the term-frequency vector of the document d4, which we have computed in the first part of this tutorial:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

$\vec{v_{d_4}} = (0, 2, 1, 0)$

To normalize the vector is the same as to calculate the unit vector of the vector, denoted using the "hat" notation:

$\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}$

Where $\hat{v}$ is the unit vector, or the normalized vector, $\vec{v}$ is the vector going to be normalized, and $\|\vec{v}\|_p$ is the norm (magnitude, length) of the vector $\vec{v}$ in the $L^p$ space (don't worry, I'm going to explain it all).

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the $L^p$ spaces, also called Lebesgue spaces.

Lebesgue spaces

Usually, the length of a vector $\vec{u} = (u_1, u_2, u_3, \ldots, u_n)$ is calculated using the Euclidean norm:

$\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}$

But this isn't the only way to define length, and that's why you see (sometimes) a number $p$ together with the norm notation, like in $\|\vec{u}\|_p$. That's because it can be generalized as:

$\ displaystyle \|\vec{u}\|_p = ( \left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p )^\frac{1}{p}$

$\displaystyle \|\vec{u}\|_p = \left( \sum\limits_{i=1}^{n} \left|u_i\right|^p \right)^\frac{1}{p}$

So when you read about an L2-norm, you're reading about the Euclidean norm, a norm with $p = 2$, the most common norm used to measure the length of a vector, typically called "magnitude"; actually, when you have an unqualified length measure (without the $p$ number), you have the L2-norm (Euclidean norm).

When you read about an L1-norm, you're reading about the norm with $p = 1$, defined as:

$\displaystyle \|\vec{u}\|_1 = \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|$

Which is nothing more than a simple sum of the components of the vector, also known as the taxicab distance, also called the Manhattan distance.

Taxicab geometry versus Euclidean distance: In taxicab geometry all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length$6 \times \sqrt{2} \approx 8.48$, and is the unique shortest path.
Source: Wikipedia :: Taxicab Geometry
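These norms are easy to check numerically; here is a quick sketch I added for illustration (it uses NumPy, which the post itself does not require):

```python
import numpy as np

def lp_norm(u, p):
    """Generalized L^p norm: (sum_i |u_i|^p)^(1/p)."""
    return np.sum(np.abs(u) ** p) ** (1.0 / p)

u = np.array([3.0, 4.0])
print(lp_norm(u, 1))  # L1 / taxicab norm: |3| + |4| = 7.0
print(lp_norm(u, 2))  # L2 / Euclidean norm: sqrt(3^2 + 4^2) = 5.0
```

For $p = 2$ this reproduces the familiar "magnitude", while $p = 1$ gives the taxicab distance discussed above.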

Back to vector normalization

$\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\ \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{\|\vec{v_{d_4}}\|_2} \\ \\ \\ \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\ \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\ \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)$

And that is it! Our normalized vector $\hat{v_{d_4}}$ now has an L2-norm $\|\hat{v_{d_4}}\|_2 = 1.0$.
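The normalization step can be verified with a few lines of NumPy (a sketch I added for illustration, not part of the original code):

```python
import numpy as np

# Term-frequency vector of document d4 (from the first part of the tutorial)
v_d4 = np.array([0.0, 2.0, 1.0, 0.0])

# L2 (Euclidean) normalization: divide the vector by its L2-norm
v_d4_hat = v_d4 / np.linalg.norm(v_d4, ord=2)

print(v_d4_hat)                  # [0.  0.89442719  0.4472136  0. ]
print(np.linalg.norm(v_d4_hat))  # 1.0 -- a unit vector, as expected
```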

The term frequency – inverse document frequency (tf-idf) weight

Train document set:
d1: The sky is blue.
d2: The sun is bright.

Test document set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can be defined then as $D = \{ d_1, d_2, \ldots, d_n \}$ where $n$ is the number of documents in your corpus; in our case, $D_{train} = \{d_1, d_2\}$ and $D_{test} = \{d_3, d_4\}$. The cardinality of our document spaces is defined by $\left|{D_{train}}\right| = 2$ and $\left|{D_{test}}\right| = 2$, since we have only 2 documents for training and testing, but they obviously don't need to have the same cardinality.

Let's see now how idf (inverse document frequency) is defined:

$\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}$

where $\left|\{d : t \in d\}\right|$ is the number of documents where the term $t$ appears; the 1 is added to the denominator to avoid zero-division when the term is not present in the corpus.

The tf-idf weight is then simply the product of the two:

$\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)$

and this formula has an important consequence: a high weight in the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).

$M_{train} = \begin{bmatrix} 0 & 1 & 1 & 1\\ 0 & 2 & 1 & 0 \end{bmatrix}$

Since we have 4 features, we have to calculate $\mathrm{idf}(t_1)$, $\mathrm{idf}(t_2)$, $\mathrm{idf}(t_3)$, $\mathrm{idf}(t_4)$:

$\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718$

$\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511$

$\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511$

$\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0$
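These four values can be reproduced with plain Python (an illustrative sketch; it just evaluates the formula above using the natural logarithm):

```python
import math

# Number of test documents (d3, d4) in which each term appears:
# t1 = "blue" in 0 docs, t2 = "sun" in 2, t3 = "bright" in 2, t4 = "sky" in 1
doc_freq = [0, 2, 2, 1]
n_docs = 2  # |D|

idf = [math.log(n_docs / (1.0 + df)) for df in doc_freq]
print(idf)
# [0.6931..., -0.4054..., -0.4054..., 0.0]
```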

These idf weights can be represented by a vector as:

$\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)$

Now that we have our term-frequency matrix and the vector of idf weights, we can calculate our tf-idf weights. To do that, we place each element of the idf vector on the diagonal of a square matrix:

$M_{idf} = \begin{bmatrix} 0.69314718 & 0 & 0 & 0\\ 0 & -0.40546511 & 0 & 0\\ 0 & 0 & -0.40546511 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}$

and then multiply it by the term-frequency matrix:

$M_{tf\mbox{-}idf} = M_{train} \times M_{idf}$

Please note that matrix multiplication isn't commutative; the result of $A \times B$ will be different from the result of $B \times A$, and this is why $M_{idf}$ is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

$\begin{bmatrix} \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\ \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2) \end{bmatrix} \times \begin{bmatrix} \mathrm{idf}(t_1) & 0 & 0 & 0\\ 0 & \mathrm{idf}(t_2) & 0 & 0\\ 0 & 0 & \mathrm{idf}(t_3) & 0\\ 0 & 0 & 0 & \mathrm{idf}(t_4) \end{bmatrix} \\ = \begin{bmatrix} \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\ \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4) \end{bmatrix}$

$M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\ \begin{bmatrix} 0 & 1 & 1 & 1\\ 0 & 2 & 1 & 0 \end{bmatrix} \times \begin{bmatrix} 0.69314718 & 0 & 0 & 0\\ 0 & -0.40546511 & 0 & 0\\ 0 & 0 & -0.40546511 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} \\ = \begin{bmatrix} 0 & -0.40546511 & -0.40546511 & 0\\ 0 & -0.81093022 & -0.40546511 & 0 \end{bmatrix}$

And finally, we apply our L2 normalization process to each row of the $M_{tf\mbox{-}idf}$ matrix:

$M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2} = \begin{bmatrix} 0 & -0.70710678 & -0.70710678 & 0\\ 0 & -0.89442719 & -0.4472136 & 0 \end{bmatrix}$
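The whole calculation above can be checked with a short NumPy sketch (my addition; it mirrors the matrix product and the row-wise L2 normalization):

```python
import numpy as np

# Term-frequency matrix of the test documents (computed in the first tutorial)
M = np.array([[0.0, 1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0, 0.0]])

# idf weights placed on the diagonal of a square matrix
idf = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])
M_tfidf = M.dot(np.diag(idf))

# L2-normalize each row of the tf-idf matrix
M_tfidf = M_tfidf / np.linalg.norm(M_tfidf, axis=1, keepdims=True)
print(M_tfidf)
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]
```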

Python practice

Environment used: Python v.2.7.2, NumPy 1.6.1, SciPy v.0.9.0, scikit-learn (scikits.learn) v.0.9

Now the section you were waiting for! In this section I'll use Python to show each step of the tf-idf calculation using the scikit-learn feature extraction module.

The first step is to create our training and testing document sets and compute the term-frequency matrix:

from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
    "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]

Note that I've specified the norm as L2; this is optional (actually the default is the L2-norm), but I've added the parameter to make it explicit that it's going to use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let's transform the freq_term_matrix to the tf-idf weight matrix:

tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]
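A caveat worth adding for readers on recent scikit-learn versions (this note is mine, not part of the original 2011 environment): newer releases sort the vocabulary alphabetically and, by default, use a smoothed idf, $\mathrm{idf}(t) = \ln{\frac{1 + \left|D\right|}{1 + \mathrm{df}(t)}} + 1$, so the printed values will differ from the ones above. Those smoothed weights can be reproduced without scikit-learn:

```python
import math

# Document frequencies of the test-set terms with an alphabetically
# sorted vocabulary (newer scikit-learn ordering): blue, bright, sky, sun
doc_freq = [0, 2, 1, 2]
n_docs = 2

# Smoothed idf used by recent scikit-learn releases (smooth_idf=True)
idf = [math.log((1.0 + n_docs) / (1.0 + df)) + 1.0 for df in doc_freq]
print(idf)
# [2.0986..., 1.0, 1.4054..., 1.0]
```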

I really hope you liked the post. I tried to make it as simple as possible, even for people without the required mathematical background in linear algebra, etc. In the next Machine Learning post I'm expecting to show how you can use tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part II," in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Sklearn text feature extraction code

13 Mar 2015 – Reformatted the text and fixed image issues.
03 Oct 2011 – Added environment information about the Python example.

103 thoughts on “Machine Learning :: Text feature extraction (tf-idf) – Part II”

1. Severtcev says:

Wow!
Perfect intro in tf-idf, thank you very much! Very interesting, I’ve wanted to study this field for a long time and you posts it is a real gift. It would be very interesting to read more about use-cases of the technique. And may be you’ll be interested, please, to shed some light on other methods of text corpus representation, if they exists?
(sorry for bad English, I’m working to improve it, but there is still a lot of job to do)

2. Awesome work Christian! I'm looking forward to reading your next posts on document classification, clustering and topic extraction with Naive Bayes, Stochastic Gradient Descent, Minibatch K-means and Non-negative Matrix Factorization.

Also, the scikit-learn documentation on the text feature extraction part (am I the culprit?) is really poor. If you want to give a hand and improve the current state, don't hesitate to join the mailing list.

1. Thanks a lot Olivier. I really want to help sklearn, I just need to get some more time to do it; you've all done a great job, I'm really impressed by the amount of algorithms already implemented in the lib. Keep up the good work!

3. I like this tutorial for the level of the new concepts I'm learning here.
That said, which version of scikits-learn are you using?.
The latest version installed via easy_install seems to have a different module hierarchy (i.e. it doesn't find feature_extraction under sklearn). If you could mention the version you used, I would just try these examples with it.

1. Hello Anand, I’m glad you liked it. I’ve added the information about the environment used just before the section “Python practice”, I’m using the scikits.learn 0.9 (released a few weeks ago).

4. siamii says:

Where is part 3? I have to submit an assignment on the vector space model within 4 days. Any hope of having it over the weekend?

1. I haven't had time to post it, because I haven't had any time to write it =(

5. Niu says:

Thanks again for this complete and clear tutorial; I'm waiting for the upcoming parts.

6. Jason Wu says:

Thanks Christian! Very nice work with the vector space and sklearn. I have only one question: suppose I have computed the “tf_idf_matrix” and I would like to compute the pairwise cosine similarity (between each row). I'm having problems with the sparse matrix format; could you please give an example of that? Also, my matrix is quite large, say 25K by 60K. Thanks a lot!

7. Khalid says:

Great post… I understand what tf-idf and how to implement it with a concrete example. But I caught 2 things that I’m not sure about:
1- You call the 2-dimensional matrix M_train, but it has the tf values of the documents d3 and d4, so you should have called that matrix M_test instead of M_train, since d3 and d4 are our test documents.
2- When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4). Because number of the documents is 2. D3 has the word ‘sun’ 1 time, D4 has it 2 times. Which makes it 3 but we also add 1 to that value to get rid of divided by 0 problem. And this makes it 4… Am I right or am I missing something?
Thank you.

1. Victoria says:

You are correct: these are excellent blog articles, but the author REALLY has a duty/responsibility to go back and correct errors, like this (and others, e.g. Part 1; …): missing training underscores; setting the stop_words parameter; also on my computer, the vocabulary indexing is different.

As much as we appreciate the effort (kudos to the author!), it is also a significant disservice to those who struggle past those (uncorrected) errors in the original material.

1. Victoria says:

Re: my "You are correct" comment (above), I should have added:

"… please also note Passot's comment (below) regarding the denominator:

‘… what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (‘sun’) is indeed 2+1 (2 documents have the term ‘sun’, +1 to avoid a potential zero division error).’ “

2. Yeshwant says:

Khalid,
This is a reply to a very old question. However, I still want to respond to share what I understood from the article.
Your question 2: "When you calculate the idf value for t2 (which is 'sun'), it should be log(2/4)."
My understanding: The denominator in log term should be (number of documents in which the term appears + 1) and not frequency of the term. The number of documents the term “Sun” appears is 2 (1 time in D3 and 2 times in D4 — totally it appears 3 times in two documents. 3 is frequency and 2 is number of documents). Hence the denominator is 2 + 1 = 3.

8. arzu says:

Thanks… excellent post…

9. Jack says:

excellent post!
I have some questions. From the last tf-idf weight matrix, how can we get the importance of each term (e.g., which is the most important term)? How can we use this matrix to classify documents?

10. Thanuj says:

Thank You So Much. You explained it in such a simple way. It was really useful. Once again thanks a lot.

11. Thanuj says:

I have the same doubt as Jack (the last comment). From the last tf-idf weight matrix, how can we get the importance of each term (e.g., which is the most important term)? How can we use this matrix to distinguish documents?

12. Tintin says:

I have a question..
After the tf-idf operation, we get a numpy array with values. Suppose we need to get the highest 50 values from the array. How can we do that?

1. Ashwin Sudhini says:

A high tf-idf value indicates that the particular vector (or document) has high local strength and low global strength, in which case you can assume that the terms in it have high importance locally and cannot be ignored. Compare this against the plain tf function, where only terms repeated a large number of times are given more importance, which most of the time is not a correct modeling technique.

13. Vikram Bakhtiani says:

Hey ,
Thanks for the code.. it was indeed very helpful!

1.For document clustering,after calculating inverted term frequency, shud i use any associativity coefficient like Jaccards coefficient and then apply the clustering algo like k-means or shud i apply d k-means directly to the document vectors after calculating inverted term frequency ?

2. How do u rate inverted term frequency for calcuating document vectors for document clustering ?

Thanks a ton for the forthcoming reply!

14. @Khalid: what you’re pointing out in 1- got me confused too for a minute (M_train vs M_test). I think you are mistaken on your second point, though, because what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (“sun”) is indeed 2+1 (2 documents have the term “sun”, +1 to avoid a potential zero division error).

I'm looking forward to reading the third part of this series! I'd especially like to learn more about feature selection. Is there an idiomatic way to get a sorted list of the terms with the highest tf-idf scores? How would you identify those terms overall? How would you get the terms most responsible for a high or low cosine similarity (row by row)?

Thank you for the _wonderful_ post!

1. Varghese Bonnie says:

Should idf(t2) be log 2/4 ?

15. Matthys Meintjes says:

Excellent article and a great introduction to td-idf normalization.

You have a very clear and structured way of explaining these difficult concepts.

Thanks!

1. Thanks for the feedback Matthys, I'm glad you liked the tutorial series.

1. PARAM says:

Very nice & informative tutorial…. please upload more tutorials related to the document clustering process.

16. Laurent says:

Excellent article! Thank you Christian. You did a great job.

17. Gavin Igor says:

Can you provide any reference for doing cosine similarity using tfidf so we have the matrix of tf-idf how can we use that to calculate cosine. Thanks for fantastic article.

18. lavender says:

Thank you so much for this and for thoroughly explaining the whole tf-idf thing.

19. Please correct me if I'm wrong:
the matrix after "the frequency we calculated in the first tutorial:" should be M_test, not M_train. Also, the one after "These idf weights can be represented by a vector as:" should be idf_test, not idf_train.

By the way, great series. Could you give a simple approach on how to implement classification?

20. Divya says:

Excellent it really helped me get through the concept of VSM and tf-idf. Thanks Christian

21. Sergio says:

Very nice post. Congratulations!!

Seeing your results, I have a question:

The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

When I read it, I understand that if a word appears in all documents it is less important than a word that only appears in one document:

However, in the results, the words “sun” and “bright” are more important than “sky”.

I'm not sure I understand it completely.

22. Awesome! Explains tf-idf very well. Eagerly waiting for your next post.

23. Awesome work with a clear-cut explanation. Even a layman can easily understand the subject..

24. Susan says:

Terrific! I was familiar with tf-idf before, but I found your scikits examples helpful, since I'm trying to learn that package.

25. Thank you for writing such a detailed post. I learned a lot.

26. Eugene Chinveeraphan says:

Excellent post! Stumbled on this by chance looking for more information on CountVectorizer, but I’m glad I read through both of your posts (part 1 and part 2).

1. Great thanks for the feedback Eugene, I’m really glad you liked the tutorial series.

27. me says:

fit_transform() doesn't seem to work as you describe..
Any idea why?
>>> ts
('The sky is blue', 'The sun is bright')
>>> v7 = CountVectorizer()
>>> v7.fit_transform(ts)
<2x2 sparse matrix of type ''
with 4 stored elements in COOrdinate format>
>>> print v7.vocabulary_
{u'is': 0, u'the': 1}

1. ash says:

Actually, there are two small errors in the first Python sample.
1. CountVectorizer should be instantiated like so:
count_vectorizer = CountVectorizer(stop_words='english')
This will make sure the ‘is’, ‘the’ etc are removed.

2. To print the vocabulary, you have to add an underscore at the end.
print "Vocabulary:", count_vectorizer.vocabulary_

Excellent tutorial, just small things. Hope it helps someone.

1. Drogo says:

Thanks ash. Although the article was rather self-explanatory, your comment made the entire difference.

28. Kelvin John says:

I like your article.

29. I'm using scikit-learn v 0.14. Is there any reason why running the exact same code would produce different results?

30. KARTHIK says:

Thanks for taking the time to write this article. Found it very useful.

31. Vijay says:

It's useful….. thank you for explaining tf-idf very elaborately..

32. Mike says:

Thanks for the great explanation.

I have a question about the calculation of idf(t#).
In the first case, you wrote idf(t1) = log(2/1), because we don’t have such term in our collection, thus, we add 1 to the denominator. Now, in case t2, you wrote log(2/3), why the denominator is equal to 3 and not to 4 (=1+2+1)? In case t3, you write: log(2/3), thus the denominator is equal 3 (=1+1+1). I see here kind of inconsistency. Could you, please, explain, how did calculate the denominator value.

Thanks.

1. Hello Mike, thanks for the feedback. You’re right, I just haven’t fixed it yet due to lack of time to review it and recalculate the values.

2. xpsycho says:

You got it wrong, in the denominator you don’t put the sum of the term in each document, you just sum all the documents that have at least one aparition of the term.

3. MIK says:

yes, I had the same question…

33. Huda says:

This is a good post

34. Huda says:

It would be nice if you could show how to use tf-idf for document classification. I see the example (Python code), but a description of the algorithm would be best, since not everyone understands this language.

Thanks

35. Ganesh says:

Great post, it really helped me understand the tf-idf concept!

36. Samuel Kahn says:

Nice article

37. Nice. An explanation helps put things into perspective. Is tf-idf a good way to do clustering (e.g. use Jaccard analysis or variance against the average set from a known corpus)?

Keep writing :)

38. Niprem says:

Hi Christian,

It makes me very excited and lucky to have read this article. The clarity of your understanding reflects in the clarity of the document. It makes me regain my confidence in the field of machine learning.

Thanks a ton for the beautiful explanation.

Would like to see more from you.

Thanks,

1. Big thanks for the kind words, Niprem! I'm really glad you liked the tutorial series.

39. esra'a OK says:

Thank you very very much, very wonderful and useful.

40. Arne says:

Thank you for the good wrap up. You mention a number of papers which compare L1 and L2 norm, I plan to study that a bit more in depth. You still know their names?

41. seher says:

how can i calculate tf idf for my own text file which is located some where in my pc?

42. Shubham says:

Brilliant article.

By far the simplest and most complete explanation of tf-idf I've read. I really liked how you explained the mathematics behind it.

43. mehrab says:

A superb article for newbies

1. Dayananda says:

Excellent material. Excellent!!!

44. Crane says:

Hi, great post! I’m using the TfidVectorizer module in scikit learn to produce the tf-idf matrix with norm=l2. I’ve been examining the output of the TfidfVectorizer after fit_transform of the corpora which I called tfidf_matrix. I’ve summed the rows but they do not sum to 1. The code is vect = TfidfVectorizer(use_idf=True, sublunar_tf=True, norm=”l2). tfidf_matrix = vect.fit_transform(data). When I run tfidf_matrix.sum(axis=1) the vectors are larger than 1. Perhaps I’m looking at the wrong matrix or I misunderstand how normalisation works. I hope someone can clarify this point! Thanks

45. Chris says:

Can I ask, when you calculate the idf, e.g. log(2/1), do you use log base 10, base e, or some other value? I get different calculations!

46. Gonzalo G says:

Great tutorial, just started a new job in ML and this explains things very clearly as it should be.

47. Harsimranpal says:

Excellent post….! Thanks a lot for this article.

But I need more information: since you show the practice using Python, could you provide it in the Java language..

48. Sebastian says:

I'm a bit confused as to why tf-idf gives negative numbers in this case? How do we interpret them? Correct me if I'm wrong, but when a vector has positive values, the magnitude of a component determines how important that word is in the document. If it is negative, I don't know how to interpret it. If I were to take the dot product of a vector with all positive components and one with a negative component, it would mean that some components could contribute negatively to the dot product even though the vector places very high importance on a particular word.

49. Hi,
Thanks so much for this detailed explanation of this topic, really great. Anyway, could you give me a hint on what could be the source of the error that I keep seeing:

freq_term_matrix = count_vectorizer.transform(test_set)
AttributeError: 'matrix' object has no attribute 'transform'

Am I using the wrong version of sklearn?

50. Mohit Gupta says:

Thanks

51. Alessandro says:

Thanks Chris, you're the only one on the web who was clear about the diagonal matrix.

52. ishpreet says:

Great tutorial on tf-idf. Excellent work. Please add one on cosine similarity too :)

53. sherlockatsz says:

I understood the tf-idf calculation process, but what that matrix means and how we can use the tf-idf matrix to calculate similarity confuses me. Can you explain how we can use the tf-idf matrix? Thanks

54. lightningstrike says:

Thanks for your explicit and detailed explanation.

55. Anonymous says:

Thanks, good post, I'm loving it

56. Anonymous says:

Thank you so much for such an amazing detailed explanation!

57. Akanksha Pande says:

best explanation.. Very helpful. Can you please tell me how to plot vectors in text classification in svm.. I am working on tweets classification. I am confused please help me.

58. Kaushik says:

I learned a lot. Thanks Christian. Looking forward to your next tutorial.

59. Mhr says:

Hi, I’m sorry if i have mistaken but i could not understand how is ||Vd4||2 = 1.
the value of d4 = (0.0 ,0.89,0.44,0.0) so the normalization will be = sqrt( square(.89)+square(.44))=sqrt(.193) = .44
So did I miss something? Please help me understand.

60. Cuiqing Li says:

Hi, it is a great blog!
If I need to handle the bi-gram case, how can I use sklearn to accomplish it?

61. Ali says:

This is really great. I love the way you teach. Very, very awesome.

62. Ritesh says:

I am not getting same result, when i am executing the same script.
print("IDF:", tfidf.idf_) gives: IDF: [ 2.09861229  1.  1.40546511  1. ]

My Python version is: 3.5
My scikit-learn version is: 0.18.1

What do I need to change? What might be the possible error?

thanks,

1. It can be many things; since you're using a different Python interpreter version and also a different scikit-learn version, you should expect differences in the results, since they may have changed default parameters, algorithms, rounding, etc.

1. Ravithej Chikkala says:

I'm also getting: IDF: [ 2.09861229  1.  1.40546511  1. ]

63. Victor says:

Perfect introduction!
No tricks. Clear and simple, as technology should be.
Thank you very much.
Please keep posting!

64. Hitesh Nankani says:

Why is |D| = 2, in the idf equation. Shouldn’t it be 4 since |D| denotes the number of documents considered, and we have 2 from test, 2 from train.

65. Le Van Thien says:

This post is interesting. I like this post…

66. Bren says:

clear cut and to the point explanations….great

67. Shipika Singh says:

Hey, hi Christian
Your article really helped me understand tf-idf from the basics. I'm working on a classification project where I'm using the vector space model, which results in determining the categories my test document should be present in; it's the machine learning part. It would be great if you could suggest something related, as I'm stuck at this point.
Thanks

68. Eshwar S G says:

See this example to know how to use it for the text classification process. “This” link does not work any more. Can you please provide a relevant link for the example.

Thanks

69. amanda says:

Such a great explanation! Thanks!


76. Rousse says:

In the first example, idf(t1): log(2/1) = 0.3010 by the calculator. Why did they obtain 0.69..? Please, what is wrong?