# Machine Learning :: Text feature extraction (tf-idf) – Part II

### Introduction

But let’s go back to our definition of $\mathrm{tf}(t,d)$, which is actually the term count of the term $t$ in the document $d$. Using this simple term frequency could lead us to problems like keyword spamming, which is when we have a term repeated in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

### Vector normalization

Suppose we are going to normalize the term-frequency vector $\vec{v}_{d_4}$ that we have calculated in the first part of this tutorial. The document $d_4$ from the first part of this tutorial had this textual representation:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

$\vec{v}_{d_4} = (0, 2, 1, 0)$

To normalize a vector is the same as calculating its unit vector, which is denoted using the “hat” notation:

$\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}$

where $\hat{v}$ is the unit vector, or the normalized vector, $\vec{v}$ is the vector going to be normalized, and $\|\vec{v}\|_p$ is the norm (magnitude, length) of the vector $\vec{v}$ in the $L^p$ space (don’t worry, I’m going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector: a vector whose length is 1.

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the $L^p$ spaces, also called Lebesgue spaces.

### Lebesgue spaces

Usually, the length of a vector $\vec{u} = (u_1, u_2, u_3, \ldots, u_n)$ is calculated using the Euclidean norm, which is defined by:

$\|\vec{u}\| = \sqrt{u_1^2 + u_2^2 + u_3^2 + \ldots + u_n^2}$

But this isn’t the only way to define length, and that’s why you sometimes see a number $p$ together with the norm notation. The $L^p$ norm (or $p$-norm) is defined as:

$\displaystyle \|\vec{u}\|_p = (\left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p)^\frac{1}{p}$

$\displaystyle \|\vec{u}\|_p = (\sum\limits_{i=1}^{n}\left|\vec{u}_i\right|^p)^\frac{1}{p}$

When you read about an L1-norm, you’re reading about the norm with $p = 1$, defined as:

$\displaystyle \|\vec{u}\|_1 = (\left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)$

which is nothing more than a simple sum of the absolute values of the components of the vector, also known as the Taxicab distance or Manhattan distance.

Source: Wikipedia :: Taxicab Geometry
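As a quick aside (a sketch of my own, not part of the original tutorial), NumPy's np.linalg.norm accepts the order p of the norm, so the L1 and L2 lengths defined above can be computed directly:

```python
import numpy as np

u = np.array([0.0, 2.0, 1.0, 0.0])

# L1 norm: the sum of the absolute values of the components (Taxicab/Manhattan)
print(np.linalg.norm(u, ord=1))   # 3.0

# L2 norm: the Euclidean length, sqrt(0^2 + 2^2 + 1^2 + 0^2)
print(np.linalg.norm(u, ord=2))   # 2.2360679... = sqrt(5)
```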

### Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example: using the L2-norm to normalize our vector $\vec{v}_{d_4}$ in order to get its unit vector $\hat{v}_{d_4}$:

$\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\ \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{\|\vec{v_{d_4}}\|_2} \\ \\ \\ \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\ \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\ \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)$

And that is it! Our normalized vector $\hat{v_{d_4}}$ now has an L2-norm of $\|\hat{v_{d_4}}\|_2 = 1.0$.
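If you want to double-check the arithmetic, here is a short NumPy sketch of my own that reproduces the normalization above and confirms the unit vector’s L2-norm:

```python
import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])

# Divide the vector by its L2 norm to obtain the unit vector
v_d4_hat = v_d4 / np.linalg.norm(v_d4, ord=2)

print(v_d4_hat)                          # [ 0.  0.89442719  0.4472136  0. ]
print(np.linalg.norm(v_d4_hat, ord=2))   # 1.0
```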

### The term frequency – inverse document frequency (tf-idf) weight

Train document set:

d1: The sky is blue.
d2: The sun is bright.

Test document set:

d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can be defined then as $D = \{ d_1, d_2, \ldots, d_n \}$ where $n$ is the number of documents in your corpus, and in our case as $D_{train} = \{d_1, d_2\}$ and $D_{test} = \{d_3, d_4\}$. The cardinality of our document spaces is defined by $\left|D_{train}\right| = 2$ and $\left|D_{test}\right| = 2$, since we have only 2 documents for training and testing, but they obviously don’t need to have the same cardinality.

Let’s see now how idf (inverse document frequency) is defined:

$\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}$

where $\left|\{d : t \in d\}\right|$ is the number of documents where the term $t$ appears; the $1$ added to the denominator simply avoids a zero division when a term is not present in any document. The formula for the tf-idf weight is then:

$\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)$

This formula has an important consequence: a high tf-idf weight is reached when you have a high term frequency (tf) in the given document and a low document frequency of the term in the whole collection.
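To make the definition concrete, here is a small sketch of my own implementing the idf formula exactly as written above; note that this +1 denominator smoothing follows this post, while libraries such as scikit-learn use slightly different formulations:

```python
import math

def idf(term, documents):
    # |{d : t in d}|: the number of documents in which the term appears
    df = sum(1 for doc in documents if term in doc)
    # idf(t) = log(|D| / (1 + |{d : t in d}|)), as defined above
    return math.log(len(documents) / (1.0 + df))

# Tokenized test documents d3 and d4 (stop words already removed)
docs = [["sun", "sky", "bright"],
        ["sun", "sun", "shining", "bright"]]

print(idf("sun", docs))   # log(2/3) = -0.405465...
print(idf("sky", docs))   # log(2/2) = 0.0
```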

Now let’s calculate the idf for each feature present in the feature matrix with the term frequencies we have calculated in the first tutorial (note this matrix holds the term frequencies of the test documents d3 and d4, so we call it $M_{test}$):

$M_{test} = \begin{bmatrix} 0 & 1 & 1 & 1\\ 0 & 2 & 1 & 0 \end{bmatrix}$

Since we have 4 features, we have to calculate $\mathrm{idf}(t_1)$, $\mathrm{idf}(t_2)$, $\mathrm{idf}(t_3)$ and $\mathrm{idf}(t_4)$:

$\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718$

$\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511$

$\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511$

$\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0$

These idf weights can be represented by a vector as:

$\vec{idf}_{test} = (0.69314718, -0.40546511, -0.40546511, 0.0)$

Now that we have our matrix with the term frequencies ($M_{test}$) and the vector representing the idf for each feature ($\vec{idf}_{test}$), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix $M_{test}$ by the respective dimension of the $\vec{idf}_{test}$ vector. To do that, we can create a square diagonal matrix $M_{idf}$ with both the vertical and horizontal dimensions equal to the dimension of the $\vec{idf}_{test}$ vector:

$M_{idf} = \begin{bmatrix} 0.69314718 & 0 & 0 & 0\\ 0 & -0.40546511 & 0 & 0\\ 0 & 0 & -0.40546511 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}$

and then multiply it by the term frequency matrix, so the final result can be defined as:

$M_{tf\mbox{-}idf} = M_{test} \times M_{idf}$

Please note that this matrix multiplication isn’t commutative: the result of $A \times B$ is different from the result of $B \times A$, and this is why $M_{idf}$ is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

$\begin{bmatrix} \mathrm{tf}(t_1, d_3) & \mathrm{tf}(t_2, d_3) & \mathrm{tf}(t_3, d_3) & \mathrm{tf}(t_4, d_3)\\ \mathrm{tf}(t_1, d_4) & \mathrm{tf}(t_2, d_4) & \mathrm{tf}(t_3, d_4) & \mathrm{tf}(t_4, d_4) \end{bmatrix} \times \begin{bmatrix} \mathrm{idf}(t_1) & 0 & 0 & 0\\ 0 & \mathrm{idf}(t_2) & 0 & 0\\ 0 & 0 & \mathrm{idf}(t_3) & 0\\ 0 & 0 & 0 & \mathrm{idf}(t_4) \end{bmatrix} \\ = \begin{bmatrix} \mathrm{tf}(t_1, d_3) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_3) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_3) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_3) \times \mathrm{idf}(t_4)\\ \mathrm{tf}(t_1, d_4) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_4) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_4) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_4) \times \mathrm{idf}(t_4) \end{bmatrix}$

which gives us, in concrete numbers:

$M_{tf\mbox{-}idf} = M_{test} \times M_{idf} = \\ \begin{bmatrix} 0 & 1 & 1 & 1\\ 0 & 2 & 1 & 0 \end{bmatrix} \times \begin{bmatrix} 0.69314718 & 0 & 0 & 0\\ 0 & -0.40546511 & 0 & 0\\ 0 & 0 & -0.40546511 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} \\ = \begin{bmatrix} 0 & -0.40546511 & -0.40546511 & 0\\ 0 & -0.81093022 & -0.40546511 & 0 \end{bmatrix}$
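The same multiplication is easy to reproduce in NumPy. This is a sketch of my own (the names M_test and idf_vec are mine), using np.diag to build the square diagonal matrix:

```python
import numpy as np

M_test = np.array([[0, 1, 1, 1],
                   [0, 2, 1, 0]], dtype=float)
idf_vec = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])

# np.diag builds the square diagonal matrix M_idf; multiplying on the
# right scales each column of M_test by its corresponding idf weight
M_tf_idf = M_test.dot(np.diag(idf_vec))
print(M_tf_idf)
# [[ 0.         -0.40546511 -0.40546511  0.        ]
#  [ 0.         -0.81093022 -0.40546511  0.        ]]
```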

And finally, we can apply our L2 normalization process to the $M_{tf\mbox{-}idf}$ matrix. Please note that this normalization is “row-wise”, because we’re going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

$M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2}$ $= \begin{bmatrix} 0 & -0.70710678 & -0.70710678 & 0\\ 0 & -0.89442719 & -0.4472136 & 0 \end{bmatrix}$

And that is our normalized tf-idf weight matrix for our testing document set.
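The row-wise normalization can likewise be sketched in a few lines of NumPy (again my own illustration; it assumes a NumPy recent enough for np.linalg.norm to accept the axis and keepdims arguments):

```python
import numpy as np

M_tf_idf = np.array([[0.0, -0.40546511, -0.40546511, 0.0],
                     [0.0, -0.81093022, -0.40546511, 0.0]])

# Divide each row by its own L2 norm; keepdims preserves the column shape
row_norms = np.linalg.norm(M_tf_idf, axis=1, keepdims=True)
print(M_tf_idf / row_norms)
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]
```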

### Python practice

Environment used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9

The first step is to create our training and testing document sets and compute the term frequency matrix:

```python
from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]
```

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

```python
from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]
```

Note that I’ve specified the norm as L2: this is optional (actually the default is the L2-norm), but I’ve added the parameter to make it explicit that the L2-norm will be used. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that fit() has calculated the idf for the matrix, let’s transform the freq_term_matrix into the tf-idf weight matrix:

```python
tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]
```

And that is it: the tf_idf_matrix is actually our previous $M_{tf\mbox{-}idf}$ matrix. You can accomplish the same effect by using the Vectorizer class of Scikit.learn, a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to know how to use it for the text classification process.
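For readers on current scikit-learn versions: the combined class is nowadays called TfidfVectorizer (the old Vectorizer name was removed long ago). A rough sketch of the modern equivalent follows; note that the printed numbers will differ from the ones above, because recent versions use a different idf formulation (a smoothed log((1+n)/(1+df)) + 1 by default):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

# CountVectorizer + TfidfTransformer combined in a single class
vectorizer = TfidfVectorizer(norm="l2")
vectorizer.fit(train_set)
print(vectorizer.transform(test_set).todense())
```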

I really hope you liked the post. I tried to make it as simple as possible, even for people without the required mathematical background in linear algebra, etc. In the next Machine Learning post I’m expecting to show how you can use tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

### References

The classic Vector Space Model

Sklearn text feature extraction code

13 Mar 2015 – Formatting, fixed image issues.
3 Oct 2011 – Added the information about the environment used with the Python examples.

## 103 thoughts to “Machine Learning :: Text feature extraction (tf-idf) – Part II”

1. Severtcev says:

Wow!
Perfect intro to tf-idf, thank you very much! Very interesting, I’ve wanted to study this field for a long time and your posts are a real gift. It would be very interesting to read more about use-cases of the technique. And maybe you’ll be interested to shed some light on other methods of text corpus representation, if they exist?
(Sorry for the bad English, I’m trying to improve it, but there is still a lot of work to do)

2. Excellent work Christian! I am looking forward to reading your next posts on document classification, clustering and topic extraction with Naive Bayes, Stochastic Gradient Descent, Minibatch K-Means and Non-negative Matrix Factorization.

Also, the scikit-learn documentation on the text feature extraction part (am I the culprit?) is really poor. If you want to give a hand and improve the current state of affairs, don’t hesitate to join the mailing list.

1. Thanks a lot Olivier. I really want to help sklearn, I just need to get some more time to do that; you guys are doing a great job, I’m really amazed by the amount of algorithms already implemented in the lib, keep up the good work!

3. I like this tutorial for the level of new concepts I am learning here.
That said, which version of scikits-learn are you using?
The latest installed by easy_install seems to have a different module hierarchy (i.e. it doesn’t find feature_extraction in sklearn). If you could mention the version you used, I will try out these examples.

1. Hello Anand, I’m glad you liked it. I’ve added the information about the environment used just before the section “Python practice”, I’m using the scikits.learn 0.9 (released a few weeks ago).

4. siamii says:

Where is part 3? I have to submit an assignment on the vector space model in 4 days. Any hope of having it over the weekend?

1. I’ve no date to publish it since I haven’t got any time to write it =(

5. Niu says:

Thanks again for this complete and explicit tutorial; I am waiting for the coming sections.

6. 吴季刚 says:

Thanks Christian! Very nice work with the sklearn vector space. I just have one question: suppose I have computed the tf_idf_matrix and I would like to compute the pairwise cosine similarity (between each row). I was having problems with the sparse matrix format; could you please give an example for that? Also my matrix is pretty big, say 25K by 60K. Thanks a lot!

7. Khalid says:

Great post... I understood what tf-idf is and how to implement it with a concrete example. But I found 2 things that I’m not sure about:
1- You call the 2-dimensional matrix M_train, but it has the tf values of the documents D3 and D4, so you should have called that matrix M_test instead of M_train, since D3 and D4 are our test documents.
2- When you calculate the idf value for t2 (which is “sun”), it should be log(2/4). Because the number of documents is 2, D3 has the word “sun” 1 time and D4 has it 2 times, which makes 3, but we also add 1 to the value to get rid of the 0-division problem, which makes 4... am I right, or am I missing something?
Thanks.

1. Victoria says:

You are correct: these are excellent blog articles, but the author REALLY has a duty/responsibility to go back and correct errors, like this (and others, e.g. Part 1; …): missing trailing underscores; setting the stop_words parameter; also on my computer, the vocabulary indexing is different.

As much as we appreciate the effort (kudos to the author!), it is also a significant disservice to those who struggle past those (uncorrected) errors in the original material.

1. 维多利亚 says:

Re: my “You are correct” comment (above), I should have added:

“... also noting the comment by Passot (below) regarding the denominator:

‘… what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (‘sun’) is indeed 2+1 (2 documents have the term ‘sun’, +1 to avoid a potential zero division error).’ “

2. Yeshwant says:

Khalid,
This is a response to a very old question. However, I still want to respond to communicate what I understand from the article.
Your question 2: “When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4)”
My understanding: The denominator in the log term should be (number of documents in which the term appears + 1) and not the frequency of the term. The number of documents the term “sun” appears in is 2 (1 time in D3 and 2 times in D4; in total it appears 3 times across the two documents, so 3 is the frequency and 2 is the number of documents). Hence the denominator is 2 + 1 = 3.

8. arzu says:

Thanks... excellent post...

9. Jack says:

Excellent post!
I have a question. From the final tf-idf weight matrix, how can we get the importance of each term (e.g. which is the most important term)? How can we use this matrix to classify documents?

10. Thanuj says:

Thank you very much. You explained it in such a simple way. It was very useful. Thanks again.

11. Thanuj says:

I have the same doubt as Jack (last comment). From the final tf-idf weight matrix, how can we get the importance of each term (e.g. which is the most important term)? How can we use this matrix to classify documents?

12. 丁丁 says:

I have a question..
After the tf-idf operation, we get a numpy array with values. Suppose we need to get the highest 50 values from the array. How can we do that?

1. ashwin sudhini says:

A high value of idf indicates that the particular vector (or document) has high local strength and low global strength, in which case you can assume that the terms in it have high significance locally and cannot be ignored. Compare this against the function (tf), in which only terms repeated many times are given more importance, which most of the time is not a correct modelling technique.

13. Vikram Bakhtiani says:

Hey,
Thanks for the code... it was indeed very helpful!

1. For document clustering, after calculating the inverted term frequency, should I use an association coefficient like the Jaccard coefficient and then apply a clustering algorithm like k-means, or should I apply k-means directly to the document vectors after calculating the inverted term frequency?

2. How do you rate inverted term frequency for calculating document vectors for document clustering?

Thanks a ton for the forthcoming reply!

14. @Khalid: what you’re pointing out in 1- got me confused too for a minute (M_train vs M_test). I think you are mistaken on your second point, though, because what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (“sun”) is indeed 2+1 (2 documents have the term “sun”, +1 to avoid a potential zero division error).

I’d love to read the third installment of this series! I’m especially interested in learning more about feature selection. Is there an idiomatic way to get a sorted list of the terms with the highest tf-idf scores? How would you identify those terms overall? How would you get the terms that are most responsible for a high or low cosine similarity (row by row)?

Thank you for the _great_ posts!

1. Bonnie Varghese says:

Should idf(t2) be log 2/4 ?

15. Matthys Meintjes says:

Excellent article and a great introduction to tf-idf normalization.

You have a very clear and structured way of explaining these difficult concepts.

Thanks!

1. Thanks for the feedback, Matthys; I’m glad you liked the tutorial series.

1. PARAM says:

very good & infomative tutorial…. please upload more tutorials related to documents clustering process.

16. Laurent says:

Excellent article! Thank you, Christian. You did a great job.

17. Gavin Igor says:

Can you provide any reference for doing cosine similarity using tf-idf? Now that we have the matrix of tf-idf weights, how can we use it to calculate the cosine similarity? Thanks for the fantastic article.

18. Lavender says:

Thanks so much for this and for explaining the whole tf-idf thing thoroughly.

1. Thanks for the feedback, I’m glad you liked the tutorial series.

19. Please correct me if I’m wrong.
The formula after “the term frequencies we have calculated in the first tutorial:” should be M_test, not M_train; also, after “These idf weights can be represented by a vector as:” it should be idf_test, not idf_train.

By the way, great series; could you give a simple approach for how to implement classification?

20. Divya says:

Excellent, it really helped me get through the concept of VSM and tf-idf. Thanks Christian.

21. Sergio says:

Very nice post. Congrats!!

Regarding your results, I have a question:

The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

When I read that, I understand that if a word appears in all documents it is less important than a word that only appears in one document:

However, in the results, the words “sun” or “bright” are more important than “sky”.

I’m not sure I fully understand it.

22. Awesome! Explains tf-idf very well. Eagerly awaiting your next post.

23. Awesome work with a clear-cut explanation. Even a layman can easily understand the subject.

24. 苏珊 says:

Terrific! I was familiar with tf-idf before but I found your scikits examples helpful as I’m trying to learn that package.

25. Thank you for writing such a detailed post. I learned a lot.

26. Eugene Chinveeraphan says:

Excellent post! I stumbled on this by chance while looking for more information on CountVectorizer, but I’m glad I read through both of your articles (part 1 and part 2).

Bookmarking your blog now.

1. Thanks a lot for the feedback, Eugene; I’m really glad you liked the tutorial series.

27. me says:

Does not seem to fit_transform() as you describe..
Any idea why?
>>> ts
('The sky is blue', 'The sun is bright')
>>> v7 = CountVectorizer()
>>> v7.fit_transform(ts)
<2x2 sparse matrix of type '' with 4 stored elements in COOrdinate format>
>>> print v7.vocabulary_
{u'is': 0, u'the': 1}

1. Ash says:

Actually, there are two small errors in the first Python sample.
1. CountVectorizer should be instantiated like so:
count_vectorizer = CountVectorizer(stop_words='english')
This will make sure “is”, “the”, etc. are removed.

2. To print the vocabulary, you have to add an underscore at the end.
print "Vocabulary:", count_vectorizer.vocabulary_

Excellent tutorial, just minor things. Hope it helps others.

1. Drogo says:

Thanks Ash. Although the article is pretty self-explanatory, your comment made all the difference.

28. John Kelvin says:

29. I’m using scikit-learn v 0.14. Is there any reason why running the exact same code would produce different results?

30. KARTHIK says:

Thanks for taking the time to write this article. Found it very useful.

31. Vijay says:

It’s useful... thank you for explaining tf-idf so elaborately.

32. Mike says:

Thanks for the great explanation.

I have a question about the calculation of idf(t#).
In the first case, you wrote idf(t1) = log(2/1), because we don’t have such term in our collection, thus, we add 1 to the denominator. Now, in case t2, you wrote log(2/3), why the denominator is equal to 3 and not to 4 (=1+2+1)? In case t3, you write: log(2/3), thus the denominator is equal 3 (=1+1+1). I see here kind of inconsistency. Could you, please, explain, how did calculate the denominator value.

Thanks.

1. Hi Mike, thanks for the feedback. You’re right; I just haven’t fixed it yet due to lack of time to review it and recalculate the values.

2. xpsycho says:

You got it wrong: in the denominator you don’t put the sum of the term’s occurrences in each document, you just count all the documents that have at least one appearance of the term.

3. MIK says:

yes, I had the same question…

33. Huda says:

This is a good post.

34. Huda says:

It would be good if you could provide a way to learn how to use tf-idf in document classification. I saw the example (Python code), but a description of the algorithm would be best, since not everyone understands this language.

Thanks

35. Ganesh says:

Great post, really helped me understand the tf-idf concept!

36. Samuel Kahn says:

Nice post

37. Nice. An explanation helps put things into perspective. Is tf-idf a good way to do clustering (e.g. use Jaccard analysis or variance against the average set from a known corpus)?

Keep writing :)

38. Prem says:

Hi Christian,

It makes me feel very excited and lucky to have read this article. The clarity of your understanding is reflected in the clarity of the document. It made me regain my confidence in the field of machine learning.

Thanks a ton for the beautiful explanation.

Would love more from you.

Thanks,

1. Thanks a lot for the kind words, Prem! I’m very glad you liked the tutorial series.

39. esra'a OK says:

Thank you very, very much; very wonderful and useful.

40. Arne says:

Thanks for the good wrap-up. You mention a couple of papers comparing the L1 and L2 norms, which I plan to study a bit more in depth. Do you still know their names?

41. seher says:

How can I calculate tf-idf for my own text files, located somewhere on my computer?

42. Shubham says:

Brilliant article.

By far the easiest and most sound explanation of tf-tdf I’ve read. I really liked how you explained the mathematics behind it.

43. mehrab says:

Superb article for a novice.

1. Dayananda says:

Excellent material. Excellent!!!

44. Derrick says:

Hi, great post! I’m using the TfidfVectorizer module in scikit-learn to produce the tf-idf matrix with norm=l2. I’ve been examining the output of TfidfVectorizer after fit_transform of the corpus, which I called tfidf_matrix. I’ve summed the rows but they don’t sum to 1. The code is vect = TfidfVectorizer(use_idf=True, sublinear_tf=True, norm='l2'), tfidf_matrix = vect.fit_transform(data). When I run tfidf_matrix.sum(axis=1) the values are larger than 1. Perhaps I’m looking at the wrong matrix, or I misunderstand how the normalization works. I hope someone can clarify this point! Thanks

45. Chris says:

Can I ask, when you calculated the idf, for example log(2/1), did you use log to base 10, base e, or some other value? I’m getting different calculations!

46. Gonzalo G says:

Great tutorial; I just started a new job in ML and this explains things very clearly, as it should be.

47. Harsimranpal says:

Excellent post...! Thanks a lot for this article.

But I need more information: since you show the practical usage with Python, could you provide it in the Java language?

48. Sebastian says:

I’m a bit confused: why does tf-idf give negative numbers in this case? How do we interpret them? Correct me if I’m wrong, but when a vector has a positive value, it means that the magnitude of that component determines how important that word is within the document. If it is negative, I don’t know how to interpret it. If I were to take the dot product of a vector with all positive components and one with a negative component, it would mean that some components could contribute negatively to the dot product even though the vector places very high importance on a particular word.

49. Hi,
Thanks so much for this detailed explanation of this topic, really great. Anyway, could you give me a hint what could be the source of the error that I keep seeing:

freq_term_matrix = count_vectorizer.transform(test_set)
AttributeError: ‘matrix’ object has no attribute ‘transform’

Am I using the wrong version of sklearn?

50. Mohit Gupta says:

Thank you.

51. Alexandro says:

Thanks Chris, you are the only one on the web who was clear about the diagonal matrix.

52. ishpreet says:

Great tf-idf tutorial, excellent work. Please add cosine similarity too :)

53. sherlockatsz says:

I understood the tf-idf calculation process. But what does that matrix mean, and how can we use the tf-idf matrix to calculate similarity? That confuses me. Could you explain how we can use the tf-idf matrix? Thanks.

54. lightningstrike says:

Thanks for your explicit and detailed explanation.

55. Anonymous says:

Thanks, nice post, I’m trying it out.

56. Anonymous says:

Thank you so much for such an amazingly detailed explanation!

57. Akanksha Pande says:

Best explanation... very helpful. Could you please tell me how to plot vectors for SVM text classification? I’m working on tweet classification. I’m very confused, please help me.

58. Kaushik says:

I learned so many things. Thanks Christian. Looking forward to your next tutorial.

59. Mhr says:

Hi, I’m sorry if I’m wrong, but I don’t understand how ||vd4||2 = 1.
The value of d4 = (0.0, 0.89, 0.44, 0.0), so the normalization would be = sqrt(square(0.89) + square(0.44)) = sqrt(0.193) = 0.44
So am I missing something? Please help me understand.

60. Li says:

Hi, it is a great blog!
If I need to handle the bi-gram case, how can I use sklearn to do it?

61. Ali says:

This is really great. I like how you teach. Very, very good.

62. Ritesh says:

I am not getting the same result when I execute the same script.
print ("IDF:", tfidf.idf_) : IDF: [ 2.09861229 1. 1.40546511 1. ]

My Python version: 3.5
My scikit-learn version: 0.18.1

What do I need to change? What might be the possible error?

Thanks,

1. It can be many things; since you’re using a different Python interpreter version and also a different Scikit-Learn version, you should expect differences in the results, since they may have changed default parameters, algorithms, rounding, etc.

1. Ravithej Chikkala says:

I’m also getting: IDF: [ 2.09861229 1. 1.40546511 1. ]

63. Victor says:

Perfect introduction!
No hocus pocus. Clear and simple, as technology should be.
Very helpful.
Thank you very much.
Keep posting!

64. Matt says:

Why is |D| = 2, in the idf equation. Shouldn’t it be 4 since |D| denotes the number of documents considered, and we have 2 from test, 2 from train.

65. Le Van Thien says:

This post is interesting. I like this post…

66. Bren says:

Clear, well-focused explanation... great.

67. Shipika Singh says:

hey, hi Christian
Your post is really helpful for understanding tf-idf from the basics. I’m working on a classification project where I’m using the vector space model to determine the categories where my test document should be present. It’s a part of machine learning. It would be great if you could suggest something related to that; I’m stuck at this point.
Thanks

68. Eshwar S G says:

“See this example to know how to use it for the text classification process.” The “this example” link doesn’t work anymore. Could you please provide a relevant link for the example?

Thanks

69. Amanda says:

Such a great explanation! Thank you!

76. Rousse says:

In the first example, idf(t1): the log(2/1) = 0.3010 by the calculator. Why did you obtain 0.69..? Please, what is wrong?
