# Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I

This post is a continuation of the first part, where we started to learn the theory and practice of text feature extraction and vector space model representation. I really recommend you to read the first part of the post series in order to follow this second post.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

### Introduction

In the first post, we learned how to use the term frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms, which are empirically more informative than the high-frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term that is present in almost the entire corpus of your documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is a measure of how important a word is to a document in a collection, and that's why tf-idf incorporates local and global parameters: it takes into consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve the problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more than another isn't 10 times more important than it, and that's why tf-idf uses the logarithmic scale to do that.

But let's go back to our definition of $\mathrm{tf}(t,d)$, which is actually the term count of the term $t$ in the document $d$. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency $\mathrm{tf}(t,d)$ of a document in the vector space is usually also normalized. Let's see how we normalize this vector.

### Vector normalization

Suppose we are going to normalize the term-frequency vector of document d4 from the first part of this tutorial:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

$\vec{v_{d_4}} = (0,2,1,0)$

The normalization of the vector into its unit vector is defined as:

$\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}$

where $\hat{v}$ is the unit vector, or the normalized vector, $\vec{v}$ is the vector going to be normalized, and $\|\vec{v}\|_p$ is the norm (magnitude, length) of the vector $\vec{v}$ in the $L^p$ space (don't worry, I'm going to explain it all).

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the $L^p$ spaces, also called Lebesgue spaces.

### Lebesgue spaces

Usually, the length of a vector $\vec{u} = (u_1, u_2, u_3, \ldots, u_n)$ is calculated using the Euclidean norm (a norm is a function that assigns a strictly positive length or size to all vectors in a vector space), which is defined by:

$\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}$

But this isn't the only way to define length, and that's why you sometimes see a number $p$ together with the norm notation, like in $\|\vec{u}\|_p$. That's because it can be generalized as:

$\displaystyle \|\vec{u}\|_p = ( \left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p )^\frac{1}{p}$

and simplified as:

$\displaystyle \|\vec{u}\|_p = \left( \sum\limits_{i=1}^{n} \left|\vec{u}_i\right|^p \right)^\frac{1}{p}$

So when you read about an L2-norm, you're reading about the Euclidean norm, a norm with $p = 2$, the most common norm used to measure the length of a vector, typically called "magnitude"; actually, when you have an unqualified length measure (without the $p$ number), you have the L2-norm (Euclidean norm).

When you read about an L1-norm, you're reading about the norm with $p = 1$, defined as:

$\displaystyle \|\vec{u}\|_1 = ( \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)$

which is nothing more than a simple sum of the absolute values of the components of the vector, also known as the taxicab norm, or Manhattan distance.

Taxicab geometry versus Euclidean distance: In taxicab geometry all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length$6 \times \sqrt{2} \approx 8.48$, and is the unique shortest path.
Source: Wikipedia :: Taxicab Geometry

Note that you can also use any norm to normalize the vector, but we're going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of scikits.learn. You can also find papers comparing the performance of these two approaches, among other methods, to normalize the document vector; actually, you can use any other method, but you have to be consistent: once you've used a norm, you have to use it for the whole process directly involving the norm (i.e. a unit vector created using the L1-norm isn't going to have length 1 if you later take its L2-norm).

### Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example: the process of using the L2-norm (we'll use the right terms now) to normalize our vector $\vec{v_{d_4}} = (0,2,1,0)$ in order to get its unit vector $\hat{v_{d_4}}$. To do that, we'll simply plug it into the definition of the unit vector to evaluate it:

$\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\ \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{||\vec{v_{d_4}}||_2} \\ \\ \\ \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\ \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\ \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)$

And that's it! Our normalized vector $\hat{v_{d_4}}$ now has an L2-norm $\|\hat{v_{d_4}}\|_2 = 1.0$.

Note that here we have normalized our term frequency document vector, but later we’re going to do that after the calculation of the tf-idf.

### The term frequency – inverse document frequency (tf-idf) weight

Now that you understand how vector normalization works in theory and practice, let's continue. Remember our document space from the first part of this tutorial:

Train Document Set:
d1: The sky is blue.
d2: The sun is bright.

Test Document Set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can be defined then as $D = \{ d_1, d_2, \ldots, d_n \}$ where $n$ is the number of documents in your corpus, and in our case as $D_{train} = \{d_1, d_2\}$ and $D_{test} = \{d_3, d_4\}$. The cardinality of our document space is defined by $\left|{D_{train}}\right| = 2$ and $\left|{D_{test}}\right| = 2$, since we have only 2 documents for training and testing, but they obviously don't need to have the same cardinality.

Let's see now how the idf (inverse document frequency) is defined:

$\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}$

where $\left|\{d : t \in d\}\right|$ is the number of documents where the term $t$ appears, i.e. where the term-frequency function satisfies $\mathrm{tf}(t,d) \neq 0$; we're only adding 1 into the formula to avoid zero division.

The formula for the tf-idf is then:

$\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)$

and this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).

Now, let's calculate the idf for each feature of the term-frequency matrix we computed in the first tutorial:

$M_{train} = \begin{bmatrix} 0 & 1 & 1 & 1\\ 0 & 2 & 1 & 0 \end{bmatrix}$

Since we have 4 features, we have to calculate $\mathrm{idf}(t_1)$, $\mathrm{idf}(t_2)$, $\mathrm{idf}(t_3)$, and $\mathrm{idf}(t_4)$:

$\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718$

$\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511$

$\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511$

$\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0$

These idf weights can be represented by a vector as:

$\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)$

Now that we have our matrix with the term frequencies ($M_{train}$) and the vector representing the idf for each feature of our matrix ($\vec{idf_{train}}$), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix $M_{train}$ by the respective $\vec{idf_{train}}$ vector dimension. To do that, we can create a square diagonal matrix called $M_{idf}$ with both vertical and horizontal dimensions equal to the dimension of the vector $\vec{idf_{train}}$:

$M_{idf} = \begin{bmatrix} 0.69314718 & 0 & 0 & 0\\ 0 & -0.40546511 & 0 & 0\\ 0 & 0 & -0.40546511 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}$

and then multiply it to the term frequency matrix, so the final result can be defined then as:

$M_{tf\mbox{-}idf} = M_{train} \times M_{idf}$

Please note that matrix multiplication isn't commutative: the result of $A \times B$ will be different from the result of $B \times A$, and this is why the $M_{idf}$ is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

$\begin{bmatrix} \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\ \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2) \end{bmatrix} \times \begin{bmatrix} \mathrm{idf}(t_1) & 0 & 0 & 0\\ 0 & \mathrm{idf}(t_2) & 0 & 0\\ 0 & 0 & \mathrm{idf}(t_3) & 0\\ 0 & 0 & 0 & \mathrm{idf}(t_4) \end{bmatrix} \\ = \begin{bmatrix} \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\ \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4) \end{bmatrix}$

Let’s see now a concrete example of this multiplication:

$M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\ \begin{bmatrix} 0 & 1 & 1 & 1\\ 0 & 2 & 1 & 0 \end{bmatrix} \times \begin{bmatrix} 0.69314718 & 0 & 0 & 0\\ 0 & -0.40546511 & 0 & 0\\ 0 & 0 & -0.40546511 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} \\ = \begin{bmatrix} 0 & -0.40546511 & -0.40546511 & 0\\ 0 & -0.81093022 & -0.40546511 & 0 \end{bmatrix}$

And finally, we apply our L2 normalization process to each row of the $M_{tf\mbox{-}idf}$ matrix:

$M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2} = \begin{bmatrix} 0 & -0.70710678 & -0.70710678 & 0\\ 0 & -0.89442719 & -0.4472136 & 0 \end{bmatrix}$

And that is our pretty normalized tf-idf weight matrix for our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you'll see that they all have an L2-norm of 1.

### Python practice

Environment used: Python v.2.7.2, NumPy v.1.6.1, SciPy v.0.9.0, scikit-learn (scikits.learn) v.0.9

Now the section you were waiting for! In this section I'll use Python to show each step of the tf-idf calculation using the scikit-learn feature extraction module.

The first step is to create our training and testing document sets and compute the term-frequency matrix:

```python
from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]
```

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

```python
from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]
```

Note that I've specified the norm as L2; this is optional (actually the default is the L2-norm), but I've added the parameter to make it explicit that it's going to use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let's transform the freq_term_matrix to the tf-idf weight matrix:

```python
tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]
```

I really hope you liked the post. I tried to make it as simple as possible, even for people without the required mathematical background in linear algebra, etc. In the next Machine Learning post I'm expecting to show how you can use tf-idf to calculate cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part II," in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

### References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Wikipedia :: tf-idf

Sklearn text feature extraction code

13 Mar 2015: Formatting, fixed image issues.
03 Oct 2011: Added the info about the environment used for the Python examples.

## 103 thoughts to “Machine Learning :: Text feature extraction (tf-idf) – Part II”

1. Severtcev says:

Wow!
Perfect intro to tf-idf, thank you very much! Very interesting, I've wanted to study this field for a long time and your posts are a real gift. It would be very interesting to read more about use-cases of the technique. And maybe you'll be interested, please, to shed some light on other methods of text corpus representation, if they exist?
(Sorry for the bad English, I'm trying to improve it, but there is still a lot of work to do)

2. Excellent work Christian! I am looking forward to reading your next posts on document classification, clustering and topics extraction with Naive Bayes, Stochastic Gradient Descent, Minibatch-k-Means and Non Negative Matrix factorization

Also, the documentation of scikit-learn is really poor on the text feature extraction part (I am the main culprit…). Don’t hesitate to join the mailing list if you want to give a hand and improve upon the current situation.

1. Great thanks Olivier. I really want to help sklearn, I just have to find some more time to do that; you guys have done great work, I'm really impressed by the amount of algorithms already implemented in the lib. Keep up the good work!

3. I like this tutorial better for the level of new concepts i am learning here.
That said, which version of scikits-learn are you using?.
The latest as installed by easy_install seems to have a different module hierarchy (i.e doesn’t find feature_extraction in sklearn). If you could mention the version you used, i will just try out with those examples.

1. Hello Anand, I’m glad you liked it. I’ve added the information about the environment used just before the section “Python practice”, I’m using the scikits.learn 0.9 (released a few weeks ago).

4. siamii says:

Where’s part 3? I’ve got to submit an assignment on Vector Space Modelling in 4 days. Any hope of putting it up over the weekend?

1. I haven't had time to post it because I haven't had any time to write it =(

5. Niu says:

Thanks again for this complete and explicit tutorial; I am waiting for the coming section.

6. Jason Wu says:

Thanks Christian! A very nice work on vector space with sklearn. I just have one question: suppose I have computed the 'tf_idf_matrix', and I would like to compute the pair-wise cosine similarity (between each of the rows). I was having problems with the sparse matrix format; can you please give an example on that? Also my matrix is pretty big, say 25k by 60k. Thanks a lot!

7. Khalid says:

Great post… I understand what tf-idf is and how to implement it with a concrete example. But I caught 2 things that I'm not sure about:
1- You called the 2 dimensional matrix M_train, but it has the tf values of the D3 and D4 documents, so you should’ve called that matrix M_test instead of M_train. Because D3 and D4 are our test documents.
2- When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4). Because number of the documents is 2. D3 has the word ‘sun’ 1 time, D4 has it 2 times. Which makes it 3 but we also add 1 to that value to get rid of divided by 0 problem. And this makes it 4… Am I right or am I missing something?
Thank you.

1. Victoria says:

You are correct: these are excellent blog articles, but the author REALLY has a duty/responsibility to go back and correct errors, like this (and others, e.g. Part 1; …): missing trailing underscores; setting the stop_words parameter; also on my computer, the vocabulary indexing is different.

As much as we appreciate the effort (kudos to the author!), it is also a significant disservice to those who struggle past those (uncorrected) errors in the original material.

1. Victoria says:

re: my ‘you are correct comment’ (above), I should have added:

“… noting also Frédérique Passot’s comment (below) regarding the denominator:

‘… what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (‘sun’) is indeed 2+1 (2 documents have the term ‘sun’, +1 to avoid a potential zero division error).’ “

2. Yeshwant says:

Khalid,
This is a reply to a very old question. Still, I'd like to respond and share what I understood from the article.
Your question 2: "When you calculate the idf value for t2 (which is 'sun'), it should be log(2/4)"
My understanding: The denominator in log term should be (number of documents in which the term appears + 1) and not frequency of the term. The number of documents the term “Sun” appears is 2 (1 time in D3 and 2 times in D4 — totally it appears 3 times in two documents. 3 is frequency and 2 is number of documents). Hence the denominator is 2 + 1 = 3.

8. arzu says:

thanks… excellent post…

9. Jack says:

excellent post!
I have some questions. From the last tf-idf weight matrix, how can we get the importance of each term (e.g., which is the most important term?)? How can we use this matrix to classify documents?

10. Thanuj says:

Thank You So Much. You explained it in such a simple way. It was really useful. Once again thanks a lot.

11. Thanuj says:

I have the same doubt as Jack (comment above). From the last tf-idf weight matrix, how can we get the importance of each term (e.g. which is the most important term?)? How can we use this matrix to classify documents?

12. tintin says:

I have a question..
After the tf-idf operation, we get a numpy array with values. Suppose we need to get the highest 50 values from the array. How can we do that?

1. Ashwin Sudhini says:

A high value of idf denotes that the particular vector (or document) has high local strength and low global strength, in which case you can assume that the terms in it have high significance locally and can't be ignored. Compare this against the tf function alone, where only the terms repeated a high number of times are given more importance, which most of the time is not a proper modelling technique.

13. Vikram Bakhtiani says:

Hey,
Thanks for the code.. it was very helpful indeed!

1. For document clustering, after calculating the inverted term frequency, should I use an associativity coefficient like Jaccard's coefficient and then apply a clustering algorithm like k-means, or should I apply k-means directly to the document vectors after calculating the inverted term frequency?

2. How do you rate inverted term frequency for calculating document vectors for document clustering?

Thanks a ton for the forthcoming reply!

14. @Khalid: what you’re pointing out in 1- got me confused too for a minute (M_train vs M_test). I think you are mistaken on your second point, though, because what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (“sun”) is indeed 2+1 (2 documents have the term “sun”, +1 to avoid a potential zero division error).

I’d love to read the third installment of this series too! I’d be particularly interested in learning more about feature selection. Is there an idiomatic way to get a sorted list of the terms with the highest tf.idf scores? How would you identify those terms overall? How would you get the terms which are the most responsible for a high or low cosine similarity (row by row)?

Thanks for the _wonderful_ post!

1. Varghese Bonnie says:

Should idf(t2) be log 2/4 ?

15. Matthys Meintjes says:

Excellent article and a great introduction to td-idf normalization.

You have a very clear and structured way of explaining these difficult concepts.

Thanks!

1. Thanks for the feedback Matthys, I'm glad you liked the tutorial series.

1. param says:

Very nice & informative tutorial…. Please upload more tutorials related to the document clustering process.

16. Laurent says:

Excellent article ! Thank you Christian. You did a great job.

17. Gavin Igor says:

Can you provide any reference for doing cosine similarity using tf-idf? So we have the matrix of tf-idf; how can we use that to calculate the cosine? Thanks for the fantastic article.

18. lavender says:

Thank you so much for this and for thoroughly explaining the whole tf-idf thing.

19. Please correct me if I'm wrong:
In the paragraph starting with "the frequency we computed in the first tutorial:", shouldn't it be Mtest instead of Mtrain? Also, after "These idf weights can be represented by a vector as:", shouldn't it be idf_test instead of idf_train?

Btw great series, can you give a simple approach for how to implement classification?

20. Divya says:

Excellent, it really helped me get through the concepts of VSM and tf-idf. Thanks Christian.

21. Sergio says:

Very good post. Congrats!!

Looking at your results, I have a question:

The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

When I read it, I understand that if a word appears in all documents, it is less important than a word that only appears in one document:

However, in the results, the words "sun" or "bright" are more important than "sky".

I'm not sure I understand it completely.

22. Awesome! Explains TF-IDF very well. Waiting eagerly for your next post.

23. awesome work with a clear-cut explanation. Even a layman can easily understand the subject.

24. Susan says:

Terrific! I was already familiar with tf-idf, but I found your scikits examples helpful as I'm trying to learn that package.

25. Thank you for writing such a detailed post. I learned a lot.

26. Eugene Chinveeraphan says:

Excellent post! Stumbled on this by chance looking for more information on CountVectorizer, but I’m glad I read through both of your posts (part 1 and part 2).

1. Great thanks for the feedback Eugene, I’m really glad you liked the tutorial series.

27. me says:

fit_transform() doesn't seem to work as you describe..
Any idea why ?
>>> ts
('The sky is blue', 'The sun is bright')
>>> v7 = CountVectorizer()
>>> v7.fit_transform(ts)
<2x2 sparse matrix
with 4 stored elements in COOrdinate format>
>>> print v7.vocabulary_
{u’is’: 0, u’the’: 1}

1. Ash says:

Actually, there are two small errors in the first Python sample.
1. CountVectorizer should be instantiated like so:
count_vectorizer = CountVectorizer(stop_words='english')
This will make sure the ‘is’, ‘the’ etc are removed.

2. To print the vocabulary, you have to add an underscore at the end.
print "Vocabulary:", count_vectorizer.vocabulary_

Excellent tutorial, just small things. Hope it helps others.

1. Drogo says:

Thanks Ash. Although the article was rather self-explanatory, your comment made the entire difference.

28. Kelvin John says:

I like your articles.

29. I'm using scikit-learn v0.14. Is there any reason my results for running the exact same code would be different?

30. Karthik says:

31. Vijay says:

It's useful….. thank you for explaining tf-idf very thoroughly..

32. Mike says:

Thanks for the great explanation.

I have a question about the calculation of the idf(t#).
In the first case, you wrote idf(t1) = log(2/1), because we don't have such a term in our collection, thus we add 1 to the denominator. Now, in case t2, you wrote log(2/3), so why is the denominator equal to 3 and not 4 (=1+2+1)? In case t3, you write: log(2/3), thus the denominator is equal to 3 (=1+1+1). I see here a kind of inconsistency. Could you please explain how you calculated the denominator value?

Thanks.

1. Hello Mike, thanks for the feedback. You’re right, I just haven’t fixed it yet due to lack of time to review it and recalculate the values.

2. xpsycho says:

You got it wrong: in the denominator you don't put the sum of the occurrences of the term in each document, you just count all the documents that have at least one appearance of the term.

3. mik says:

yes, I had the same question…

33. huda says:

This is a good post

34. huda says:

It would be good if you could provide a way to learn how to use tf-idf in document classification. I see the example (Python code), but a description of the algorithm would be best, because not all people can understand this language.

Thanks

35. Ganesh says:

Great post, it really helped me understand the tf-idf concept!

36. Samuel Kahn says:

Nice article

37. Nice. An explanation helps put things into perspective. Is tf-idf a good way to do clustering (e.g. use Jaccard analysis or variance against the average set from a known corpus)?

Keep writing :)

38. Neethu Prem says:

Hi Christian,

It makes me very excited and lucky to have read this article. The clarity of your understanding reflects in the clarity of the document. It makes me regain my confidence in the field of machine learning.

Thanks a ton for the beautiful explanation.

Would like to read more from you.

Thanks,
Neethu

1. Great thanks for the kind words Neethu! I'm very glad you liked the tutorial series.

39. esra'a ok says:

thank you very very much, very wonderful and useful.

40. Arne says:

Thank you for the good wrap-up. You mention a number of papers which compare the L1 and L2 norms; I plan to study that a bit more in depth. Do you still know their names?

41. seher says:

How can I calculate tf-idf for my own text file, which is located somewhere on my PC?

42. Shubham says:

Brilliant article.

By far the simplest and most complete explanation of tf-idf I've read. I really liked how you explained the math behind it.

43. mehrab says:

superb article for newbies

1. Dayananda says:

Excellent material. Excellent!!!

44. Crane says:

Hi, great post! I'm using the TfidfVectorizer module in scikit-learn to produce the tf-idf matrix with norm=l2. I've been examining the output of the TfidfVectorizer after fit_transform of the corpora, which I called tfidf_matrix. I've summed the rows but they do not sum to 1. The code is vect = TfidfVectorizer(use_idf=True, sublinear_tf=True, norm="l2"). tfidf_matrix = vect.fit_transform(data). When I run tfidf_matrix.sum(axis=1) the values are larger than 1. Perhaps I'm looking at the wrong matrix or I misunderstand how normalisation works. I hope someone can clarify this point! Thanks

45. Chris says:

May I ask, when you calculate the idf, for example log(2/1), do you use log base 10, base e, or some other value? I'm getting different calculations!

46. Gonzalo G says:

Great tutorial, just started a new job in ML and this explains things very clearly as it should be.

47. Harsimranpal says:

But I need more information. As you show the practice with Python, can you provide it with the Java language..

48. Sebastian says:

I'm a little bit confused why tf-idf gives negative numbers in this case. How do we interpret them? Correct me if I am wrong, but when the vector has a positive value, it means that the magnitude of that component determines how important that word is in that document. If it is negative, I don't know how to interpret it. If I were to take the dot product of a vector with all positive components and one with negative components, it would mean that some components may contribute negatively to the dot product even though one of the vectors has very high importance for a particular word.

49. Hi,
thank you so much for this detailed explanation on this topic, really great. Anyway, could you give me a hint what could be the source of the error that I keep seeing:

freq_term_matrix = count_vectorizer.transform(test_set)
AttributeError: 'matrix' object has no attribute 'transform'

Am I using a wrong version of sklearn?

50. Mohit Gupta says:

Awesome, simple and effective explanation. Please post more topics with such awesome explanations. Looking forward to upcoming articles.
Thanks

51. Alessandro says:

Thank you Chris, you are the only one on the web who was clear about the diagonal matrix.

52. ishpreet says:

Great tutorial for tf-idf. Excellent work. Please add cosine similarity as well :)

53. sherlockatsz says:

I understood the tf-idf calculation process. But what that matrix means and how we can use the tf-idf matrix to calculate the similarity confuses me. Can you explain how we can use the tf-idf matrix? Thanks

54. lightningstrike says:

55. Anonymous says:

thanks, nice post, I’m trying it out

56. Anonymous says:

Thank you so much for such an amazing detailed explanation!

57. Akanksha Pande says:

best explanation.. Very helpful. Can you please tell me how to plot vectors in text classification in SVM.. I am working on tweet classification. I am confused, please help me.

58. Koushik says:

I learned a lot. Thanks Christian. Looking forward to your next tutorial.

59. Mhr says:

Hi, I'm sorry if I am mistaken, but I could not understand how ||Vd4||2 = 1.
the value of d4 = (0.0 ,0.89,0.44,0.0) so the normalization will be = sqrt( square(.89)+square(.44))=sqrt(.193) = .44

60. Cuiqing Li says:

Hi, it is a great blog!
If I need to do bi-gram cases, how can I use sklearn to finish it?

61. alireza says:

It is very great. I love your teaching. Very, very good.

62. Ritesh says:

I am not getting the same result when I execute the same script.
print("IDF:", tfidf.idf_) gives: IDF: [ 2.09861229 1. 1.40546511 1. ]

My Python version is: 3.5
My scikit-learn version is: 0.18.1

What do I need to change? What might be the possible error?

thanks,

1. It can be many things. Since you're using a different Python interpreter version and also a different scikit-learn version, you should expect differences in the results, since they may have changed default parameters, algorithms, rounding, etc.

1. Ravithej Chikkala says:

I am also getting: IDF: [2.09861229 1. 1.40546511 1. ]

63. Victor says:

Perfect introduction!
No tricks. Clear and simple, as technology should be.
Thank you very much.
Keep posting!

64. Hitesh Nankani says:

Why is |D| = 2 in the idf equation? Shouldn't it be 4, since |D| denotes the number of documents considered, and we have 2 from test and 2 from train?

65. LÊ VĂN HẠNH says:

This post is interesting. I like this post…

66. Bren says:

Clear-cut and to-the-point explanations… great

67. Shipika Singh says:

Hey, hi Christian,
Your article really helped me understand tf-idf from the basics. I'm working on a classification project where I'm using the vector space model, which results in determining the categories my test document should belong to; it's the machine learning part. It would be great if you could point me to anything related, as I'm stuck at this point.
thank you

68. Eshwar S G says:

"See this example to know how to use it for the text classification process." This link does not work any more. Can you please provide a relevant link for the example?

Thanks

69. amanda says:

Such a great explanation! Thank you!

70. Alternative Investments says:

wow, awesome post. Much thanks again. Will read on

Say, you got a nice post. Really thank you! Awesome.

72. togel online says:

Wow, great article post. Much thanks again. Awesome.

73. Mobile Computer says:

74. chocopie says:

You have brought up very wonderful details, appreciate it for the post.

75. I know this site provides quality-based articles or reviews and additional data; is there any other web page which presents these kinds of information in quality?

76. Rousse says:

In the first example, idf(t1), the log(2/1) = 0.3010 according to the calculator. Why did they obtain 0.69..? Please, what is wrong?