Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I

This post is a continuation of the first part, where we started to learn the theory and practice of text feature extraction and vector space model representation. I really recommend you to read the first part of the post series in order to follow this second post.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

Introduction

In the first post, we learned how to use the term-frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is present in almost the entire corpus of your documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is a measure of how important a word is to a document in a collection, and that's why tf-idf incorporates local and global parameters: it takes into consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve that problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more than another isn't 10 times more important than it, and that's why tf-idf uses the logarithmic scale to do that.
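
Just to make the log-scale intuition concrete, here is a minimal sketch of mine (it is not part of the original post, and the numbers are purely illustrative) showing how a logarithm dampens raw counts:

import numpy as np

raw_counts = np.array([1.0, 10.0, 100.0])  # a term occurring 1x, 10x and 100x
log_scaled = 1.0 + np.log(raw_counts)      # a common sublinear scaling of a raw count

print raw_counts   # [   1.   10.  100.]
print log_scaled   # [ 1.          3.30258509  5.60517019]
# the 100x more frequent term ends up only ~5.6x "heavier", not 100x heavier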

But let's go back to our definition of the \mathrm{tf}(t,d), which is actually the term count of the term t in the document d. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency \mathrm{tf}(t,d) of a document on a vector space is usually also normalized. Let's see how we normalize this vector.

Vector normalization

Suppose we want to normalize the term-frequency vector \vec{v_{d_4}} that we have calculated in the first part of this tutorial. The document d4 from the first part of this tutorial had this textual representation:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

\vec{v_{d_4}} = (0, 2, 1, 0)

To normalize the vector is the same as calculating the unit vector of the vector, and they are denoted using the "hat" notation: \hat{v}. The definition of the unit vector \hat{v} of a vector \vec{v} is:

\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}

Where \hat{v} is the unit vector, or the normalized vector, \vec{v} is the vector going to be normalized and \|\vec{v}\|_p is the norm (magnitude, length) of the vector \vec{v} in the L^p space (don't worry, I'm going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector; it is a vector whose length is 1.
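
As a quick illustration (my own sketch, not code from the original post), computing a unit vector with NumPy:

import numpy as np

v = np.array([3.0, 4.0])
v_hat = v / np.linalg.norm(v)   # np.linalg.norm computes the L2-norm by default

print v_hat                  # [ 0.6  0.8]
print np.linalg.norm(v_hat)  # 1.0 -- the unit vector has length 1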

The normalization process (Source: http://processing.org/learning/pvector/)

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the L^p spaces, also called Lebesgue spaces.

Lebesgue spaces

How long is this vector? (Source: http://processing.org/learning/pvector/)

Usually, the length of a vector \vec{u} = (u_1, u_2, u_3, \ldots, u_n) is calculated using the Euclidean norm (a norm is a function that assigns a strictly positive length or size to all vectors in a vector space), which is defined by:

(Source: http://processing.org/learning/pvector/)

\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}

But this isn't the only way to define length, and that's why you see (sometimes) a number p together with the norm notation, like in \|\vec{u}\|_p. That's because it can be generalized as:

\displaystyle \|\vec{u}\|_p = ( \left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p )^\frac{1}{p}

and simplified as:

\displaystyle \|\vec{u}\|_p = (\sum\limits_{i=1}^{n}\left|\vec{u}_i\right|^p)^\frac{1}{p}

So when you read about an L2-norm, you're reading about the Euclidean norm, a norm with p=2, the most common norm used to measure the length of a vector, typically called "magnitude"; actually, when you have an unqualified length measure (without the p number), you have the L2-norm (Euclidean norm).

When you read about an L1-norm, you're reading about the norm with p=1, defined as:

\displaystyle \|\vec{u}\|_1 = (\left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)

Which is nothing more than a simple sum of the components of the vector, also known as the Taxicab distance, also called the Manhattan distance.

Taxicab geometry versus Euclidean distance: In taxicab geometry all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length 6 \times \sqrt{2} \approx 8.48, and is the unique shortest path.
Source: Wikipedia :: Taxicab Geometry
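
To make the difference between the two norms concrete, here is a small sketch of mine (not in the original post) comparing the L1-norm and the L2-norm of the same vector:

import numpy as np

u = np.array([1.0, -2.0, 2.0])

l1_norm = np.abs(u).sum()          # |1| + |-2| + |2| = 5.0 (taxicab / Manhattan length)
l2_norm = np.sqrt((u ** 2).sum())  # sqrt(1 + 4 + 4)  = 3.0 (Euclidean length)

# the general L^p norm from the formula above
lp_norm = lambda x, p: (np.abs(x) ** p).sum() ** (1.0 / p)

print l1_norm, l2_norm              # 5.0 3.0
print lp_norm(u, 1), lp_norm(u, 2)  # 5.0 3.0 -- same values via the general formula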

Note that you can also use any norm to normalize the vector, but we're going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of scikits.learn. You can also find papers comparing the performance of the two approaches among other methods to normalize the document vector; actually, you can use any other method, but you have to be consistent: once you've used a norm, you have to use it for the whole process directly involving the norm (a unit vector that was normalized with the L1-norm is not going to have length 1 if you later take its L2-norm).

Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example, the process of using the L2-norm (we're now using the right terms) to normalize our vector \vec{v_{d_4}} = (0, 2, 1, 0) in order to get its unit vector \hat{v_{d_4}}. To do that, we'll simply plug it into the definition of the unit vector to evaluate it:

\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\  \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{\|\vec{v_{d_4}}\|_2} \\ \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\  \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)

And that is it! Our normalized vector \hat{v_{d_4}} now has an L2-norm \|\hat{v_{d_4}}\|_2 = 1.0.
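
If you want to double-check this result with NumPy, here is a small verification sketch of mine (not part of the original post):

import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])
v_d4_hat = v_d4 / np.sqrt((v_d4 ** 2).sum())   # divide by the L2-norm, sqrt(5)

print v_d4_hat                   # [ 0.          0.89442719  0.4472136   0.        ]
print np.linalg.norm(v_d4_hat)   # 1.0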

Note that here we have normalized our term frequency document vector, but later we’re going to do that after the calculation of the tf-idf.

The term frequency – inverse document frequency (tf-idf) weight

Now that you have understood how vector normalization works in theory and practice, let's continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of the tutorial):

Train Document Set: d1: The sky is blue. d2: The sun is bright. Test Document Set: d3: The sun in the sky is bright. d4: We can see the shining sun, the bright sun.

Your document space can be defined then as D = \{d_1, d_2, \ldots, d_n\} where n is the number of documents in your corpus, and in our case as D_{train} = \{d_1, d_2\} and D_{test} = \{d_3, d_4\}. The cardinality of our document space is defined by \left|{D_{train}}\right| = 2 and \left|{D_{test}}\right| = 2, since we have only 2 documents for training and testing, but they obviously don't need to have the same cardinality.

Let's see now how the idf (inverse document frequency) is then defined:

\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}

where \left|\{d : t \in d\}\right| is the number of documents where the term t appears; when the term-frequency function satisfies \mathrm{tf}(t,d) \neq 0, we're only adding 1 into the formula to avoid zero-division.

The formula for the tf-idf is then:

\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)

and this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).

Now let's calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first tutorial:

M_{train} =  \begin{bmatrix}  0 & 1 & 1 & 1\\  0 & 2 & 1 & 0  \end{bmatrix}

Since we have 4 features, we have to calculate \mathrm{idf}(t_1), \mathrm{idf}(t_2), \mathrm{idf}(t_3) and \mathrm{idf}(t_4):

\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718

\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0

These idf weights can be represented by a vector as:

\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)
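
These idf values are easy to reproduce with a few lines of NumPy. This is my own sketch (not the post's code); like the calculations above, it uses the natural logarithm and adds 1 to the denominator:

import numpy as np

n_docs = 2                              # |D| = 2 documents
df = np.array([0, 2, 2, 1])             # number of documents containing t1, t2, t3, t4
idf = np.log(float(n_docs) / (1 + df))  # idf(t) = log(|D| / (1 + |{d : t in d}|))

print idf   # [ 0.69314718 -0.40546511 -0.40546511  0.        ]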

Now that we have our matrix with the term frequency (M_{train}) and the vector representing the idf for each feature of our matrix (\vec{idf_{train}}), we can calculate our tf-idf weights. What we do is a simple multiplication of each column of the matrix M_{train} with the respective \vec{idf_{train}} vector dimension. To do that, we can create a square diagonal matrix called M_{idf} with both the vertical and horizontal dimensions equal to the vector \vec{idf_{train}} dimension:

M_{idf} =   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix}

and then multiply it to the term frequency matrix, so the final result can be defined then as:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf}

Please note that the matrix multiplication isn't commutative; the result of A \times B will be different than the result of B \times A, and that's why M_{idf} is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

\begin{bmatrix}   \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\   \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2)   \end{bmatrix}   \times   \begin{bmatrix}   \mathrm{idf}(t_1) & 0 & 0 & 0\\   0 & \mathrm{idf}(t_2) & 0 & 0\\   0 & 0 & \mathrm{idf}(t_3) & 0\\   0 & 0 & 0 & \mathrm{idf}(t_4)   \end{bmatrix}   \\ =   \begin{bmatrix}   \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\   \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4)   \end{bmatrix}

Let’s see now a concrete example of this multiplication:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\   \begin{bmatrix}   0 & 1 & 1 & 1\\   0 & 2 & 1 & 0   \end{bmatrix}   \times   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix} \\   =   \begin{bmatrix}   0 & -0.40546511 & -0.40546511 & 0\\   0 & -0.81093022 & -0.40546511 & 0   \end{bmatrix}
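
The same diagonal-matrix multiplication is straightforward to reproduce with NumPy; this is a sketch of mine rather than the post's code:

import numpy as np

M_train = np.array([[0.0, 1.0, 1.0, 1.0],
                    [0.0, 2.0, 1.0, 0.0]])
idf = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])

M_idf = np.diag(idf)               # square diagonal matrix built from the idf vector
M_tfidf = np.dot(M_train, M_idf)   # multiplies each column of M_train by its idf value

print M_tfidf
# [[ 0.         -0.40546511 -0.40546511  0.        ]
#  [ 0.         -0.81093022 -0.40546511  0.        ]]
# note: the broadcasting expression M_train * idf gives the same result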

And finally, we can apply our L2 normalization process to the M_{tf\mbox{-}idf} matrix. Please note that this normalization is "row-wise" because we're going to handle each row of the matrix as a separated vector to be normalized, and not the matrix as a whole:

M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2} = \begin{bmatrix}   0 & -0.70710678 & -0.70710678 & 0\\   0 & -0.89442719 & -0.4472136 & 0   \end{bmatrix}

And that is our pretty normalized tf-idf weight of our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you’ll see that they all have a L2-norm of 1.
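
A minimal row-wise L2 normalization in NumPy (again, my own sketch, not the post's code) that reproduces the matrix above:

import numpy as np

M_tfidf = np.array([[0.0, -0.40546511, -0.40546511, 0.0],
                    [0.0, -0.81093022, -0.40546511, 0.0]])

row_norms = np.sqrt((M_tfidf ** 2).sum(axis=1))     # L2-norm of each row
M_normalized = M_tfidf / row_norms[:, np.newaxis]   # divide each row by its own norm

print M_normalized
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]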

Python practice

Environment Used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9

Now the section you were waiting for! In this section I'll use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module.

The first step is to create our training and testing document sets and compute the term frequency matrix:

from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]

Note that I've specified the norm as L2, this is optional (actually the default is L2-norm), but I've added the parameter to make it explicit to you that it's going to use the L2-norm. Also note that you can see the calculated idf weight by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let's transform the freq_term_matrix to the tf-idf weight matrix:

tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]

And that is it, the tf_idf_matrix is actually our previous M_{tf\mbox{-}idf} matrix. You can accomplish the same effect by using the Vectorizer class of Scikit.learn, which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to know how to use it for the text classification process.
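
Just as a pointer, here is a hedged sketch of that combined usage; class and parameter names changed across scikit-learn releases (the 0.9 release used here calls it Vectorizer, while recent releases provide the equivalent TfidfVectorizer), so treat this as illustrative only. Note also that, unlike the step-by-step example above, here both the vocabulary and the idf are learned from the training set, so the resulting numbers will differ:

from sklearn.feature_extraction.text import Vectorizer  # TfidfVectorizer in recent versions

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

vectorizer = Vectorizer()    # CountVectorizer + TfidfTransformer in a single object
vectorizer.fit(train_set)    # learns the vocabulary and the idf weights
tf_idf_matrix = vectorizer.transform(test_set)
print tf_idf_matrix.todense()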

I really hope you liked the post, I tried to make it simple as possible even for people without the required mathematical background of linear algebra, etc. In the next Machine Learning post I’m expecting to show how you can use the tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part II," in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Wikipedia :: tf-idf

The classic Vector Space Model

Sklearn text feature extraction code

Updates

13 Mar 2015 - Formatting, fixed image issues.
03 Oct 2011 - Added the info about the environment used for Python examples

103 thoughts on "Machine Learning :: Text feature extraction (tf-idf) – Part II"

  1. Wow!
    Perfect intro to tf-idf, thank you very much! Very interesting, I've wanted to study this field for a long time and your posts are a real gift. It would be very interesting to read more about use-cases of the technique. And maybe you'll be interested, please, to shed some light on other methods of text corpus representation, if they exist?
    (Sorry for the bad English, I'm working on improving it, but there is still a lot of work to do)

  2. Excellent work Christian! I am looking forward to reading your next posts on document classification, clustering and topics extraction with Naive Bayes, Stochastic Gradient Descent, Minibatch-k-Means and Non Negative Matrix factorization

    Also, the documentation of scikit-learn is really poor on the text feature extraction part (I am the main culprit…). Don’t hesitate to join the mailing list if you want to give a hand and improve upon the current situation.

    1. Great thanks Olivier. I really want to help sklearn, I just have to get some more time to do that, you guys have done a great work, I’m really impressed by the amount of algorithms already implemented in the lib, keep the good work !

  3. I like this tutorial better for the level of new concepts i am learning here.
    That said, which version of scikits-learn are you using?.
    The latest one installed via easy_install seems to have a different module hierarchy (i.e. sklearn.feature_extraction wasn't found). If you could mention the version you used, I would just try the examples with it.

    1. Hello Anand, I’m glad you liked it. I’ve added the information about the environment used just before the section “Python practice”, I’m using the scikits.learn 0.9 (released a few weeks ago).

  4. Where is part 3? I have to submit an assignment on the vector space model in 4 days. Any hope of putting it up over the weekend?

  5. Thanks again for this complete and explicit tutorial, and I am waiting for the coming parts.

  6. Thanks Christian! Very nice work on the vector space with sklearn. I just have one question: suppose I have computed the 'tf_idf_matrix' and I would like to compute the pairwise cosine similarity (between each row). I was having trouble with the sparse matrix format, can you please give an example for that? Also my matrix is pretty big, say 25k by 60k. Thanks a lot!

  7. Great post… I understand what tf-idf and how to implement it with a concrete example. But I caught 2 things that I’m not sure about:
    1- You called the 2 dimensional matrix M_train, but it has the tf values of the D3 and D4 documents, so you should’ve called that matrix M_test instead of M_train. Because D3 and D4 are our test documents.
    2- When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4). Because number of the documents is 2. D3 has the word ‘sun’ 1 time, D4 has it 2 times. Which makes it 3 but we also add 1 to that value to get rid of divided by 0 problem. And this makes it 4… Am I right or am I missing something?
    Thank you.

    1. You are correct: these are excellent blog articles, but the author REALLY has a duty/responsibility to go back and correct errors, like this (and others, e.g. Part 1; …): missing training underscores; setting the stop_words parameter; also on my computer, the vocabulary indexing is different.

      As much as we appreciate the effort (kudos to the author!), it is also a significant disservice to those who struggle past those (uncorrected) errors in the original material.

      1. re: my ‘you are correct comment’ (above), I should have added:

        “… noting also Frédérique Passot’s comment (below) regarding the denominator:

        "… what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 ('sun') is indeed 2+1 (2 documents have the term 'sun', +1 to avoid a potential zero division error)."

    2. Khalid,
      This is a reply to a very old question. However, I still want to respond to communicate what I understood from the article.
      Your question 2: "When you calculate the idf value for the t2 (which is 'sun') it should be log(2/4)"
      My understanding: The denominator in log term should be (number of documents in which the term appears + 1) and not frequency of the term. The number of documents the term “Sun” appears is 2 (1 time in D3 and 2 times in D4 — totally it appears 3 times in two documents. 3 is frequency and 2 is number of documents). Hence the denominator is 2 + 1 = 3.

  8. excellent post!
    I have some questions. From the last tf-idf weight matrix, how can we get the importance of each respective term (e.g., which is the most important term?). How can we use this matrix to classify documents?

  9. Thank You So Much. You explained it in such a simple way. It was really useful. Once again thanks a lot.

  10. I have the same doubt as Jack (the last comment). From the last tf-idf weight matrix, how can we get the importance of each respective term (e.g., which is the most important term?). How can we use this matrix to differentiate documents?

  11. I have a question..
    After the tf-idf operation, we get a numpy array with values. Suppose we need to get the highest 50 values from the array. How can we do that?

    1. A high value of idf denotes that the particular vector (or document) has high local strength and low global strength, in which case you can assume that the terms in it have high significance locally and can't be ignored. Compare this against the tf function, where only the terms repeated a high number of times are given more importance, which most of the time is not a proper modelling technique.

  12. Hey ,
    Thanx fr d code..was very helpful indeed !

    1.For document clustering,after calculating inverted term frequency, shud i use any associativity coefficient like Jaccards coefficient and then apply the clustering algo like k-means or shud i apply d k-means directly to the document vectors after calculating inverted term frequency ?

    2. How do you evaluate the inverted term frequency for calculating document vectors for text clustering?

    Thanks a ton for the forthcoming reply!

  13. @Khalid: what you’re pointing out in 1- got me confused too for a minute (M_train vs M_test). I think you are mistaken on your second point, though, because what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (“sun”) is indeed 2+1 (2 documents have the term “sun”, +1 to avoid a potential zero division error).

    I’d love to read the third installment of this series too! I’d be particularly interested in learning more about feature selection. Is there an idiomatic way to get a sorted list of the terms with the highest tf.idf scores? How would you identify those terms overall? How would you get the terms which are the most responsible for a high or low cosine similarity (row by row)?

    Thank you for the _wonderful_ post!

  14. Excellent article and a great introduction to td-idf normalization.

    You have explained these complex concepts in a very clear and structured manner.

    Thanks!

      1. Very nice & informative tutorial…. Please upload more tutorials related to the document clustering process.

  15. Can you provide any reference for doing cosine similarity using tfidf so we have the matrix of tf-idf how can we use that to calculate cosine. Thanks for fantastic article.

  16. Please correct me if I'm wrong:
    after the formula starting with "the term frequencies we have calculated in the first tutorial:" it should be Mtest, not Mtrain. Also, after "These idf weights can be represented by a vector as:" it should be idf_test, not idf_train.

    Btw great series, can you give an simple approach for how to implement classification?

  17. Very good post. Congrats!!

    Looking at your results, I have a question:

    I read on Wikipedia:
    The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

    When I read it, I understand that if a word appears in all documents it is less important than a word that only appears in one document:

    However, in the results, 'sun' or 'bright' are more important than 'sky'.

    I’m not sure of understand it completly.

  18. Terrific! I was familiar with tf-idf before, but I found your scikits examples helpful as I'm trying to learn that package.

  19. Excellent post! Stumbled on this by chance looking for more information on CountVectorizer, but I’m glad I read through both of your posts (part 1 and part 2).

    Bookmarking your blog now

  20. fit_transform() doesn't seem to work as you describe..
    Any idea why ?
    >>> ts
    ('The sky is blue', 'The sun is bright')
    >>> v7 = CountVectorizer()
    >>> v7.fit_transform(ts)
    <2x2 sparse matrix of type ''
    with 4 stored elements in COOrdinate format>
    >>> print v7.vocabulary_
    {u’is’: 0, u’the’: 1}

    1. Actually, there are two small errors in the first Python sample.
      1. CountVectorizer should be instantiated like so:
      count_vectorizer = CountVectorizer(stop_words='english')
      This will make sure the ‘is’, ‘the’ etc are removed.

      2. To print the vocabulary, you have to add an underscore at the end.
      print "Vocabulary:", count_vectorizer.vocabulary_

      Excellent tutorial, just small things. Hope it helps others.

      1. Thanks Ash. Although the article was rather self-explanatory, your comment made the entire difference.

  21. Thank you for taking the time to write this article. Found it very useful.

  22. Thanks for the great explanation.

    I have a question about calculation of the idf(t#).
    In the first case, you wrote idf(t1) = log(2/1), because we don’t have such term in our collection, thus, we add 1 to the denominator. Now, in case t2, you wrote log(2/3), why the denominator is equal to 3 and not to 4 (=1+2+1)? In case t3, you write: log(2/3), thus the denominator is equal 3 (=1+1+1). I see here kind of inconsistency. Could you, please, explain, how did calculate the denominator value.

    Thank you.

    1. You misunderstood it; for the denominator you don't take the sum of the term's occurrences in each document, you just count all the documents that have at least one appearance of that term.

  23. It would be good if you could provide a way to show how to use tf-idf in document classification. I see the example (Python code), but an algorithm description would be best, because not all people can understand this language.

    Thanks

  24. Nice. An explanation helps put this thing in perspective. Is tf-idf a good way to do clustering (e.g., using Jaccard analysis or variance with respect to the mean set from a known corpus)?

    Keep writing:)

  25. Hi Christian,

    It makes me very excited and lucky to have read this article. The clarity of your understanding reflects in the clarity of the document. It makes me regain my confidence in the field of machine learning.

    谢谢a ton for the beautiful explanation.

    Would like to read more from you.

    谢谢,
    Neethu

  26. Thank you for the good wrap up. You mention a number of papers which compare L1 and L2 norm, I plan to study that a bit more in depth. You still know their names?

  27. how can i calculate tf idf for my own text file which is located some where in my pc?

  28. Brilliant article.

    By far the simplest and most complete explanation of tf-idf I've read. I really liked how you explained the mathematics behind it.

  29. Hi, great post! I’m using the TfidVectorizer module in scikit learn to produce the tf-idf matrix with norm=l2. I’ve been examining the output of the TfidfVectorizer after fit_transform of the corpora which I called tfidf_matrix. I’ve summed the rows but they do not sum to 1. The code is vect = TfidfVectorizer(use_idf=True, sublunar_tf=True, norm=”l2). tfidf_matrix = vect.fit_transform(data). When I run tfidf_matrix.sum(axis=1) the vectors are larger than 1. Perhaps I’m looking at the wrong matrix or I misunderstand how normalisation works. I hope someone can clarify this point! Thanks

  30. May I ask, when you calculate the idf, e.g. log(2/1), do you use log base 10, e, or some other value? I'm getting different calculations!

  31. Great tutorial, just started a new job in ML and this explains things very clearly as it should be.

  32. Excellent post….! Thanks a lot for this article.

    But I need more information, As you show the practical with python, Can you provide it with JAVA language..

  33. I’m a little bit confused why tf-idf gives negative numbers in this case? How do we interpret them? Correct me if I am wrong, but when the vector has a positive value, it means that the magnitude of that component determines how important that word is in that document. If the it is negative, I don’t know how to interpret it. If I were to take the dot product of a vector with all positive components and one with negative components, it would mean that some components may contribute negatively to the dot product even though on of the vectors has very high importance for a particular word.

  34. Hi,
    Thank you so much for this detailed explanation of this topic, it's really great. Anyway, could you give me a hint about the source of this error that I keep seeing (it might be my mistake):

    freq_term_matrix= count_vectorizer.transform(test_set)
    AttributeError: 'matrix' object has no attribute 'transform'

    Am I using a wrong version of sklearn?

  35. Awesome, simple and effective explanation. Please post more topics with such awesome explanations. Looking forward to upcoming articles.
    Thanks

  36. Thank you Chris, you are the only one on the web who was clear about the diagonal matrix.

  37. Great tutorial for Tf-Idf. Excellent work . Please add for cosine similarity also:)

  38. I understood the tf-idf calculation process. But what does that matrix mean and how can we use the tfidf matrix to calculate the similarity confuse me. can you explain that how can we use the tfidf matrix .thanks

  39. best explanation.. Very helpful. Can you please tell me how to plot vectors in text classification in svm.. I am working on tweets classification. I am confused please help me.

  40. Hi, I’m sorry if i have mistaken but i could not understand how is ||Vd4||2 = 1.
    the value of d4 = (0.0 ,0.89,0.44,0.0) so the normalization will be = sqrt( square(.89)+square(.44))=sqrt(.193) = .44
    so what did i missed ? please help me to understand .

  41. Hi, this is a great blog!
    If I need to do bi-gram cases, how can I use sklearn to finish it?

  42. I don't get the same result when I execute the same script.
    print("IDF:", tfidf.idf_): IDF: [ 2.09861229 1. 1.40546511 1. ]

    My python version is: 3.5
    Scikit Learn version is: o.18.1

    what does i need to change? what might be the possible error?

    Thanks,

    1. It can be many things, since you’re using a different Python interpreter version and also a different Scikit-Learn version, you should expect differences in the results since they may have changed default parameters, algorithms, rounding, etc.

  43. Perfect introduction!
    No tricks and gimmicks. Clear and simple, as technique should be.
    Very helpful
    Thank you very much.
    Please keep posting!
    Obrigado

  44. Why is |D| = 2 in the idf equation? Shouldn't it be 4, since |D| represents the number of documents considered and we have 2 from the test set and 2 from the train set?

  45. Hey, hi Christian
    Your article really helped me understand tf-idf from the basics. I'm working on a classification project where I'm using the vector space model, which results in determining the categories in which my test document should be present. It's the machine learning part. It would be great if you could suggest something related to this. I'm stuck at this point.
    thank you

  46. See this example to know how to use it for the text classification process. “This” link does not work any more. Can you please provide a relevant link for the example.

    Thanks

  47. Say, you have got a very good post. Really thank you! Awesome.

  48. There is certainly a great deal to learn about this subject. I really like all the points you made.

  49. 1vbXlh You have brought up some very wonderful details there, appreciate it for the post.

  50. I know this site provides quality based articles or
    reviews and additional data, is there any other web page which presents these kinds of
    information in quality?

  51. In the first example, idf(t1), log(2/1) by calculator = 0.3010. Why did they get 0.69.. is there something wrong?
