Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I

This post is a continuation of the first part, where we started to learn the theory and practice of text feature extraction and the vector space model representation. I really recommend you to read the first part of the post series in order to follow this second post.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

Introduction

In the first post, we learned how to use the term-frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is present in almost the entire corpus of your documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is how important a word is to a document in a collection, and that's why tf-idf incorporates local and global parameters, because it takes into account not only the isolated term but also the term within the document collection. What tf-idf then does to solve that problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more than another isn't 10 times more important than it, and that's why tf-idf uses the logarithmic scale to do that.

But let's go back to our definition of the \mathrm{tf}(t,d), which is actually the term count of the term t in the document d. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency \mathrm{tf}(t,d) of a document on the vector space is usually also normalized. Let's see how we normalize this vector.

Vector normalization

Suppose we want to normalize the term-frequency vector \vec{v_{d_4}} that we have calculated in the first part of this tutorial. The document d_4 from the first part of this tutorial had this textual representation:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

\vec{v_{d_4}} = (0, 2, 1, 0)

To normalize the vector is the same as calculating the unit vector of the vector, and unit vectors are denoted using the "hat" notation: \hat{v}. The definition of the unit vector \hat{v} of a vector \vec{v} is:

\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}

Where the \hat{v} is the unit vector, or the normalized vector, the \vec{v} is the vector going to be normalized and \|\vec{v}\|_p is the norm (magnitude, length) of the vector \vec{v} in the L^p space (don't worry, I'm going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector, a vector whose length is 1.

The normalization process (Source: http://processing.org/learning/pvector/)

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the L^p spaces, also called Lebesgue spaces.

Lebesgue spaces

How long is this vector? (Source: http://processing.org/learning/pvector/)

Usually, the length of a vector \vec{u} = (u_1, u_2, u_3, \ldots, u_n) is calculated using the Euclidean norm (a norm is a function that assigns a strictly positive length or size to all vectors in a vector space), which is defined by:

(Source: http://processing.org/learning/pvector/)

\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}

But this isn't the only way to define length, and that's why you see (sometimes) a number p together with the norm notation, like in \|\vec{u}\|_p. That's because it can be generalized as:

\displaystyle \|\vec{u}\|_p = ( \left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p )^\frac{1}{p}

and simplified as:

\displaystyle \|\vec{u}\|_p = (\sum\limits_{i=1}^{n}\left|\vec{u}_i\right|^p)^\frac{1}{p}

So when you read about a L2-norm, you're reading about the Euclidean norm, a norm with p = 2, the most common norm used to measure the length of a vector, typically called "magnitude"; actually, when you have an unqualified length measure (without the p number), you have the L2-norm (Euclidean norm).

When you read about a L1-norm, you're reading about the norm with p = 1, defined as:

\displaystyle \|\vec{u}\|_1 = ( \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)

Which is nothing more than a simple sum of the components of the vector, also known as the Taxicab distance, also called the Manhattan distance.

Taxicab geometry versus Euclidean distance: In taxicab geometry all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length 6 \times \sqrt{2} \approx 8.48, and is the unique shortest path.
Source: Wikipedia :: Taxicab Geometry

Note that you can also use any norm to normalize the vector, but we're going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of scikits.learn. You can also find papers comparing the performance of the two approaches among other methods to normalize the document vector; actually you can use any other method, but you have to be consistent: once you've used a norm, you have to use it for the whole process directly involving the norm (a unit vector that was built using the L1-norm isn't going to have length 1 if you take its L2-norm later).
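Just to make the difference between the two norms concrete, here is a minimal NumPy sketch (my own illustration, not code from the original post) computing both of them for the term-frequency vector used later in this section:

import numpy as np

v = np.array([0.0, 2.0, 1.0, 0.0])

l1 = np.abs(v).sum()          # L1-norm (taxicab/Manhattan length): 3.0
l2 = np.sqrt((v ** 2).sum())  # L2-norm (Euclidean length): sqrt(5) ~ 2.2360679...

print(l1)
print(l2)
print(np.linalg.norm(v, ord=1))  # same L1 value using NumPy's helper
print(np.linalg.norm(v, ord=2))  # same L2 value using NumPy's helper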

Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example, the process of using the L2-norm (we're going to use the right terms now) to normalize our vector \vec{v_{d_4}} = (0,2,1,0) in order to get its unit vector \hat{v_{d_4}}. To do that, we'll simply plug it into the definition of the unit vector and evaluate it:

\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\  \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{\|\vec{v_{d_4}}\|_2} \\ \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\  \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)

And that is it! Our normalized vector \hat{v_{d_4}} now has an L2-norm \|\hat{v_{d_4}}\|_2 = 1.0.

Note that here we normalized our term frequency document vector, but later we're going to do that after the calculation of the tf-idf.
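The same numbers can be checked with a few lines of NumPy; this is just a verification sketch of mine, not part of the original post:

import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])
v_d4_hat = v_d4 / np.linalg.norm(v_d4, ord=2)  # divide the vector by its L2-norm, sqrt(5)

print(v_d4_hat)                  # [ 0.          0.89442719  0.4472136   0.        ]
print(np.linalg.norm(v_d4_hat))  # 1.0, as expected for a unit vector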

The term frequency - inverse document frequency (tf-idf) weight

Now that you have understood how vector normalization works in theory and practice, let's continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of the tutorial):

Train Document Set:
d1: The sky is blue.
d2: The sun is bright.

Test Document Set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can then be defined as D = \{ d_1, d_2, \ldots, d_n \} where n is the number of documents in your corpus, and in our case as D_{train} = \{d_1, d_2\} and D_{test} = \{d_3, d_4\}. The cardinality of our document space is defined by \left|{D_{train}}\right| = 2 and \left|{D_{test}}\right| = 2, since we have only 2 documents for training and testing, but they obviously don't need to have the same cardinality.

Let’s see now, how idf (inverse document frequency) is then defined:

\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}

Where \left|\{d : t \in d\}\right| is the number of documents where the term t appears; when the term-frequency function satisfies \mathrm{tf}(t,d) \neq 0, we're only adding 1 into the formula to avoid zero-division.

The formula for the tf-idf is then:

\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t,d) \times \mathrm{idf}(t)

and this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).
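If it helps to see these formulas as running code, here is a small sketch of mine (not the author's code) implementing exactly the idf and tf-idf variants defined above, where documents is assumed to be a list of token lists:

import math

def idf(term, documents):
    # |{d : t in d}| -- number of documents in which the term appears
    df = sum(1 for d in documents if term in d)
    # idf(t) = log(|D| / (1 + df)), the variant used in this tutorial
    return math.log(float(len(documents)) / (1 + df))

def tf_idf(term, document, documents):
    tf = document.count(term)  # raw term count inside the document
    return tf * idf(term, documents)

test_docs = [
    "the sun in the sky is bright".split(),
    "we can see the shining sun the bright sun".split(),
]
print(idf("sun", test_docs))                   # log(2/3) ~ -0.4054..., the idf(t2) computed below
print(tf_idf("sun", test_docs[1], test_docs))  # 2 * log(2/3) ~ -0.8109..., the d4/t2 entry of the tf-idf matrix below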

Now, let's calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first tutorial:

M_{train} =  \begin{bmatrix}  0 & 1 & 1 & 1\\  0 & 2 & 1 & 0  \end{bmatrix}

Since we have 4 features, we have to calculate \mathrm{idf}(t_1), \mathrm{idf}(t_2), \mathrm{idf}(t_3), \mathrm{idf}(t_4):

\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718

\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0

These idf weights can be represented by a vector as:

\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)
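These idf values can also be reproduced from the term-frequency matrix with a short NumPy sketch (again mine, not the author's code), applying the formula above column by column:

import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]])

num_docs = M_train.shape[0]                     # |D| = 2
df = (M_train > 0).sum(axis=0)                  # documents containing each term: [0, 2, 2, 1]
idf_train = np.log(float(num_docs) / (1 + df))  # [ 0.69314718 -0.40546511 -0.40546511  0. ]
print(idf_train)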

Now that we have our matrix with the term frequency (M_{train}) and the vector representing the idf for each feature of our matrix (\vec{idf_{train}}), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix M_{train} with the respective \vec{idf_{train}} vector dimension. To do that, we can create a square diagonal matrix M_{idf} with both the vertical and horizontal dimensions equal to the vector \vec{idf_{train}} dimension:

M_{idf} =   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix}

and then multiply it by the term frequency matrix, so the final result can then be defined as:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf}

Please note that the matrix multiplication isn't commutative: the result of A \times B will be different from the result of B \times A, and this is why the M_{idf} is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

\begin{bmatrix}   \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\   \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2)   \end{bmatrix}   \times   \begin{bmatrix}   \mathrm{idf}(t_1) & 0 & 0 & 0\\   0 & \mathrm{idf}(t_2) & 0 & 0\\   0 & 0 & \mathrm{idf}(t_3) & 0\\   0 & 0 & 0 & \mathrm{idf}(t_4)   \end{bmatrix}   \\ =   \begin{bmatrix}   \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\   \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4)   \end{bmatrix}

Now let's see a concrete example of this multiplication:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\   \begin{bmatrix}   0 & 1 & 1 & 1\\   0 & 2 & 1 & 0   \end{bmatrix}   \times   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix} \\   =   \begin{bmatrix}   0 & -0.40546511 & -0.40546511 & 0\\   0 & -0.81093022 & -0.40546511 & 0   \end{bmatrix}
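The same multiplication can be reproduced in NumPy with np.diag; this is only a sketch of mine to illustrate the math above, not the library's internal implementation:

import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]])
idf_train = np.log(2.0 / (1 + (M_train > 0).sum(axis=0)))

M_idf = np.diag(idf_train)    # square diagonal matrix with the idf values on its diagonal
M_tfidf = M_train.dot(M_idf)  # M_train x M_idf, scaling each column by its idf weight
print(M_tfidf)
# [[ 0.         -0.40546511 -0.40546511  0.        ]
#  [ 0.         -0.81093022 -0.40546511  0.        ]]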

And finally, we can apply our L2 normalization process to the M_{tf\mbox{-}idf} matrix. Please note that this normalization is "row-wise" because we're going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2} = \begin{bmatrix}   0 & -0.70710678 & -0.70710678 & 0\\   0 & -0.89442719 & -0.4472136 & 0   \end{bmatrix}

And that is our pretty normalized tf-idf weight of our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you'll see that they all have an L2-norm of 1.
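The row-wise normalization can also be sketched in NumPy (each row divided by its own L2-norm; sklearn.preprocessing.normalize would give the same result, but this makes the operation explicit):

import numpy as np

M_tfidf = np.array([[0.0, -0.40546511, -0.40546511, 0.0],
                    [0.0, -0.81093022, -0.40546511, 0.0]])

row_norms = np.sqrt((M_tfidf ** 2).sum(axis=1, keepdims=True))  # the L2-norm of each row
M_tfidf_normalized = M_tfidf / row_norms
print(M_tfidf_normalized)
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]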

Python practice

Environment Used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9

Now the section you were waiting for! In this section I'll use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module.

The first step is to create our training and testing document sets and compute the term frequency matrix:

from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]

Note that I've specified the norm as L2, this is optional (actually the default is the L2-norm), but I've added the parameter to make it explicit to you that it's going to use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let's transform the freq_term_matrix to the tf-idf weight matrix:

tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]

And that is it, the tf_idf_matrix is actually our previous M_{tf\mbox{-}idf} matrix. You can accomplish the same effect by using the Vectorizer class of Scikit.learn, which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to know how to use it for the text classification process.
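In recent releases of scikit-learn, the combined class is called TfidfVectorizer instead of Vectorizer. A minimal sketch assuming a modern version is shown below; note that newer versions use a slightly different (smoothed) idf formula by default, so the resulting numbers will not match the ones above exactly:

from sklearn.feature_extraction.text import TfidfVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

vectorizer = TfidfVectorizer(stop_words="english")  # counting, idf weighting and L2 normalization in one class
vectorizer.fit(train_set)                           # learn the vocabulary and the idf from the training set
tfidf_matrix = vectorizer.transform(test_set)       # tf-idf weighted, L2-normalized document vectors

print(vectorizer.vocabulary_)
print(tfidf_matrix.todense())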

I really hope you liked the post, I tried to make it as simple as possible even for people without the required mathematical background of linear algebra, etc. In the next Machine Learning post I'm expecting to show how you can use the tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part II," in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Wikipedia :: tf-idf

The classic Vector Space Model

Sklearn text feature extraction code

Updates

13 Mar 2015 - Formatting, fixed image issues.
03 Oct 2011 - Added information about the environment used for the Python examples.

103 thoughts on "Machine Learning :: Text feature extraction (tf-idf) – Part II"

  1. Wow!
    Perfect intro to tf-idf, thank you very much! Very interesting, I’ve wanted to study this field for a long time and your posts are a real gift. It would be very interesting to read more about use-cases of the technique. And maybe you’ll be interested, please, to shed some light on other methods of text corpus representation, if they exist?
    (sorry for bad English, I’m working to improve it, but there is still a lot of job to do)

  2. Excellent work Christian! I am looking forward to reading your next posts on document classification, clustering and topic extraction with Naive Bayes, Stochastic Gradient Descent, Minibatch k-means and Non Negative Matrix factorization

    Also, the documentation of scikit-learn on the text feature extraction part (I am the culprit?) is really poor. Please don’t hesitate to join the mailing list if you want to give a hand and improve the current situation.

    1. Thanks a lot Olivier. I really want to help sklearn, I just need to get some more time to do that; you guys are doing a great job, I’m really impressed by the amount of algorithms already implemented in the lib, keep up the good work!

  3. I like this tutorial better for the level of new concepts I am learning here.
    That said, which version of scikits-learn are you using?
    The latest installed via easy_install seems to have a different module hierarchy (i.e. it doesn’t find feature_extraction under sklearn). If you could mention the version you used, I will just try the examples with it.

    1. Hello Anand, I’m glad you liked it. I’ve added the information about the environment used just before the section “Python practice”, I’m using the scikits.learn 0.9 (released a few weeks ago).

  4. Where is part 3? I have to submit an assignment on the vector space model in 4 days. Any hope of having it on the weekend?

  5. Thanks Christian! Very nice job with the vector space and sklearn. I just have one question: suppose I have computed the ‘tf_idf_matrix’ and I would like to compute the pairwise cosine similarity (between each row). I was having issues with the sparse matrix format, can you please give an example for that? Also my matrix is pretty big, say 25k by 60k. Thanks a lot!

  6. Great post… I understand what tf-idf is and how to implement it with a concrete example. But I caught 2 things that I’m not sure about:
    1- You call the 2-dimensional matrix M_train, but it has the tf values of the documents D3 and D4, so you should have called that matrix M_test instead of M_train, since D3 and D4 are our test documents.
    2- When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4). Because number of the documents is 2. D3 has the word ‘sun’ 1 time, D4 has it 2 times. Which makes it 3 but we also add 1 to that value to get rid of divided by 0 problem. And this makes it 4… Am I right or am I missing something?
    Thank you.

    1. You are correct: these are excellent blog articles, but the author REALLY has a duty/responsibility to go back and correct errors, like this (and others, e.g. Part 1; …): missing trailing underscores; setting the stop_words parameter; also on my computer, the vocabulary indexing is different.

      As much as we appreciate the effort (kudos to the author!), it is also a significant disservice to those who struggle past those (uncorrected) errors in the original material.

      1. Re: my “you are correct” comment (above), I should have added:

        “… also note Passot’s comment (below) regarding the denominator:

        ‘… what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (‘sun’) is indeed 2+1 (2 documents have the term ‘sun’, +1 to avoid a potential zero division error).’ “

    2. Khalid,
      This is a reply to a very old question. However, I still want to respond to communicate what I understood from the article.
      Your question 2: “When you calculate the idf value for the t2 (which is ‘sun’), it should be log(2/4)”
      My understanding: The denominator in log term should be (number of documents in which the term appears + 1) and not frequency of the term. The number of documents the term “Sun” appears is 2 (1 time in D3 and 2 times in D4 — totally it appears 3 times in two documents. 3 is frequency and 2 is number of documents). Hence the denominator is 2 + 1 = 3.

  7. excellent post!
    I have some questions. From the last tf-idf weight matrix, how can we get the importance of each term (e.g., which is the most important term?). How can we use this matrix to classify documents?

  8. Thank You So Much. You explained it in such a simple way. It was really useful. Once again thanks a lot.

  9. I have the same doubt as Jack (the last comment). From the last tf-idf weight matrix, how can we get the importance of each term (e.g., which is the most important term?). How can we use this matrix to distinguish documents?

  10. I have a question..
    After the tf-idf operation, we get a numpy array with values. Suppose we need to get the highest 50 values from the array. How can we do that?

    1. A high value of tf(-idf) indicates that the particular vector (or document) has a high local strength and a low global strength, in which case you can assume that the terms in it have high importance locally and cannot be ignored. Compare that against the plain (tf) function, where only terms repeated many times are given more importance, which most of the time is not a proper modeling technique.

  11. Hey,
    Thanks for the code.. it was indeed very helpful!

    1.For document clustering,after calculating inverted term frequency, shud i use any associativity coefficient like Jaccards coefficient and then apply the clustering algo like k-means or shud i apply d k-means directly to the document vectors after calculating inverted term frequency ?

    2. How do u rate inverted term frequency for calcuating document vectors for document clustering ?

    Thanks a ton for the forthcoming reply!

  12. @Khalid: what you’re pointing out in 1- got me confused too for a minute (M_train vs M_test). I think you are mistaken on your second point, though, because what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (“sun”) is indeed 2+1 (2 documents have the term “sun”, +1 to avoid a potential zero division error).

    I’m looking forward to reading the third part of this series! I would especially like to learn more about feature selection. Is there an idiomatic way to get a sorted list of the terms with the highest tf-idf scores? How would you identify those terms overall? How would you get the terms that are most responsible for a high or low cosine similarity (row by row)?

    Thank you for the _wonderful_ post!

  13. Excellent article and a great introduction to td-idf normalization.

    You have a very clear and structured way of explaining these difficult concepts.

    Thank you!

      1. Very nice & informative tutorial…. Please upload more tutorials related to the document clustering process.

  14. Can you provide any reference for doing cosine similarity using tfidf so we have the matrix of tf-idf how can we use that to calculate cosine. Thanks for fantastic article.

  15. Please correct me if I’m wrong
    The matrix shown after “with the term frequency we have calculated in the first tutorial:” should be Mtest, not Mtrain. Likewise, the vector after “These idf weights can be represented by a vector as:” should be idf_test, not idf_train.

    By the way, great series; could you give a simple approach for how to implement classification?

  16. Excellent it really helped me get through the concept of VSM and tf-idf. Thanks Christian

  17. Very nice post. Congratulations!!

    Regarding your results, I have a question:

    I read in the wikipedia:
    The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

    When I read it, I understand that if a word appears in all documents, it is less important than a word that only appears in one document:

    However, in the results, the words “sun” or “bright” are more important than “sky”.

    I’m not sure of understand it completly.

  18. Terrific! I was already familiar with tf-idf, but I found your scikits examples helpful as I’m trying to learn that package.

  19. Excellent post! Stumbled on this by chance looking for more information on CountVectorizer, but I’m glad I read through both of your posts (part 1 and part 2).

    Bookmarking your blog now

  20. fit_transform() doesn’t seem to work as you describe..
    Any idea why ?
    >>> ts
    (“The sky is blue”, “The sun is bright”)
    >>> v7 = CountVectorizer()
    >>> v7.fit_transform(ts)
    <2x2 sparse matrix of type ‘’
    with 4 stored elements in COOrdinate format>
    >>> print v7.vocabulary_
    {u’is’: 0, u’the’: 1}

    1. Actually, there are two small errors in the first Python sample.
      1. CountVectorizer should be instantiated like so:
      count_vectorizer = CountVectorizer(stop_words='english')
      This will make sure the ‘is’, ‘the’ etc are removed.

      2. To print the vocabulary, you have to add an underscore at the end.
      print "Vocabulary:", count_vectorizer.vocabulary_

      Excellent tutorial, just small things. Hope it helps someone.

        1. Thanks Ash. Although the article was rather self explanatory, your comment made the entire difference.

  21. Thanks for the great explanation.

    I have a question about the calculation of the idf(t#).
    In the first case, you wrote idf(t1) = log(2/1), because we don’t have such term in our collection, thus, we add 1 to the denominator. Now, in case t2, you wrote log(2/3), why the denominator is equal to 3 and not to 4 (=1+2+1)? In case t3, you write: log(2/3), thus the denominator is equal 3 (=1+1+1). I see here kind of inconsistency. Could you, please, explain, how did calculate the denominator value.

    Thank you.

    1. You got it wrong: in the denominator you don’t put the sum of occurrences of the term in each document, you just count all the documents that have at least one appearance of the term.

  22. It would be great if you could provide a way to show how to use tf-idf for document classification. I saw the example (Python code), but an algorithm / pseudo-code would be best, since not everyone can understand this language.

    Thanks

  23. Nice. An explanation helps put things into perspective. Is tf-idf a good way to do clustering (e.g. use Jaccard analysis or variance against the average set from a known corpus)?

    Keep writing :)

  24. Hi Christian,

    It makes me very excited and lucky to have read this article. The clarity of your understanding reflects in the clarity of the document. It makes me regain my confidence in the field of machine learning.

    Thanks a ton for the beautiful explanation.

    Looking forward to more from you.

    Thanks,

  25. Thank you for the good wrap up. You mention a number of papers which compare L1 and L2 norm, I plan to study that a bit more in depth. You still know their names?

  26. how can i calculate tf idf for my own text file which is located some where in my pc?

  27. Brilliant article.

    By far the easiest and most complete explanation of tf-idf that I have read. I really liked how you explained the mathematics behind it.

  28. Hi, great post! I’m using the TfidVectorizer module in scikit learn to produce the tf-idf matrix with norm=l2. I’ve been examining the output of the TfidfVectorizer after fit_transform of the corpora which I called tfidf_matrix. I’ve summed the rows but they do not sum to 1. The code is vect = TfidfVectorizer(use_idf=True, sublunar_tf=True, norm=”l2). tfidf_matrix = vect.fit_transform(data). When I run tfidf_matrix.sum(axis=1) the vectors are larger than 1. Perhaps I’m looking at the wrong matrix or I misunderstand how normalisation works. I hope someone can clarify this point! Thanks

  29. May I ask, when you calculate the idf, for example log(2/1), do you use log base 10 (or e) or some other value? I’m getting different results!

  30. Great tutorial, just started a new job in ML and this explains things very clearly as it should be.

  31. Excellent post….! Thank you very much for this article.

    But I need more information; since you show the practice with Python, could you provide it in the JAVA language..

  32. I’m a little confused about why tf-idf gives negative numbers in this case? How do we interpret them? Correct me if I’m wrong, but when a vector has positive values, it means that the magnitude of a component determines how important that word is in that document. If it is negative, I don’t know how to interpret it. If I were to take the dot product of a vector with all positive components and one with a negative component, it would mean that some components may contribute negatively to the dot product even though the vector places very high importance on a particular word.

  33. Hi,
    Thanks so much for this detailed explanation on this topic, really great. Anyway, could you give me a hint what could be the source of the error that I keep on seeing:

    freq_term_matrix = count_vectorizer.transform(test_set)
    AttributeError: ‘matrix’ object has no attribute ‘transform’

    Am I using the wrong version of sklearn?

  34. Awesome, simple and effective explanation. Please post more topics with such awesome explanations. Looking forward to the upcoming articles.
    Thanks

  35. Thanks Chris, you are the only one on the web who was clear about the diagonal matrix.

  36. I understood the tf-idf calculation process. But what that matrix means and how we can use the tfidf matrix to calculate the similarity confuse me. Can you explain how we can use the tfidf matrix? Thanks

  37. best explanation.. Very helpful. Can you please tell me how to plot vectors in text classification in svm.. I am working on tweets classification. I am confused please help me.

  38. Hi, I’m sorry if i have mistaken but i could not understand how is ||Vd4||2 = 1.
    the value of d4 = (0.0 ,0.89,0.44,0.0) so the normalization will be = sqrt( square(.89)+square(.44))=sqrt(.193) = .44
    So am I missing something? Please help me understand.

  39. Hi, it is a great blog!
    If I need to handle the bi-gram case, how can I use sklearn to accomplish it?

  40. I am not getting same result, when i am executing the same script.
    print(“IDF:”, tfidf.idf_) gives: IDF: [2.09861229 1. 1.40546511 1.]

    My Python version is: 3.5
    Scikit Learn version is: 0.18.1

    what does i need to change? what might be the possible error?

    thanks,

    1. It can be many things; since you’re using a different Python interpreter version and also a different scikit-learn version, you should expect differences in the results, since they may have changed the default parameters, algorithms, rounding, etc.

  41. Perfect introduction!
    No gimmicks. Clear and simple, as the technique should be.
    Very helpful
    Thank you very much.
    Please keep posting!
    Obrigado

  42. Why is |D| = 2, in the idf equation. Shouldn’t it be 4 since |D| denotes the number of documents considered, and we have 2 from test, 2 from train.

  43. Hey, hi Christian
    Your article really helped me understand tf-idf from the basics. I’m working on a classification project where I’m using the vector space model, which results in determining the categories my test document should belong to. It’s the machine learning part of the project. It would be great if you could suggest something related. I’m stuck at this point.
    Thanks

  44. See this example to know how to use it for the text classification process. “This” link does not work any more. Can you please provide a relevant link for the example.

    Thanks

  45. Say, you’ve got a very nice post. Really thank you! Awesome.

  46. There is certainly a great deal to learn about this subject. I really like all the points you made.

  47. You have brought up very wonderful details, appreciate it for the post.

  48. I know this website provides quality-based articles or
    reviews and other data; is there any other web page which presents this kind of
    information in quality?

  49. In the first example, idf(t1), the log(2/1) = 0.3010 on the calculator. Why did they obtain 0.69..? Please, what is wrong?
