Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I

This post is a continuation of the first part, where we started to learn the theory and practice of text feature extraction and vector space model representation. I really recommend you to read the first part of the series in order to follow this second one.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

Introduction

In the first post, we learned how to use the term frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high-frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is present in almost the entire corpus of your documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is how important a word is to a document in a collection, and that's why tf-idf incorporates local and global parameters: it takes into consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve that problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more than another isn't 10 times more important than it, and that's why tf-idf uses the logarithmic scale to do that.

But let's go back to our definition of \mathrm{tf}(t,d), which is actually the term count of the term t in the document d. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency \mathrm{tf}(t,d) of a document in a vector space is usually also normalized. Let's see how we normalize this vector.

Vector normalization

Suppose we are going to normalize the term-frequency vector \vec{v_{d_4}} that we calculated in the first part of this tutorial. The document d_4 from the first part of this tutorial had this textual representation:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

\vec{v_{d_4}} = (0,2,1,0)

To normalize the vector is the same as calculating the unit vector of the vector, and they are denoted using the "hat" notation: \hat{v}. The definition of the unit vector \hat{v} of a vector \vec{v} is:

\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}

Where \hat{v} is the unit vector, or the normalized vector, \vec{v} is the vector that is going to be normalized, and \|\vec{v}\|_p is the norm (magnitude, length) of the vector \vec{v} in the L^p space (don't worry, I'm going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector, a vector whose length is 1.

The normalization process (Source: http://processing.org/learning/pvector/)

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the L^p spaces, also called Lebesgue spaces.

Lebesgue spaces

How long is this vector? (Source: http://processing.org/learning/pvector/)

Usually, the length of a vector \vec{u} = (u_1, u_2, u_3, \ldots, u_n) is calculated using the Euclidean norm – a norm is a function that assigns a strictly positive length or size to all vectors in a vector space – which is defined by:

(Source: http://processing.org/learning/pvector/)

\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}

But this isn't the only way to define length, and that's why you sometimes see a number p together with the norm notation, like in \|\vec{u}\|_p. That's because it can be generalized as:

\displaystyle \|\vec{u}\|_p = (\left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p)^\frac{1}{p}

and simplified as:

\displaystyle \|\vec{u}\|_p = (\sum\limits_{i=1}^{n}\left|\vec{u}_i\right|^p)^\frac{1}{p}

So when you read about an L2-norm, you're reading about the Euclidean norm, a norm with p = 2, the most common norm used to measure the length of a vector, typically called "magnitude"; actually, when you have an unqualified length measure (without the p number), you have the L2-norm (Euclidean norm).

When you read about an L1-norm, you're reading about the norm with p = 1, defined as:

\displaystyle \|\vec{u}\|_1 = ( \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)

which is nothing more than a simple sum of the components of the vector, also known as the Taxicab distance, or Manhattan distance.

Taxicab geometry versus Euclidean distance: in taxicab geometry, all three depicted lines have the same length (12) for the same route. In Euclidean geometry, the green line has length 6 \times \sqrt{2} \approx 8.48, and is the unique shortest path.
Source: Wikipedia :: Taxicab Geometry

Note that you can also use any norm to normalize the vector, but we're going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of scikits.learn. You can also find papers comparing the performance of the two approaches, among other methods to normalize the document vector; actually, you can use any other method, but you have to be consistent: once you've used a norm, you have to use it for the whole process directly involving the norm (a unit vector that used an L1-norm isn't going to have length 1 if you later take its L2-norm).
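To make the norm definitions above concrete, here is a minimal NumPy sketch (my own illustration, not code from the original post) that computes the L1, L2 and a generic Lp norm of a small vector, and also shows the consistency caveat: a vector normalized with the L1-norm does not have an L2-norm of 1.

import numpy as np

u = np.array([0.0, 2.0, 1.0, 0.0])

# Generalized Lp norm: (|u_1|^p + ... + |u_n|^p)^(1/p)
def lp_norm(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

print(lp_norm(u, 1))           # L1 (Manhattan) norm: 3.0
print(lp_norm(u, 2))           # L2 (Euclidean) norm: ~2.2360679
print(np.linalg.norm(u))       # same value as the L2 norm above

# Caveat from the text: an L1-normalized vector has an L2-norm != 1
u_l1 = u / lp_norm(u, 1)
print(lp_norm(u_l1, 2))        # ~0.745, not 1.0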

Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example: the process of using the L2-norm (we'll use the right terms now) to normalize our vector \vec{v_{d_4}} = (0,2,1,0) in order to get its unit vector \hat{v_{d_4}}. To do that, we'll simply plug it into the definition of the unit vector to evaluate it:

\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\  \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{||\vec{v_{d_4}}||_2} \\ \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\  \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)

And that is it! Our normalized vector \hat{v_{d_4}} now has an L2-norm of \|\hat{v_{d_4}}\|_2 = 1.0.

Note that here we normalized our term-frequency document vector, but later we're going to do that after the calculation of the tf-idf.
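As a quick sanity check, the same numbers can be reproduced with a small NumPy sketch (an illustration of mine, not part of the original post):

import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])

# Unit vector: divide the vector by its L2-norm, sqrt(0^2 + 2^2 + 1^2 + 0^2) = sqrt(5)
v_d4_hat = v_d4 / np.linalg.norm(v_d4)

print(v_d4_hat)                    # [ 0.          0.89442719  0.4472136   0.        ]
print(np.linalg.norm(v_d4_hat))    # 1.0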

Term frequency – inverse document frequency (tf-idf) weight

Now that you have understood how vector normalization works in theory and practice, let's continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of the tutorial):

Train Document Set:
d1: The sky is blue.
d2: The sun is bright.

Test Document Set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can then be defined as D = \{d_1, d_2, \ldots, d_n\} where n is the number of documents in your corpus, and in our case as D_{train} = \{d_1, d_2\} and D_{test} = \{d_3, d_4\}. The cardinality of our document spaces is defined by \left|{D_{train}}\right| = 2 and \left|{D_{test}}\right| = 2, since we have only 2 documents for training and testing, but they obviously don't need to have the same cardinality.

Let's see now how the idf (inverse document frequency) is defined:

\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}

where \left|\{d : t \in d\}\right| is the number of documents where the term t appears, that is, where the term-frequency function satisfies \mathrm{tf}(t,d) \neq 0; we're only adding 1 into the formula to avoid zero-division.

The formula for the tf-idf is then:

\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)

and this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (the local parameter) and a low document frequency of the term in the whole collection (the global parameter).
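These two formulas can be sketched in a few lines of plain Python (my own illustration, not code from the post; documents are assumed to be pre-tokenized, lowercased lists of terms, and tf is the raw term count used throughout this tutorial):

import math

def idf(term, documents):
    # Number of documents in which the term appears; 1 is added to the
    # denominator, as in the formula above, to avoid zero-division.
    df = sum(1 for doc in documents if term in doc)
    return math.log(len(documents) / float(1 + df))

def tf_idf(term, document, documents):
    # tf(t, d) is the simple term count of t in d
    return document.count(term) * idf(term, documents)

# The test documents d3 and d4, tokenized by hand for this example
docs = [["the", "sun", "in", "the", "sky", "is", "bright"],
        ["we", "can", "see", "the", "shining", "sun", "the", "bright", "sun"]]

print(idf("blue", docs))             # log(2/1) =  0.693...
print(idf("sun", docs))              # log(2/3) = -0.405...
print(tf_idf("sun", docs[1], docs))  # 2 * log(2/3) = -0.81...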

Now let's calculate the idf for each feature present in the feature matrix with the term frequencies we calculated in the first tutorial:

M_{train} =  \begin{bmatrix}  0 & 1 & 1 & 1\\  0 & 2 & 1 & 0  \end{bmatrix}

Since we have 4 features, we have to calculate \mathrm{idf}(t_1), \mathrm{idf}(t_2), \mathrm{idf}(t_3) and \mathrm{idf}(t_4):

\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718

\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0

These idf weights can be represented by a vector as:

\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)

Now that we have our matrix with the term frequencies (M_{train}) and the vector representing the idf for each feature of our matrix (\vec{idf_{train}}), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix M_{train} with the respective \vec{idf_{train}} vector dimension. To do that, we can create a square diagonal matrix M_{idf} with both the vertical and horizontal dimensions equal to the dimension of the vector \vec{idf_{train}}:

M_{idf} =   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix}

and then multiply it by the term frequency matrix, so the final result can then be defined as:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf}

Please note that matrix multiplication isn't commutative, the result of A \times B will be different than the result of B \times A, and that's why M_{idf} is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

\begin{bmatrix}   \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\   \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2)   \end{bmatrix}   \times   \begin{bmatrix}   \mathrm{idf}(t_1) & 0 & 0 & 0\\   0 & \mathrm{idf}(t_2) & 0 & 0\\   0 & 0 & \mathrm{idf}(t_3) & 0\\   0 & 0 & 0 & \mathrm{idf}(t_4)   \end{bmatrix}   \\ =   \begin{bmatrix}   \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\   \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4)   \end{bmatrix}

Let’s see now a concrete example of this multiplication:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\   \begin{bmatrix}   0 & 1 & 1 & 1\\   0 & 2 & 1 & 0   \end{bmatrix}   \times   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix} \\   =   \begin{bmatrix}   0 & -0.40546511 & -0.40546511 & 0\\   0 & -0.81093022 & -0.40546511 & 0   \end{bmatrix}
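The same multiplication can be reproduced with a short NumPy sketch (again my own illustration, not from the original post); np.diag builds the square diagonal matrix from the idf vector:

import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]], dtype=float)

idf_train = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])

# Square diagonal matrix with the idf weights on its diagonal
M_idf = np.diag(idf_train)

# M_idf goes on the right side so each column of M_train is scaled by its idf
M_tfidf = np.dot(M_train, M_idf)
print(M_tfidf)
# [[ 0.         -0.40546511 -0.40546511  0.        ]
#  [ 0.         -0.81093022 -0.40546511  0.        ]]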

Finally, we can apply our L2 normalization process to the M_{tf\mbox{-}idf} matrix. Please note that this normalization is "row-wise" because we're going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2} = \begin{bmatrix}   0 & -0.70710678 & -0.70710678 & 0\\   0 & -0.89442719 & -0.4472136 & 0   \end{bmatrix}

And that is our pretty normalized tf-idf weight of our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you'll see that they all have an L2-norm of 1.
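The row-wise normalization can likewise be sketched in NumPy (my illustration, continuing from the M_tfidf matrix of the previous sketch):

import numpy as np

M_tfidf = np.array([[0.0, -0.40546511, -0.40546511, 0.0],
                    [0.0, -0.81093022, -0.40546511, 0.0]])

# Divide each row by its own L2-norm so that every row becomes a unit vector
row_norms = np.sqrt((M_tfidf ** 2).sum(axis=1))
M_tfidf_normalized = M_tfidf / row_norms[:, np.newaxis]

print(M_tfidf_normalized)
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]
print(np.sqrt((M_tfidf_normalized ** 2).sum(axis=1)))  # [ 1.  1.]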

Python practice

Environment used: Python v.2.7.2, NumPy 1.6.1, SciPy v.0.9.0, Sklearn (Scikits.learn) v.0.9.

Now the section you were waiting for! In this section I'll use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module.

The first step is to create our training and testing document set and computing the term frequency matrix:

from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]

Now that we have our term frequency matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]

Note that I've specified the norm as L2, this is optional (actually the default is the L2-norm), but I've added the parameter to make it explicit to you that it'll use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let's transform the freq_term_matrix to the tf-idf weight matrix:

tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]

And that is it, the tf_idf_matrix is actually our previous M_{tf\mbox{-}idf} matrix. You can accomplish the same effect by using the Vectorizer class of Scikit.learn, which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to know how to use it for the text classification process.
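For readers on a recent scikit-learn release, where the old Vectorizer class is no longer available, the combined behaviour is provided by the TfidfVectorizer class; a rough sketch of the equivalent steps is shown below (my own illustration). Note that newer versions use a smoothed idf and different defaults, so the vocabulary and the resulting numbers will differ from the ones shown in this post.

from sklearn.feature_extraction.text import TfidfVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

# TfidfVectorizer = CountVectorizer + TfidfTransformer in a single step
vectorizer = TfidfVectorizer(norm="l2")
vectorizer.fit(train_set)

tf_idf_matrix = vectorizer.transform(test_set)
print(vectorizer.vocabulary_)
print(tf_idf_matrix.todense())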

I really hope you liked the post, I tried to make it simple as possible even for people without the required mathematical background of linear algebra, etc. In the next Machine Learning post I’m expecting to show how you can use the tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part II," in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

References

Understanding Inverse Document Frequency: On theoretical arguments for IDF

Wikipedia :: tf-idf

The classic Vector Space Model

Sklearn text feature extraction code

Updates

13 Mar 2015 – Formatting, fixed image issues.
03 Oct 2011 – Added the information about the environment used with the Python examples.

