Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I

This post is a continuation of the first part, where we started to learn the theory and practice of text feature extraction and vector space model representation. I really recommend you to read the first part of the series in order to follow this second post.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

Introduction

In the first post, we learned how to use the term frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is present in almost the entire corpus of your documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is a measure of how important a word is to a document in a collection, and that’s why tf-idf incorporates local and global parameters: it takes into consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve the problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more than another isn’t 10 times more important than it, and that’s why tf-idf uses the logarithmic scale to do that.

But let’s go back to our definition of \mathrm{tf}(t,d), which is actually the term count of the term t in the document d. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a term repeated in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency \mathrm{tf}(t,d) of a document in the vector space is usually also normalized. Let’s see how we normalize this vector.

Vector normalization

Suppose we are going to normalize the term-frequency vector \vec{v_{d_4}} that we calculated in the first part of this tutorial. The document d4 from the first part of this tutorial had this textual representation:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term frequency of that document was:

\vec{v_{d_4}} = (0,2,1,0)

To normalize the vector is the same as calculating the unit vector of the vector, and unit vectors are denoted using the “hat” notation: \hat{v}. The definition of the unit vector \hat{v} of a vector \vec{v} is:

\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}

Where \hat{v} is the unit vector, or the normalized vector, \vec{v} is the vector going to be normalized, and \|\vec{v}\|_p is the norm (magnitude, length) of the vector \vec{v} in the L^p space (don’t worry, I’m going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector: a vector whose length is 1.

The normalization process (Source: http://processing.org/learning/pvector/)
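For instance, here is a quick NumPy sketch of this process (my own illustration, not part of the original post’s code):

import numpy as np

v = np.array([3.0, 4.0])
v_hat = v / np.linalg.norm(v)   # divide the vector by its length
print v_hat                     # [ 0.6  0.8]
print np.linalg.norm(v_hat)     # 1.0, as expected for a unit vector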

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the L^p spaces, also called Lebesgue spaces.

Lebesgue spaces

How long is this vector? (Source: http://processing.org/learning/pvector/)

Usually, the length of a vector \vec{u} = (u_1, u_2, u_3, \ldots, u_n) is calculated using the Euclidean norm (a norm is a function that assigns a strictly positive length or size to all vectors in a vector space), which is defined by:


\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}

But this isn’t the only way to define length, and that’s why you see (sometimes) a number p together with the norm notation, like in \|\vec{u}\|_p. That’s because it can be generalized as:

\displaystyle \|\vec{u}\|_p = (\left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p)^\frac{1}{p}

and simplified as:

\displaystyle \|\vec{u}\|_p = (\sum\limits_{i=1}^{n}\left|\vec{u}_i\right|^p)^\frac{1}{p}

So when you read about an L2-norm, you’re reading about the Euclidean norm, a norm with p = 2, the most common norm used to measure the length of a vector, typically called “magnitude”; actually, when you have an unqualified length measure (without the p number), you have the L2-norm (Euclidean norm).

When you read about an L1-norm, you’re reading about the norm with p = 1, defined as:

\displaystyle \|\vec{u}\|_1 = ( \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)

Which is nothing more than a simple sum of the components of the vector, also known as the Taxicab distance, or Manhattan distance.

Taxicab geometry versus Euclidean distance: in taxicab geometry, all three pictured lines have the same length (12) for the same route; in Euclidean geometry, the green line has length 6 \times \sqrt{2} \approx 8.48 and is the unique shortest path.
Source: Wikipedia :: Taxicab geometry
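As a quick aside (a sketch of mine, not from the original post), NumPy computes these norms directly; the p value is passed through the ord argument of numpy.linalg.norm:

import numpy as np

u = np.array([3.0, 4.0])
print np.linalg.norm(u, ord=2)  # Euclidean (L2) norm: 5.0
print np.linalg.norm(u, ord=1)  # Taxicab/Manhattan (L1) norm: 7.0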

Note that you can also use any norm to normalize the vector, but we’re going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of scikits.learn. You can also find papers comparing the performance of the two approaches, among other methods, to normalize the document vector; actually, you can use any other method, but you have to be consistent: once you’ve used a norm, you have to use it for the whole process directly involving the norm (a unit vector that used an L1-norm isn’t going to have length 1 if you later take its L2-norm).

Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example: the process of using the L2-norm (we’ll use the right terms now) to normalize our vector \vec{v_{d_4}} = (0,2,1,0) in order to get its unit vector \hat{v_{d_4}}. To do that, we’ll simply plug it into the definition of the unit vector to evaluate it:

\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\  \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{||\vec{v_{d_4}}||_2} \\ \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\  \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)

And that is it! Our normalized vector \hat{v_{d_4}} now has an L2-norm of \|\hat{v_{d_4}}\|_2 = 1.0.
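We can quickly check this result with NumPy (again, an illustrative snippet of mine):

import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])
v_d4_hat = v_d4 / np.linalg.norm(v_d4, ord=2)
print v_d4_hat                   # [ 0.          0.89442719  0.4472136   0.        ]
print np.linalg.norm(v_d4_hat)   # 1.0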

Note that here we normalized our term frequency document vector, but later we’re going to do that after the calculation of the tf-idf.

The term frequency – inverse document frequency (tf-idf) weight

Now that you understand how vector normalization works in theory and practice, let’s continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of the tutorial):

Train Document Set:
d1: The sky is blue.
d2: The sun is bright.

Test Document Set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can then be defined as D = \{ d_1, d_2, \ldots, d_n \}, where n is the number of documents in your corpus; in our case, D_{train} = \{d_1, d_2\} and D_{test} = \{d_3, d_4\}. The cardinality of our document spaces is defined by \left|{D_{train}}\right| = 2 and \left|{D_{test}}\right| = 2, since we have only 2 documents for training and 2 for testing, but they obviously don’t need to have the same cardinality.

Let’s see now how idf (inverse document frequency) is defined:

\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}

Where \left|\{d : t \in d\}\right| is the number of documents where the term t appears (i.e., where the term-frequency function satisfies \mathrm{tf}(t,d) \neq 0); we’re only adding 1 into the formula to avoid zero-division.

The formula for the tf-idf is then:

\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)

And this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).
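For instance (an illustrative calculation of mine with made-up round numbers, not from the original post): take a corpus of 100 documents and a term with \mathrm{tf}(t,d) = 5 in a given document. Compare the term appearing in only 1 document against a term appearing in 90 documents:

\mathrm{tf\mbox{-}idf}(t_{rare}) = 5 \times \log{\frac{100}{1+1}} \approx 5 \times 3.91 \approx 19.6

\mathrm{tf\mbox{-}idf}(t_{common}) = 5 \times \log{\frac{100}{1+90}} \approx 5 \times 0.09 \approx 0.47

The rare term ends up weighted roughly 40 times higher, even though both have the same term frequency in the document.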

Now let’s calculate the idf for each feature present in the feature matrix with the term frequencies we calculated in the first tutorial:

M_{train} =  \begin{bmatrix}  0 & 1 & 1 & 1\\  0 & 2 & 1 & 0  \end{bmatrix}

Since we have 4 features, we have to calculate \mathrm{idf}(t_1), \mathrm{idf}(t_2), \mathrm{idf}(t_3) and \mathrm{idf}(t_4):

\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718

\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0

These idf weights can be represented by a vector as:

\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)
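If you want to double-check these values, here is a small NumPy sketch of mine that reproduces the idf vector from the term frequency matrix above, following the same formula:

import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]])      # the term frequency matrix above

n_docs = M_train.shape[0]
df = (M_train > 0).sum(axis=0)          # number of documents containing each term
print np.log(float(n_docs) / (1.0 + df))
# [ 0.69314718 -0.40546511 -0.40546511  0.        ]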

Now that we have our matrix with the term frequencies (M_{train}) and the vector representing the idf for each feature of our matrix (\vec{idf_{train}}), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix M_{train} with the respective \vec{idf_{train}} vector dimension. To do that, we can create a square diagonal matrix M_{idf} with both the vertical and horizontal dimensions equal to the dimension of the vector \vec{idf_{train}}:

M_{idf} =   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix}

and then multiply it by the term frequency matrix, so the final result can be defined as:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf}

Please note that matrix multiplication isn’t commutative: the result of A \times B will be different from the result of B \times A, and this is why M_{idf} is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

\begin{bmatrix}   \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\   \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2)   \end{bmatrix}   \times   \begin{bmatrix}   \mathrm{idf}(t_1) & 0 & 0 & 0\\   0 & \mathrm{idf}(t_2) & 0 & 0\\   0 & 0 & \mathrm{idf}(t_3) & 0\\   0 & 0 & 0 & \mathrm{idf}(t_4)   \end{bmatrix}   \\ =   \begin{bmatrix}   \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\   \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4)   \end{bmatrix}

Let’s see now a concrete example of this multiplication:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\   \begin{bmatrix}   0 & 1 & 1 & 1\\   0 & 2 & 1 & 0   \end{bmatrix}   \times   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix} \\   =   \begin{bmatrix}   0 & -0.40546511 & -0.40546511 & 0\\   0 & -0.81093022 & -0.40546511 & 0   \end{bmatrix}

And finally, we can apply our L2 normalization process to the M_{tf\mbox{-}idf} matrix. Please note that this normalization is “row-wise”, because we’re going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2} = \begin{bmatrix}   0 & -0.70710678 & -0.70710678 & 0\\   0 & -0.89442719 & -0.4472136 & 0   \end{bmatrix}

And that is our pretty normalized tf-idf weight matrix for our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you’ll see that they all have an L2-norm of 1.
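Before we move to scikit-learn, here is the whole chain of operations in plain NumPy (a sketch of mine under the same definitions, not the library’s implementation):

import numpy as np

M_train = np.array([[0., 1., 1., 1.],
                    [0., 2., 1., 0.]])

# idf vector, as calculated above
df = (M_train > 0).sum(axis=0)
idf = np.log(M_train.shape[0] / (1.0 + df))

# square diagonal idf matrix, multiplied on the right
M_idf = np.diag(idf)
M_tfidf = np.dot(M_train, M_idf)

# row-wise L2 normalization: each row becomes a unit vector
row_norms = np.sqrt((M_tfidf ** 2).sum(axis=1))
print M_tfidf / row_norms[:, np.newaxis]
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]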

Python practice

Environment Used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9

Now the section you were waiting for! In this section I’ll use Python to show each step of the tf-idf calculation, using the Scikit.learn feature extraction module.

The first step is to create our training and testing document set and computing the term frequency matrix:

from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]

Note that I’ve specified the norm as L2. This is optional (actually the default is the L2-norm), but I’ve added the parameter to make it explicit to you that it’s going to use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let’s transform the freq_term_matrix to the tf-idf weight matrix:

tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]

And that is it! The tf_idf_matrix is actually our previous M_{tf\mbox{-}idf} matrix. You can accomplish the same effect by using the Vectorizer class of Scikit.learn, which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to learn how to use it for the text classification process.
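If you’re on a recent scikit-learn release, the combined class is called TfidfVectorizer instead. A rough modern-API equivalent of the steps above would be the sketch below; keep in mind that newer versions use a smoothed idf formula and different tokenization defaults, so the exact numbers will differ from this post:

from sklearn.feature_extraction.text import TfidfVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

# fit the vocabulary and idf on the train set, then transform the test set
vectorizer = TfidfVectorizer()
vectorizer.fit(train_set)
tfidf_matrix = vectorizer.transform(test_set)
print(tfidf_matrix.todense())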

I really hope you liked the post. I tried to make it as simple as possible, even for people without the required mathematical background in linear algebra, etc. In the next Machine Learning post, I’m expecting to show how you can use tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment, make suggestions, corrections, etc.

Cite this article as: Christian S. Perone, “Machine Learning :: Text feature extraction (tf-idf) – Part II,” in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Wikipedia :: tf-idf

The classic Vector Space Model

Sklearn text feature extraction code

Updates

13 Mar 2015 – Formatting, fixed images issue.
03 Oct 2011 – Added the information about the environment used for the Python examples.