Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I

This post is a continuation of the first part, where we started to learn the theory and practice of text feature extraction and the vector space model representation. I really recommend you to read the first part of the post series in order to follow this second post.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

Introduction

In the first post, we learned how to use the term-frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms, which are empirically more informative than the high frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is present in almost the entire corpus of your documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is a measure of how important a word is to a document in a collection, and that's why tf-idf incorporates local and global parameters: it takes into consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve that problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more than another isn't 10 times more important than it, and that's why tf-idf uses the logarithmic scale to do that.

But let's go back to our definition of the \mathrm{tf}(t,d), which is actually the term count of the term t in the document d. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency\mathrm{tf}(t,d)of a document on a vector space is usually also normalized. Let’s see how we normalize this vector.

Vector normalization

Suppose we are going to normalize the term-frequency vector \vec{v_{d_4}} that we have calculated in the first part of this tutorial. The document d4 from the first part of this tutorial had this textual representation:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

\vec{v_{d_4}} = (0,2,1,0)

To normalize the vector is the same as calculating the unit vector of the vector, and unit vectors are denoted using the "hat" notation: \hat{v}. The definition of the unit vector \hat{v} of a vector \vec{v} is:

\displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}

Where the \hat{v} is the unit vector, or the normalized vector, the \vec{v} is the vector going to be normalized and the \|\vec{v}\|_p is the norm (magnitude, length) of the vector \vec{v} in the L^p space (don't worry, I'm going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector: it is a vector whose length is 1.

The normalization process (Source: http://processing.org/learning/pvector/)

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the L^p spaces, also called Lebesgue spaces.

Lebesgue spaces

How long is this vector? (Source: http://processing.org/learning/pvector/)

Usually, the length of a vector \vec{u} = (u_1, u_2, u_3, \ldots, u_n) is calculated using the Euclidean norm (a norm is a function that assigns a strictly positive length or size to all vectors in a vector space), which is defined by:


\|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}

But this isn't the only way to define length, and that's why you see (sometimes) a number p together with the norm notation, like in \|\vec{u}\|_p. That's because it can be generalized as:

\displaystyle \|\vec{u}\|_p = ( \left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p )^\frac{1}{p}

and simplified as:

\displaystyle \|\vec{u}\|_p = ( \sum\limits_{i=1}^{n} \left|\vec{u}_i\right|^p )^\frac{1}{p}

So when you read about an L2-norm, you're reading about the Euclidean norm, a norm with p=2, the most common norm used to measure the length of a vector, typically called "magnitude"; actually, when you have an unqualified length measure (without the p number), you have the L2-norm (Euclidean norm).

And when you read about an L1-norm, you're reading about the norm with p=1, defined as:

\displaystyle \|\vec{u}\|_1 = ( \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)

Which is nothing more than a simple sum of the components of the vector, also known as the Taxicab distance, also called the Manhattan distance.

Taxicab geometry versus Euclidean distance: In taxicab geometry all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length 6 \times \sqrt{2} \approx 8.48, and is the unique shortest path.
Source: Wikipedia :: Taxicab Geometry

Note that you can also use any norm to normalize the vector, but we're going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of the scikits.learn. You can also find papers comparing the performance of these two approaches, among other methods, to normalize the document vector; actually, you can use any other method, but you have to be consistent: once you've used a norm, you have to use it for the whole process directly involving the norm (a unit vector that used an L1-norm isn't going to have the length 1 if you're going to take its L2-norm later).
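To make the difference between these norms concrete, here is a minimal NumPy sketch (just an illustration on a toy vector, not part of the tf-idf pipeline) computing the L1-norm, the L2-norm and a generic L^p norm of the same vector:

import numpy as np

u = np.array([3.0, 4.0])

# L2-norm (Euclidean norm): sqrt(3^2 + 4^2) = 5.0
print np.linalg.norm(u, 2)

# L1-norm (Taxicab / Manhattan distance): |3| + |4| = 7.0
print np.linalg.norm(u, 1)

# Generic L^p norm for p = 3: (|3|^3 + |4|^3)^(1/3)
p = 3
print np.sum(np.abs(u) ** p) ** (1.0 / p)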

Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example: the process of using the L2-norm (we'll use the right terms now) to normalize our vector \vec{v_{d_4}} = (0,2,1,0) in order to get its unit vector \hat{v_{d_4}}. To do that, we'll simply plug it into the definition of the unit vector to evaluate it:

\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\  \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{||\vec{v_{d_4}}||_2} \\ \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\  \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)

And that is it! Our normalized vector \hat{v_{d_4}} now has an L2-norm \|\hat{v_{d_4}}\|_2 = 1.0.

Note that here we have normalized our term frequency document vector, but later we’re going to do that after the calculation of the tf-idf.
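If you want to check the numbers above, here is a minimal NumPy sketch (an illustration, not the scikits.learn code used later) that normalizes the term-frequency vector of the document d4 using the L2-norm:

import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])

# Divide the vector by its L2-norm to obtain the unit vector
v_d4_hat = v_d4 / np.linalg.norm(v_d4, 2)

print v_d4_hat
# [ 0.          0.89442719  0.4472136   0.        ]

print np.linalg.norm(v_d4_hat, 2)
# 1.0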

The term frequency – inverse document frequency (tf-idf) weight

Now that you have understood how the vector normalization works in theory and practice, let's continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of the tutorial):

Train Document Set:
d1: The sky is blue.
d2: The sun is bright.

Test Document Set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can then be defined as D = \{d_1, d_2, \ldots, d_n\} where n is the number of documents in your corpus, and in our case as D_{train} = \{d_1, d_2\} and D_{test} = \{d_3, d_4\}. The cardinality of our document space is defined by \left|{D_{train}}\right| = 2 and \left|{D_{test}}\right| = 2, since we have only two documents for training and testing, but they obviously don't need to have the same cardinality.

Now let's see how the idf (inverse document frequency) is then defined:

\displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}

where \left|\{d : t \in d\}\right| is the number of documents where the term t appears (i.e. where the term-frequency function satisfies \mathrm{tf}(t,d) \neq 0); we're only adding 1 into the formula to avoid zero-division.

The formula for the tf-idf is then:

\mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)

And this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).

Now let’s calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first tutorial:

M_{train} = \begin{bmatrix}  0 & 1 & 1 & 1\\  0 & 2 & 1 & 0  \end{bmatrix}

Since we have 4 features, we have to calculate\mathrm{idf}(t_1),\mathrm{idf}(t_2),\mathrm{idf}(t_3),\mathrm{idf}(t_4):

\mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718

\mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

\mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0

These idf weights can be represented by a vector as:

\vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)
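If you want to reproduce these idf values by hand, here is a minimal NumPy sketch (an illustration only) that follows the idf definition above, using the natural logarithm and the 1 added to the denominator:

import numpy as np

# Term frequency matrix of the train document set (from the first part)
M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]])

n_docs = M_train.shape[0]           # |D| = 2
df = np.sum(M_train > 0, axis=0)    # number of documents where each term appears

idf = np.log(float(n_docs) / (1.0 + df))

print idf
# [ 0.69314718 -0.40546511 -0.40546511  0.        ]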

Now that we have our matrix with the term frequency (M_{train}) and the vector representing the idf for each feature of our matrix (\vec{idf_{train}}), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix M_{train} with the respective \vec{idf_{train}} vector dimension. To do that, we can create a square diagonal matrix called M_{idf} with both the vertical and horizontal dimensions equal to the vector \vec{idf_{train}} dimension:

M_{idf} =   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix}

and then multiply it by the term frequency matrix, so the final result can then be defined as:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf}

Please note that the matrix multiplication isn't commutative: the result of A \times B will be different from the result of B \times A, and that is why the M_{idf} is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

\begin{bmatrix}   \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\   \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2)   \end{bmatrix}   \times   \begin{bmatrix}   \mathrm{idf}(t_1) & 0 & 0 & 0\\   0 & \mathrm{idf}(t_2) & 0 & 0\\   0 & 0 & \mathrm{idf}(t_3) & 0\\   0 & 0 & 0 & \mathrm{idf}(t_4)   \end{bmatrix}   \\ =   \begin{bmatrix}   \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\   \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4)   \end{bmatrix}

Let’s see now a concrete example of this multiplication:

M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\   \begin{bmatrix}   0 & 1 & 1 & 1\\   0 & 2 & 1 & 0   \end{bmatrix}   \times   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix} \\   =   \begin{bmatrix}   0 & -0.40546511 & -0.40546511 & 0\\   0 & -0.81093022 & -0.40546511 & 0   \end{bmatrix}

Finally, we can now apply our L2 normalization process to the M_{tf\mbox{-}idf} matrix. Please note that this normalization is "row-wise" because we're going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2} = \begin{bmatrix}   0 & -0.70710678 & -0.70710678 & 0\\   0 & -0.89442719 & -0.4472136 & 0   \end{bmatrix}

And that is our normalized tf-idf weight of our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you'll see that they all have an L2-norm of 1.
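Before switching to scikit-learn, here is a minimal NumPy sketch of these last two steps done by hand (multiplying by the diagonal idf matrix and then normalizing each row with the L2-norm), assuming the M_{train} and idf values computed above:

import numpy as np

M_train = np.array([[0., 1., 1., 1.],
                    [0., 2., 1., 0.]])
idf = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])

# Multiply the term frequency matrix by the diagonal idf matrix
M_idf = np.diag(idf)
M_tfidf = np.dot(M_train, M_idf)

# Row-wise L2 normalization: each document vector becomes a unit vector
row_norms = np.sqrt(np.sum(M_tfidf ** 2, axis=1)).reshape(-1, 1)
M_tfidf = M_tfidf / row_norms

print M_tfidf
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]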

Python practice

Environment used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9

Now the section you were waiting for! In this section I'll use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module.

The first step is to create our training and testing document sets and compute the term frequency matrix:

from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)

print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]

Note that I've specified the norm as L2. This is optional (actually the default is the L2-norm), but I've added the parameter to make it explicit that it's going to use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let's transform the freq_term_matrix to the tf-idf weight matrix:

tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]

And that is it, the tf_idf_matrix is actually our previous M_{tf\mbox{-}idf} matrix. You can accomplish the same effect by using the Vectorizer class of the Scikit.learn, which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to know how to use it for the text classification process.
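For completeness, here is a minimal sketch of that combined approach, assuming the Vectorizer class shipped with the 0.9 release used here exposes the usual fit/transform interface (newer scikit-learn releases renamed this combined class to TfidfVectorizer, and their default idf formula produces slightly different numbers):

from sklearn.feature_extraction.text import Vectorizer

# CountVectorizer + TfidfTransformer combined in a single class
vectorizer = Vectorizer()
vectorizer.fit(train_set)                       # learn vocabulary and idf from the train set
tf_idf_matrix = vectorizer.transform(test_set)  # tf-idf weighted, L2-normalized matrix
print tf_idf_matrix.todense()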

I really hope you liked the post, I tried to make it as simple as possible, even for people without the required mathematical background of linear algebra, etc. In the next Machine Learning post I'm expecting to show how you can use the tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part II," in Terra Incognita, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/

References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Wikipedia :: tf-idf

The classic Vector Space Model

Sklearn text feature extraction code

Updates

13 Mar 2015: Formatting, fixed image issues.
03 Oct 2011: Added the info about the environment used for the Python examples.