Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I.

This post is a **continuation** of the first part, where we started to learn the theory and practice of text feature extraction and vector space model representation. I really recommend you **read the first part** of the post series in order to follow this second post.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

### Introduction

In the first post, we learned how to use the **term-frequency** to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms which are empirically more informative than the high frequency terms. The basic intuition is that a term that occurs frequently in many documents isn't a good discriminator, and really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is almost present in the whole corpus of your documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is a measure of how important a word is to a document in a collection, and that's why tf-idf incorporates local and global parameters: it takes into consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve that problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more often than another isn't 10 times more important than it, and that's why tf-idf uses the logarithmic scale to do that.

But let's go back to our definition of the term-frequency $\mathrm{tf}(t, d)$, which is actually the term count of the term $t$ in the document $d$. The use of this simple term frequency could lead us to problems like *keyword spamming*, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (*Information Retrieval*) system, or even creating a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency $\mathrm{tf}(t, d)$ of a document on a vector space is usually also normalized. Let's see how we normalize this vector.

### Vector normalization

Suppose we are going to normalize the term-frequency vector $\vec{v}_{d_4}$ that we have calculated in the first part of this tutorial. The document $d_4$ from the first part of this tutorial had this textual representation:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

$$\vec{v}_{d_4} = (0, 2, 1, 0)$$

Normalizing the vector is the same as calculating the **Unit Vector** of the vector, and unit vectors are denoted using the "hat" notation: $\hat{v}$. The definition of the unit vector $\hat{v}$ of a vector $\vec{v}$ is:

$$\hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}$$

Where the $\hat{v}$ is the unit vector, or the normalized vector, the $\vec{v}$ is the vector going to be normalized, and the $\|\vec{v}\|_p$ is the norm (magnitude, length) of the vector $\vec{v}$ in the $L^p$ space (don't worry, I'm going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector: a vector whose length is 1.

But the important question here is how the length of the vector is calculated, and to understand this, you must understand the motivation of the $L^p$ spaces, also called Lebesgue spaces.

### Lebesgue spaces

Usually, the length of a vector $\vec{u} = (u_1, u_2, u_3, \ldots, u_n)$ is calculated using the **Euclidean norm** – *a norm is a function that assigns a strictly positive length or size to all vectors in a vector space* –, which is defined by:

$$\|\vec{u}\| = \sqrt{u_1^2 + u_2^2 + u_3^2 + \cdots + u_n^2}$$

But this isn't the only way to define length, and that's why you see (sometimes) a number $p$ together with the norm notation, like in $\|\vec{u}\|_p$. That's because it can be generalized as:

$$\|\vec{u}\|_p = \left(|u_1|^p + |u_2|^p + |u_3|^p + \cdots + |u_n|^p\right)^{\frac{1}{p}}$$

and simplified as:

$$\|\vec{u}\|_p = \left(\sum_{i=1}^{n} |u_i|^p\right)^{\frac{1}{p}}$$

So when you read about an **L2-norm**, you're reading about the **Euclidean norm**, a norm with $p = 2$, the most common norm used to measure the length of a vector, typically called "magnitude"; actually, when you have an unqualified length measure (without the $p$ number), you have the **L2-norm** (Euclidean norm).

When you read about an **L1-norm**, you're reading about the norm with $p = 1$, defined as:

$$\|\vec{u}\|_1 = |u_1| + |u_2| + |u_3| + \cdots + |u_n|$$

which is nothing more than a simple sum of the components of the vector, also known as the Taxicab distance, or Manhattan distance.
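
To make the difference concrete, here is a minimal NumPy sketch (my own illustration, not part of the original tutorial) that computes the L1-norm, the L2-norm, and a generalized p-norm of the term-frequency vector of d4:

```python
import numpy as np

u = np.array([0.0, 2.0, 1.0, 0.0])  # the term-frequency vector of d4, as an example

l1 = np.abs(u).sum()                      # L1-norm: |u1| + |u2| + ... + |un|
l2 = np.sqrt((u ** 2).sum())              # L2-norm: sqrt(u1^2 + ... + un^2)
p = 3
lp = (np.abs(u) ** p).sum() ** (1.0 / p)  # generalized Lp-norm

print(l1)                     # 3.0
print(l2)                     # 2.2360679... = sqrt(5)
print(lp)                     # 2.0800838...
print(np.linalg.norm(u, 2))   # same L2 value via NumPy's built-in norm
```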

*Taxicab geometry versus Euclidean distance: In taxicab geometry, all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length $6\sqrt{2} \approx 8.49$ and is the unique shortest path. Source: Wikipedia :: Taxicab Geometry*

Note that you can also use any norm to normalize the vector, but we're going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of scikits.learn. You can also find papers comparing the performance of the two approaches among other methods to normalize the document vector; actually, you can use any other method, but you have to be consistent: once you've used a norm, you have to use it for the whole process directly involving the norm (*a unit vector that used an L1-norm isn't going to have length 1 if you're going to take its L2-norm later*).
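
As a quick illustration of that caveat (again my own sketch, not from the original post): a vector normalized with the L1-norm will generally not have unit length under the L2-norm:

```python
import numpy as np

v = np.array([0.0, 2.0, 1.0, 0.0])
v_l1 = v / np.linalg.norm(v, 1)   # L1-normalized vector: (0, 2/3, 1/3, 0)

print(np.linalg.norm(v_l1, 1))    # 1.0      -> unit length under the L1-norm
print(np.linalg.norm(v_l1, 2))    # 0.745... -> not unit length under the L2-norm
```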

### Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example: the process of using the L2-norm (we'll use the right terms now) to normalize our vector $\vec{v}_{d_4}$ in order to get its unit vector $\hat{v}_{d_4}$. To do that, we'll simply plug it into the definition of the unit vector to evaluate it:

$$\hat{v}_{d_4} = \frac{\vec{v}_{d_4}}{\|\vec{v}_{d_4}\|_2} = \frac{(0, 2, 1, 0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} = \frac{(0, 2, 1, 0)}{\sqrt{5}} = (0.0, 0.89442719, 0.4472136, 0.0)$$

And that is it! Our normalized vector $\hat{v}_{d_4}$ now has an L2-norm of $\|\hat{v}_{d_4}\|_2 = 1.0$.
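
Here is a small NumPy check of this result (my own sketch, assuming the same vector $\vec{v}_{d_4} = (0, 2, 1, 0)$ as above):

```python
import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])      # term-frequency vector of d4
v_d4_hat = v_d4 / np.linalg.norm(v_d4, 2)  # divide the vector by its L2-norm (sqrt(5))

print(v_d4_hat)                     # [ 0.  0.89442719  0.4472136  0. ]
print(np.linalg.norm(v_d4_hat, 2))  # 1.0 -> it is indeed a unit vector
```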

**Note that here we have normalized our term frequency document vector, but later we’re going to do that after the calculation of the tf-idf.**

### The term frequency – inverse document frequency (tf-idf) weight

Now that you have understood how vector normalization works in theory and practice, let's continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of the tutorial):

Train Document Set:
d1: The sky is blue.
d2: The sun is bright.

Test Document Set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can then be defined as $D = \{d_1, d_2, \ldots, d_n\}$, where $n$ is the number of documents in your corpus; in our case, $D_{train} = \{d_1, d_2\}$ and $D_{test} = \{d_3, d_4\}$. The cardinality of our document spaces is defined by $|D_{train}| = 2$ and $|D_{test}| = 2$, since we have only 2 documents for training and 2 for testing, but they obviously don't need to have the same cardinality.

Now let's see how the idf (inverse document frequency) is defined:

$$\mathrm{idf}(t) = \log{\frac{|D|}{1 + |\{d : t \in d\}|}}$$

where $|\{d : t \in d\}|$ is the **number of documents** where the term $t$ appears (i.e., where the term-frequency function satisfies $\mathrm{tf}(t, d) \neq 0$); we're only adding 1 into the formula to avoid zero-division.

The formula for the tf-idf is then:

$$\mathrm{tf\text{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)$$

and this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (*local parameter*) and a low document frequency of the term in the whole collection (*global parameter*).

Now let’s calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first tutorial:

Since we have 4 features, we have to calculate $\mathrm{idf}(t_1)$, $\mathrm{idf}(t_2)$, $\mathrm{idf}(t_3)$ and $\mathrm{idf}(t_4)$ (note that the natural logarithm is used here):

$$\mathrm{idf}(t_1) = \log{\frac{|D|}{1 + |\{d : t_1 \in d\}|}} = \log{\frac{2}{1}} = 0.69314718$$

$$\mathrm{idf}(t_2) = \log{\frac{|D|}{1 + |\{d : t_2 \in d\}|}} = \log{\frac{2}{3}} = -0.40546511$$

$$\mathrm{idf}(t_3) = \log{\frac{|D|}{1 + |\{d : t_3 \in d\}|}} = \log{\frac{2}{3}} = -0.40546511$$

$$\mathrm{idf}(t_4) = \log{\frac{|D|}{1 + |\{d : t_4 \in d\}|}} = \log{\frac{2}{2}} = 0$$

These idf weights can be represented by a vector as:

$$\vec{idf}_{train} = (0.69314718, -0.40546511, -0.40546511, 0)$$
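
To make the bookkeeping explicit, here is a short NumPy sketch of my own that recomputes these idf weights from the term frequencies of d3 and d4, following the formula above with the natural logarithm (I keep the post's $M_{train}$ name for this matrix, even though it holds the test documents):

```python
import numpy as np

# Term-frequency matrix of the documents d3 and d4
# (features in vocabulary order: blue, sun, bright, sky)
M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]], dtype=float)

n_docs = M_train.shape[0]          # |D| = 2
df = (M_train > 0).sum(axis=0)     # number of documents each term appears in
idf = np.log(n_docs / (1.0 + df))  # idf(t) = ln(|D| / (1 + |{d : t in d}|))

print(idf)  # [ 0.69314718 -0.40546511 -0.40546511  0.        ]
```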

Now that we have our matrix with the term frequencies ($M_{train}$) and the vector representing the idf for each feature of our matrix ($\vec{idf}_{train}$), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix $M_{train}$ with the respective $\vec{idf}_{train}$ vector dimension. To do that, we can create a square **diagonal matrix** called $M_{idf}$ with both the vertical and horizontal dimensions equal to the vector $\vec{idf}_{train}$ dimension:

$$M_{idf} = \begin{bmatrix} 0.69314718 & 0 & 0 & 0 \\ 0 & -0.40546511 & 0 & 0 \\ 0 & 0 & -0.40546511 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

and then multiply it by the term frequency matrix, so the final result can then be defined as:

$$M_{tf\text{-}idf} = M_{train} \times M_{idf}$$

Please note that matrix multiplication isn't commutative: the result of $A \times B$ will be different from the result of $B \times A$, and that's why the $M_{idf}$ is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

Let's see now a concrete example of this multiplication:

$$M_{tf\text{-}idf} = M_{train} \times M_{idf} = \begin{bmatrix} 0 & 1 & 1 & 1 \\ 0 & 2 & 1 & 0 \end{bmatrix} \times \begin{bmatrix} 0.69314718 & 0 & 0 & 0 \\ 0 & -0.40546511 & 0 & 0 \\ 0 & 0 & -0.40546511 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -0.40546511 & -0.40546511 & 0 \\ 0 & -0.81093022 & -0.40546511 & 0 \end{bmatrix}$$
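
The same multiplication can be sketched in NumPy (my own illustration, using np.diag to build the square diagonal matrix):

```python
import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]], dtype=float)
idf = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])

M_idf = np.diag(idf)          # square diagonal matrix built from the idf vector
M_tfidf = M_train.dot(M_idf)  # M_tfidf = M_train x M_idf (idf on the right side)

print(M_tfidf)
# [[ 0.         -0.40546511 -0.40546511  0.        ]
#  [ 0.         -0.81093022 -0.40546511  0.        ]]
```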

And finally, we can apply our L2 normalization process to the $M_{tf\text{-}idf}$ matrix. Please note that this normalization is **row-wise**, because we're going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

$$M_{tf\text{-}idf} = \frac{M_{tf\text{-}idf}}{\|M_{tf\text{-}idf}\|_2} = \begin{bmatrix} 0 & -0.70710678 & -0.70710678 & 0 \\ 0 & -0.89442719 & -0.4472136 & 0 \end{bmatrix}$$

And that is our pretty normalized tf-idf weight matrix of our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you'll see that they all have an L2-norm of 1.
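
The row-wise L2 normalization, again as a NumPy sketch of my own, where each row is divided by its own L2-norm:

```python
import numpy as np

M_tfidf = np.array([[0.0, -0.40546511, -0.40546511, 0.0],
                    [0.0, -0.81093022, -0.40546511, 0.0]])

row_norms = np.sqrt((M_tfidf ** 2).sum(axis=1))  # one L2-norm per row
M_tfidf = M_tfidf / row_norms[:, np.newaxis]     # divide each row by its own norm

print(M_tfidf)
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]
print(np.sqrt((M_tfidf ** 2).sum(axis=1)))  # [ 1.  1.] -> each row is a unit vector
```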

### Python practice

**Environment used**: Python v.2.7.2, NumPy 1.6.1, SciPy v.0.9.0, Sklearn (Scikits.learn) v.0.9.

Now, the section you've been waiting for! In this section I will use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module.

The first step is to create our training and testing document sets and compute the term frequency matrix:

```python
from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary
# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()
# [[0 1 1 1]
#  [0 2 1 0]]
```

Now that we have the frequency term matrix (called **freq_term_matrix**), we can instantiate the **TfidfTransformer**, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

```python
from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)
print "IDF:", tfidf.idf_
# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]
```

Note that I've specified the norm as L2. This is optional (actually the default is the L2-norm), but I've added the parameter to make it explicit to you that it'll use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called **idf_**. Now that the **fit()** method has calculated the idf for the matrix, let's transform the **freq_term_matrix** to the tf-idf weight matrix:

```python
tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]
```

And that is it: the **tf_idf_matrix** is actually our previous $M_{tf\text{-}idf}$ matrix. You can accomplish the same effect by using the **Vectorizer** class of Scikit.learn, a vectorizer that automatically combines the **CountVectorizer** and the **TfidfTransformer** for you. See this example to know how to use it for the text classification process.
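
For reference, here is a minimal sketch of that combined approach. Note that this is my own adaptation: in recent scikit-learn releases the class is called TfidfVectorizer rather than Vectorizer, and the default idf formula has changed since the 0.9 release, so the resulting numbers will differ from the ones shown above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

vectorizer = TfidfVectorizer(norm="l2")
vectorizer.fit(train_set)                       # learns the vocabulary and idf weights
tf_idf_matrix = vectorizer.transform(test_set)  # tf-idf weights of the test documents
print(tf_idf_matrix.todense())
```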

I really hope you liked the post. I tried to make it as simple as possible, even for people without the required mathematical background of linear algebra, etc. In the next Machine Learning post I'm expecting to show how you can use the tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

*Terra Incognita*, 03/10/2011, //www.cpetem.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/.

### References

Sklearn text feature extraction code

### Updates

**13 Mar 2015** – *Formatting, fixed image issues.*
**03 Oct 2011** – *Added the info about the environment used for the Python examples.*

### Comments

Wow!

Perfect intro to tf-idf, thank you very much! Very interesting, I've wanted to study this field for a long time and your posts are a real gift. It would be very interesting to read more about use-cases of the technique. And maybe you'll be interested, please, to shed some light on other methods of text corpus representation, if they exist?

(Sorry for the bad English, I'm working on improving it, but there is still a lot of work to do)

Excellent work Christian! I am looking forward to reading your next posts on document classification, clustering and topics extraction with Naive Bayes, Stochastic Gradient Descent, Minibatch-k-Means and Non Negative Matrix factorization

Also, the documentation of scikit-learn is really poor on the text feature extraction part (I am the main culprit…). Don’t hesitate to join the mailing list if you want to give a hand and improve upon the current situation.

Great thanks Olivier. I really want to help sklearn, I just have to get some more time to do that, you guys have done a great work, I’m really impressed by the amount of algorithms already implemented in the lib, keep the good work !

I like this tutorial better for the level of new concepts I am learning here.

That said, which version of scikits-learn are you using?

The latest as installed by easy_install seems to have a different module hierarchy (i.e. it doesn't find feature_extraction in sklearn). If you could mention the version you used, I will just try it out with those examples.

Hello Anand, I’m glad you liked it. I’ve added the information about the environment used just before the section “Python practice”, I’m using the scikits.learn 0.9 (released a few weeks ago).

Where’s part 3? I’ve got to submit an assignment on Vector Space Modelling in 4 days. Any hope of putting it up over the weekend?

I’ve no date to publish it since I haven’t got any time to write it =(

Thanks again for this complete and explicit tutorial, and I am waiting for the coming section.

Thanks Christian! A very nice work on vector space with sklearn. I just have one question: suppose I have computed the 'tf_idf_matrix' and I would like to compute the pair-wise cosine similarity (between each row). I was having problems with the sparse matrix format; can you please give an example on that? Also my matrix is pretty big, say 25k by 60k. Thanks a lot!

Great post… I understand what tf-idf is and how to implement it with a concrete example. But I caught 2 things that I'm not sure about:

1- You called the 2 dimensional matrix M_train, but it has the tf values of the D3 and D4 documents, so you should’ve called that matrix M_test instead of M_train. Because D3 and D4 are our test documents.

2 – When you calculate the idf value for t2 (which is "sun"), shouldn't it be log(2/4)? Because the number of documents is 2, D3 has the word "sun" 1 time and D4 has it 2 times. That makes 3, but we also add 1 to the value to get rid of the 0-division problem. That makes 4… am I right, or am I missing something?

Thank you.

You are correct: these are excellent blog articles, but the author REALLY has a duty/responsibility to go back and correct errors, like this (and others, e.g. Part 1; …): missing trailing underscores; setting the stop_words parameter; also on my computer, the vocabulary indexing is different.

As much as we appreciate the effort (kudos to the author!), it is also a significant disservice to those who struggle past those (uncorrected) errors in the original material.

re: my ‘you are correct comment’ (above), I should have added:

“… noting also Frédérique Passot’s comment (below) regarding the denominator:

"…what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 ('sun') is indeed 2 + 1 (2 documents have the term 'sun', +1 to avoid a potential zero division error)."

Khalid,

This is a response to a very old question. However, I still want to respond to communicate what I understand from the article.

Your question 2: “When you calculate the idf value for the t2 (which is ‘sun’) it should be log(2/4)”

My understanding: The denominator in log term should be (number of documents in which the term appears + 1) and not frequency of the term. The number of documents the term “Sun” appears is 2 (1 time in D3 and 2 times in D4 — totally it appears 3 times in two documents. 3 is frequency and 2 is number of documents). Hence the denominator is 2 + 1 = 3.

thanks… excellent post…

Excellent post!

I have a question. From the last tf-idf weight matrix, how can we get the importance of each term (e.g., which is the most important term)? How can we use this matrix to classify documents?

Thank You So Much. You explained it in such a simple way. It was really useful. Once again thanks a lot.

I have the same doubt as Jack (the last comment). From the last tf-idf weight matrix, how can we get the importance of each term (e.g., which is the most important term)? How can we use this matrix to classify documents?

I have a question…

After the tf-idf operation, we get a numpy array with values. Suppose we need to get the highest 50 values from the array. How can we do that?

A high tf-idf value denotes that the particular vector (or document) has high local strength and low global strength, in which case you can assume that the terms in it have high significance locally and can't be ignored. Compare this against the plain tf function, where only the terms repeated a high number of times are given more importance, which most of the time is not a proper modelling technique.

Hey,

Thanks for the code… it was very helpful indeed!

1. For document clustering, after calculating the inverted term frequency, should I use an association coefficient like the Jaccard coefficient and then apply a clustering algorithm like k-means, or should I apply k-means directly to the document vectors after calculating the inverted term frequency?

2. How do you rate the inverted term frequency for calculating document vectors in text clustering?

Thanks a ton for the forthcoming reply!

@Khalid: what you’re pointing out in 1- got me confused too for a minute (M_train vs M_test). I think you are mistaken on your second point, though, because what we are using is really the number of documents in which a term occurs, regardless of the number of times the term occurs in any given document. In this case, then, the denominator in the idf value for t2 (“sun”) is indeed 2+1 (2 documents have the term “sun”, +1 to avoid a potential zero division error).

I’d love to read the third installment of this series too! I’d be particularly interested in learning more about feature selection. Is there an idiomatic way to get a sorted list of the terms with the highest tf.idf scores? How would you identify those terms overall? How would you get the terms which are the most responsible for a high or low cosine similarity (row by row)?

Thank you for the _great_ posts!

Should idf(t2) be log 2/4 ?

Excellent article and a great introduction to tf-idf normalization.

You explain these complex concepts in a very clear and structured way.

Thanks!

Thanks for the feedback Matthys, I'm glad you liked the tutorial series.

Very good & informative tutorial…. please upload more tutorials related to the document clustering process.

Excellent article ! Thank you Christian. You did a great job.

Can you provide any reference for doing cosine similarity using tf-idf? So we have the matrix of tf-idf weights; how can we use that to calculate the cosine? Thanks for the fantastic article.

Thanks so much for this and for explaining the whole tf-idf thing thoroughly.

Thanks for the feedback, I'm glad you liked the tutorial series.

Please correct me if I'm wrong.

The formula starting after "the term frequency we have calculated in the first tutorial:" should be Mtest, not Mtrain. Also, the one starting after "These idf weights can be represented by a vector as:" should be idf_test, not idf_train.

Btw, great series; can you give a simple approach for how to implement classification?

Excellent it really helped me get through the concept of VSM and tf-idf. Thanks Christian

Very good post. Congrats!!

Showing your results, I have a question:

I read on Wikipedia:

The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

When I read it, I understand that if a word appears in all documents, it is less important than a word that only appears in one document:

However, in the results, "sun" or "bright" are more important than "sky".

I'm not sure I understand it completely.

Awesome! Explains TF-IDF very well. Waiting eagerly for your next post.

Awesome work with a clear-cut explanation. Even a layman can easily understand the subject.

Great thanks for the feedback Rahul !

Hello,

The explanation is awesome. I haven’t seen a better one yet. I have trouble reproducing the results. It might be because of some update of sklearn.

Would it be possible for you to update the code?

It seems that the formula for computing the tf-idf vector has changed a little bit. Is it a typo or another formula? Below is the link to the source code.

https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/text.py#L954

Many thanks

Terrific! I was familiar with tf-idf before but I found your scikits examples helpful as I’m trying to learn that package.

I'm glad you liked it Susan, thanks for the feedback!

Thank you for writing such a detailed post. I learned a lot.

Excellent post! I stumbled upon this by chance while looking for more information on CountVectorizer, but I'm glad I read through both of your articles (part 1 and part 2).

Bookmarking your blog now

Great thanks for the feedback Eugene, I’m really glad you liked the tutorial series.

fit_transform() doesn't seem to work as you described…

Any idea why ?

>>> ts

('The sky is blue', 'The sun is bright')

>>> v7 = CountVectorizer()

>>> v7.fit_transform(ts)

<2×2 sparse matrix of type '' with 4 stored elements in COOrdinate format>

>>> print v7.vocabulary_

{u'is': 0, u'the': 1}

Actually, there are two small errors in the first Python sample.

1. CountVectorizer should be instantiated like so:

`count_vectorizer = CountVectorizer(stop_words='english')`

This will ensure that "is", "the", etc. are removed.

2. To print the vocabulary, you have to add an underscore at the end.

`print "Vocabulary:", count_vectorizer.vocabulary_`

Excellent tutorial, just small things. Hope it helps others.

Thanks ash. Although the article was rather self-explanatory, your comment made the entire difference.

I loved your article.

I'm using scikit-learn v0.14. Is there any reason running the exact same code would produce different results?

Thanks for taking the time to write this article. Found it very useful.

It's useful….. thank you for explaining the tf-idf so elaborately.

Thanks for the great explanation.

I have a question about calculation of the idf(t#).

In the first case, you wrote idf(t1) = log(2/1) because we don't have such a term in our collection, thus we add 1 to the denominator. Now, in the case of t2, you wrote log(2/3); why is the denominator equal to 3 and not 4 (= 1 + 2 + 1)? In the case of t3, you write log(2/3), thus the denominator is equal to 3 (= 1 + 1 + 1). I see here a kind of inconsistency. Could you please explain how you calculated the denominator value?

Thanks.

Hello Mike, thanks for the feedback. You’re right, I just haven’t fixed it yet due to lack of time to review it and recalculate the values.

You misunderstood it; in the denominator you don't put the total count of the term in each document, you just sum over all the documents that have at least one appearance of the term.

yes, I had the same question…

This is a good post.

It would be good if you could provide a way to learn how to use tf-idf in document classification. I see the example (Python code), but an algorithm description would be best, because not all people can understand this language.

Thanks

Great post, really helped me understand the tf-idf concept!

Nice article.

Nice. An explanation helps put this thing in perspective. Is tf-idf a good way to do clustering (e.g., using Jaccard analysis or variance relative to the mean set from a known corpus)?

Keep writing:)

Hi Christian,

I felt very excited and lucky to read this article. The clarity of your understanding is reflected in the clarity of the document. It made me regain my confidence in the machine learning field.

Thanks a ton for the beautiful explanation.

Would like to read more from you.

Thanks,

Neethu

Great thanks for the kind words Neethu! I'm very glad you liked the tutorial series.

Thank you very very much, very wonderful and useful.

Thanks for the feedback Esra'a.

Thank you for the good wrap up. You mention a number of papers which compare L1 and L2 norm, I plan to study that a bit more in depth. You still know their names?

How can I calculate tf-idf for my own text file which is located somewhere on my PC?

Brilliant article.

By far the easiest and most sound explanation of tf-idf I've read. I really liked how you explained the mathematics behind it.

superb article for newbies

Excellent material. Excellent!!!

Hi, great post! I'm using the TfidfVectorizer module in scikit-learn to produce a tf-idf matrix with norm=l2. I've been examining the output of TfidfVectorizer after fit_transform on the corpus, which I called tfidf_matrix. I summed the rows, but they don't sum to 1. The code is vect = TfidfVectorizer(use_idf=True, sublinear_tf=True, norm='l2'); tfidf_matrix = vect.fit_transform(data). When I run tfidf_matrix.sum(axis=1), the values are larger than 1. Maybe I'm looking at the wrong matrix, or I misunderstand how normalization works. I hope someone can clarify this point! Thanks

Can I ask, when you calculated the idf, for example log(2/1), did you use log to base 10, base e, or some other value? I'm getting different calculations!

Great tutorial; I just started a new job in ML and this explains things very clearly, as it should be.

Excellent post….!!! Thanks a lot for this article.

But I need more information. As you showed the practice with Python, can you provide it in the Java language?

I'm a little bit confused about why tf-idf gives negative numbers in this case. How do we interpret them? Correct me if I am wrong, but when the vector has a positive value, it means that the magnitude of that component determines how important that word is in that document. If it is negative, I don't know how to interpret it. If I were to take the dot product of a vector with all positive components and one with negative components, it would mean that some components may contribute negatively to the dot product even though one of the vectors has very high importance for a particular word.

Hi,

Thank you very much for this detailed explanation of this topic, it's really great. Anyway, could you give me a hint about the source of this error that I keep seeing (it might be my mistake):

freq_term_matrix = count_vectorizer.transform(test_set)

AttributeError: ‘matrix’ object has no attribute ‘transform’

Am I using a wrong version of sklearn?

Awesome, simple and effective explanation. Please post more topics with such awesome explanations. Looking forward to upcoming articles.

Thanks

Thank you Chris, you are the only one on the web who was clear about the diagonal matrix.

Great tutorial for tf-idf. Excellent work. Please add cosine similarity also :)

I understood the tf-idf calculation process. But what does that matrix mean, and how can we use the tf-idf matrix to calculate similarity? That confuses me. Can you explain how we can use the tf-idf matrix? Thanks.

Thanks for your explicit and detailed explanation.

Thanks, nice post, I'm trying it out.

Thank you so much for such an amazing detailed explanation!

Best explanation.. Very helpful. Can you please tell me how to plot vectors in text classification with SVM? I am working on tweet classification. I am confused, please help me.

I learned so many things. Thanks Christian. Looking forward for your next tutorial.

Hi, I'm sorry if I am mistaken, but I could not understand how ||vd4||2 = 1.

The values of d4 = (0.0, 0.89, 0.44, 0.0), so the normalization would be = sqrt(square(0.89) + square(0.44)) = sqrt(0.193) = 0.44

So what did I miss? Please help me understand.

Hi, this is a great blog!

If I need to do bi-gram cases, how can I use sklearn to finish it?

It is very great. I love your teaching. Very, very good.

I don't get the same result when I execute the same script.

print("IDF:", tfidf.idf_) : IDF: [ 2.09861229 1. 1.40546511 1. ]

My python version is: 3.5

The scikit-learn version is: 0.18.1

What do I need to change? What might be the possible error?

Thanks,

It can be many things, since you’re using a different Python interpreter version and also a different Scikit-Learn version, you should expect differences in the results since they may have changed default parameters, algorithms, rounding, etc.

I am also getting: IDF: [2.09861229 1. 1.40546511 1. ]

Perfect introduction!

No hocus pocus. Clear and simple, as technology should be.

Very helpful.

Thank you very much.

Keep posting!

Obrigado (thank you)!

Why is |D| = 2 in the idf equation? Shouldn't it be 4, since |D| represents the number of documents considered, and we have 2 from test and 2 from train?

This article is very interesting. I like this post!

clear cut and to the point explanations….great

Hey, hi Christian,

Your articles really helped me understand tf-idf from the basics. I'm working on a classification project where I'm using the vector space model, which results in determining the categories where my test document should be present. It's the machine learning part. It would be great if you could suggest something related; I'm stuck at this point.

thank you

"See this example to know how to use it for the text classification process." This link doesn't work anymore. Could you please provide a relevant link for the example?

Thanks

Such a great explanation! Thank you!


In the first example, idf(t1) = log(2/1) computed with a calculator = 0.3010. Why do they get 0.69…? Is something wrong?