Concentration inequalities, or probability bounds, are very important tools for the analysis of Machine Learning algorithms and randomized algorithms. In statistical learning theory, we often want to show that a random variable, given some assumptions, is close to its expectation with high probability. This article provides an overview of the most basic inequalities used in the analysis of these concentration measures.
Markov's Inequality
Markov's inequality is one of the most fundamental bounds, and it assumes almost nothing about the random variable. The assumptions that Markov's inequality makes are that the random variable \(X\) is non-negative (\(X \geq 0\)) and has a finite expectation \(\mathbb{E}\left[X\right] < \infty\). Markov's inequality is given by:
$$\underbrace{P(X \geq \alpha)}_{\text{Probability of being greater than constant } \alpha} \leq \underbrace{\frac{\mathbb{E}\left[X\right]}{\alpha}}_{\text{Bounded above by expectation over constant } \alpha}$$
What this means is that the probability that the random variable \(X\) takes a value greater than or equal to the constant \(\alpha\) is bounded above by the expectation of \(X\) divided by \(\alpha\). What is remarkable about this bound is that it holds for any distribution with non-negative values, and it doesn't depend on any other feature of the probability distribution: it only requires these weak assumptions and the first moment, the expectation.
Example: a grocery store sells an average of 40 beers per day (it's summer!). What is the probability that it will sell 80 or more beers tomorrow? By Markov's inequality, \(P(X \geq 80) \leq \frac{\mathbb{E}\left[X\right]}{80} = \frac{40}{80} = 0.5\).
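A small Python sketch of this bound is shown below; the Poisson distribution used for the empirical comparison is purely an assumption of mine, since Markov's inequality itself uses nothing but the mean:

import numpy as np

# Markov's bound uses only the mean: P(X >= 80) <= E[X] / 80 = 40 / 80.
alpha = 80
mean_sales = 40
print("Markov bound:", mean_sales / alpha)  # 0.5

# For comparison only: *if* daily sales were Poisson(40) (an assumption not
# required by Markov's inequality), the true tail probability is far smaller.
rng = np.random.default_rng(42)
samples = rng.poisson(lam=mean_sales, size=1_000_000)
print("Empirical P(X >= 80):", np.mean(samples >= alpha))  # likely 0.0 here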
Markov's inequality doesn't depend on any property of the random variable's probability distribution, so it's obvious that there are better bounds to use if information about the probability distribution is available.
Integrating the standard normal density from -1 to 1, \(\int_{-1}^{1} \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx\), we know that 68% of the data is within \(1\sigma\) (one standard deviation) of the mean \(\mu\), and 95% is within \(2\sigma\) of the mean. However, when it's not possible to assume normality, any other amount of the data may be concentrated within \(1\sigma\) or \(2\sigma\) of the mean.
Chebyshev’s inequality provides a way to get a bound on the concentration for any distribution, without assuming any underlying property except a finite mean and variance. Chebyshev’s also holds for any random variable, not only for non-negative variables as in Markov’s inequality.
Chebyshev's inequality is given by the following relation: $$P(\mid X - \mu \mid \geq k\sigma) \leq \frac{1}{k^2}$$ where \(\mu\) is the (finite) mean, \(\sigma^2\) is the (finite) variance, and \(k > 0\).
For the concrete case of \(k = 2\), Chebyshev's inequality tells us that at least 75% of the data is concentrated within 2 standard deviations of the mean. And this holds for any distribution.
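A quick empirical check of the \(k = 2\) case in Python; the exponential distribution below is an arbitrary choice of mine, any distribution with finite mean and variance would do:

import numpy as np

# Chebyshev guarantees at least 75% of the mass within 2 standard deviations,
# for *any* distribution with finite mean and variance.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=1_000_000)

mu, sigma = samples.mean(), samples.std()
k = 2
within = np.mean(np.abs(samples - mu) < k * sigma)
print("Fraction within 2 sigma:", within)        # ~0.95 for this distribution
print("Chebyshev lower bound  :", 1 - 1 / k**2)  # 0.75, always holds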
Chebyshev’s Inequality and the Weak Law of Large Numbers
Chebyshev's inequality can also be used to prove the weak law of large numbers, which says that the sample mean converges in probability towards the true mean.
This can be done as follows:
Consider a sequence of i.i.d. (independent and identically distributed) random variables \(X_1, X_2, X_3, \ldots\) with mean \(\mu\) and variance \(\sigma^2\), and define the sample mean as \(M_n = \frac{X_1 + \ldots + X_n}{n}\);
For the expectation of the sample mean we have: $$\mathbb{E}\left[M_n\right] = \frac{\mathbb{E}\left[X_1\right] + \ldots +\mathbb{E}\left[X_n\right]}{n} = \frac{n\mu}{n} = \mu$$
For the variance of the sample mean, using the independence of the \(X_i\), we have: $$Var\left[M_n\right] = \frac{Var\left[X_1\right] + \ldots +Var\left[X_n\right]}{n^2} = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}$$
By application of Chebyshev's inequality we have: $$ P(\mid M_n - \mu \mid \geq \epsilon) \leq \frac{\sigma^2}{n\epsilon^2}$$ for any (fixed) \(\epsilon > 0\). As \(n\) increases, the right side of the inequality goes to zero. Intuitively, this means that for a large \(n\) the distribution of \(M_n\) will be concentrated around \(\mu\).
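The sketch below illustrates this in Python: the deviation probability of the sample mean shrinks with \(n\) and always stays below the Chebyshev bound (the uniform distribution is just an arbitrary choice of mine):

import numpy as np

# Weak law of large numbers: P(|M_n - mu| >= eps) <= sigma^2 / (n * eps^2),
# and the bound (vacuous when it exceeds 1) shrinks to zero as n grows.
rng = np.random.default_rng(123)
mu, sigma2 = 0.5, 1.0 / 12.0   # mean and variance of Uniform(0, 1)
eps = 0.01

for n in [100, 1_000, 10_000]:
    sample_means = rng.uniform(0.0, 1.0, size=(2_000, n)).mean(axis=1)
    p_dev = np.mean(np.abs(sample_means - mu) >= eps)
    bound = sigma2 / (n * eps**2)
    print(f"n={n:6d}  empirical P(|M_n - mu| >= {eps}) = {p_dev:.4f}  bound = {bound:.4f}")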
Privacy-preserving computation, or secure computation, is a sub-field of cryptography where two (two-party, or 2PC) or multiple (multi-party, or MPC) parties can evaluate a function together without revealing information about the parties' private input data to each other. The problem and the first solution to it were introduced in 1982 in an amazing breakthrough by Andrew Yao, on what later became known as “Yao's Millionaires' problem”.
Yao's Millionaires' Problem is where two millionaires, Alice and Bob, are interested in knowing which of them is richer without revealing their actual wealth to each other. In other words, what they want can be generalized as follows: Alice and Bob want to jointly compute a function securely, without knowing anything other than the result of the computation on the input data (which remains private to each of them).
Formally, what we want is to jointly evaluate the following function: $$r = f(A, B)$$ such that the private values \(A\) and \(B\) are held privately by their sole owners, and where the result \(r\) will be known to just one or both of the parties.
It may seem very counterintuitive that a problem like this could ever be solved, but to the surprise of many people, it is possible to solve it under some security requirements. Thanks to recent developments in techniques such as FHE (Fully Homomorphic Encryption), oblivious transfer, and garbled circuits, problems like that started to become practical for real-life usage, and they are nowadays being used by many companies in applications such as information exchange, secure location, advertisement, satellite orbit collision avoidance, etc.
I'm not going to go into the details of these techniques, but if you're interested in the intuition behind OT (oblivious transfer), you should definitely read the amazing explanation done by Craig Gidney here. There are also, of course, many different protocols for doing 2PC or MPC, where each one of them assumes some security requirements (semi-honest, malicious, etc.). I'm not going to enter into the details in order to keep the post focused on the goal, but you should be aware of that.
The problem: sentence similarity
What we want to achieve is to use privacy-preserving computation to compute the similarity between sentences without revealing the content of the sentences. Just to give a concrete example: Bob owns a company and has the descriptions of many different projects as sentences, such as: “This project is about building a deep learning sentiment analysis framework that will be used for tweets”, and Alice, who owns another, competitor company, also has different projects described in similar sentences. What they want to do is to jointly compute the similarity between projects in order to find out whether they should be doing a partnership on a project or not. However, and this is the important point: Bob doesn't want Alice to know the project descriptions, and neither does Alice want Bob to be aware of their projects; they want to know the closest match between the different projects they run, but without revealing the project ideas (project descriptions).
Sentence Similarity Comparison
Now, how can we exchange information about Alice's and Bob's project sentences without revealing information about the project descriptions?
One naive way to do that would be to just compute the hashes of the sentences and then compare only the hashes to check if they match. However, this would assume that the descriptions are exactly the same, and besides that, if the entropy of the sentences is small (like short sentences), someone with reasonable computational power could try to recover the sentence.
Another approach for this problem (and the approach that we'll be using) is to compare the sentences in the sentence-embedding space. We just need to create sentence embeddings using a Machine Learning model (we'll use InferSent later) and then compare the embeddings of the sentences. However, this approach also raises another concern: what if Bob or Alice trains a Seq2Seq model that could go from the embeddings of the other party back to an approximate description of the project?
It isn't unreasonable to think that one can recover an approximate description of the sentence given its embeddings. That's why we'll use two-party secure computation for computing the embeddings' similarity, in a way that Bob and Alice will compute the similarity of the embeddings without revealing their embeddings, keeping their project ideas safe.
Bi-LSTM max-pooling network. Source: Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. Alexis Conneau et al.
They used a bi-directional LSTM with max pooling (shown above) that consistently surpassed many unsupervised training methods such as the SkipThought vectors. They also provide a PyTorch implementation that we'll use to generate the sentence embeddings.
Note: even if you don't have a GPU, you can still get reasonable performance when creating embeddings for just a few sentences.
The first step to generate the sentence embeddings is to download and load a pre-trained InferSent model:
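The exact loading code isn't reproduced here; a minimal sketch following the README of the InferSent repository would look roughly like the snippet below (the paths, model version and hyperparameters are placeholders you'd adapt to your own download):

import torch
# models.py ships with the InferSent repository (facebookresearch/InferSent)
from models import InferSent

# Placeholder paths: the pre-trained encoder and the word vectors
# (GloVe for model version 1) must be downloaded beforehand.
MODEL_PATH = 'encoder/infersent1.pkl'
W2V_PATH = 'GloVe/glove.840B.300d.txt'

params_model = {'bsize': 64, 'word_emb_dim': 300, 'enc_lstm_dim': 2048,
                'pool_type': 'max', 'dpout_model': 0.0, 'version': 1}

model = InferSent(params_model)
model.load_state_dict(torch.load(MODEL_PATH))
model.set_w2v_path(W2V_PATH)

# Build the vocabulary from the K most frequent words of the embeddings file.
model.build_vocab_k_words(K=100000)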
The cosine similarity between two embedding vectors \(x\) and \(y\) is given by: $$\cos(x, y) = \frac{x \cdot y}{\lVert x \rVert \, \lVert y \rVert} = \hat{x} \cdot \hat{y}$$ So, if we normalize the vectors to unit norm (that's why the vectors are wearing hats in the equation above), the computation of the cosine similarity becomes just a simple dot product. That will help us a lot in computing the similarity distance later, when we'll use a framework to do the secure computation of this dot product.
Now, for practical reasons, I'll be using integer computation later for computing the similarity; however, the embeddings generated by InferSent are of course real values. For that reason, you'll see in the code below that we create another function to scale the float values, remove the decimal point, and convert them to integers. There is another important issue: the secure computation framework that we'll use later doesn't allow signed integers, so we also need to clip the embedding values between 0.0 and 1.0. This will of course cause some approximation errors; however, we can still get very good approximations after clipping and scaling with limited precision (I'm using 14 bits for scaling to avoid overflow issues later during the dot product computations):
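The original helper code isn't reproduced here, but a minimal sketch of the two helpers used in the snippets below (encode and scale, under the assumptions just described) could look like this:

import numpy as np

def encode(model, text):
    # Encode a single sentence with InferSent and normalize it to unit norm,
    # so that cosine similarity reduces to a dot product.
    embedding = model.encode([text])[0]
    return embedding / np.linalg.norm(embedding)

def scale(embedding, bits=14):
    # Clip to the [0.0, 1.0] range (no signed integers allowed later)
    # and scale by 2**bits before casting to 32-bit integers.
    scale_factor = 1 << bits
    scaled = np.clip(embedding, 0.0, 1.0) * scale_factor
    return scaled.astype(np.int32)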
You can use floating-point in your secure computations and there are a lot of frameworks that support them, however, it is more tricky to do that, and for that reason, I used integer arithmetic to simplify the tutorial. The function above is just a hack to make it simple. It’s easy to see that we can recover this embedding later without too much loss of precision.
Now we just need to create some sentence samples that we’ll be using:
# The list of Alice sentences
alice_sentences = [
    'my cat loves to walk over my keyboard',
    'I like to pet my cat',
]

# The list of Bob sentences
bob_sentences = [
    'the cat is always walking over my keyboard',
]
And convert them to embeddings:
# Alice sentences
alice_sentence1 = encode(model, alice_sentences[0])
alice_sentence2 = encode(model, alice_sentences[1])

# Bob sentences
bob_sentence1 = encode(model, bob_sentences[0])
Since we now have the sentences encoded and every sentence is also normalized, we can compute the cosine similarity just by doing a dot product between the vectors:
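Since both embeddings are unit vectors, the comparison is just a numpy dot product (the approximate values in the comments are the ones discussed below):

>>> import numpy as np
>>> np.dot(alice_sentence1, bob_sentence1)  # ~0.87
>>> np.dot(alice_sentence2, bob_sentence1)  # ~0.62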
As we can see, Bob's first sentence is more similar (~0.87) to Alice's first sentence than to Alice's second sentence (~0.62).
Since we now have the embeddings, we just need to convert them to scaled integers:
# Scale the Alice sentence embeddings
alice_sentence1_scaled = scale(alice_sentence1)
alice_sentence2_scaled = scale(alice_sentence2)

# Scale the Bob sentence embeddings
bob_sentence1_scaled = scale(bob_sentence1)

# This is the unit vector embedding for the sentence
>>> alice_sentence1
array([ 0.01698913, -0.0014404 ,  0.0010993 , ...,  0.00252409,
        0.00828147,  0.00466533], dtype=float32)

# This is the scaled vector as integers
>>> alice_sentence1_scaled
array([278,   0,  18, ...,  41, 135,  76], dtype=int32)
Now with these embeddings as scaled integers, we can proceed to the second part, where we’ll be doing the secure computation between two parties.
Two-party secure computation
In order to perform secure computation between the two parties (Alice and Bob), we'll use the ABY framework. ABY implements many different secure computation schemes and allows you to describe your computation as a circuit like in the picture below, where Yao's Millionaires' problem is depicted:
Yao’s Millionaires problem. Taken from ABY documentation (https://github.com/encryptogroup/ABY).
ABY is very easy to use because you can describe your inputs, shares, and gates, and it will do the rest for you, such as creating the socket communication channel and exchanging data when needed. However, the implementation is entirely written in C++, and I'm not aware of any Python bindings for it (a great contribution opportunity).
After that, we just need to execute the application on two different machines (or emulate it locally like below):
# This will execute the server part, the -r 0 specifies the role (server)
# and the -n 4096 defines the dimension of the vector (InferSent generates
# 4096-dimensional embeddings).
~# ./innerproduct -r 0 -n 4096

# And the same on another process (or another machine, however for another
# machine execution you'll have to obviously specify the IP).
~# ./innerproduct -r 1 -n 4096
And we get the following results:
Inner Product of alice_sentence1 and bob_sentence1 = 226691917
Inner Product of alice_sentence2 and bob_sentence1 = 171746521
Even in the integer representation, you can see that the inner product of Alice's first sentence and Bob's sentence is higher, meaning that the similarity is also higher. But let's now convert this value back to float:
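Since each embedding component was multiplied by \(2^{14}\) before the dot product, converting the integer result back is just a division by the squared scaling factor; a quick sketch:

# Each vector was scaled by 2**14, so the dot product carries a factor of (2**14)**2.
>>> SCALE = 1 << 14
>>> 226691917 / SCALE**2  # ~0.8445 (alice_sentence1 . bob_sentence1)
>>> 171746521 / SCALE**2  # ~0.6398 (alice_sentence2 . bob_sentence1)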
As you can see, we got very good approximations, even in the presence of low-precision math and the unsigned integer requirement. Of course, in real life you won't have both values and vectors, because they're supposed to be hidden, but the changes to accommodate that are trivial: you just need to adjust the ABY code to load only the vector of the party that is executing it and to use the correct IP addresses/ports of the two parties.
The GIMPS (Great Internet Mersenne Prime Search) confirmed yesterday the new largest known prime number: \(2^{77,232,917} - 1\). This new largest known prime has 23,249,425 digits and is, of course, a Mersenne prime, a prime number expressed in the form \(2^n - 1\), whose primality can be efficiently tested using the Lucas-Lehmer primality test.
One of the most asked questions about these largest primes is how the number of digits is calculated, given the size of these numbers (23,249,425 digits for the new largest known prime). And indeed there is a trick that avoids having to evaluate the number in order to calculate the number of digits; using Python you can just do:
>>> import numpy as np
>>> a = 2
>>> b = 77232917
>>> num_digits = int(1 + b * np.log10(a))
>>> print(num_digits)
23249425
The reason why this works is that the base-10 logarithm of a number tells you roughly how many times it can be divided by 10 before reaching 1, which is one less than its number of digits; taking the integer part and adding 1 back gives the digit count. Note that subtracting 1 from \(2^{77,232,917}\) doesn't change the number of digits, since a power of two is never a power of ten.
Another interesting fact is that we can also get the last digit of this very large number, again without evaluating the entire number, by using congruences. Since we're interested in the number mod 10 and we know that the Mersenne prime has the form \(2^{77,232,917} - 1\), we can check that the powers \(2^n\) follow a simple cyclic pattern modulo 10:
$$2^1 \equiv 2, \quad 2^2 \equiv 4, \quad 2^3 \equiv 8, \quad 2^4 \equiv 6, \quad 2^5 \equiv 2, \quad \ldots \pmod{10}$$ (… and the cycle repeats)
This means that the powers of 2 repeat modulo 10 every 4 exponents, so we only need to compute \(77,232,917 \bmod 4\), which is 1. Since \(2^1 = 2\), the \(2^{77,232,917}\) part ends in 2, and when you subtract 1 you get 1 as the final digit, which you can confirm by looking at the entire number (~10Mb zipfile).
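You can sanity-check this with Python's built-in three-argument pow, which performs modular exponentiation without ever materializing the full number:

# Last digit of 2**77232917 - 1, using modular arithmetic only.
>>> (pow(2, 77232917, 10) - 1) % 10
1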
So, in mathematics we have the concept of universality, in which we have laws like the law of large numbers, Benford's law (that I cited a lot in previous posts), the central limit theorem, and many other laws that act like physical laws of the mathematical world. These laws are not our invention; I mean, the concepts are our invention, but the laws themselves are universal, they are true no matter whether you are on Earth or living far away in the universe. That's why Frank Drake, one of the founders of SETI and one of the pioneers in the search for extraterrestrial intelligence, had the brilliant idea of using prime numbers (another example of universality) to communicate with distant worlds. The idea Frank Drake had was to use prime numbers to hide (actually not to hide, but to make self-evident, as you'll understand later) the dimensions of a transmitted image in the image size itself.
So, imagine you are receiving a message that is a sequence of dashes and dots like “—.-.—.-.——–…-.—” that repeats after a short pause and then again and again. Let's suppose that this message has a size of 1679 symbols. So you begin analyzing the number, which is in fact a semiprime number (the same kind used in cryptography, a number that is a product of two prime numbers) that can be factored into prime factors as \(23 \times 73 = 1679\), and this is the only way to factor it into prime factors (actually, every number has a unique set of prime factors; see the Fundamental theorem of arithmetic). So, since there are only two prime factors, you will try to reshape the signal into a 2D image, and this image can have the dimensions 23×73 or 73×23; when you arrange the image in one of these dimensions you'll see that the image makes sense, and the other will be just a random and strange sequence. By using prime numbers (or semiprimes) you just used the total image size to define the only two possible ways of arranging the image dimensions.
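Here is a toy sketch of the mechanics in Python (the bits are random placeholders, and sympy is used only to show the factorization):

import numpy as np
from sympy import factorint

# 1679 is a semiprime: its only non-trivial factorization is 23 * 73,
# so a 1679-symbol message can only be arranged as a 23x73 or 73x23 grid.
print(factorint(1679))  # {23: 1, 73: 1}

bits = np.random.randint(0, 2, size=1679)  # placeholder "message"
image_a = bits.reshape(23, 73)
image_b = bits.reshape(73, 23)
print(image_a.shape, image_b.shape)  # (23, 73) (73, 23)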
Arecibo Radio Telescope
This idea was used in reality in 1974 with the Arecibo radio telescope, when a message was broadcast in frequency modulation (FM) aimed at the M13 globular star cluster, 25,000 light-years away:
M13 Globular Star Cluster
This message had the size (surprise) of 1679 binary digits and carried a lot of information about our world, such as: a graphical representation of a human, the numbers from 1 to 10, a graphical representation of the Arecibo radio telescope, etc.
The message decoded as 23 rows and 73 columns is this:
Arecibo Message Shifted (Source: Wikipedia)
As you can see, the message looks quite nonsensical, but when it is decoded as an image with 73 rows and 23 columns, it will show its real significance:
The Riemann zeta function is an analytic function defined on the complex plane for a complex variable denoted as \(s\). The Riemann zeta function is very important to mathematics due to its deep relation with primes; the zeta function is given by:
$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$$
for \(\operatorname{Re}(s) > 1\).
So, let \(s = \sigma + it\), where \(\sigma\) and \(t\) are real.
The first plot uses the triplet \((x, y, z)\) coordinates to plot a 3D space where each component is given by:
\(x = \operatorname{Re}(\zeta(\sigma + it))\) (or \(\operatorname{Re}(\zeta(s))\), as previously defined);
\(y = \operatorname{Im}(\zeta(\sigma + it))\) (or \(\operatorname{Im}(\zeta(s))\), as previously defined);
\(z = t\) (the imaginary part of \(s\), as previously defined);
The time component used in the animation is \(\sigma\) (the real part of \(s\), or simply \(\operatorname{Re}(s)\)).
To plot the animation I used the following intervals:
For \(\sigma\): calculated at every 0.01 step – shown in the plot at the top right;
For \(t\): calculated at every 0.1 step – shown as the \(z\) axis.
This plot was done using a fixed interval (no auto scaling) for the plot coordinates. The critical line \(\sigma = 1/2\) is where the non-trivial zeroes of the Riemann zeta function lie.
Now see the same plot, but this time using auto scale (automatically resized coordinates):
Note the auto scaling.
Now see another plot using a 2D space where each component is given by:
\(x = t\) (or \(\operatorname{Im}(s)\), as previously defined);
\(y = \operatorname{Re}(\zeta(\sigma + it))\) (blue) and \(\operatorname{Im}(\zeta(\sigma + it))\) (green);
The time component used in the animation is \(\sigma\) (the real part of \(s\), or simply \(\operatorname{Re}(s)\)).
To plot the animation I used the following intervals:
For \(\sigma\): calculated at every 0.01 step – shown in the plot title at the top;
For \(t\): calculated at every 0.1 step – shown as the \(x\) axis.
This plot was done using a fixed interval (no auto scaling) for the plot coordinates. The critical line \(\sigma = 1/2\) is where the non-trivial zeroes of the Riemann zeta function lie. The first 10 non-trivial zeroes of the Riemann zeta function are shown as red dots: when the two series, \(\operatorname{Re}(\zeta(\sigma + it))\) and \(\operatorname{Im}(\zeta(\sigma + it))\), cross each other on a red dot at the critical line (\(\sigma = 1/2\)), that is where the zeroes of the zeta function lie. Note how the real and imaginary parts turn away from each other as \(\sigma\) increases.
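For reference, a rough sketch of how a plot like this can be produced with mpmath and matplotlib; the fixed \(\sigma = 1/2\) and the \(t\) interval below are illustrative choices of mine, not necessarily the ones used in the original animation:

import numpy as np
import matplotlib.pyplot as plt
from mpmath import zeta

# Real (blue) and imaginary (green) parts of zeta(sigma + i*t) along the
# critical line sigma = 1/2; both curves cross zero together at the
# non-trivial zeroes (the first one is near t ~ 14.13).
sigma = 0.5
t_values = np.arange(0.1, 50.0, 0.1)
zeta_values = [zeta(complex(sigma, t)) for t in t_values]

plt.plot(t_values, [float(z.real) for z in zeta_values], color='blue', label='Re')
plt.plot(t_values, [float(z.imag) for z in zeta_values], color='green', label='Im')
plt.axhline(0.0, color='gray', linewidth=0.5)
plt.xlabel('t (imaginary part of s)')
plt.title('zeta(1/2 + it)')
plt.legend()
plt.show()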
Now see the same plot, but this time using auto scale (automatically resized coordinates):
If you are interested in more visualizations of the Riemann zeta function, you'll like the well-done paper from J. Arias-de-Reyna called “X-Ray of Riemann zeta-function”.
I always liked the way visualization affects the understanding of math functions. Anscombe's quartet is a clear example of how important visualization is.