Concentration inequalities, or probability bounds, are very important tools for the analysis of Machine Learning algorithms and randomized algorithms. In statistical learning theory, we often want to show that random variables, given some assumptions, are close to their expectations with high probability. This article provides an overview of the most basic inequalities used in the analysis of these concentration measures.


Markov’s Inequality

Markov’s inequality is one of the most fundamental bounds and it assumes almost nothing about the random variable. The assumptions that Markov’s inequality makes are that the random variable \(X\) is non-negative \(X \geq 0\) and has a finite expectation \(\mathbb{E}\left[X\right] < \infty\). Markov’s inequality is given by:

$$\underbrace{P(X \geq \alpha)}_{\text{Probability of being greater than constant } \alpha} \leq \underbrace{\frac{\mathbb{E}\left[X\right]}{\alpha}}_{\text{Bounded above by expectation over constant } \alpha}$$

What this means is that the probability that the random variable \(X\) takes a value greater than or equal to \(\alpha\) is bounded by the expectation of \(X\) divided by the constant \(\alpha\). What is remarkable about this bound is that it holds for any distribution with non-negative values, and it doesn’t depend on any feature of the probability distribution; it only requires some weak assumptions and the first moment, the expectation.

Example: A grocery store sells an average of 40 beers per day (it’s summer!). What is the probability that it will sell 80 or more beers tomorrow?

$$\begin{aligned}
P(X \geq \alpha) &\leq \frac{\mathbb{E}\left[X\right]}{\alpha} \\
P(X \geq 80) &\leq \frac{40}{80} = 0.5 = 50\%
\end{aligned}$$
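This kind of bound is easy to sanity-check empirically. A minimal sketch using NumPy, where an exponential distribution stands in for the daily beer sales (an assumption for illustration only, since the true distribution is unknown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate daily beer sales with a non-negative distribution whose
# mean is 40 (the exponential choice is an illustrative assumption).
samples = rng.exponential(scale=40.0, size=1_000_000)

alpha = 80.0
empirical = float(np.mean(samples >= alpha))  # empirical P(X >= alpha)
markov_bound = samples.mean() / alpha         # E[X] / alpha

# Markov's bound holds, though it can be quite loose.
assert empirical <= markov_bound
print(f"P(X >= {alpha:.0f}) ~ {empirical:.4f} <= bound {markov_bound:.4f}")
```

For this particular distribution the true probability is about 0.14, well below the 0.5 bound, which illustrates how conservative Markov’s inequality can be.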

Markov’s inequality doesn’t depend on any property of the random variable’s probability distribution, so it’s obvious that there are better bounds to use if information about the probability distribution is available.

Chebyshev’s Inequality

When we have information about the underlying distribution of a random variable, we can take advantage of properties of this distribution to know more about the concentration of this variable. Let’s take for example a normal distribution with mean \(\mu = 0\) and unit standard deviation \(\sigma = 1\) given by the probability density function (PDF) below:

$$ f(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2} $$

Integrating from -1 to 1: \(\int_{-1}^{1} \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx\), we know that 68% of the data is within \(1\sigma\) (one standard deviation) from the mean \(\mu\) and 95% is within \(2\sigma\) from the mean. However, when it’s not possible to assume normality, any other fraction of the data may be concentrated within \(1\sigma\) or \(2\sigma\).
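Those two fractions can be checked numerically; a small sketch using only the standard library’s error function, which gives the mass of a standard normal within \(k\) standard deviations:

```python
from math import erf, sqrt

# Fraction of a standard normal within k standard deviations of the
# mean: P(|X| <= k) = erf(k / sqrt(2)), i.e. the integral of the PDF
# over [-k, k].
def normal_mass_within(k):
    return erf(k / sqrt(2))

print(round(normal_mass_within(1), 4))  # ~0.6827 (the "68%")
print(round(normal_mass_within(2), 4))  # ~0.9545 (the "95%")
```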

Chebyshev’s inequality provides a way to get a bound on the concentration for any distribution, without assuming any underlying property except a finite mean and variance. Chebyshev’s also holds for any random variable, not only for non-negative variables as in Markov’s inequality.

Chebyshev’s inequality is given by the following relation:

P(\mid X - \mu \mid \geq k\sigma) \leq \frac{1}{k^2}

that can also be rewritten as:

P(\mid X - \mu \mid < k\sigma) \geq 1 - \frac{1}{k^2}

For the concrete case of \(k = 2\), Chebyshev’s inequality tells us that at least 75% of the data is concentrated within 2 standard deviations of the mean. And this holds for any distribution!

Now, when we compare this result for \(k = 2\) with the 95% concentration of the normal distribution within \(2\sigma\), we can see how conservative Chebyshev’s bound is. However, we can’t forget that this holds for any distribution, not only for normally distributed random variables, and all that Chebyshev’s inequality needs are the first and second moments of the data. Something important to note is that in the absence of more information about the random variable, this bound cannot be improved.
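A minimal empirical check of the claim, using a deliberately non-normal (lognormal) distribution to illustrate that the bound holds regardless of the underlying distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# A deliberately non-normal (lognormal) distribution: Chebyshev only
# needs a finite mean and variance, both estimated from the samples.
samples = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
mu, sigma = samples.mean(), samples.std()

for k in (2, 3):
    empirical = float(np.mean(np.abs(samples - mu) >= k * sigma))
    bound = 1.0 / k**2
    assert empirical <= bound
    print(f"k={k}: P(|X - mu| >= {k} sigma) ~ {empirical:.4f} <= {bound:.4f}")
```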

Chebyshev’s Inequality and the Weak Law of Large Numbers

Chebyshev’s inequality can also be used to prove the weak law of large numbers, which says that the sample mean converges in probability towards the true mean.


  • Consider a sequence of i.i.d. (independent and identically distributed) random variables \(X_1, X_2, X_3, \ldots\) with mean \(\mu\) and variance \(\sigma^2\);
  • The sample mean is \(M_n = \frac{X_1 + \ldots + X_n}{n}\) and the true mean is \(\mu\);
  • For the expectation of the sample mean we have: $$\mathbb{E}\left[M_n\right] = \frac{\mathbb{E}\left[X_1\right] + \ldots +\mathbb{E}\left[X_n\right]}{n} = \frac{n\mu}{n} = \mu$$
  • For the variance of the sample we have: $$Var\left[M_n\right] = \frac{Var\left[X_1\right] + \ldots +Var\left[X_n\right]}{n^2} = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}$$
  • By applying Chebyshev’s inequality we have: $$ P(\mid M_n - \mu \mid \geq \epsilon) \leq \frac{\sigma^2}{n\epsilon^2}$$ for any (fixed) \(\epsilon > 0\); as \(n\) increases, the right side of the inequality goes to zero. Intuitively, this means that for a large \(n\) the distribution of \(M_n\) will be concentrated around \(\mu\).
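The steps above can be sketched numerically; a small simulation (normally distributed samples are an arbitrary choice here) showing the Chebyshev bound on the sample mean shrinking as \(n\) grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Chebyshev on the sample mean: P(|M_n - mu| >= eps) <= sigma^2/(n eps^2),
# so both the bound and the probability vanish as n grows.
mu, sigma, eps = 0.5, 1.0, 0.1
trials = 500

for n in (100, 1_000, 10_000):
    bound = sigma**2 / (n * eps**2)
    # Empirical frequency over many independent sample means of size n.
    means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
    empirical = float(np.mean(np.abs(means - mu) >= eps))
    assert empirical <= bound
    print(f"n={n:>6}: bound {min(bound, 1.0):.4f}, empirical {empirical:.4f}")
```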



For example, three events \(A\), \(B\), and \(C\) are pairwise independent when:

P(A \cap B) = P(A)P(B) \\
P(A \cap C) = P(A)P(C) \\
P(B \cap C) = P(B)P(C)

This means that any pair (any two events) is independent, but not necessarily that:

P(A \cap B \cap C) = P(A)P(B)P(C)
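A classic concrete example of this distinction is two fair coin flips with the events “first flip is heads”, “second flip is heads”, and “the flips differ”. A small sketch verifies it by enumeration:

```python
from itertools import product

# Classic counterexample: two fair coin flips (a uniform 4-point space).
# A = first flip is heads, B = second flip is heads, C = flips differ.
omega = list(product([0, 1], repeat=2))

def prob(event):
    # Probability of an event under the uniform measure on omega.
    return sum(1 for w in omega if event(w)) / len(omega)

A = lambda w: w[0] == 1
B = lambda w: w[1] == 1
C = lambda w: w[0] != w[1]

# Every pair of events is independent...
assert prob(lambda w: A(w) and B(w)) == prob(A) * prob(B)
assert prob(lambda w: A(w) and C(w)) == prob(A) * prob(C)
assert prob(lambda w: B(w) and C(w)) == prob(B) * prob(C)

# ...but the three events together are not: A and B together force the
# flips to be equal, so the triple intersection is empty.
assert prob(lambda w: A(w) and B(w) and C(w)) != prob(A) * prob(B) * prob(C)
print("pairwise independent, but not mutually independent")
```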



Cite this article as: Christian S. Perone, “Concentration inequalities – Part I,” in Terra Incognita, 23/08/2018.



Privacy-preserving computation or secure computation is a sub-field of cryptography where two (two-party, or 2PC) or multiple (multi-party, or MPC) parties can evaluate a function together without revealing each party’s private input data to the others. The problem and the first solution to it were introduced in 1982 in an amazing breakthrough by Andrew Yao, in what later became known as “Yao’s Millionaires’ Problem”.

Yao’s Millionaires’ Problem involves two millionaires, Alice and Bob, who are interested in knowing which of them is richer, but without revealing their actual wealth to each other. In other words, what they want can be generalized as follows: Alice and Bob want to jointly compute a function securely, without knowing anything other than the result of the computation on the input data (which remains private to them).

To make the problem concrete, Alice has an amount A, such as $10, and Bob has an amount B, such as $50; what they want to know is which one is larger, without Bob revealing the amount B to Alice or Alice revealing the amount A to Bob. It is equally important to note that we also don’t want to trust a third party, otherwise the problem would reduce to a simple protocol of information exchange with the trusted party.

Formally what we want is to jointly evaluate the following function:

r = f(a, b)

such that the private values a and b are held private by their sole owners, and the result r will be known to just one or both of the parties.

It seems very counterintuitive that a problem like this could ever be solved, but to the surprise of many people, it is possible to solve it under some security requirements. Thanks to recent developments in techniques such as FHE (Fully Homomorphic Encryption), Oblivious Transfer, and Garbled Circuits, problems like this started to become practical for real-life usage, and they are nowadays being used by many companies in applications such as information exchange, secure location, advertisement, satellite orbit collision avoidance, etc.

I’m not going to go into the details of these techniques, but if you’re interested in the intuition behind OT (Oblivious Transfer), you should definitely read the amazing explanation done by Craig Gidney here. There are also, of course, many different protocols for doing 2PC or MPC, where each one of them assumes some security requirements (semi-honest, malicious, etc.); I’m not going to enter into the details in order to keep the post focused on the goal, but you should be aware of that.

The problem: sentence similarity

What we want to achieve is to use privacy-preserving computation to calculate the similarity between sentences without disclosing the content of the sentences. Just to give a concrete example: Bob owns a company and has the descriptions of many different projects, in sentences such as: “This project is about building a deep learning sentiment analysis framework that will be used for tweets”, and Alice, who owns a competitor company, also has different projects described in similar sentences. What they want to do is to jointly compute the similarity between projects in order to find out whether they should be doing a partnership on a project or not; however, and this is the important point: Bob doesn’t want Alice to know the project descriptions and neither does Alice want Bob to be aware of their projects. They want to know the closest match between the different projects they run, but without revealing the project ideas (project descriptions).

Sentence Similarity Comparison


One naive way to do that would be to just compute the hashes of the sentences and then compare only the hashes to check if they match. However, this would assume that the descriptions are exactly the same, and besides that, if the entropy of the sentences is small (like in short sentences), someone with reasonable computational power could try to recover the sentence.

Another approach for this problem (this is the approach that we’ll be using) is to compare the sentences in the sentence embeddings space. We just need to create sentence embeddings using a Machine Learning model (we’ll use InferSent later) and then compare the embeddings of the sentences. However, this approach also raises another concern: what if Bob or Alice trains a Seq2Seq model that would go from the embeddings of the other party back to an approximate description of the project?

It isn’t unreasonable to think that one can recover an approximate description of a sentence given its embeddings. That’s why we’ll use two-party secure computation for computing the embeddings similarity, in a way that Bob and Alice will compute the similarity of the embeddings without revealing their embeddings, keeping their project ideas safe.


Diagram overview of the entire process.

Generating sentence embeddings with InferSent

Bi-LSTM max-pooling network. Source: Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. Alexis Conneau et al.


They used a bi-directional LSTM with max pooling that consistently surpassed many unsupervised training methods such as the SkipThought vectors. They also provide a Pytorch implementation that we’ll use to generate the sentence embeddings.


The first step to generate the sentence embeddings is to download and load a pre-trained InferSent model:

import numpy as np
import torch

# Trained model: https://github.com/facebookresearch/InferSent
GLOVE_EMBS = '../dataset/GloVe/glove.840B.300d.txt'
INFERSENT_MODEL = 'infersent.allnli.pickle'

# Load the trained InferSent model
model = torch.load(INFERSENT_MODEL,
                   map_location=lambda storage, loc: storage)

model.set_glove_path(GLOVE_EMBS)
model.build_vocab_k_words(K=100000)


cos(\pmb x, \pmb y) = \frac{\pmb x \cdot \pmb y}{||\pmb x|| \cdot ||\pmb y||}


cos(\hat{x}, \hat{y}) =\hat{x} \cdot\hat{y}

So, if we normalize the vectors to unit norm (that’s why the vectors are wearing hats in the equation above), we can make the computation of the cosine similarity become just a simple dot product. That will help us a lot in computing the similarity distance later, when we’ll use a framework to do the secure computation of this dot product.
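This equivalence is easy to check numerically; a minimal sketch with random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)
y = rng.normal(size=8)

# Cosine similarity of the raw vectors...
cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

# ...equals a plain dot product once both vectors have unit norm.
xh = x / np.linalg.norm(x)
yh = y / np.linalg.norm(y)
assert np.isclose(cos, xh @ yh)
print(float(xh @ yh))
```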


# This function will forward the text into the model and
# get the embeddings. After that, it will normalize it
# to a unit vector.
def encode(model, text):
    embedding = model.encode([text])[0]
    embedding /= np.linalg.norm(embedding)
    return embedding


Now, for practical reasons, I’ll be using integer computation later for computing the similarity; however, the embeddings generated by InferSent are of course real values. For that reason, you’ll see in the code below that we create another function to scale the float values and remove the radix point, converting them to integers. There is also another important issue: the secure computation framework that we’ll use later doesn’t allow signed integers, so we also need to clip the embedding values between 0.0 and 1.0. This will of course cause some approximation errors; however, we can still get very good approximations after clipping and scaling with limited precision (I’m using 14 bits for scaling to avoid overflow issues later during dot product computations):

# This function will scale the embedding in order to
# remove the radix point.
def scale(embedding):
    SCALE = 1 << 14
    scale_embedding = np.clip(embedding, 0.0, 1.0) * SCALE
    return scale_embedding.astype(np.int32)

You can use floating-point in your secure computations and there are a lot of frameworks that support them, however, it is more tricky to do that, and for that reason, I used integer arithmetic to simplify the tutorial. The function above is just a hack to make it simple. It’s easy to see that we can recover this embedding later without too much loss of precision.
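The claim about limited precision can be sanity-checked with a small round-trip experiment (self-contained, repeating the scale logic from above): the per-component recovery error is bounded by 1/SCALE:

```python
import numpy as np

SCALE = 1 << 14  # 14 bits of fractional precision

def scale(embedding):
    # Clip to [0.0, 1.0] (the unsigned-only requirement) and shift the
    # radix point into the integer range, as in the post.
    return (np.clip(embedding, 0.0, 1.0) * SCALE).astype(np.int32)

# Round-trip: truncation loses less than 1/SCALE per component.
vec = np.array([0.0170, 0.0011, 0.0083], dtype=np.float32)
recovered = scale(vec).astype(np.float64) / SCALE
assert np.all(np.abs(recovered - vec) < 1.0 / SCALE)
print(recovered)
```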

Now we just need to create some sentence samples that we’ll be using:

# The list of Alice sentences
alice_sentences = [
    'my cat loves to walk over my keyboard',
    'I like to pet my cat',
]

# The list of Bob sentences
bob_sentences = [
    'the cat is always walking over my keyboard',
]


# Alice sentences
alice_sentence1 = encode(model, alice_sentences[0])
alice_sentence2 = encode(model, alice_sentences[1])

# Bob sentences
bob_sentence1 = encode(model, bob_sentences[0])

Since we now have the sentences, and every sentence is also normalized, we can compute the cosine similarity just by doing a dot product between the vectors:

>>> np.dot(bob_sentence1, alice_sentence1)
0.8798542

>>> np.dot(bob_sentence1, alice_sentence2)
0.62976325

As we can see, Bob’s first sentence is more similar (~0.87) to Alice’s first sentence than to Alice’s second sentence (~0.62).

Since we now have the embeddings, we just need to convert them to scaled integers:

# Scale the Alice sentence embeddings
alice_sentence1_scaled = scale(alice_sentence1)
alice_sentence2_scaled = scale(alice_sentence2)

# Scale the Bob sentence embeddings
bob_sentence1_scaled = scale(bob_sentence1)

# This is the unit vector embedding for the sentence
>>> alice_sentence1
array([ 0.01698913, -0.0014404 ,  0.0010993 , ...,  0.00252409,
        0.00828147,  0.00466533], dtype=float32)

# This is the scaled vector as integers
>>> alice_sentence1_scaled
array([278,   0,  18, ...,  41, 135,  76], dtype=int32)

Now with these embeddings as scaled integers, we can proceed to the second part, where we’ll be doing the secure computation between two parties.

Two-party secure computation

In order to perform secure computation between the two parties (Alice and Bob), we’ll use the ABY framework. ABY implements many different secure computation schemes and allows you to describe your computation as a circuit like in the picture below, where Yao’s Millionaires’ problem is depicted:

Yao’s Millionaires problem. Taken from ABY documentation (https://github.com/encryptogroup/ABY).

As you can see, we have two inputs entering a GT GATE (greater-than gate) and then an output. This circuit has a bit length of 3 for each input and will compute whether Alice’s input is greater than (GT GATE) Bob’s input. The computing parties then secret-share their private data, and can use Arithmetic sharing, Boolean sharing, or Yao sharing to securely evaluate these gates.

ABY is very easy to use, because you can describe your inputs, shares, and gates, and it will do the rest for you, such as creating the socket communication channel, exchanging data when needed, etc. However, the implementation is written entirely in C++, and I’m not aware of any Python bindings for it (a great contribution opportunity).

Fortunately, there is an implemented example for ABY that can do the dot product calculation for us; the example is here. I won’t replicate the example here, since the only parts that we have to change are to read the embedding vectors that we created before instead of generating random vectors, and to increase the bit length to 32 bits.

After that, we just need to execute the application on two different machines (or by emulating locally like below):

# This will execute the server part, the -r 0 specifies the role (server)
# and the -n 4096 defines the dimension of the vector (InferSent generates
# 4096-dimensional embeddings).
~# ./innerproduct -r 0 -n 4096

# And the same on another process (or another machine, however for another
# machine execution you'll have to obviously specify the IP).
~# ./innerproduct -r 1 -n 4096

And we get the following results:

Inner Product of alice_sentence1 and bob_sentence1 = 226691917
Inner Product of alice_sentence2 and bob_sentence1 = 171746521

Even in the integer representation, you can see that the inner product of Alice’s first sentence and Bob’s sentence is higher, meaning that the similarity is also higher. But let’s now convert this value back to float:

>>> SCALE = 1 << 14

# This is the dot product we should get
>>> np.dot(alice_sentence1, bob_sentence1)
0.8798542

# This is the inner product we got in the secure computation
>>> 226691917 / SCALE**2.0
0.8444931

# This is the dot product we should get
>>> np.dot(alice_sentence2, bob_sentence1)
0.6297632

# This is the inner product we got in the secure computation
>>> 171746521 / SCALE**2.0
0.6398056

As you can see, we got very good approximations, even in the presence of low-precision math and the unsigned integer requirement. Of course, in real life you won’t have both values and vectors, because they’re supposed to be hidden, but the changes to accommodate that are trivial: you just need to adjust the ABY code to load only the vector of the party that is executing it and to use the correct IP addresses/ports of both parties.

I hope you liked it!

– Christian S. Perone

Cite this article as: Christian S. Perone, “Privacy-preserving sentence semantic similarity using InferSent embeddings and secure two-party computation,” in Terra Incognita, 22/01/2018, //www.cpetem.com/2018/01/privacy-preserving-infersent/

New prime on the block

The GIMPS (Great Internet Mersenne Prime Search) confirmed yesterday the new largest known prime: 2^77,232,917 − 1. This new largest known prime has 23,249,425 digits and is, of course, a Mersenne prime, a prime number expressed in the form 2^n − 1, whose primality can be efficiently tested using the Lucas-Lehmer primality test.

One of the most asked questions about these largest primes is how the number of digits is calculated, given the size of these numbers (23,249,425 digits for the new largest known prime). And indeed there is a trick that avoids having to evaluate the number in order to count its digits; using Python you can just do:

>>> import numpy as np
>>> a = 2
>>> b = 77232917
>>> num_digits = int(1 + b * np.log10(a))
>>> print(num_digits)
23249425

The reason why this works is that the log base 10 of a number tells you how many times this number should be divided by 10 to get down to 1, which is the number of digits after the first one; you then just need to add 1 back. Note that subtracting 1 doesn’t change the digit count here, since 2^b is never a power of 10.

Another interesting fact is that we can also get the last digit of this very large number, again without evaluating the entire number, by using congruences. Since we’re interested in the number mod 10, and we know that the Mersenne prime has the form 2^77,232,917 − 1, we can check that the powers 2^n follow a simple repeating pattern modulo 10:

2^1 \equiv 2 \pmod{10}
2^2 \equiv 4 \pmod{10}
2^3 \equiv 8 \pmod{10}
2^4 \equiv 6 \pmod{10}
2^5 \equiv 2 \pmod{10}
2^6 \equiv 4 \pmod{10}
(… repeat)

This means that the powers of 2 repeat modulo 10 with a cycle of 4, so we only need to compute 77,232,917 mod 4, which is 1. Since 2^{77,232,917} \equiv 2^1 \pmod{10}, the 2^77,232,917 part ends in 2, and when you subtract 1 you get 1 as the final digit, which you can confirm by looking at the entire number (~10Mb zipfile).
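The reasoning above fits in a few lines of Python (the cross-check uses Python’s built-in modular exponentiation):

```python
# Last digit of 2**77232917 - 1 without evaluating the number:
# powers of 2 modulo 10 cycle with period 4 as 2, 4, 8, 6, ...
b = 77232917
cycle = [2, 4, 8, 6]                 # 2**1, 2**2, 2**3, 2**4 (mod 10)
last_digit = (cycle[(b - 1) % 4] - 1) % 10
print(last_digit)  # 1

# Cross-check with Python's built-in modular exponentiation:
assert last_digit == (pow(2, b, 10) - 1) % 10
```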

– Christian S. Perone

Benford’s Law – Index

Since Benford’s law got some attention in the last few years, I decided to compile a list of my previous posts on the subject, made in the context of elections fraud, corruption, universality, and prime numbers:

Despesas de Custeio e Lei de Benford (June 2014 – in Portuguese)

Universality, primes and space communication (January 2014)

An analysis of Benford’s law applied to Twitter (August 2009)

Benford’s Law and Iran’s election (June 2009)

Delicious.com, checking user numbers against Benford’s Law (April 2009)


I hope you liked it!

– Christian S. Perone

Universality, primes and space communication

So, in mathematics we have the concept of universality in which we have laws like the law of large numbers, the Benford’s law (that I cited a lot in previous posts), the central limit theorem, and many other laws that act like the physical laws of the mathematical world. These laws are not our inventions; I mean, the concepts are our inventions, but the laws per se are universal, they are true no matter where you are on Earth or how far away you live in the universe. That’s why Frank Drake, one of the founders of SETI and also one of the pioneers in the search for extraterrestrial intelligence, came up with the brilliant idea of using prime numbers (another example of universality) to communicate with distant worlds. The idea that Frank Drake had was to use prime numbers to hide (not actually hide, but to make self-evident, you’ll understand later) the image dimensions in the image size itself.

So, imagine you are receiving a message that is a sequence of dashes and dots like “—.-.—.-.——–…-.—” that repeats after a short pause, then again and again. Let’s suppose that this message has a size of 1679 symbols. So you begin analyzing the number, which is in fact a semiprime number (the same kind used in cryptography: a number that is the product of two prime numbers) that can be factored into primes as 23 × 73 = 1679, and this is the only way to factor it into prime factors (actually, every number has a single set of prime factors that is unique, see the Fundamental theorem of arithmetic). So, since there are only two prime factors, you will try to reshape the signal into a 2D image, and this image can have the dimensions 23×73 or 73×23; when you arrange the image in one of these dimensions you’ll see that the image makes sense, while the other will be just a random and strange sequence. By using prime numbers (or semiprimes) you just used the total image size to define the only two possible ways of arranging the image dimensions.
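The factoring step is computationally tiny; a minimal sketch that recovers the only two possible image dimensions from the message length:

```python
def semiprime_factors(n):
    # Brute-force search for the smallest prime factor; for a semiprime
    # the quotient is the other prime, and the pair is unique.
    for p in range(2, int(n**0.5) + 1):
        if n % p == 0:
            return p, n // p
    return None

# The Arecibo message length yields exactly one pair of dimensions.
print(semiprime_factors(1679))  # (23, 73)
```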

Arecibo Radio Telescope

This idea was used in reality in 1974 at the Arecibo Radio Telescope, when a message was broadcast in frequency modulation (FM) aimed at the M13 globular star cluster, 25,000 light-years away:

M13 Globular Star Cluster

This message had a size of (surprise) 1679 binary digits and carried a lot of information about our world, such as: a graphical representation of a human, the numbers from 1 to 10, a graphical representation of the Arecibo radio telescope, etc.

The message decoded as 23 rows and 73 columns is this:

Arecibo Message Shifted (Source: Wikipedia)

As you can see, the message looks quite nonsensical, but when it is decoded as an image with 73 rows and 23 columns, it shows its real significance:


Amazing, don’t you think? I hope you liked it!

– Christian S. Perone

Cite this article as: Christian S. Perone, “Universality, primes and space communication,” in Terra Incognita, 09/01/2014, //www.cpetem.com/2014/01/universality-primes-and-space-communication/

Riemann Zeta function visualizations with Python

While playing with mpmath and its Riemann Zeta function evaluator, I came upon these interesting animated plots made with Matplotlib (the source code is at the end of the post).

The Riemann Zeta function is an analytic function defined over the complex plane, with one complex variable denoted “s”. The Riemann zeta is very important to mathematics due to its deep relation with primes; the zeta function is given by:

\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots

for Re(s) > 1.

So, let \zeta(s) = v where s = a + b\imath and v = u + k\imath.

The first plot uses the triplet(X,Y,Z)coordinates to plot a 3D space where each component is given by:

  • x = Re(v) (or x = u, as previously defined);
  • y = Im(v) (or y = k, as previously defined);
  • z = Im(s) (or z = b, as previously defined);
  • The time component used in the animation is called \theta and it’s given by \theta = Re(s), or simply \theta = a.


  • For Re(s): from \left[0.01, 10.0\right), calculated at every 0.01 step – shown in the plot at top right;
  • For Im(s): from \left[0.1, 200.0\right), calculated at every 0.1 step – shown as the z axis.

This plot was done using a fixed interval (no auto scale) for the (x, y, z) coordinates, where Re(s) = 1/2 (\theta = 1/2) is where the non-trivial zeroes of the Riemann Zeta function lie.

Now see the same plot, but this time using auto scale (automatically resized x, y coordinates):

Note the (x, y) auto scaling.

Now see another plot, using a 2D space where each component is given by:

  • x = Im(s) (or x = b, as previously defined);
  • y = Im(v) (blue) and Re(v) (green);
  • The time component used in the animation is called \theta and it’s given by \theta = Re(s), or simply \theta = a.


  • For Re(s): from \left[0.01, 10.0\right), calculated at every 0.01 step – shown in the plot title at the top;
  • For Im(s): from \left[0.1, 50.0\right), calculated at every 0.1 step – shown as the x axis.

This plot was done using a fixed interval (no auto scale) for the (x, y) coordinates, where Re(s) = 1/2 (\theta = 1/2) is where the non-trivial zeroes of the Riemann Zeta function lie. The first 10 non-trivial zeroes of the Riemann Zeta function are shown as red dots; when the two series, Im(v) and Re(v), cross each other on a red dot at the critical line (Re(s) = 1/2) is where the zeroes of the Zeta function lie. Note how the real and imaginary parts turn away from each other as Re(s) increases.

Now see the same plot, but this time using auto scale (automatically resized y coordinate):

If you are interested in more visualizations of the Riemann Zeta function, you’ll like the well-done paper from J. Arias-de-Reyna called “X-Ray of Riemann zeta-function”.

I always liked the way visualization affects the understanding of math functions. Anscombe’s quartet is a clear example of how important visualization is.


Source code for the 2D plots
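The original listing is not reproduced here; the following is an illustrative sketch (not the original code) of how the 2D plot described above can be generated with mpmath and matplotlib, using the critical line \theta = 1/2 and the Im(s) range from the post:

```python
import mpmath
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Illustrative sketch (not the original listing): Im(zeta(s)) and
# Re(zeta(s)) along the critical line Re(s) = 1/2, Im(s) in [0.1, 50).
theta = 0.5
ts = np.arange(0.1, 50.0, 0.1)
zs = [mpmath.zeta(mpmath.mpc(theta, t)) for t in ts]

re_v = [float(z.real) for z in zs]
im_v = [float(z.imag) for z in zs]

plt.plot(ts, im_v, "b", label="Im(v)")
plt.plot(ts, re_v, "g", label="Re(v)")
plt.axhline(0.0, color="k", linewidth=0.5)
plt.xlabel("Im(s)")
plt.legend()
plt.savefig("zeta_2d.png")
```

The animation then amounts to re-drawing this frame while sweeping \theta = Re(s) over \left[0.01, 10.0\right).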


I hope you liked the post! To make the plots and videos I’ve used matplotlib, mpmath and MEncoder.

– Christian S. Perone