## Concentration inequalities

### Introduction

Concentration inequalities, or probability bounds, are very important tools for the analysis of Machine Learning algorithms or randomized algorithms. In statistical learning theory, we often want to show that random variables, given some assumptions, are close to their expectations with high probability. This article provides an overview of the most basic inequalities used in these concentration arguments.

### Markov's Inequality

$$\underbrace{P(X \geq \alpha)}_{\text{Probability of being greater than constant } \alpha} \leq \underbrace{\frac{\mathbb{E}\left[X\right]}{\alpha}}_{\text{Bounded above by expectation over constant } \alpha}$$

What this means is that the probability that the random variable $$X$$ exceeds the constant $$\alpha$$ is bounded by the expectation of $$X$$ divided by $$\alpha$$. What is remarkable about this bound is that it holds for any distribution over non-negative values: it doesn't depend on any feature of the probability distribution, it only requires some weak assumptions and the first moment, the expectation.

Example: A grocery store sells an average of 40 beers per day (it's summer!). What is the probability that it will sell 80 or more beers tomorrow?

\begin{align} P(X \geq \alpha) &\leq \frac{\mathbb{E}\left[X\right]}{\alpha} \\\\ P(X \geq 80) &\leq \frac{40}{80} = 0.5 = 50\% \end{align}

Markov's inequality doesn't depend on any property of the random variable's probability distribution, so it's obvious that there are better bounds to use if information about the probability distribution is available.
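As a quick sanity check, the beer example can be simulated. The sketch below assumes, purely for illustration, exponentially distributed daily sales with mean 40 (the example above fixes only the mean, not the distribution):

```python
import random

# Hedged sketch: check Markov's bound P(X >= 80) <= E[X]/80 = 0.5
# empirically. The exponential distribution is an assumption made
# only for this illustration -- Markov needs nothing but the mean.
def tail_prob(alpha=80.0, mean=40.0, trials=200_000, seed=7):
    rnd = random.Random(seed)
    hits = sum(rnd.expovariate(1.0 / mean) >= alpha for _ in range(trials))
    return hits / trials

p = tail_prob()  # for Exp(mean=40) the true tail is e^(-2) ~= 0.135
```

The empirical tail (~0.135 for this particular distribution) sits far below the Markov bound of 0.5, illustrating how loose a bound that uses only the first moment can be.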

### Chebyshev’s Inequality

For the standard normal density

$$f(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$$

integrating from -1 to 1, $$\int_{-1}^{1} \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx$$, we know that 68% of the data is within $$1\sigma$$ (one standard deviation) of the mean $$\mu$$, and 95% is within $$2\sigma$$. However, when it's not possible to assume normality, no such guarantee holds: any other fraction of the data could be concentrated within $$1\sigma$$ or $$2\sigma$$.

Chebyshev’s inequality provides a way to get a bound on the concentration for any distribution, without assuming any underlying property except a finite mean and variance. Chebyshev’s also holds for any random variable, not only for non-negative variables as in Markov’s inequality.

Chebyshev's inequality is given by the following relation:

$$P(\mid X - \mu \mid \geq k\sigma) \leq \frac{1}{k^2}$$

that can also be rewritten as:

$$P(\mid X - \mu \mid < k\sigma) \geq 1 - \frac{1}{k^2}$$

For the concrete case of $$k = 2$$, Chebyshev's inequality tells us that at least 75% of the data is concentrated within 2 standard deviations of the mean. And this holds for any distribution.
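The $$k = 2$$ case can be checked numerically on a deliberately non-normal distribution; here I assume (for illustration only) an Exponential(1) variable, which conveniently has $$\mu = \sigma = 1$$:

```python
import random

# Empirical check of Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2.
# Exponential(1) has mean 1 and standard deviation 1, so for k = 2
# the event |X - 1| >= 2 reduces to X >= 3 (X is non-negative).
def outside_k_sigma(k=2.0, trials=200_000, seed=3):
    rnd = random.Random(seed)
    mu = sigma = 1.0
    hits = sum(abs(rnd.expovariate(1.0) - mu) >= k * sigma
               for _ in range(trials))
    return hits / trials

p = outside_k_sigma()  # true value is e^(-3) ~= 0.0498, below the 0.25 bound
```

Again the bound holds with a lot of slack; Chebyshev trades tightness for complete generality.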

### Chebyshev’s Inequality and the Weak Law of Large Numbers

• Consider a sequence of i.i.d. (independent and identically distributed) random variables $$X_1, X_2, X_3, \ldots$$ with mean $$\mu$$ and variance $$\sigma^2$$;
• The sample mean is \( M_n = \frac{X_1 + \ldots + X_n}{n} \) and the true mean is \( \mu \);
• For the expectation of the sample mean we have: $$\mathbb{E}\left[M_n\right] = \frac{\mathbb{E}\left[X_1\right] + \ldots +\mathbb{E}\left[X_n\right]}{n} = \frac{n\mu}{n} = \mu$$
• For the variance of the sample mean we have: $$Var\left[M_n\right] = \frac{Var\left[X_1\right] + \ldots +Var\left[X_n\right]}{n^2} = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}$$
• By the application of the Chebyshev's inequality we have: $$P(\mid M_n - \mu \mid \geq \epsilon) \leq \frac{\sigma^2}{n\epsilon^2}$$ for any (fixed) $$\epsilon > 0$$; as $$n$$ increases, the right side of the inequality goes to zero. Intuitively, this means that for a large $$n$$ the distribution of $$M_n$$ will be concentrated around $$\mu$$.
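The steps above can be simulated directly. A minimal sketch, assuming Uniform(0, 1) samples (so $$\sigma^2 = 1/12$$), shows the variance of $$M_n$$ shrinking like $$\sigma^2 / n$$:

```python
import random

# Estimate Var[M_n] over many independent experiments and compare it
# with the theoretical sigma^2 / n for Uniform(0, 1) samples.
def sample_mean_var(n, reps=8000, seed=0):
    rnd = random.Random(seed)
    means = [sum(rnd.random() for _ in range(n)) / n for _ in range(reps)]
    mu = sum(means) / reps
    return sum((m - mu) ** 2 for m in means) / reps

sigma2 = 1.0 / 12.0  # variance of Uniform(0, 1)
for n in (10, 100):
    print(n, sample_mean_var(n), sigma2 / n)
```

The printed empirical variances track $$\sigma^2 / n$$ closely, which is exactly why the Chebyshev bound on $$M_n$$ vanishes as $$n$$ grows.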

### Improving on Markov's and Chebyshev's with the Chernoff Bound

Consider three events $$A$$, $$B$$, $$C$$ satisfying:

$$P(A \cap B) = P(A)P(B) \\ P(A \cap C) = P(A)P(C) \\ P(B \cap C) = P(B)P(C)$$

Which means that any pair (any two events) are independent, but not necessarily that:

$$P(A \cap B\cap C) = P(A)P(B)P(C)$$
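The classic counterexample can be verified by enumerating two fair coin flips, with $$A$$ = "first flip is heads", $$B$$ = "second flip is heads", and $$C$$ = "exactly one head":

```python
from itertools import product

# Enumerate the 4 equally likely outcomes of two fair coin flips.
omega = list(product('HT', repeat=2))

def prob(event):
    return sum(1 for w in omega if event(w)) / len(omega)

A = lambda w: w[0] == 'H'                      # first flip is heads
B = lambda w: w[1] == 'H'                      # second flip is heads
C = lambda w: (w[0] == 'H') != (w[1] == 'H')   # exactly one head

# Every pair is independent...
pairwise = all(
    prob(lambda w: e1(w) and e2(w)) == prob(e1) * prob(e2)
    for e1, e2 in [(A, B), (A, C), (B, C)]
)
# ...but the triple is not: P(A∩B∩C) = 0, while P(A)P(B)P(C) = 1/8.
triple = prob(lambda w: A(w) and B(w) and C(w))
```

This distinction matters here: pairwise independence is enough for variance-based (Chebyshev-style) arguments, while Chernoff bounds require full mutual independence.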

## Privacy-preserving sentence semantic similarity using InferSent embeddings and secure two-party computation

### Privacy-preserving computation

Privacy-preserving computation, or secure computation, is a sub-field of cryptography where two (two-party, or 2PC) or multiple (multi-party, or MPC) parties can evaluate a function together without revealing information about the parties' private input data to each other. The problem, and the first solution to it, were introduced in 1982 by an amazing breakthrough by Andrew Yao on what later became known as "Yao's Millionaires' Problem".

Yao's Millionaires' Problem is one where two millionaires, Alice and Bob, are interested in knowing which of them is richer without revealing to each other their actual wealth. In other words, what they want can be generalized as: Alice and Bob want to jointly compute a function securely, without knowing anything other than the result of the computation on their input data (which remains private to each of them).

Formally, what we want is to jointly evaluate the following function: $r = f(A, B)$

Such that the private values $A$ and $B$ are held private to their sole owners, and where the result $r$ will be known to just one or both of the parties.
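To make the idea concrete, here is a toy sketch of one classic building block, additive secret sharing over a public modulus. This illustrates the secure-computation idea only; it is not Yao's garbled-circuit protocol, and the modulus and the function $f(A, B) = A + B$ are assumptions of this sketch:

```python
import random

Q = 2**31 - 1  # public modulus, chosen arbitrarily for this sketch

def share(x, rnd):
    """Split x into two random-looking shares that sum to x mod Q."""
    r = rnd.randrange(Q)
    return r, (x - r) % Q

def reconstruct(s1, s2):
    return (s1 + s2) % Q

rnd = random.Random(0)
a1, a2 = share(25, rnd)   # Alice's private input
b1, b2 = share(17, rnd)   # Bob's private input

# Each party locally adds the shares it holds; only the final
# reconstruction reveals f(A, B) = A + B, never A or B themselves.
result = reconstruct((a1 + b1) % Q, (a2 + b2) % Q)
```

Real frameworks combine arithmetic sharing like this with garbled circuits and oblivious transfer to evaluate arbitrary functions securely.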

### Sentence Similarity Comparison

One naive way to do that would be to just compute the hashes of the sentences and then compare only the hashes to check if they match. However, this would assume that the descriptions are exactly the same, and besides that, if the entropy of the sentences is small (as with short sentences), someone with reasonable computational power could try to recover the sentence.

Another approach for this problem (and the approach that we'll be using) is to compare the sentences in the sentence embeddings space. We just need to create sentence embeddings using a Machine Learning model (we'll use InferSent later) and then compare the embeddings of the sentences. However, this approach also raises another concern: what if Bob or Alice trains a Seq2Seq model that would go from the embeddings of the other party back to an approximate description of the project?

It isn't unreasonable to think that one can recover an approximate description of a sentence given its embedding. That's why we'll use two-party secure computation for computing the embeddings similarity, in a way that Bob and Alice will compute the similarity of the embeddings without revealing their embeddings, keeping their project ideas safe.

### Generating sentence embeddings with InferSent

*Figure: Bi-LSTM max-pooling network. Source: Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. Alexis Conneau et al.*

The first step to generate the sentence embeddings is to download and load a pre-trained InferSent model:

```python
import numpy as np
import torch

# Trained model: https://github.com/facebookresearch/InferSent
GLOVE_EMBS = '../dataset/GloVe/glove.840B.300d.txt'
INFERSENT_MODEL = 'infersent.allnli.pickle'

# Load the pre-trained InferSent model
model = torch.load(INFERSENT_MODEL,
                   map_location=lambda storage, loc: storage)
model.set_glove_path(GLOVE_EMBS)
model.build_vocab_k_words(K=100000)
```

To compare sentence embeddings we'll use cosine similarity, which for unit vectors reduces to a simple dot product:

$cos(\pmb x, \pmb y) = \frac{\pmb x \cdot \pmb y}{||\pmb x|| \cdot ||\pmb y||}$

$cos(\hat{x}, \hat{y}) = \hat{x} \cdot \hat{y}$

```python
# This function will forward the text into the model and
# get the embedding. After that, it will normalize it
# to a unit vector.
def encode(model, text):
    embedding = model.encode([text])[0]
    embedding /= np.linalg.norm(embedding)
    return embedding
```

Now, for practical reasons, I'll be using integer computation later for computing the similarity; however, the embeddings generated by InferSent are of course real values. For that reason, you'll see in the code below that we create another function to scale the float values, remove the decimal point and convert them to integers. There is also another important issue: the secure computation framework that we'll use later doesn't allow signed integers, so we also need to clip the embedding values between 0.0 and 1.0. This will of course cause some approximation errors; however, we can still get very good approximations after clipping and scaling with limited precision (I'm using 14 bits for scaling to avoid overflow issues later during dot product computations):

```python
# This function will scale the embedding in order to
# remove the decimal point.
def scale(embedding):
    SCALE = 1 << 14
    scale_embedding = np.clip(embedding, 0.0, 1.0) * SCALE
    return scale_embedding.astype(np.int32)
```

You can use floating-point in your secure computations and there are a lot of frameworks that support them, however, it is more tricky to do that, and for that reason, I used integer arithmetic to simplify the tutorial. The function above is just a hack to make it simple. It’s easy to see that we can recover this embedding later without too much loss of precision.
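To see that the precision loss is indeed small, here is a scalar version of the round trip (a sketch of the same idea; the `scale()` function above does this vectorized with NumPy):

```python
SCALE = 1 << 14  # 14 bits of fractional precision, as in the post

def scale_value(x):
    """Clip to [0.0, 1.0], scale by 2^14 and truncate to an integer."""
    clipped = min(max(x, 0.0), 1.0)
    return int(clipped * SCALE)

def unscale_value(n):
    """Recover an approximate float from the scaled integer."""
    return n / SCALE

# Truncation loses less than 1/SCALE ~= 6.1e-5 per component.
x = 0.8798542
err = abs(x - unscale_value(scale_value(x)))
```

With 4096 components each carrying at most a $1/2^{14}$ truncation error, the recovered similarity stays close to the floating-point one, as the results later in the post show.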

Now we just need to create some sentence samples that we’ll be using:

```python
# The list of Alice sentences
alice_sentences = [
    'my cat loves to walk over my keyboard',
    'I like to pet my cat',
]

# The list of Bob sentences
bob_sentences = [
    'the cat is always walking over my keyboard',
]
```

```python
# Alice sentences
alice_sentence1 = encode(model, alice_sentences[0])
alice_sentence2 = encode(model, alice_sentences[1])

# Bob sentences
bob_sentence1 = encode(model, bob_sentences[0])
```

```python
>>> np.dot(bob_sentence1, alice_sentence1)
0.8798542

>>> np.dot(bob_sentence1, alice_sentence2)
0.62976325
```

As we can see, Bob's sentence is more similar (~0.88) to Alice's first sentence than to her second sentence (~0.63).

```python
# Scale the Alice sentence embeddings
alice_sentence1_scaled = scale(alice_sentence1)
alice_sentence2_scaled = scale(alice_sentence2)

# Scale the Bob sentence embeddings
bob_sentence1_scaled = scale(bob_sentence1)

# This is the unit vector embedding for the sentence
>>> alice_sentence1
array([ 0.01698913, -0.0014404 ,  0.0010993 , ...,  0.00252409,
        0.00828147,  0.00466533], dtype=float32)

# This is the scaled vector as integers
>>> alice_sentence1_scaled
array([278,   0,  18, ...,  41, 135,  76], dtype=int32)
```

Now with these embeddings as scaled integers, we can proceed to the second part, where we’ll be doing the secure computation between two parties.
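Functionally, what the secure protocol will evaluate is just the integer inner product of the two scaled vectors. As a reference, here is the same computation in the clear (no security; the short vectors are stand-ins for the 4096-dimensional embeddings):

```python
# Reference (insecure) computation of what the 2PC protocol evaluates:
# the integer inner product of two scaled embeddings. Dividing by
# SCALE**2 undoes the scaling applied to each operand.
SCALE = 1 << 14

def int_inner_product(a, b):
    return sum(x * y for x, y in zip(a, b))

# Tiny example vectors (assumed values, for illustration only).
a = [278, 0, 18]
b = [300, 5, 20]
approx_similarity = int_inner_product(a, b) / SCALE ** 2
```

The whole point of the next section is to compute `int_inner_product` without either party ever seeing the other's vector.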

### Two-party secure computation

In order to perform secure computation between the two parties (Alice and Bob), we'll use the ABY framework. ABY implements many different secure computation schemes and allows you to describe your computation as circuits, as in the picture below, where Yao's Millionaires' Problem is depicted as a circuit:

ABY is very easy to use because you can describe your inputs, shares, and gates, and it will do the rest for you, such as creating the socket communication channel and exchanging data when needed. However, the implementation is entirely written in C++ and I'm not aware of any Python bindings for it (a great contribution opportunity).

After that, we just need to execute the application on two different machines (or by emulating locally like below):

```shell
# This will execute the server part, the -r 0 specifies the role (server)
# and the -n 4096 defines the dimension of the vector (InferSent generates
# 4096-dimensional embeddings).
~# ./innerproduct -r 0 -n 4096

# And the same on another process (or another machine, however for another
# machine execution you'll have to obviously specify the IP).
~# ./innerproduct -r 1 -n 4096
```

And we get the following results:

```
Inner Product of alice_sentence1 and bob_sentence1 = 226691917
Inner Product of alice_sentence2 and bob_sentence1 = 171746521
```

Even in the integer representation, you can see that the inner product of Alice's first sentence and Bob's sentence is higher, meaning that the similarity is also higher. But let's now convert this value back to float:

```python
>>> SCALE = 1 << 14

# This is the dot product we should get
>>> np.dot(alice_sentence1, bob_sentence1)
0.8798542

# This is the inner product we got in the secure computation
>>> 226691917 / SCALE ** 2.0
0.8444931

# This is the dot product we should get
>>> np.dot(alice_sentence2, bob_sentence1)
0.6297632

# This is the inner product we got in the secure computation
>>> 171746521 / SCALE ** 2.0
0.6398056
```

- Christian S. Perone

## New prime on the block

The GIMPS (Great Internet Mersenne Prime Search) confirmed yesterday the new largest known prime: $2^{77,232,917} - 1$. This new largest known prime has 23,249,425 digits and is, of course, a Mersenne prime, a prime number expressed in the form $2^n - 1$, whose primality can be efficiently checked using the Lucas-Lehmer primality test.

One of the most asked questions about these largest primes is how the number of digits is calculated, given the size of these numbers (23,249,425 digits for the new largest known prime). And indeed there is a trick that avoids evaluating the number itself to count its digits; using Python you can just do:

```python
>>> import numpy as np
>>> a = 2
>>> b = 77232917
>>> num_digits = int(1 + b * np.log10(a))
>>> print(num_digits)
23249425
```

The reason why this works is that the log base 10 of a number tells you how many times that number can be divided by 10 before reaching 1, which is one less than its digit count, so you just need to add 1 back. (Subtracting 1 from $2^b$ doesn't change the digit count, since a power of two is never a power of ten.)

Another interesting fact is that we can also get the last digit of this very large number, again without evaluating the entire number, by using congruences. Since we're interested in the number mod 10 and we know that the Mersenne prime has the form $2^{77,232,917} - 1$, we can check that the powers $2^n$ follow a simple cyclic pattern mod 10: $2^1 \equiv 2 \pmod{10}$ $2^2 \equiv 4 \pmod{10}$ $2^3 \equiv 8 \pmod{10}$ $2^4 \equiv 6 \pmod{10}$ $2^5 \equiv 2 \pmod{10}$ $2^6 \equiv 4 \pmod{10}$
(… repeat)
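The cycle has period 4 and $77{,}232{,}917 \equiv 1 \pmod 4$, so $2^{77,232,917}$ ends in 2 and the prime itself ends in 1. Python's three-argument `pow()` confirms this instantly via modular exponentiation:

```python
# Modular exponentiation: compute 2^77232917 mod 10 without ever
# materializing the 23-million-digit number.
exponent = 77232917
last_digit_power = pow(2, exponent, 10)   # fast: O(log exponent) squarings
last_digit_prime = last_digit_power - 1   # the prime is 2^n - 1
```

`pow(base, exp, mod)` reduces modulo 10 after every squaring, so the intermediate values never grow beyond a single digit.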

- Christian S. Perone

## Benford's Law - Index

Despesas de Custeio e Lei de Benford (June 2014 - in Portuguese)

Universality, primes and space communication (January 2014)

An analysis of Benford's law applied to Twitter (August 2009)

Benford's Law and Iran's election (June 2009)

- Christian S. Perone

## Universality, primes and space communication

So, in mathematics we have the concept of universality, in which we have laws like the law of large numbers, Benford's law (which I cited a lot in previous posts), the central limit theorem, and many other laws that act like physical laws of the mathematical world. These laws are not our invention; I mean, the concepts are our inventions, but the laws themselves are universal: they are true no matter whether you are on Earth or living far away in the universe. That's why Frank Drake, one of the founders of SETI and one of the pioneers in the search for extraterrestrial intelligence, had the brilliant idea of using prime numbers (another example of universality) to communicate with distant worlds. The idea Frank Drake had was to use prime numbers to hide (actually not to hide, but to make self-evident, as you'll understand later) the image dimensions in the size of the image itself.

So, imagine you are receiving a message that is a sequence of dashes and dots like "—.-.—.-.——–…-.—" that repeats after a short pause, again and again. Let's suppose that this message has a size of 1679 symbols. You begin analyzing the number, which is in fact a semiprime number (the kind used in cryptography: a number that is the product of two prime numbers) that can be factored into primes as $23 \times 73 = 1679$, and this is the only way to factor it into prime factors (every number has a single, unique set of prime factors; see the Fundamental theorem of arithmetic). So, since there are only two prime factors, you will try to reshape the signal into a 2D image, and this image can have the dimensions 23×73 or 73×23. When you arrange the image in one of these dimensions, you'll see that the image makes sense, while the other arrangement will be just a random, strange sequence. By using prime numbers (or semiprimes) you just used the total image size to define the only two possible ways of arranging the image dimensions.
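The factorization step is trivial to reproduce; a simple trial-division sketch:

```python
def prime_factors(n):
    """Factor n by trial division -- perfectly fine for a small semiprime."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

# 1679 = 23 * 73, so the only sensible image shapes are 23x73 or 73x23.
print(prime_factors(1679))
```

Because the factorization is unique, a receiver anywhere in the universe arrives at the same two candidate image shapes.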

This message had the size (surprise!) of 1679 binary digits and carried a lot of information about your world, like: a graphical representation of a human, the numbers from 1 to 10, a graphical representation of the Arecibo radio telescope, etc.

The message decoded as 23 rows and 73 columns is this:

As you can see, the message looks quite nonsensical, but when it is decoded as an image with 73 rows and 23 columns, it shows its real significance:

Amazing, don’t you think ? I hope you liked it !

- Christian S. Perone

## Riemann Zeta function visualizations with Python

While playing with mpmath and its Riemann Zeta function evaluator, I came upon these interesting animated plots made with Matplotlib (the source code is at the end of the post). $\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots$
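The post evaluates $\zeta(s)$ with mpmath; as a self-contained sketch, the same values can be computed in pure Python with Borwein's alternating-series (eta-function) acceleration. The algorithm choice and the truncation order `n` are my assumptions here, not the post's actual code:

```python
import math

def zeta(s, n=50):
    """Riemann zeta via Borwein's alternating-series (eta) algorithm."""
    # Coefficients d_k derived from Chebyshev polynomials (Borwein, 2000).
    d, acc = [], 0.0
    for i in range(n + 1):
        acc += math.factorial(n + i - 1) * 4**i / (
            math.factorial(n - i) * math.factorial(2 * i))
        d.append(n * acc)
    # Accelerated Dirichlet eta series, then convert eta to zeta.
    eta = sum((-1)**k * (d[k] - d[n]) / (k + 1)**s for k in range(n))
    return -eta / (d[n] * (1 - 2**(1 - s)))
```

Evaluating it near the first non-trivial zero, $s = 1/2 + 14.134725\imath$, gives a value very close to zero, which is exactly the behavior the animations below visualize along the critical line.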

So, let $\zeta(s) = v$ where $s = a + b\imath$ and $v = u + k\imath$.

The first plot uses the triplet $(x, y, z)$ coordinates to plot a 3D space where each component is given by:

• $x = Re(v)$ (or $x = u$, as previously defined);
• $y = Im(v)$ (or $y = k$, as previously defined);
• $z = Im(s)$ (or $z = b$, as previously defined);
• The time component used in the animation is called $\theta$ and it's given by $\theta = Re(s)$, or simply $\theta = a$.

• For $Re(s)$: from $\left[0.01, 10.0\right)$, calculated at every 0.01 step - shown in the plot at the top right;
• For $Im(s)$: from $\left[0.1, 200.0\right)$, calculated at every 0.1 step - shown as the $z$ axis.

This plot was done using a fixed interval (no auto scale) for the $(x, y, z)$ coordinates. $Re(s) = 1/2$ ($\theta = 1/2$) is where the non-trivial zeroes of the Riemann Zeta function lie.

Now see the same plot but this time using auto scale (automatically resized $x, y$ coordinates):

Note the $(x, y)$ auto scaling.

See now another plot using a 2D space where each component is given by:

• $x = Im(s)$ (or $x = b$, as previously defined);
• $y = Im(v)$ (blue) and $Re(v)$ (green);
• The time component used in the animation is called $\theta$ and it's given by $\theta = Re(s)$, or simply $\theta = a$.

• For $Re(s)$: from $\left[0.01, 10.0\right)$, calculated at every 0.01 step - shown in the plot title at the top;
• For $Im(s)$: from $\left[0.1, 50.0\right)$, calculated at every 0.1 step - shown as the $x$ axis.

This plot was done using a fixed interval (no auto scale) for the $(x, y)$ coordinates. $Re(s) = 1/2$ ($\theta = 1/2$) is where the non-trivial zeroes of the Riemann Zeta function lie. The first 10 non-trivial zeroes of the Riemann Zeta function are shown as red dots: when the two series, $Im(v)$ and $Re(v)$, cross each other on a red dot at the critical line ($Re(s) = 1/2$), that is where the zeroes of the Zeta function lie. Note how the real and imaginary parts turn away from each other as $Re(s)$ increases.

Now see the same plot but this time using auto scale (automatically resized $y$coordinate):

If you are interested in more visualizations of the Riemann Zeta function, you'll like the well-done paper from J. Arias-de-Reyna called "X-Ray of Riemann zeta-function".

I always liked the way visualization affects the understanding of math functions. Anscombe's quartet is a clear example of how important visualization is.

The source code used to create the plots is available here:

Source code for the 2D plots

I hope you liked the post! To make the plots and videos I've used matplotlib, mpmath and MEncoder.

- Christian S. Perone