Privacy-preserving computation
Privacy-preserving computation or secure computation is a sub-field of cryptography where two (two-party, or 2PC) or multiple (multi-party, or MPC) parties can evaluate a function together without revealing information about the parties' private input data to each other. The problem and the first solution to it were introduced in 1982 in an amazing breakthrough by Andrew Yao, on what later became known as "Yao's Millionaires' Problem".
Yao's Millionaires' Problem is one where two millionaires, Alice and Bob, are interested in knowing which of them is richer, but without revealing their actual wealth to each other. In other words, what they want can be generalized as follows: Alice and Bob want to jointly compute a function securely, without knowing anything other than the result of the computation on their input data (which remains private to each of them).
To make the problem concrete, Alice has an amount A such as $10, and Bob has an amount B such as $50, and what they want to know is which one is larger, without Bob revealing the amount B to Alice or Alice revealing the amount A to Bob. It is also important to note that we don't want to rely on a trusted third party, otherwise the problem would reduce to a simple protocol of information exchange with that trusted party.
Formally, what we want is to jointly evaluate the following function:

r = f(A, B)

such that the private values A and B are held private to their sole owners and where the result r will be known to just one or both of the parties.
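Just to make this setting concrete, below is a purely illustrative Python sketch (my own addition, not any actual protocol) of the "ideal functionality" f for the Millionaires' Problem: it is what a trusted third party would compute if we were willing to trust one, which is exactly what secure computation lets Alice and Bob avoid while still learning the result r.

# Purely illustrative: the "ideal functionality" of Yao's Millionaires' Problem.
# A real protocol has NO trusted third party; Alice and Bob evaluate this
# predicate jointly, over secret-shared/encrypted data, learning only r.
def millionaires_f(a, b):
    """Return True if Alice's amount is greater than Bob's."""
    return a > b

alice_amount = 10  # known only to Alice
bob_amount = 50    # known only to Bob

r = millionaires_f(alice_amount, bob_amount)
print(r)  # False, so Bob holds the larger amount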
This might seem very counterintuitive, and a problem like this may look like it could never be solved, but to the surprise of many people, it is possible to solve it under some security requirements. Thanks to recent developments in techniques such as FHE (Fully Homomorphic Encryption), Oblivious Transfer, and Garbled Circuits, problems like this started to become practical for real-life usage, and they are nowadays being employed by many companies in applications such as information exchange, secure location, advertisement, satellite orbit collision avoidance, etc.
I'm not going to go into the details of these techniques, but if you're interested in the intuition behind OT (Oblivious Transfer), you should definitely read the amazing explanation done by Craig Gidney here. There are also, of course, many different protocols for doing 2PC or MPC, where each one of them assumes some security requirements (semi-honest, malicious, etc.); I'm not going to enter into the details in order to keep the post focused on the goal, but you should be aware of that.
The problem: sentence similarity
What we want to achieve is to use privacy-preserving computation to calculate the similarity between sentences without disclosing the content of the sentences. Just to give a concrete example: Bob owns a company and has the description of many different projects in sentences such as: "This project is about building a deep learning sentiment analysis framework that will be used for tweets", and Alice, who owns another competitor company, also has different projects described in similar sentences. What they want to do is to jointly compute the similarity between projects in order to find out whether they should be doing a partnership on a project or not. However, and this is the important point: Bob doesn't want Alice to know the project descriptions, and neither does Alice want Bob to be aware of their projects; they want to know the closest match between the different projects they run, but without disclosing the project ideas (project descriptions).
Sentence Similarity Comparison
Now, how can we exchange information about Alice's and Bob's project sentences without disclosing information about the project descriptions?
One naive way to do that would be to just compute the hashes of the sentences and then compare only the hashes to check if they match. However, this assumes that the descriptions are exactly the same; besides that, if the entropy of the sentences is small (like short sentences), someone with reasonable computing power could try to recover the original sentence. A small sketch of why this is fragile follows below.
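Here is a small illustrative sketch using Python's standard hashlib module (the sentences and the normalization step are just examples I made up for this post): exact duplicates match, a small rewording does not, and a low-entropy sentence can be recovered simply by hashing candidate guesses.

import hashlib

def sentence_hash(text):
    # Naive normalization + SHA-256 digest
    return hashlib.sha256(text.strip().lower().encode('utf-8')).hexdigest()

# Exact duplicates (after normalization) match...
print(sentence_hash('Deep learning for tweets') ==
      sentence_hash('deep learning for tweets'))          # True

# ...but any small rewording breaks the match:
print(sentence_hash('deep learning for tweets') ==
      sentence_hash('deep learning applied to tweets'))   # False

# And low-entropy sentences can be brute-forced by hashing guesses:
target = sentence_hash('deep learning for tweets')
guesses = ['nlp for tweets', 'deep learning for tweets', 'cv for images']
print([g for g in guesses if sentence_hash(g) == target])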
Another approach for this problem (and this is the approach that we'll be using) is to compare the sentences in the sentence embedding space. We just need to create sentence embeddings using a Machine Learning model (we'll use InferSent later) and then compare the embeddings of the sentences. However, this approach also raises another concern: what if Bob or Alice trains a Seq2Seq model that could go from the embeddings of the other party back to an approximate description of the project?
It isn't unreasonable to think that one can recover an approximate description of a sentence given its embeddings. That's why we'll use two-party secure computation for computing the embeddings similarity, in a way that Bob and Alice will compute the similarity of the embeddings without revealing their embeddings, keeping their project ideas safe.
The entire flow is described in the image below, where Bob and Alice share the same Machine Learning model; after that, they use this model to go from sentences to embeddings, followed by a secure computation of the similarity in the embedding space.

Generating sentence embeddings with InferSent

InferSent is an NLP technique for universal sentence representation developed by Facebook that uses supervised training to produce highly transferable representations.
They used a bi-directional LSTM with attention that consistently surpassed many unsupervised training methods such as SkipThought vectors. They also provide a Pytorch implementation that we'll use to generate the sentence embeddings.
Note: even if you don't have a GPU, you can get reasonable performance when doing the embeddings for just a few sentences.
The first step to generate the sentence embeddings is to download and load a pre-trained InferSent model:
import numpy as np
import torch

# Trained model from: https://github.com/facebookresearch/InferSent
GLOVE_EMBS = '../dataset/GloVe/glove.840B.300d.txt'
INFERSENT_MODEL = 'infersent.allnli.pickle'

# Load the trained InferSent model
model = torch.load(INFERSENT_MODEL,
                   map_location=lambda storage, loc: storage)

model.set_glove_path(GLOVE_EMBS)
model.build_vocab_k_words(K=100000)
Now we need to define a similarity measure to compare two vectors, and for that goal, I'll use the cosine similarity, since it is very simple:

cos(A, B) = (A · B) / (‖A‖ ‖B‖)
As you can see, if we have two unit vectors (vectors with norm 1), the two terms in the formula's denominator will be 1 and we will be able to remove the entire denominator, leaving only:

cos(Â, B̂) = Â · B̂
So, if we normalize our vectors to have a unit norm (that’s why the vectors are wearing hats in the equation above), we can make the computation of the cosine similarity become just a simple dot product. That will help us a lot in computing the similarity distance later when we’ll use a framework to do the secure computation of this dot product.
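As a quick sanity check (a small NumPy sketch of my own, with random vectors standing in for the real embeddings), you can verify numerically that after normalizing two vectors to unit norm the plain dot product matches the full cosine similarity formula:

import numpy as np

rng = np.random.RandomState(42)
a = rng.rand(4096).astype(np.float32)
b = rng.rand(4096).astype(np.float32)

# Full cosine similarity formula
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Dot product of the unit-norm vectors
a_hat = a / np.linalg.norm(a)
b_hat = b / np.linalg.norm(b)
dot = np.dot(a_hat, b_hat)

print(np.isclose(cosine, dot))  # True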
So, the next step is to define a function that will take some sentence text and forward it to the model to generate the embeddings and then normalize them to unit vectors:
# This function will forward the text into the model and
# get the embeddings. After that, it will normalize it
# to a unit vector.
def encode(model, text):
    embedding = model.encode([text])[0]
    embedding /= np.linalg.norm(embedding)
    return embedding
As you can see, this function is pretty simple: it feeds the text into the model and then divides the embedding vector by the embedding norm.
Now, for practical reasons, I'll be using integer computation later for computing the similarity, however, the embeddings generated by InferSent are of course real values. For that reason, you'll see in the code below that we create another function to scale the float values, remove the radix point and convert them to integers. There is also another important issue: the framework that we'll be using later for the secure computation doesn't allow signed integers, so we also need to clip the embedding values between 0.0 and 1.0. This will of course cause some approximation errors, however, we can still get very good approximations after clipping and scaling with limited precision (I'm using 14 bits for scaling to avoid overflow issues later during the dot product computations):
# This function will scale the embedding in order to
# remove the radix point.
def scale(embedding):
    SCALE = 1 << 14
    scale_embedding = np.clip(embedding, 0.0, 1.0) * SCALE
    return scale_embedding.astype(np.int32)
You can use floating point in secure computation and there are a lot of frameworks that support it, however, it is trickier to do so, and for that reason, I used integer arithmetic to simplify the tutorial. The function above is just a hack to keep it simple. It is easy to see that we can recover this embedding later without too much loss of precision, as the sketch below illustrates.
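Here is a short self-contained sketch (again with random unit vectors standing in for the embeddings, and assuming the same scale() function defined above) showing that the integer dot product divided by SCALE**2 stays close to the original float dot product. Note that these random vectors are all positive, so clipping loses nothing here, unlike the real embeddings where clipping adds some error.

import numpy as np

SCALE = 1 << 14

def scale(embedding):
    scale_embedding = np.clip(embedding, 0.0, 1.0) * SCALE
    return scale_embedding.astype(np.int32)

rng = np.random.RandomState(0)
a = rng.rand(4096).astype(np.float32)
b = rng.rand(4096).astype(np.float32)
a /= np.linalg.norm(a)
b /= np.linalg.norm(b)

exact = np.dot(a, b)
# int64 accumulation here just to avoid overflow in this plain NumPy check
approx = np.dot(scale(a).astype(np.int64),
                scale(b).astype(np.int64)) / SCALE**2.0

print(exact, approx)  # the two values should be very close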
Now we just need to create some sentence samples that we’ll be using:
# Alice sentences
alice_sentences = [
    'my cat loves to walk over my keyboard',
    'I like to pet my cat',
]

# Bob sentences
bob_sentences = [
    'the cat is always walking over my keyboard',
]
And convert them to embeddings:
# Alice sentences
alice_sentence1 = encode(model, alice_sentences[0])
alice_sentence2 = encode(model, alice_sentences[1])

# Bob sentences
bob_sentence1 = encode(model, bob_sentences[0])
Since we now have the sentence embeddings, and every embedding has also been normalized, we can compute the cosine similarity just by performing a dot product between the vectors:
>>> np.dot(bob_sentence1, alice_sentence1)
0.8798542

>>> np.dot(bob_sentence1, alice_sentence2)
0.62976325
As we can see, Bob's first sentence is more similar (~0.87) to Alice's first sentence than to Alice's second sentence (~0.62).
Since we now have the embeddings, we just need to convert them to scaled integers:
# Scale the Alice sentence embeddings
alice_sentence1_scaled = scale(alice_sentence1)
alice_sentence2_scaled = scale(alice_sentence2)

# Scale the Bob sentence embeddings
bob_sentence1_scaled = scale(bob_sentence1)

# This is the unit vector embedding for the sentence
>>> alice_sentence1
array([ 0.01698913, -0.0014404 ,  0.0010993 , ...,  0.00252409,
        0.00828147,  0.00466533], dtype=float32)

# This is the scaled vector as integers
>>> alice_sentence1_scaled
array([278,   0,  18, ...,  41, 135,  76], dtype=int32)
Now with these embeddings as scaled integers, we can proceed to the second part, where we’ll be doing the secure computation between two parties.
Two-party secure computation
In order to perform secure computation between the two parties (Alice and Bob), we'll use the ABY framework. ABY implements many different secure computation schemes and allows you to describe your computation as a circuit like the one pictured in the image below, where Yao's Millionaires' Problem is depicted:

As you can see, we have two inputs entering a GT GATE (greater-than gate) and then an output. This circuit has a bit length of 3 for each input and will compute whether the Alice input is greater than (GT GATE) the Bob input. The computing parties then secret-share their private data, after which they can use arithmetic sharing, boolean sharing, or Yao sharing to securely evaluate these gates.
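To give some intuition of what arithmetic (additive) sharing means, here is a toy Python sketch; this is my own illustration, not the ABY protocol, and it is definitely not secure as written (it omits multiplication triples, a communication layer, etc.). Each party splits its private vector into two random additive shares modulo 2**32, so that either share alone looks like uniform noise, but the two shares added together reconstruct the secret.

import random

MOD = 1 << 32

def share(values, rng):
    # Additively secret-share a list of non-negative integers modulo 2**32
    shares0 = [rng.randrange(MOD) for _ in values]
    shares1 = [(v - s0) % MOD for v, s0 in zip(values, shares0)]
    return shares0, shares1

rng = random.Random(0)
secret = [278, 0, 18, 41, 135, 76]  # e.g. a few scaled embedding values

share0, share1 = share(secret, rng)
# Each share alone is just uniform random noise...
print(share0[:3])
print(share1[:3])
# ...but added back together modulo 2**32 they reconstruct the secret:
print([(s0 + s1) % MOD for s0, s1 in zip(share0, share1)])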
ABY is really easy to use, because you just describe your inputs, shares, and gates, and it will do the rest for you, such as creating the socket communication channel, exchanging data when needed, etc. However, the implementation is entirely written in C++ and I'm not aware of any Python bindings for it (a great contribution opportunity).
Fortunately, there is an implemented example for ABY that can do the dot product calculation for us; the example is here. I won't replicate the example here, but the only parts that we have to change are to read the embedding vectors that we created before instead of generating random vectors, and to increase the bit length to 32 bits.
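One simple way to hand the scaled vectors over to the modified ABY example is to dump them to plain text files that the C++ code can parse. The file names and the one-integer-per-line format below are just an assumption for illustration; they have to match whatever reading code you add on the ABY side.

import numpy as np

def save_embedding(path, embedding):
    # One integer per line; the format is arbitrary, it just needs to match
    # the parsing code added to the ABY inner product example.
    np.savetxt(path, embedding, fmt='%d')

# Hypothetical file names, one file per party (these variables are the
# scaled embeddings computed earlier in the post):
save_embedding('alice_sentence1_scaled.txt', alice_sentence1_scaled)
save_embedding('bob_sentence1_scaled.txt', bob_sentence1_scaled)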
After that, we just need to execute the application on two different machines (or by emulating locally like below):
# This will execute the server part, the -r 0 specifies the role (server)
# and the -n 4096 defines the dimension of the vector (InferSent generates
# 4096-dimensional embeddings).
~# ./innerproduct -r 0 -n 4096

# And the same on another process (or another machine, however for another
# machine execution you'll obviously have to specify the IP).
~# ./innerproduct -r 1 -n 4096
And we get the following results:
Inner Product of alice_sentence1 and bob_sentence1 = 226691917
Inner Product of alice_sentence2 and bob_sentence1 = 171746521
Even in the integer representation, you can see that the inner product of Alice's first sentence and Bob's sentence is higher, meaning that the similarity is also higher. But let's now convert this value back to float:
>>> SCALE = 1 << 14

# This is the dot product we should get
>>> np.dot(alice_sentence1, bob_sentence1)
0.8798542

# This is the inner product we got on secure computation
>>> 226691917 / SCALE**2.0
0.8444931

# This is the dot product we should get
>>> np.dot(alice_sentence2, bob_sentence1)
0.6297632

# This is the inner product we got on secure computation
>>> 171746521 / SCALE**2.0
0.6398056
As you can see, we got very good approximations, even in the presence of low-precision math and the unsigned integer requirement. Of course, in real life you won't have both values and vectors on the same side, because they're supposed to be hidden, but the changes to accommodate that are trivial: you just need to adjust the ABY code to load only the vector of the party executing it and use the correct IP addresses/ports of both parties.
I hope you liked it!
– Christian S. Perone