Privacy-preserving computation, or secure computation, is a sub-field of cryptography where two (two-party, or 2PC) or more (multi-party, or MPC) parties can evaluate a function together without revealing their private input data to each other. The problem and the first solution to it were introduced in 1982 in an amazing breakthrough by Andrew Yao, in what later became known as "Yao's Millionaires' Problem".
Yao's Millionaires' Problem involves two millionaires, Alice and Bob, who are interested in knowing which of them is richer, but without revealing their actual wealth to each other. In other words, what they want can be generalized as follows: Alice and Bob want to jointly compute a function securely, learning nothing other than the result of the computation on their input data (which remains private to each of them).
To make the problem concrete, Alice has an amount A, such as $10, and Bob has an amount B, such as $50, and what they want to know is which one is larger, without Bob revealing the amount B to Alice or Alice revealing the amount A to Bob. It is also important to note that we don't want to rely on a trusted third party, otherwise the problem would reduce to a simple protocol of information exchange with that trusted party.
Formally what we want is to jointly evaluate the following function:
r = f(A, B)

such that the private values A and B are held private by their sole owners, and where the result r will be known to just one or both of the parties.
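To make the desired functionality concrete, here is a minimal (and entirely insecure) Python sketch of the function the parties want to evaluate; the whole point of secure computation is to obtain this same result without any single party, or third party, ever seeing both inputs in the clear:

```python
# Insecure reference version of the Millionaires' comparison: in a real
# secure protocol, no party ever sees both a and b in the clear.
def millionaires(a: int, b: int) -> bool:
    """Return True if Alice (amount a) is richer than Bob (amount b)."""
    return a > b

print(millionaires(10, 50))  # Alice has $10, Bob has $50 -> False
```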
I'm not going to go into these technical details, but if you are interested in the intuition behind OT (Oblivious Transfer), you should definitely read the amazing explanation by Craig Gidney here. There are also, of course, many different protocols for doing 2PC or MPC, where each one assumes certain security models (semi-honest, malicious, etc.); I'm not going to go into the details in order to keep the post focused on the goal, but you should be aware of that.
The problem: sentence similarity
What we want to achieve is to use privacy-preserving computation to calculate the similarity between sentences without disclosing the content of the sentences. To give a concrete example: Bob owns a company and has the descriptions of many different projects as sentences, such as: "This project is about building a deep learning sentiment analysis framework that will be used for tweets", and Alice, who owns a competitor company, also has different projects described in similar sentences. What they want to do is to jointly compute the similarity between projects in order to find out whether they should be partnering on a project or not. However, and this is the important point: Bob doesn't want Alice to know the project descriptions, and neither does Alice want Bob to be aware of their projects; they want to know the closest match between the different projects they run, but without disclosing the project ideas (the project descriptions).
Sentence Similarity Comparison
Another approach for this problem (and the approach that we'll be using) is to compare the sentences in the sentence embedding space. We just need to create sentence embeddings using a Machine Learning model (we'll use InferSent later) and then compare the embeddings of the sentences. However, this approach raises another concern: what if Bob or Alice trains a Seq2Seq model that could go from the other party's embeddings back to an approximate description of the project?
It isn't unreasonable to think that one can recover an approximate description of a sentence given its embeddings. That's why we'll use two-party secure computation for computing the embeddings similarity, in a way that Bob and Alice will compute the similarity of the embeddings without revealing the embeddings themselves, keeping their project ideas safe.
The entire flow is described in the image below, where Bob and Alice share the same Machine Learning model; after that, they use this model to go from sentences to embeddings, followed by a secure computation of the similarity in the embedding space.
Generating sentence embeddings with InferSent
```python
import numpy as np
import torch

# Trained model: https://github.com/facebookresearch/InferSent
GLOVE_EMBS = '../dataset/GloVe/glove.840B.300d.txt'
INFERSENT_MODEL = 'infersent.allnli.pickle'

# Load the trained InferSent model
model = torch.load(INFERSENT_MODEL,
                   map_location=lambda storage, loc: storage)

model.set_glove_path(GLOVE_EMBS)
model.build_vocab_k_words(K=100000)
```
Now we need to define a similarity measure to compare two vectors, and for that goal, I'll use the cosine similarity, since it is very simple:

cos(x, y) = (x · y) / (‖x‖ ‖y‖) = x̂ · ŷ
So, if we normalize our vectors to have a unit norm (that's why the vectors are wearing hats in the equation above), the computation of the cosine similarity becomes just a simple dot product. That will help us a lot later, when we use a framework to do the secure computation of this dot product.
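As a quick sanity check (a standalone sketch with random vectors, not the InferSent embeddings), we can verify that the dot product of unit-normalized vectors equals the cosine similarity of the original vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4096)
y = rng.random(4096)

# Cosine similarity computed directly from the definition
cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Normalizing first reduces it to a plain dot product
x_hat = x / np.linalg.norm(x)
y_hat = y / np.linalg.norm(y)

assert np.isclose(np.dot(x_hat, y_hat), cos)
```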
So, the next step is to define a function that will take some sentence text and forward it to the model to generate the embeddings and then normalize them to unit vectors:
```python
# This function will forward the text into the model to
# get the embeddings. After that, it will normalize them
# to a unit vector.
def encode(model, text):
    # Take the single embedding out of the batch returned by the model
    embedding = model.encode([text])[0]
    embedding /= np.linalg.norm(embedding)
    return embedding
```
Now, for practical reasons, I'll be using integer computation later for computing the similarity; however, the embeddings generated by InferSent are, of course, real values. For that reason, you'll see in the code below that we create another function to scale the float values and remove the radix point, converting them to integers. There is also another important issue: the framework that we'll be using later for secure computation doesn't allow signed integers, so we also need to clip the embedding values between 0.0 and 1.0. This will of course cause some approximation errors; however, we can still get very good approximations after clipping and scaling with limited precision (I'm using 14 bits for scaling to avoid overflow issues later during the dot product computations):
```python
# This function will scale the embedding in order to
# remove the radix point.
def scale(embedding):
    SCALE = 1 << 14
    scale_embedding = np.clip(embedding, 0.0, 1.0) * SCALE
    return scale_embedding.astype(np.int32)
```
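To see why the 14-bit scaling gives a good approximation, here is a small self-contained check using toy 2-dimensional unit vectors (a stand-in for the 4096-dimensional InferSent embeddings, so the numbers here are illustrative only); we descale the integer dot product by SCALE² and compare it against the exact float result:

```python
import numpy as np

SCALE = 1 << 14

def scale(embedding):
    scale_embedding = np.clip(embedding, 0.0, 1.0) * SCALE
    return scale_embedding.astype(np.int32)

# Toy unit vectors standing in for the real embeddings
x = np.array([0.6, 0.8])
y = np.array([0.8, 0.6])

exact = np.dot(x, y)  # the true cosine similarity (0.96)

# Integer dot product, descaled by SCALE**2 (cast to int64 to be safe
# against overflow in the accumulation)
approx = np.dot(scale(x).astype(np.int64),
                scale(y).astype(np.int64)) / SCALE**2

assert abs(exact - approx) < 1e-3
```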
Now we just need to create some sentence samples that we’ll be using:
```python
# Alice sentences
alice_sentences = [
    'my cat loves to walk over my keyboard',
    'I like to pet my cat',
]

# Bob sentences
bob_sentences = [
    'the cat is always walking over my keyboard',
]
```
```python
# Alice sentence embeddings
alice_sentence1 = encode(model, alice_sentences[0])
alice_sentence2 = encode(model, alice_sentences[1])

# Bob sentence embeddings
bob_sentence1 = encode(model, bob_sentences[0])
```
```python
>>> np.dot(bob_sentence1, alice_sentence1)
0.8798542

>>> np.dot(bob_sentence1, alice_sentence2)
0.62976325
```
As we can see, Bob's first sentence is more similar (~0.88) to Alice's first sentence than to Alice's second sentence (~0.63).
Since we have now the embeddings, we just need to convert them to scaled integers:
```python
# Scale the Alice sentence embeddings
alice_sentence1_scaled = scale(alice_sentence1)
alice_sentence2_scaled = scale(alice_sentence2)

# Scale the Bob sentence embeddings
bob_sentence1_scaled = scale(bob_sentence1)

# This is the unit vector embedding for the sentence
>>> alice_sentence1
array([ 0.01698913, -0.0014404 ,  0.0010993 , ...,  0.00252409,
        0.00828147,  0.00466533], dtype=float32)

# This is the scaled vector as integers
>>> alice_sentence1_scaled
array([278,   0,  18, ...,  41, 135,  76], dtype=int32)
```
Now with these embeddings as scaled integers, we can proceed to the second part, where we’ll be doing the secure computation between two parties.
In order to perform secure computation between the two parties (Alice and Bob), we'll use the ABY framework. ABY implements many different secure computation schemes and allows you to describe your computation as a circuit like the one in the picture below, which depicts the Yao's Millionaires' problem:
As you can see, we have two inputs entering a GT GATE (greater-than gate), and then an output. This circuit has a bit length of 3 for each input and will compute whether the Alice input is greater than (GT GATE) the Bob input. The computing parties then secret-share their private data, and afterwards they can use arithmetic sharing, boolean sharing, or Yao sharing to securely evaluate these gates.
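To build intuition for what the GT gate computes, here is a plain (and completely insecure) Python sketch that evaluates a bitwise greater-than comparison using only gate-level operations (AND, OR, XOR, NOT); ABY evaluates this same kind of logic, but over secret-shared bits:

```python
# Insecure, gate-level sketch of a bitwise greater-than circuit.
def gt_circuit(a_bits, b_bits):
    """a_bits / b_bits: MSB-first lists of 0/1 bits. Returns 1 if a > b."""
    gt = 0  # becomes 1 once a bit of a exceeds the same bit of b
    eq = 1  # stays 1 while all higher-order bits are equal
    for a, b in zip(a_bits, b_bits):
        gt = gt | (eq & a & (1 - b))  # first differing bit has a=1, b=0
        eq = eq & (1 - (a ^ b))       # bits equal so far
    return gt

assert gt_circuit([1, 0, 1], [0, 1, 1]) == 1  # 5 > 3
assert gt_circuit([0, 1, 0], [0, 1, 1]) == 0  # 2 > 3 is false
```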
ABY is very easy to use, because you can describe your inputs, shares, and gates, and it will do the rest for you, such as creating the socket communication channel and exchanging data when needed. However, the implementation is written entirely in C++, and I'm not aware of any Python bindings for it (a great contribution opportunity).
Fortunately, there is an implemented example for ABY that can do the dot product calculation for us; the example is here. I won't replicate the example here, but the only parts that we have to change are to read the embedding vectors that we created before instead of generating random vectors, and to increase the bit length to 32 bits.
After that, we just need to execute the application on two different machines (or by emulating locally like below):
```shell
# This will execute the server part, the -r 0 specifies the role (server)
# and the -n 4096 defines the dimension of the vector (InferSent generates
# 4096-dimensional embeddings).
~# ./innerproduct -r 0 -n 4096

# And the same on another process (or another machine, however for another
# machine execution you'll have to obviously specify the IP).
~# ./innerproduct -r 1 -n 4096
```
```
Inner Product of alice_sentence1 and bob_sentence1 = 226691917
Inner Product of alice_sentence2 and bob_sentence1 = 171746521
```
```python
>>> SCALE = 1 << 14

# This is the dot product we should get
>>> np.dot(alice_sentence1, bob_sentence1)
0.8798542

# This is the inner product we got on secure computation
>>> 226691917 / SCALE**2.0
0.8444931

# This is the dot product we should get
>>> np.dot(alice_sentence2, bob_sentence1)
0.6297632

# This is the inner product we got on secure computation
>>> 171746521 / SCALE**2.0
0.6398056
```
As you can see, we got very good approximations, even in the presence of low-precision math and the unsigned-integer requirement. Of course, in real life you won't have both values and vectors, because they're supposed to be hidden; but the changes to accommodate that are trivial: you just need to adjust the ABY code to load only the vector of the party that is executing it, and to use the correct IP addresses/ports of both parties.
- Christian S. Perone