Privacy-preserving sentence semantic similarity using InferSent embeddings and secure two-party computation

Privacy-preserving computation

Privacy-preserving computation or secure computation is a sub-field of cryptography in which two (two-party, or 2PC) or more (multi-party, or MPC) parties can evaluate a function together without revealing their private input data to each other. The problem and the first solution to it were introduced in 1982 in an amazing breakthrough by Andrew Yao, on what later became known as “Yao’s Millionaires’ Problem“.

Yao’s Millionaires’ Problem is one where two millionaires, Alice and Bob, are interested in knowing which of them is richer, but without revealing their actual wealth to each other. In other words, what they want can be generalized as follows: Alice and Bob want to jointly compute a function securely, without knowing anything other than the result of the computation on their input data (which remains private to each of them).

To make the problem concrete, Alice has an amount A such as $10, and Bob has an amount B such as $50, and what they want to know is which one is larger, without Bob revealing the amount B to Alice or Alice revealing the amount A to Bob. It is also important to note that we don’t want to rely on a trusted third party, otherwise the problem would reduce to a simple protocol of information exchange with that trusted party.

Formally what we want is to jointly evaluate the following function:

r = f(A, B)

where the private values A and B are held privately by their sole owners, and where the result r will be known to just one or both of the parties.
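Just to make the goal concrete, here is a minimal (and completely insecure) sketch of what the ideal functionality would look like if a trusted third party existed; the whole point of 2PC is to obtain the same result r without any single party ever seeing both A and B (the function below is purely illustrative):

# A toy "trusted third party" that sees both private inputs.
# Secure two-party computation aims to produce the same result r
# WITHOUT any party (or third party) ever learning both A and B.
def ideal_millionaires(a, b):
    # r = f(A, B): who is richer?
    return "Alice" if a > b else "Bob"

# Alice's private amount (e.g. $10) and Bob's private amount (e.g. $50)
print(ideal_millionaires(10, 50))  # -> Bob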

It might seem very counter-intuitive that a problem like this could ever be solved, but to the surprise of many people it is possible to solve it under some security requirements. Thanks to recent developments in techniques such as FHE (Fully Homomorphic Encryption), Oblivious Transfer and Garbled Circuits, such problems have started to become practical for real-life usage, and they are being employed nowadays by many companies in applications such as information exchange, secure location, advertising, satellite orbit collision avoidance, etc.

I’m not going to go into the details of these techniques, but if you’re interested in the intuition behind OT (Oblivious Transfer), you should definitely read the amazing explanation done by Craig Gidney here. There are also, of course, many different protocols for doing 2PC or MPC, where each one of them assumes some security requirements (semi-honest, malicious, etc.). I’m not going to go into those details in order to keep the post focused on the goal, but you should be aware of that.

The problem: sentence similarity

What we want to achieve is to use privacy-preserving computation to calculate the similarity between sentences without disclosing the content of the sentences. Just to give a concrete example: Bob owns a company and has the descriptions of many different projects as sentences, such as: “This project is about building a deep learning sentiment analysis framework that will be used for tweets“, and Alice, who owns another, competing company, also has different projects described in similar sentences. What they want to do is to jointly compute the similarity between projects in order to find out whether they should partner on a project or not. However, and this is the important point, Bob doesn’t want Alice to know the project descriptions, and Alice doesn’t want Bob to be aware of her projects either: they want to know the closest match between the different projects they run, but without disclosing the project ideas (the project descriptions).

Sentence Similarity Comparison

Now, how can we exchange information about Alice’s and Bob’s project sentences without disclosing information about the project descriptions?

One naive way to do that would be to compute hashes of the sentences and then compare only the hashes to check whether they match. However, this assumes that the descriptions are exactly the same; besides that, if the entropy of the sentences is small (like in short sentences), someone with reasonable computational power could try to recover the sentence.
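Just to illustrate the idea and its weakness, a naive hash-based comparison could look like the sketch below; note that it only detects exact matches, and that short, low-entropy sentences could be brute-forced by hashing candidate sentences (this snippet is only an illustration, not the approach we’ll follow):

import hashlib

def sentence_hash(sentence):
    # Exchange only the digest of the (normalized) sentence
    return hashlib.sha256(sentence.strip().lower().encode("utf-8")).hexdigest()

alice_digest = sentence_hash("this project is about sentiment analysis")
bob_digest = sentence_hash("this project is about sentiment analysis for tweets")

# Only exact matches are detected; similar sentences yield unrelated digests
print(alice_digest == bob_digest)  # False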

Another approach to this problem (and this is the approach that we’ll be using) is to compare the sentences in the sentence embedding space. We just need to create sentence embeddings using a Machine Learning model (we’ll use InferSent later) and then compare the embeddings of the sentences. However, this approach raises another concern: what if Bob or Alice trains a Seq2Seq model that could go from the embeddings of the other party back to an approximate description of the project?

It isn’t unreasonable to think that one could recover an approximate description of a sentence given its embeddings. That’s why we’ll use two-party secure computation to compute the similarity of the embeddings, in such a way that Bob and Alice will compute the similarity of the embeddings without revealing their embeddings, keeping their project ideas safe.

The entire flow is described in the image below, where Bob and Alice share the same Machine Learning model; after that, they use this model to go from sentences to embeddings, followed by a secure computation of the similarity in the embedding space.

Diagram overview of the entire process.

Generating sentence embeddings with InferSent

Bi-LSTM max-pooling network. Source: Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. Alexis Conneau et al.

InferSent is an NLP technique for universal sentence representations developed by Facebook that uses supervised training to produce highly transferable representations.

They used a bi-directional LSTM with attention that consistently surpassed many unsupervised training methods such as SkipThought vectors. They also provide a PyTorch implementation that we’ll use to generate the sentence embeddings.

Note: even if you don’t have a GPU, you can get reasonable performance when encoding just a few sentences.

The first step to generate the sentence embeddings is to download and load the pre-trained InferSent model:

import numpy as np
import torch

# Trained model from: https://github.com/facebookresearch/InferSent
GLOVE_EMBS = '../dataset/GloVe/glove.840B.300d.txt'
INFERSENT_MODEL = 'infersent.allnli.pickle'

# Load the trained InferSent model
model = torch.load(INFERSENT_MODEL,
                   map_location=lambda storage, loc: storage)

model.set_glove_path(GLOVE_EMBS)
model.build_vocab_k_words(K=100000)

Now we need to define a similarity measure to compare two vectors, and for that goal I’ll use the cosine similarity, since it is very simple:

cos(\pmb{x}, \pmb{y}) = \frac{\pmb{x} \cdot \pmb{y}}{||\pmb{x}|| \cdot ||\pmb{y}||}

As you can see, if we have two unit vectors (vectors with norm 1), the two terms in the denominator of the equation will be 1 and we’ll be able to remove the entire denominator of the equation, leaving only:

cos(\hat{x}, \hat{y}) = \hat{x} \cdot \hat{y}

So, if we normalize our vectors to have a unit norm (that’s why the vectors are wearing hats in the equation above), we can make the computation of the cosine similarity become just a simple dot product. That will help us a lot in computing the similarity distance later, when we use a framework to do the secure computation of this dot product.
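A quick sanity check of this equivalence with NumPy (a small sketch using random vectors, just to illustrate the identity above):

import numpy as np

x = np.random.rand(4096)
y = np.random.rand(4096)

# Cosine similarity using the full formula
cos_full = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Normalize to unit vectors and use only the dot product
x_hat = x / np.linalg.norm(x)
y_hat = y / np.linalg.norm(y)
cos_dot = np.dot(x_hat, y_hat)

print(np.allclose(cos_full, cos_dot))  # True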

So, the next step is to define a function that will take some sentence text and forward it to the model to generate the embeddings and then normalize them to unit vectors:

# This function forwards the text into the model to
# get the embeddings. After that, it will normalize it
# to a unit vector.
def encode(model, text):
    embedding = model.encode([text])[0]
    embedding /= np.linalg.norm(embedding)
    return embedding

As you can see, this function is pretty simple: it feeds the text into the model and then divides the embedding vector by the embedding norm.

Now, for practical reasons, I’ll be using integer computation later for computing the similarity; however, the embeddings generated by InferSent are, of course, real values. For that reason, you’ll see in the code below that we create another function to scale the float values and remove the radix point, converting them to integers. There is also another important issue: the framework that we’ll be using later for secure computation doesn’t allow signed integers, so we also need to clip the embedding values between 0.0 and 1.0. This will of course cause some approximation errors; however, we can still get very good approximations after clipping and scaling with limited precision (I’m using 14 bits for scaling to avoid overflow issues later during dot product computations):

# This function will scale the embedding in order to
# remove the radix point.
def scale(embedding):
    SCALE = 1 << 14
    scale_embedding = np.clip(embedding, 0.0, 1.0) * SCALE
    return scale_embedding.astype(np.int32)

You could use floating point in the secure computation, and there are a lot of frameworks that support it; however, it is trickier to do, and for that reason I used integer arithmetic to simplify the tutorial. The function above is just a hack to keep it simple. It is easy to see that we can recover the embeddings later without too much loss of precision.
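If you want to convince yourself of that, a small sketch like the one below can be used to check the approximation error introduced by the clipping and scaling (the descale helper and the random stand-in vector are just illustrative, they are not part of the original code):

# Hypothetical helper, just to inspect the approximation error:
# it maps the scaled integers back to floats (clipped values stay clipped).
def descale(scaled_embedding):
    SCALE = 1 << 14
    return scaled_embedding.astype(np.float32) / SCALE

# A random vector used as a stand-in for a real embedding
fake_embedding = np.random.uniform(-0.05, 0.05, 4096).astype(np.float32)
recovered = descale(scale(fake_embedding))

# The recovered values match the clipped embedding up to the 14-bit step
print(np.max(np.abs(np.clip(fake_embedding, 0.0, 1.0) - recovered)))  # <= 1/2**14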

Now we just need to create some sentence samples that we’ll be using:

# Alice sentences
alice_sentences = [
    'my cat loves to walk over my keyboard',
    'I like to pet my cat',
]

# Bob sentences
bob_sentences = [
    'the cat is always walking over my keyboard',
]

And convert them to embeddings:

# Alice sentences
alice_sentence1 = encode(model, alice_sentences[0])
alice_sentence2 = encode(model, alice_sentences[1])

# Bob sentences
bob_sentence1 = encode(model, bob_sentences[0])

Since we now have the sentence embeddings, and each of them is also normalized, we can compute the cosine similarity just by performing a dot product between the vectors:

>>> np.dot(bob_sentence1, alice_sentence1)
0.8798542

>>> np.dot(bob_sentence1, alice_sentence2)
0.62976325

As we can see, Bob’s first sentence is more similar (~0.87) to Alice’s first sentence than to Alice’s second sentence (~0.62).

Since we have now the embeddings, we just need to convert them to scaled integers:

# Scale the Alice sentence embeddings
alice_sentence1_scaled = scale(alice_sentence1)
alice_sentence2_scaled = scale(alice_sentence2)

# Scale the Bob sentence embeddings
bob_sentence1_scaled = scale(bob_sentence1)

# This is the unit vector embedding for the sentence
>>> alice_sentence1
array([ 0.01698913, -0.0014404 ,  0.0010993 , ...,  0.00252409,
        0.00828147,  0.00466533], dtype=float32)

# This is the scaled vector as integers
>>> alice_sentence1_scaled
array([278,   0,  18, ...,  41, 135,  76], dtype=int32)

Now with these embeddings as scaled integers, we can proceed to the second part, where we’ll be doing the secure computation between two parties.

Two-party secure computation

In order to perform secure computation between the two parties (Alice and Bob), we’ll use the ABY framework. ABY implements many different secure computation schemes and allows you to describe your computation as a circuit like the picture below, which depicts Yao’s Millionaires’ Problem:

Yao’s Millionaires problem. Taken from ABY documentation (https://github.com/encryptogroup/ABY).

As you can see, we have two inputs entering a GT GATE (greater-than gate), which then outputs the result. This circuit has a bit length of 3 for each input and will compute whether Alice’s input is greater than (GT GATE) Bob’s input. The computing parties then secret-share their private data, and can use Arithmetic sharing, Boolean sharing or Yao sharing to securely evaluate these gates.
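Just to give an intuition of what “secret sharing” means here, below is a toy sketch of additive (arithmetic) secret sharing, where a private value is split into two random-looking shares that only reveal the value when combined; this is only for intuition, it is not the ABY protocol itself:

import random

P = 2**32  # toy modulus for 32-bit arithmetic sharing

def share(value):
    # Split a private value into two additive shares modulo P;
    # each share alone looks uniformly random.
    share0 = random.randrange(P)
    share1 = (value - share0) % P
    return share0, share1

def reconstruct(share0, share1):
    return (share0 + share1) % P

s0, s1 = share(226691917)
print(s0)                   # individually meaningless
print(s1)                   # individually meaningless
print(reconstruct(s0, s1))  # 226691917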

ABY is very easy to use because you just describe your inputs, shares and gates, and it does the rest for you, such as creating the socket communication channel, exchanging data when needed, etc. However, the implementation is written entirely in C++, and I’m not aware of any Python bindings for it (a great contribution opportunity).

Fortunately, there is an implemented example of ABY that can do the dot product calculation for us; the example is here. I won’t replicate the example here, but the only parts that we have to change are to read the embedding vectors that we created before instead of generating random vectors, and to increase the bit length to 32 bits.
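As a side note, on the Python side you could dump the scaled embeddings into simple text files that the modified C++ example would then read; the file names and the one-value-per-line format below are just an illustrative assumption, the ABY example itself expects you to adapt its input code:

# Dump the scaled integer embeddings, one value per line, so that the
# modified ABY inner product example can read them instead of generating
# random vectors (file names and format are an arbitrary choice here).
np.savetxt('alice_sentence1_scaled.txt', alice_sentence1_scaled, fmt='%d')
np.savetxt('alice_sentence2_scaled.txt', alice_sentence2_scaled, fmt='%d')
np.savetxt('bob_sentence1_scaled.txt', bob_sentence1_scaled, fmt='%d')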

After that, we just need to execute the application on two different machines (or by emulating locally like below):

# This will execute the server part, the -r 0 specifies the role (server)
# and the -n 4096 defines the dimension of the vector (InferSent generates
# 4096-dimensional embeddings).
~# ./innerproduct -r 0 -n 4096

# And the same on another process (or another machine, however for another
# machine execution you'll have to obviously specify the IP).
~# ./innerproduct -r 1 -n 4096

And we get the following results:

Inner Product of alice_sentence1 and bob_sentence1 = 226691917
Inner Product of alice_sentence2 and bob_sentence1 = 171746521

Even in the integer representation, you can see that the inner product of Alice’s first sentence and Bob’s sentence is higher, meaning that the similarity is also higher. But let’s now convert this value back to float:

>>> SCALE = 1 << 14

# This is the dot product we should get
>>> np.dot(alice_sentence1, bob_sentence1)
0.8798542

# This is the inner product we got on secure computation
>>> 226691917 / SCALE**2.0
0.8444931

# This is the dot product we should get
>>> np.dot(alice_sentence2, bob_sentence1)
0.6297632

# This is the inner product we got on secure computation
>>> 171746521 / SCALE**2.0
0.6398056

As you can see, we got very good approximations, even in the presence of low-precision math and the unsigned integer requirement. Of course, in real life you won’t have both values and vectors, because they’re supposed to be hidden, but the changes to accommodate that are trivial: you just need to adjust the ABY code to load only the vector of the party that is executing it, and to use the correct IP addresses/ports of the two parties.

I hope you liked it!

- Christian S. Perone

Cite this article as: Christian S. Perone, “Privacy-preserving sentence semantic similarity using InferSent embeddings and secure two-party computation,” in Terra Incognita, 22/01/2018, //www.cpetem.com/2018/01/privacy-preserving-infersent/

Nanopipe: connecting the modern babel


For more information, see the official documentation site or the official Github repository.


Hello everyone, I just released the Nanopipe project. Nanopipe is a library that allows you to connect different message queue systems (but not limited to them) together. Nanopipe was built to avoid the glue code between different types of communication protocols/channels that is very common nowadays. An example of this: you have an application that is listening for messages on an AMQP broker (i.e. RabbitMQ), but you also have a Redis pub/sub source of messages and also an MQTT source from a weird IoT device you may have. Using Nanopipe, you can connect both MQTT and Redis to RabbitMQ without writing any glue code for that. You can also build any kind of complex connection scheme using Nanopipe.


Simple and effective coin segmentation using Python and OpenCV

The new generation of OpenCV bindings for Python is getting better and better with the hard work of the community. The new bindings, called “cv2”, are the replacement of the old “cv” bindings; in this new generation of bindings, almost all operations now return native Python objects or Numpy objects, which is pretty nice since it simplifies a lot and also improves performance in some areas, due to the fact that you can now also use the optimized operations from Numpy. It also enables integration with other frameworks such as scikit-image, which also uses Numpy arrays for image representation.

In this example, I’ll show how to segment coins present in images or even in a real-time video capture with a simple approach using thresholding, morphological operators and contour approximation. This approach is a lot simpler than the approach using Otsu’s thresholding and Watershed segmentation here in the OpenCV Python tutorials, which I strongly recommend you read due to its robustness. Unfortunately, the approach using Otsu’s thresholding is highly dependent on an illumination normalization. One could extract small patches of the image to implement something similar to an adaptive Otsu’s binarization (like the one implemented in Leptonica, the framework used by Tesseract OCR) to overcome this problem, but let’s see another approach. For reference, see below the output of Otsu’s thresholding applied to an image I captured with my webcam under non-normalized illumination:

Original image vs. Otsu binarization
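For the record, the Otsu binarization shown above can be obtained with something like the sketch below, using the same grayscale image we’ll capture later; this call is here only for comparison, it isn’t part of the pipeline we’ll build:

# Otsu's thresholding for comparison: the threshold is estimated globally,
# which is why non-uniform illumination hurts the result.
blur = cv2.GaussianBlur(gray, (5, 5), 0)
ret_otsu, otsu = cv2.threshold(blur, 0, 255,
                               cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)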

1. Setting the Video Capture configuration

The first step to create a real-time video capture using the Python bindings is to instantiate the VideoCapture class, set the properties, and then start reading frames from the camera:

import numpy as np
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 720)

In newer versions (not released yet), the constants for CV_CAP_PROP_FRAME_WIDTH are now in the cv2 module; for now, we just need to use the cv2.cv module.

2. Reading image frames

The next step is to use the VideoCapture object to read the frames and then convert them to grayscale (we are not going to use color information to segment the coins):

while True:
    ret, frame = cap.read()
    roi = frame[0:500, 0:500]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

Note that here I’m extracting a small portion of the complete image (where the coins are located), but you don’t have to do that if your image contains only coins. At this moment, we have the following grayscale image:

The original gray image captured.

3. Applying adaptive thresholding

In this step we will apply a Gaussian blur kernel to remove the noise that we have in the image, and then apply an adaptive thresholding:

gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
thresh = cv2.adaptiveThreshold(gray_blur, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 1)

See the effect of the Gaussian Kernel in the image:

The original gray image and the image after applying the Gaussian kernel.

And now the effect of the Adaptive Thresholding with the blurry image:

The effect of the adaptive thresholding on the blurred image.

Note that at this point we already have the coins segmented, except for the small noisy regions inside the centers of the coins and also in some places around them.

4. Morphology

Morphological Operators are used to dilate, erode and perform other operations on the pixels of the image. Here, due to the fact that the camera can sometimes present some artifacts, we will use the morphological operation of Closing to make sure that the borders of the coins are always closed, otherwise we might end up with a coin that looks like a semi-circle or something like that. To understand the effect of the Closing operation (which is an erosion of the pixels already dilated), see the image below:

Morphological Closing

You can see that after some iterations of the operation, the circles start to become filled. To use the Closing operation, we’ll use the morphologyEx function from the OpenCV Python bindings:

kernel = np.ones((3, 3), np.uint8)
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=4)

See now the effect of the Closing operation on our coins:

Closing Operation in the coins

The operation of morphological operators is quite simple; the main principle is the application of a structuring element (in our case, a 3×3 block element) onto the pixels of the image. If you want to understand it better, see this animation explaining the erosion operation.
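To get a feeling for it, here is a tiny sketch applying erosion and closing with the same 3×3 structuring element on a small synthetic binary image; it is only an illustration of the operators, not part of the coin pipeline:

import numpy as np
import cv2

# A small white square with a "hole" in the middle, similar to a noisy coin
demo = np.zeros((9, 9), np.uint8)
demo[2:7, 2:7] = 255
demo[4, 4] = 0  # the noise inside

kernel = np.ones((3, 3), np.uint8)

eroded = cv2.erode(demo, kernel, iterations=1)    # shrinks the white region
closed = cv2.morphologyEx(demo, cv2.MORPH_CLOSE,  # dilation followed by erosion
                          kernel, iterations=1)

print(closed[4, 4])  # 255: the hole was filled by the closing operation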

5. Contour detection and filtering

After applying the morphological operators, all we have to do is find the contour of each coin and then filter out the contours having an area smaller or larger than the area of a coin. You can think of the procedure of finding contours in OpenCV as the operation of finding the connected components and their boundaries. To do that, we’ll use the OpenCV findContours function.

cont_img = closing.copy()
contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

Note that we make a copy of the closing image because the findContours function will change the image passed as its first argument; we also use the RETR_EXTERNAL flag, which means that the contours returned are only the extreme outer contours. The CHAIN_APPROX_SIMPLE parameter will also return a compact representation of the contour; for more information see here.

After finding the contours, we need to iterate into each one and check the area of them to filter the contours containing an area greater or smaller than the area of a coin. We also need to fit an ellipse to the contour found. We could have done this using the minimum enclosing circle, but since my camera isn’t perfectly above the coins, the coins appear with a small inclination describing an ellipse.

for cnt in contours:
    area = cv2.contourArea(cnt)
    if area < 2000 or area > 4000:
        continue
    if len(cnt) < 5:
        continue
    ellipse = cv2.fitEllipse(cnt)
    cv2.ellipse(roi, ellipse, (0,255,0), 2)

Note that in the code above we iterate over each contour, filtering out contours with an area smaller than 2000 or greater than 4000 (these are hardcoded values I found for Brazilian coins at this distance from the camera); later we also check the number of points of the contour, because the fitEllipse function needs a number of points greater than or equal to 5. Finally, we use the ellipse function to draw the ellipse in green over the original image.

To show the final image with the contours we just use the imshow function to show a new window with the image:

cv2.imshow('Final Result', roi)

And finally, this is the result at the end of all the steps described above:

The final image with the detected contours.

The complete source code:

import numpy as np
import cv2

def run_main():
    cap = cv2.VideoCapture(0)
    cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 720)

    while(True):
        ret, frame = cap.read()
        roi = frame[0:500, 0:500]
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

        gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
        thresh = cv2.adaptiveThreshold(gray_blur, 255,
                                       cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, 11, 1)

        kernel = np.ones((3, 3), np.uint8)
        closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE,
                                   kernel, iterations=4)

        cont_img = closing.copy()
        contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)

        for cnt in contours:
            area = cv2.contourArea(cnt)
            if area < 2000 or area > 4000:
                continue
            if len(cnt) < 5:
                continue
            ellipse = cv2.fitEllipse(cnt)
            cv2.ellipse(roi, ellipse, (0,255,0), 2)

        cv2.imshow("Morphological Closing", closing)
        cv2.imshow("Adaptive Thresholding", thresh)
        cv2.imshow('Contours', roi)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_main()

Accessing HP Cloud OpenStack Nova using Python and Requests

So, my request for access to the free private beta of the new HP Cloud services was kindly accepted by the HP Cloud team, and today I finally got some time to play with the OpenStack API at HP Cloud. I’ll start with the first impressions I had of the service:

The management user interface is very user-friendly; the design looks much like Twitter Bootstrap. See the screenshot below of the “Compute” page from the “Manage” section:

As you can see, they have a set of 4 Ubuntu images and a CentOS one; I think that since they are still in beta, we’ll soon have more default images to use.

Here is a screenshot of the instance size set:

Since they are using OpenStack, I really think that they should have imported the vocabulary of OpenStack into the user interface, and instead of calling it “Size”, it would be more sensible to use “Flavour“.

The user interface still doesn’t have many features; something that I would really like to have is a “Stop” action or something like that for the instances, since only the “Terminate” function is present on the Manage interface, but those are details that they should still be working on, since they’re only in beta.

Another important piece of information is that access to the instances over SSH is done using the generated RSA keys that they provide to you.

Let’s dig into the OpenStack API now.

OpenStack API

To access the OpenStack API you’ll need the credentials for the authentication. The HP Cloud services provide these keys on the Manage interface for each zone/service you have; see the screenshot below (with the keys anonymized, of course):

Now, OpenStack authentication can be done using different schemes; the scheme that I know HP supports is token authentication. I know there are a lot of clients already supporting the OpenStack API (some without documentation, some with strange API designs, etc.), but the aim of this post is to show how easy it would be to create a simple interface to access the OpenStack API using Python and Requests (HTTP for Humans!).

Let’s start by defining our authentication scheme by subclassing the Requests AuthBase class:

[enlighter lang="python"]
class OpenStackAuth(AuthBase):
    def __init__(self, auth_user, auth_key):
        self.auth_key = auth_key
        self.auth_user = auth_user

    def __call__(self, r):
        r.headers['X-Auth-User'] = self.auth_user
        r.headers['X-Auth-Key'] = self.auth_key
        return r
[/enlighter]

As you can see, we set the X-Auth-User and X-Auth-Key headers of the request with those parameters. These parameters are, respectively, your Account ID and Access Key that we mentioned earlier. Now, all you have to do to make the request itself is to use this authentication scheme, which is very easy with Requests:

[enlighter lang="python"]
ENDPOINT_URL = 'https://az-1.region-a.geo-1.compute.hpcloudsvc.com/v1.1/'
ACCESS_KEY = 'Your Access Key'
ACCOUNT_ID = 'Your Account ID'
response = requests.get(ENDPOINT_URL, auth=OpenStackAuth(ACCOUNT_ID, ACCESS_KEY))
[/enlighter]

And that’s it: the authentication is done with only a few lines of code. This is how the request will be sent to the HP Cloud service server:

The request is sent to the HP Cloud endpoint URL (https://az-1.region-a.geo-1.compute.hpcloudsvc.com/v1.1/). Now let’s see how the server answered this authentication request:

You can show this authentication response by printing the attributes of the Requests response object. You can see that the server answered our request with two important header items: X-Server-Management-Url and X-Auth-Token. The management URL is now our new endpoint: it is the URL we should use to make further requests to the HP Cloud services, and the X-Auth-Token is the authentication token that the server generated based on our credentials. These tokens are usually valid for 24 hours, although I haven’t tested that.
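For instance, something like the sketch below can be used to inspect those two headers (Requests exposes the headers as a case-insensitive dictionary, so the lower-cased names work as well):

[enlighter lang="python"]
# Inspect the authentication response headers
print response.status_code
print response.headers['x-server-management-url']
print response.headers['x-auth-token']
[/enlighter]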

What we need to do now is to subclass the Requests AuthBase class again, but this time defining only the authentication token that we need to use in every new request we make to the management URL:

[enlighter lang="python"]
class OpenStackAuthToken(AuthBase):
    def __init__(self, request):
        self.auth_token = request.headers['x-auth-token']

    def __call__(self, r):
        r.headers['X-Auth-Token'] = self.auth_token
        return r
[/enlighter]

Note that the OpenStackAuthToken class now receives a response object as a parameter, copies its X-Auth-Token and sets it on the request.

Let’s now consume a service from the OpenStack API v1.1. I’m going to call the List Servers API function, parse the results using JSON, and then show the results on the screen:

[enlighter lang="python"]
# Get the management URL from the response header
mgmt_url = response.headers['x-server-management-url']

# Create a new request to the management URL using the '/servers' path
# and the OpenStackAuthToken scheme we created
r_server = requests.get(mgmt_url + '/servers', auth=OpenStackAuthToken(response))

# Parse the response and show it to the screen
json_parse = json.loads(r_server.text)
print json.dumps(json_parse, indent=4)
[/enlighter]

And this is what we get in response to this request:

[enlighter]
{
    "servers": [
        {
            "id": 22378,
            "uuid": "e2964d51-fe98-48f3-9428-f3083aa0318e",
            "links": [
                {
                    "href": "https://az-1.region-a.geo-1.compute.hpcloudsvc.com/v1.1/20817201684751/servers/22378",
                    "rel": "self"
                },
                {
                    "href": "https://az-1.region-a.geo-1.compute.hpcloudsvc.com/20817201684751/servers/22378",
                    "rel": "bookmark"
                }
            ],
            "name": "Server 22378"
        },
        {
            "id": 11921,
            "uuid": "312ff473-3d5d-433e-b7ee-e46e4efa0e5e",
            "links": [
                {
                    "href": "https://az-1.region-a.geo-1.compute.hpcloudsvc.com/v1.1/20817201684751/servers/11921",
                    "rel": "self"
                },
                {
                    "href": "https://az-1.region-a.geo-1.compute.hpcloudsvc.com/20817201684751/servers/11921",
                    "rel": "bookmark"
                }
            ],
            "name": "Server 11921"
        }
    ]
}
[/enlighter]

And that is it: now you know how to use Requests and Python to consume the OpenStack API. If you wish to read more about the API and how it works, you can read the documentation here.

- Christian S. Perone

C++11 user-defined literals and some constructions

I was taking a look at proposal N2765 (user-defined literals), which is already implemented in the development snapshot of GCC 4.7, and I was curious to see how user-defined literals could be used to create some interesting, and sometimes strange, constructions.

Introduction to user-defined literals

C++03 has some literals, like the “f” in “12.2f” that converts the double value to float. The problem is that these literals aren’t very flexible, since they are pretty much fixed: you can’t change them or create new ones. To overcome this situation, C++11 introduced the concept of “user-defined literals”, which gives the user the ability to create new custom literal modifiers. The new user-defined literals can create either built-in types (e.g. int) or user-defined types (e.g. classes), and the fact that they can return objects instead of only primitives can make them very useful.

该new syntax for the user-defined literals is:

[enlighter lang="C++"]
OutputType operator "" _suffix(const char *literal_string);
[/enlighter]

…in the case of a string. The OutputType is anything you want (an object or a primitive), and “_suffix” is the name of the literal modifier. It isn’t required to use an underscore in front of it, but if you don’t, you’ll get warnings telling you that suffixes not preceded by an underscore are reserved for future standardization.

Examples

KMH to MPH converter

[ccb lang="c++" escaped="true" line="1000"]
// stupid converter class
class Converter
{
public:
    Converter(double kmph) : m_kmph(kmph) {};
    ~Converter() {};

    double to_mph(void)
    { return m_kmph / 1.609344; }

private:
    double m_kmph;
};

// user-defined literal
Converter operator "" kmph(long double kmph)
{ return Converter(kmph); }

int main(void)
{
    std::cout << "Converter: " << (80.0kmph).to_mph() << std::endl;
    // Note that I used the parenthesis here to be
    // able to call the "to_mph" method
    return 0;
}
[/ccb]

Note that the numeric literal type should be long double (for floating-point literals) or unsigned long long (for integral literals). There is no signed type, because a signed literal is parsed as an expression with the sign as a unary prefix and the unsigned number part.

std::string literal

[ccb lang="c++" escaped="true" line="1000"]
std::string operator "" s (const char* p, size_t n)
{ return std::string(p, n); }

int main(void)
{
    std::cout << "convert me to a string"s.length() << std::endl;
    // Note that here you don't need the parenthesis,
    // the C string is automatically converted to std::string
    return 0;
}
[/ccb]

system() call

[ccb lang="c++" escaped="true" line="1000"]
int operator "" ex(const char *cmd, size_t num_chars)
{ return system(cmd); }

int main(void)
{
    "ls -lah"ex;
    return 0;
}
[/ccb]

Alias and std::map

[ccb lang="c++" escaped="true" line="1000"]
typedef std::map<std::string, int> MyMap;
MyMap create_map()
{
    MyMap m;
    m["lol"] = 7;
    return m;
}
auto m = create_map();

int& operator "" m(const char *key, size_t length)
{ return m[key]; }

int main(void)
{
    std::cout << "lol"m << std::endl; // 7
    "lol"m = 2;
    std::cout << "lol"m << std::endl; // 2
    return 0;
}
[/ccb]

References

Wikipedia :: C++11 (User-defined literals)

Proposal N2765

Pyevolve on Sony PSP ! (genetic algorithms on PSP)

The Sony PSP can be more than just fun, and sometimes much more interesting. Few people know about the potential of the Stackless Python 2.5.2 port for the PSP. This great Python port was done by Carlos E., and you can find more information about the progress of the port on his blog.

Well, I’ve tested the Pyevolve GA framework on Stackless Python for the PSP and, to my surprise, it worked without changing a single line of code in the framework, due to the fact that Pyevolve is written in pure Python (except for platform-specific features like the Interactive Mode, but that is automatically disabled on unsupported platforms).

So now, we have Genetic Algorithms on the PSP!

Here are some screenshots (click to enlarge):


The GA running on the PSP in the screenshot is the minimization of the Sphere function.
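Just to give an idea, the script in that screenshot is essentially a standard Pyevolve minimization setup like the sketch below (the exact parameters used on the PSP may differ; this is only an approximation of it):

from pyevolve import G1DList, GSimpleGA, Initializators, Mutators, Consts

# Sphere function: f(x) = sum(x_i^2), global minimum at x = 0
def sphere(chromosome):
    return sum(x**2 for x in chromosome)

genome = G1DList.G1DList(20)
genome.setParams(rangemin=-5.12, rangemax=5.12)
genome.initializator.set(Initializators.G1DListInitializatorReal)
genome.mutator.set(Mutators.G1DListMutatorRealGaussian)
genome.evaluator.set(sphere)

ga = GSimpleGA.GSimpleGA(genome)
ga.setMinimax(Consts.minimaxType["minimize"])
ga.evolve(freq_stats=10)
print ga.bestIndividual()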

Here are the steps to install Pyevolve on the PSP:

PSP Stackless Python Installation

1) First, create a directory named “python” on your PSP under “/PSP/GAME”, and the directory structure “/python/site-packages/” at the root of your Memory Stick (this last directory will be used later to hold Pyevolve);
2) Copy the EBOOT.PBP and python.zip files to the created “python” directory;

Pyevolve installation

3) Download the Pyevolve source and copy the directory named “pyevolve” into the created directory “/python/site-packages/”; the final directory structure will be: “/python/site-packages/pyevolve”.

Ready! Now you can import the Pyevolve modules inside scripts on your PSP; of course you can’t use the graphical plotting tool or some DB Adapters of Pyevolve, but the GA core works very well.

Below is some basic usage of the PSP Stackless Python port. I’ve used the psp2d module to show the information on the screen. When I get more time, I’ll port the visualization of the TSP problem to use psp2d; it will be nice to see the Travelling Salesman Problem running with real-time visualization on the PSP =)

Related Links

PSP Stackless Python project at Google Code