Paper notes: Non-Profiled Deep Learning-based Side-Channel Attacks with Sensitivity Analysis (DDLA)



Paper: Non-Profiled Deep Learning-based Side-Channel Attacks with Sensitivity Analysis (DDLA)

Benjamin Timon
eShard, Singapore

Basic concepts

Non-profiled attacks:
Assume the attacker can only collect traces from the target device. Examples:
Differential Power Analysis (DPA), Correlation Power Analysis (CPA), or Mutual Information Analysis (MIA).

Profiled attacks:
Assume the attacker owns a programmable device identical to the target device. Examples:
Template Attacks, Stochastic attacks, or Machine-Learning-based attacks.
1. Profiling phase: use the collected side-channel traces to profile the leakage for every possible key value k ∈ K.
2. Attack phase: classify the target's side-channel traces against the leakage profiles to recover the key value k.

Points of Interest (POI): the leakage points within a side-channel trace; classification is performed on these POIs.

Deep Learning (DL): MLP, CNN

De-synchronized: traces that are misaligned in time.

Contribution

  • Proposes Differential Deep Learning Analysis (DDLA).
  • Focuses on applying Sensitivity Analysis in a Non-Profiled context (a sketch of the idea follows below).
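
In the paper, the sensitivity is derived from the gradients of the loss with respect to the input trace: time samples that accumulate large gradient magnitudes are the POIs. A minimal PyTorch sketch of that idea, assuming a trained `model`, a batch of `traces`, their `labels`, and a `loss_fn` (all placeholders, not the paper's code):

```python
import torch

def sensitivity(model, traces, labels, loss_fn):
    # Accumulate |d(loss)/d(input)| over the batch; large values mark POIs.
    traces = traces.clone().detach().requires_grad_(True)
    loss = loss_fn(model(traces), labels)
    loss.backward()
    return traces.grad.abs().sum(dim=0)  # one score per time sample
```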

DDLA

Attack algorithm (target: AES)
For each key guess k, hypothetical labels are computed from the known plaintexts (e.g., one bit of Sbox(d ⊕ k)), a fresh network is trained on the (trace, label) pairs, and the training metrics are recorded. Only the correct guess yields a consistent labeling, so it stands out with the best learning behavior (highest accuracy / lowest loss).

The DDLA algorithm:
[Figure: DDLA algorithm from the paper]
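
A minimal sketch of the DDLA loop, assuming `traces` and `plaintexts` as NumPy arrays, an AES `sbox` lookup table, and a hypothetical helper `train_network` that trains a fresh model and returns its final training accuracy:

```python
import numpy as np

def ddla(traces, plaintexts, sbox, train_network, epochs=50):
    # For each key guess, relabel the traces and train a fresh network;
    # the guess whose labels let the network learn best is the candidate.
    scores = np.zeros(256)
    for k in range(256):
        # LSB labeling of the hypothetical Sbox output (MSB also works).
        labels = (sbox[plaintexts ^ k] & 1).astype(np.int64)
        scores[k] = train_network(traces, labels, epochs=epochs)
    return int(np.argmax(scores))
```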

Experimental hardware & frameworks

  • PC:64 GB of RAM, a GeForce GTX 1080Ti GPU & two Intel Xeon E5-2620 v4 @2.1GHz CPUs.
  • ChipWhisperer-Lite (CW): Atmel XMEGA128 chip;
    ASCAD (collected from an 8-bit ATMega8515 board).
  • MLPexp & CNNexp.

Experimental parameters
1 Training parameters and details

1.1 Loss function
	Mean Squared Error (MSE) loss function for all experiments.
1.2 Accuracy
	The accuracy was computed as the proportion of samples correctly classified.
1.3 Batch size
	A batch size of 1000 was used for all experiments.
1.4 Learning rate
	For all experiments we used a learning rate of 0.001.
1.5 Optimizer
	We used the Adam optimizer with default configuration (β1 = 0.9, β2 = 0.999, ε = 1e-08, no learning rate decay).
1.6 Input normalization
	We normalize the input traces by removing their mean and scaling them to the range [-1, 1].
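
A one-line NumPy version of that normalization (a sketch; whether the mean is removed per time sample or globally is an assumption here):

```python
import numpy as np

def normalize(traces):
    # Remove the mean per time sample, then scale into [-1, 1].
    centered = traces - traces.mean(axis=0)
    return centered / np.abs(centered).max()
```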
1.7 Labeling
	• For all simulations (first- and high-order), we used the MSB labeling.
	• For attacks on the unprotected CW and on ASCAD, we used the LSB labeling.
	• For the attack on the CW with 2 masks, we used the MSB labeling. (Bit extraction is sketched below.)
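
MSB and LSB labeling simply extract one bit of the hypothetical intermediate byte, e.g.:

```python
def msb_label(v):
    return (v >> 7) & 1  # most significant bit of a byte

def lsb_label(v):
    return v & 1         # least significant bit of a byte
```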
1.8 Deep Learning Framework
	We used PyTorch 0.4.1.


2 Network architectures

2.1 MLPsim
	• Dense hidden layer of 70 neurons with relu activation
	• Dense hidden layer of 50 neurons with relu activation
	• Dense output layer of 2 neurons with softmax activation
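
A PyTorch sketch of MLPsim under these specs (`n_samples`, the trace length, is a placeholder; MLPexp below is identical except for hidden widths of 20 and 10):

```python
import torch.nn as nn

def make_mlp_sim(n_samples):
    # n_samples: number of time samples per trace (dataset-dependent).
    return nn.Sequential(
        nn.Linear(n_samples, 70), nn.ReLU(),
        nn.Linear(70, 50), nn.ReLU(),
        nn.Linear(50, 2), nn.Softmax(dim=1),
    )
```

The 2-neuron softmax output pairs with the MSE loss above by one-hot encoding the binary labels.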
2.2 CNNsim
	• Convolution layer with 8 filters of size 8 (stride of 1, no padding) with relu activation.
	• Max pooling layer with pooling size of 2.
	• Convolution layer with 4 filters of size 4 (stride of 1, no padding) with relu activation.
	• Max pooling layer with pooling size of 2.
	• Dense output layer of 2 neurons with softmax activation

2.3 MLPexp
	• Dense hidden layer of 20 neurons with relu activation
	• Dense hidden layer of 10 neurons with relu activation
	• Dense output layer of 2 neurons with softmax activation
2.4 CNNexp
	• Convolution layer with 4 filters of size 32 (stride of 1, no padding) with relu activation.
	• Average pooling layer with pooling size of 2.
	• Batch normalization layer
	• Convolution layer with 4 filters of size 16 (stride of 1, no padding) with relu activation.
	• Average pooling layer with pooling size of 4.
	• Batch normalization layer
	• Dense output layer of 2 neurons with softmax activation
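
A PyTorch sketch of CNNexp (CNNsim is analogous: 8 and 4 filters of sizes 8 and 4, max pooling, no batch norm). The flattened size in front of the dense layer depends on the trace length, so it is inferred with a dummy forward pass; `n_samples` is a placeholder:

```python
import torch
import torch.nn as nn

class CNNexp(nn.Module):
    def __init__(self, n_samples):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=32), nn.ReLU(),
            nn.AvgPool1d(2), nn.BatchNorm1d(4),
            nn.Conv1d(4, 4, kernel_size=16), nn.ReLU(),
            nn.AvgPool1d(4), nn.BatchNorm1d(4),
        )
        # Infer the flattened size by pushing a dummy trace through once.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, n_samples)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(n_flat, 2), nn.Softmax(dim=1))

    def forward(self, x):  # x: (batch, 1, n_samples)
        return self.classifier(self.features(x))
```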

Experiments & results

CNN-DDLA against de-synchronized traces

Non-profiled; N = 3,000 de-synchronized side-channel traces; CNNexp.
(The paper also ran MLPexp and CPA on these traces; neither recovered the key.)

High-Order DDLA simulations (the masking leakage locations are unknown): MLPsim

N = 5,000 traces, 1 mask:
• n = 50 samples per trace.
• Sbox leakage at t = 25: Sbox(di ⊕ k*) ⊕ m1 + N(0, 1), where di and m1 are random and k* is a fixed key byte.
• Mask leakage at t = 5: m1 + N(0, 1).
• All other points on the traces are random values in [0, 255].

N = 10,000 traces, 2 masks:
• Sbox leakage at t = 45: Sbox(di ⊕ k*) ⊕ m1 ⊕ m2 + N(0, 1). (A generation sketch for these simulations follows below.)
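
A NumPy sketch generating the first-order (1-mask) simulated traces described above; the second-order case adds m2 to the Sbox leakage the same way. `sbox` is assumed to be a 256-entry NumPy lookup table:

```python
import numpy as np

def simulate_traces(sbox, key_byte, N=5000, n=50):
    d = np.random.randint(0, 256, N)    # random plaintext bytes
    m1 = np.random.randint(0, 256, N)   # random mask bytes
    # All points random in [0, 255], then overwrite the two leakage points.
    traces = np.random.randint(0, 256, (N, n)).astype(np.float64)
    traces[:, 25] = (sbox[d ^ key_byte] ^ m1) + np.random.randn(N)  # Sbox leak
    traces[:, 5] = m1 + np.random.randn(N)                          # mask leak
    return traces, d
```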

Second order DDLA on ASCAD

16-byte fixed key; plaintexts & masks are random; ASCAD profiling set; MLPexp on the first 20,000 traces; ne = 50 epochs per guess.
[Figure: second-order DDLA results on ASCAD]
Third order DDLA on ChipWhisperer

N = 50,000 traces; n = 150 samples per trace; two masks m1, m2; masked Sbox: Sbox(d ⊕ k*) ⊕ m1 ⊕ m2.
MLPexp network; ne = 100 epochs per guess.
The key is revealed after around 20 epochs per guess, without any leakage-combination pre-processing and without any assumption about the masking method.

[Figure: third-order DDLA results on ChipWhisperer]
These attacks train a network for every key guess and only inspect the metric matrix at the end;
checking the matrix every few epochs instead, and stopping as soon as the key can be recovered, reduces the complexity (a sketch follows below).
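
A hypothetical sketch of that early-stopping idea; `train_one_epoch(k)` (returns the training accuracy for guess k after one more epoch) and `key_is_clear(accs)` (True once one guess clearly dominates) are placeholder callbacks:

```python
def ddla_early_stop(train_one_epoch, key_is_clear,
                    max_epochs=100, check_every=10):
    accs = [0.0] * 256
    for epoch in range(1, max_epochs + 1):
        for k in range(256):
            accs[k] = train_one_epoch(k)
        # Check the metric matrix periodically instead of only at the end.
        if epoch % check_every == 0 and key_is_clear(accs):
            break  # key already recoverable: skip the remaining epochs
    return max(range(256), key=accs.__getitem__)
```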
