MRCTF

MISC:SPY_DOG

The challenge logic:

We are given a picture of a dog and must process it so that a known model classifies it as a cat — a white-box adversarial attack on a neural network.

https://blog.csdn.net/u010420283/article/details/83685140?depth_1-utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromBaidu-5&utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromBaidu-5

There are two additional requirements: the forged image must score above 0.99 for the cat class, and every pixel must differ from the original by less than 10.
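Both constraints can be verified locally before submitting. A minimal sketch, assuming the original and forged images are loaded as uint8 arrays of the same shape (`satisfies_constraints` and its parameter names are my own, not part of the challenge):

```python
import numpy as np

def satisfies_constraints(original, hacked, cat_score, max_delta=10, min_score=0.99):
    """Check the challenge constraints: cat score above 0.99 and
    every pixel strictly within max_delta of the original image."""
    # Cast to a signed type so the subtraction cannot wrap around
    delta = np.abs(original.astype(np.int16) - hacked.astype(np.int16))
    return cat_score > min_score and int(delta.max()) < max_delta

# Example with synthetic data: a +9 shift on every pixel stays within the budget
orig = np.zeros((128, 128, 3), dtype=np.uint8)
print(satisfies_constraints(orig, orig + 9, cat_score=0.995))  # True
```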

The approach described here generates the adversarial sample iteratively via gradient descent, using the Keras library.

Inspecting the model via its visualized summary:

Input layer (Input): a 128×128×3 image matrix.

Convolution layer (Conv1): 32 filters, producing 126×126 feature maps.

Pooling layer (Pool1): max pooling with a 2×2 window.

Convolution layer (Conv2): 32 filters, producing 63×63 feature maps.

Pooling layer (Pool2): max pooling with a 2×2 window.

Convolution layer (Conv3): 64 filters, producing 30×30 feature maps.

Pooling layer (Pool3): max pooling with a 2×2 window.

Flatten layer: flattens the feature maps to one dimension.

FC layer (dense): maps the flattened 25088 features down to 512.

Output/FC layer (dense_1) (output): maps the 512 features down to 2, i.e. the two class scores.

So this is a four-layer convolutional neural network. What matters to us are the input and output layers: the input layer takes a 128×128×3 (height, width, color channels) image, and the output layer has 2 neurons (i.e. cat or not):
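As a quick sanity check of the shape arithmetic above (my own helper functions, not part of the challenge code): the 128 → 126 step is exactly what a 3×3 "valid" convolution produces, and each 2×2 pooling halves the size.

```python
def conv_out(n, k):
    """Output size of a 'valid' (no padding) convolution with a k x k kernel."""
    return n - k + 1

def pool_out(n):
    """Output size of 2x2 max pooling with stride 2 (floor division)."""
    return n // 2

# 128x128 input -> 126x126 after a 3x3 conv -> 63x63 after 2x2 pooling,
# matching the Conv1/Pool1 sizes in the model summary
print(conv_out(128, 3))  # 126
print(pool_out(126))     # 63
```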

img

img

from operator import mod
import numpy as np

from keras.applications import inception_v3
from keras import backend as K
from keras.utils import plot_model
from PIL import Image
import hashlib,random
import struct
import cv2
from keras.models import load_model
from keras.utils import image_utils
import matplotlib.pyplot as plt

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

def checkMask(model, img):
    predict = model.predict(img)
    return predict[0][1]

print("Loading......")
model = load_model("./simplenn.model")
print("You are a spy from the country of dog, try your best to get flag from cats.")
#plot_model(model, to_file='./model.png', show_shapes=True)
layer_1 = K.function([model.layers[0].input], [model.layers[-1].output]) # the first argument, model.layers[0].input, is the network input; change model.layers[-1] to whichever layer's output you want
# img = image_utils.load_img("./another.bmp", target_size=(128, 128))
# img_tensor = image_utils.img_to_array(img)
# img_tensor /= 255.
# img_tensor -= 0.5
# img_tensor *= 2.
# img_tensor = np.expand_dims(img_tensor, axis=0)
img = cv2.imread("./another.bmp")
img = cv2.resize(img, (128, 128))
img_tensor = np.expand_dims(img, axis=0)
img_tensor = img_tensor.astype(np.float32)
img_tensor /= 255.
# img = cv2.imread("./another.bmp")
# img = cv2.resize(img, (128, 128))
# img_tensor = np.expand_dims(img, axis=0)
# img_tensor = img_tensor.astype(np.float32)
# img_tensor /= 255.
f1 = layer_1([img_tensor])[0] # run the image through the network
# For conv layers the output shape is (samples, height, width, channels)
print("dog:", f1[:, 0])
print("cat:", f1[:, 1])
# Feature-map visualization for the first conv layer:
#for _ in range(32):
#    show_img = f1[:, :, :, _]  # (samples, height, width, channels)
#    print('show_img', show_img.shape)  # show_img (1, 22, 22)
#    show_img.shape = [126, 126]  # matches the feature-map size
#    plt.subplot(4, 8, _ + 1)
#    plt.imshow(show_img, cmap='gray')
#    plt.axis('off')
#plt.show()
# Per-pixel change budget: 0.039 * 255 ~= 9.9, which keeps the pixel
# difference below the challenge's limit of 10 on the 0-255 scale
max_change_above = img_tensor + 0.039
max_change_below = img_tensor - 0.039
# Load pre-trained image recognition model
#model = inception_v3.InceptionV3()

# Grab a reference to the first and last layer of the neural net
# print("model.layers:")
# print(model.layers)
model_input_layer = model.layers[0].input
model_output_layer = model.layers[-1].output
# print("model.layers[-1].output:")
# print(type(model_output_layer))
# sess = tf.compat.v1.InteractiveSession()
# sess.run(tf.compat.v1.global_variables_initializer())
# print(type(model_output_layer[0, 0].eval()))
# print(model_output_layer[0, 0].eval())
# print(model_output_layer[0, 1].eval())
cost_function = model_output_layer[0, 1]

gradient_function = K.gradients(cost_function, model_input_layer)[0]
#gradient_function= K.GradientTape(cost_function, model_input_layer)[0]


# Choose an ImageNet object to fake
# The list of classes is available here: https://gist.github.com/ageitgey/4e1342c10a71981d0b491e1b8227328b
# Class #859 is "toaster"
# object_type_to_fake = 859


# # Load the image to hack
# img = image.load_img("cat.png", target_size=(299, 299))
# original_image = image.img_to_array(img)


# # Scale the image so all pixel intensities are between [-1, 1] as the model expects
# original_image /= 255.
# original_image -= 0.5
# original_image *= 2.


# # Add a 4th dimension for batch size (as Keras expects)
# original_image = np.expand_dims(original_image, axis=0)


# Pre-calculate the maximum change we will allow to the image
# We'll make sure our hacked image never goes past this so it doesn't look funny.
# A larger number produces an image faster but risks more distortion.
# max_change_above = original_image + 0.01
# max_change_below = original_image - 0.01


# Create a copy of the input image to hack on
#hacked_image = np.copy(original_image)
hacked_image = np.copy(img_tensor)


# How much to update the hacked image in each iteration
learning_rate = 0.1
# Define the cost function.
# Our 'cost' will be the likelihood out image is the target class according to the pre-trained model
#cost_function = model_output_layer[0, object_type_to_fake]


# We'll ask Keras to calculate the gradient based on the input image and the currently predicted class
# In this case, referring to "model_input_layer" will give us back image we are hacking.
#gradient_function = K.gradients(cost_function, model_input_layer)[0]
# Create a Keras function that we can call to calculate the current cost and gradient
grab_cost_and_gradients_from_model = K.function([model_input_layer, K.learning_phase()], [cost_function, gradient_function])
cost = 0.0
# In a loop, keep adjusting the hacked image slightly so that it tricks the model more and more
# until it gets to at least 80% confidence
while cost < 0.9994:
    # Check how close the image is to our target class and grab the gradients we
    # can use to push it one more step in that direction.
    # Note: It's really important to pass in '0' for the Keras learning mode here!
    # Keras layers behave differently in prediction vs. train modes!
    cost, gradients = grab_cost_and_gradients_from_model([hacked_image, 0])
    # Move the hacked image one step further towards fooling the model
    hacked_image += gradients * learning_rate
    # Ensure that the image doesn't ever change too much to either look funny or to become an invalid image
    hacked_image = np.clip(hacked_image, max_change_below, max_change_above)
    #hacked_image = np.clip(hacked_image, -1.0, 1.0)
    img = hacked_image[0]
    print("Model's predicted likelihood that the image is a cat: {:.8}%".format(cost * 100))


# De-scale the image's pixels from [0, 1] back to the [0, 255] range
img = hacked_image[0]
img *= 255.

cv2.imwrite("hacked-image2.bmp",img)

# Save the hacked image!
# im = Image.fromarray(img.astype(np.uint8))
# im.save("hacked-image.bmp")

filename="./hacked-image2.bmp"
score = 0
img = cv2.imread(filename)
img = cv2.resize(img, (128, 128))
img_tensor = np.expand_dims(img, axis=0)
img_tensor = img_tensor.astype(np.float32)
img_tensor /= 255.
score=checkMask(model,img_tensor)
cost,gradients = grab_cost_and_gradients_from_model([img_tensor, 0])
print("final cost:", cost)
print("final score:", score)

Upload the image as base64 to get the flag:
img
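Producing the base64 payload is straightforward; a sketch (the exact submission endpoint/protocol is not shown here, so this only builds the encoded string — `encode_image` is my own helper name):

```python
import base64

def encode_image(path):
    """Read the forged image file and return its base64-encoded text payload."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

# Usage (the forged image produced by the attack script above):
# payload = encode_image("hacked-image2.bmp")
```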

RE:CICADA

An SMC (self-modifying code) challenge: cicada.exe decrypts a PE file, and the verification logic lives in that decrypted PE. Analyzing it, the flow is: a 32-byte key input goes through a long transformation, and the boolean return value decides whether the input is correct. It was solved with angr.
img
img
img

The angr script:

# coding=utf-8
import angr

p = angr.Project('./sub.exe')


# good = (0x18002536A)
# bad = (0x180025339)
good = (0x18000BD6D)
bad = (0x18000BD79)


start = 0x18000bd43
#state = p.factory.entry_state()
state = p.factory.blank_state(addr=start)
state_rsp=state.regs.rsp
print(state_rsp)#0x7ffffffffff0000
state.regs.rcx=state_rsp+0x38
state.regs.rdx=state.solver.BVS("lastbyte",8)
print(state.regs.rcx)
flag=state.solver.BVS("flag",16*8)
state.memory.store(state_rsp+0x38, flag, endness=p.arch.memory_endness)
simulation = p.factory.simgr(state)
simulation.explore(find=good, avoid=bad)
if simulation.found:
    print(flag)
    solution_state = simulation.found[0]
    hexflag = solution_state.solver.eval(flag, cast_to=bytes)
    print(''.join(['%02x' % b for b in hexflag]))
    # for i in range(3):
    #     print(solution_state.posix.dumps(i))
else:
    raise Exception("Could not find the solution")

img

IoT

References:

http://www.ctfiot.com/37681.html

http://www.ctfiot.com/38677.html

Examining the files: the PNF-9010R.img image file is encrypted, so we need to analyze the S34MLxx firmware dump to recover the key and the encryption logic. img

Nand Flash

NAND flash has a specific storage structure, divided into planes, blocks, and pages. Taking the Spansion S34ML0* as an example: it consists of 2 planes, each plane contains 1024 blocks, each block contains 64 pages, and each page holds (2048 + 128) = 2176 bytes. The 128-byte spare (OOB) area is used for ECC and bad-block management; see the flash datasheet for the details of how it is managed.
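The geometry above can be sanity-checked with one line of arithmetic, which reproduces the dump size seen later:

```python
planes, blocks_per_plane, pages_per_block = 2, 1024, 64
page_bytes = 2048 + 128  # 2 KiB of data plus the 128-byte spare (OOB) area

total = planes * blocks_per_plane * pages_per_block * page_bytes
print(total, hex(total))  # 285212672 0x11000000
```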

The internal ECC provides 9 bits of detection code and 8 bits of correction code per 528 bytes (x8) of main area and per 16 bytes (x8) of spare area. […] During a PROGRAM operation, before the page is written to the NAND flash array, the device calculates the ECC code over the 2k page in the cache register. The ECC code is stored in the spare area of the page. During a READ operation, the page data is read from the array into the cache register, where the ECC code is calculated and compared with the ECC code value read from the array. If an error of 1-8 bits is detected, it is corrected through the cache register. Only corrected data is output on the I/O bus.
Dumping the flash with a programmer yields a file of 285,212,672 bytes = 0x11000000 bytes = 2 (planes) × 1024 (blocks) × 64 (pages) × 2176 (bytes), which matches the datasheet exactly. Note, however, that at this point binwalk cannot identify the firmware layout or extract the file system. This is due to the bad blocks and the OOB areas, so the first step of firmware analysis is to filter out bad blocks and strip the OOB.
Bad-block rule: a block is bad if the 1st byte in the spare area of its 1st, 2nd, or last page does not contain FFh.
OOB removal: drop the 128-byte spare area that follows every 2048 bytes of page data.
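The bad-block rule can be sketched as follows (my own helper; it assumes a raw block of 64 × 2176-byte pages with the spare area at offset 2048 within each page):

```python
PAGE = 2048 + 128    # data area + spare (OOB) area
PAGES_PER_BLOCK = 64

def is_bad_block(block):
    """A block is bad if the first spare-area byte of its
    1st, 2nd, or last page is not 0xFF."""
    for page_idx in (0, 1, PAGES_PER_BLOCK - 1):
        if block[page_idx * PAGE + 2048] != 0xFF:
            return True
    return False

# A freshly erased (all-0xFF) block is good:
print(is_bad_block(b"\xff" * (PAGE * PAGES_PER_BLOCK)))  # False
```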

However, even after these operations, binwalk still could not identify or extract anything correctly.

Initial manual analysis of the all-zero pages showed the following: 14 bytes of data appear at page offset 1040 (0x410), while in the OOB area the first two bytes are FF and the next 14 bytes are all zero. This strongly suggests that the data and spare regions of the two halves were swapped, and that indeed turned out to be the case; understanding why requires knowing the yaffs2 layout on (2k+128) NAND flash and how U-Boot writes yaffs images [1 2 3].

NAND flash is programmed and read in units of pages. A page consists of 2048 bytes of usable storage plus 128 bytes of OOB, the latter used to store error-correction codes and bad-block flags, so the total page length is 2176 bytes. Erase operations, however, are performed on whole blocks. According to Micron's documentation for this flash part, a block consists of 64 pages, 128 KB of usable data in total. The flash comprises two planes, each containing 1024 blocks, hence: 2 planes * 1024 blocks/plane * 64 pages/block * (2048 + 128) bytes/page = 285,212,672

Simply removing 128 bytes after every 2048 bytes still produced a firmware image that would not extract; further analysis showed that bytes 3-16 of the OOB actually hold useful data:

img

In addition, each page contains 14 meaningless bytes at offset 0x410.
img The OOB format differs from the one described in the datasheet.

Script to strip the corrected OOB:

import re
import struct
a = open(r'S34ML02G200BHI00@BGA63_948.BIN', 'rb')
unOOBBin = open('unoob2.bin', 'wb')
data = a.read()
i = 0
filesize = len(data)
while i < filesize:
    unOOBBin.write(data[i:i+0x410])               # first half of the page data
    unOOBBin.write(data[i+0x410+14:i+0x800])      # skip the 14 junk bytes at offset 0x410
    unOOBBin.write(data[i+0x800+2:i+0x800+2+14])  # restore the 14 data bytes kept in the OOB (after the two FF bytes)
    i = i + 2048 + 128
unOOBBin.close()
a = open(r'unoob2.bin', 'rb')
b = open(r'extractELF.sh', 'w+')  # shell script

Running binwalk again, the carved-out ext2 file system could now be read correctly:

img img

Searching for characteristic strings reveals the decryption logic (the encryption found in magic_update from the ext2 file system targets a different device model; the encryption for the model of the provided image file turned out to live in mainServer, an ELF extracted separately from the firmware):

openssl enc -in PNF-9010R.img -aes-256-cbc -d -k STWPNF-9010R -out PNF-9010R-dec.img

Decompressing the decrypted img reveals the flag.

img