
# 1 Introduction

To ensure the security of neural network algorithms, several types of defenses have been proposed:

1) **Gradient masking/obfuscation**: a considerable number of attacks rely on the classifier's gradient information, so masking or obfuscating the gradients can confuse the attacker;

2) **Robust optimization**: re-training the DNN classifier to enhance its robustness, so that it predicts adversarial examples correctly;

3) **Adversarial example detection**: learning the distribution of benign data, then detecting adversarial examples and preventing them from being fed into the classifier.

# 2 Gradient masking/obfuscation

## 2.1 Defensive distillation

Distillation is a technique originally used to reduce the size of a DNN. As a defense against FGSM, L-BFGS, or DeepFool attacks, its **main steps** are as follows:

1) Choose a softmax temperature $T$ and train a network $F$ on the training set $(X,Y)$. The softmax at temperature $T$ is defined as:

$softmax(x,T)_{i}=\frac{e^{x_{i}/T}}{\sum_{j}e^{x_{j}/T}},\quad i=0,1,\dots,K-1. \quad (1)$

2) Compute the softmax scores $F(X)$ at temperature $T$; these serve as soft labels;

3) Train a distilled model $F_{T}$ on $X$ and the soft labels $F(X)$, still at temperature $T$;

4) At test time, set the temperature of $F_{T}$ to 1, denoted $F_{1}$, and use it to predict on the test set $X_{test}$, which may contain adversarial examples.

This works because training with a large $T$ and then testing with $T=1$ effectively enlarges the inputs to the softmax. For example, with $T=100$, the difference between the logit outputs $Z(\cdot)$ of a sample $x$ and its neighbor $x'$ is amplified a hundredfold, where $Z(\cdot)$ denotes the input to the softmax. When $T$ is reset to 1, the output of $F_{1}$ takes a form like $(\epsilon,\epsilon,\dots,1-(m-1)\epsilon,\epsilon,\dots,\epsilon)$, where $\epsilon$ is a number the computer treats as essentially zero. This **pushes the score of the predicted class close to 1**, making it hard for the attacker to extract useful gradient information from $F_{1}$.
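The temperature effect can be seen numerically. A minimal sketch in plain Python (the logit values are assumed toy numbers): the same logits that look smooth under the training temperature $T=100$ collapse to a near-one-hot vector once $T$ is reset to 1.

```python
import math

def softmax(z, T=1.0):
    # Temperature softmax of Eq. (1): exp(z_i/T) / sum_j exp(z_j/T)
    zT = [v / T for v in z]
    m = max(zT)                       # subtract max for numerical stability
    e = [math.exp(v - m) for v in zT]
    s = sum(e)
    return [v / s for v in e]

# Toy logits, standing in for values learned under a high training temperature.
z = [20.0, 10.0, 5.0]

p_train = softmax(z, T=100.0)   # smooth soft labels used for distillation
p_test  = softmax(z, T=1.0)     # F_1's near-one-hot output at test time
print(p_train, p_test)
```

With $T=100$ the three scores stay within a few percent of each other, while at $T=1$ the top class absorbs almost all the probability mass, which is exactly the flat region that starves a gradient-based attacker.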

## 2.2 Shattered gradients

**Protect the model by preprocessing the data**: add a non-smooth or non-differentiable preprocessor $g(\cdot)$ and train the model $f$ on $g(X)$. The classifier $f(g(\cdot))$ is then non-differentiable with respect to $x$, which defeats gradient-based attacks. For example, thermometer encoding (*thermometer encoding*) discretizes each image pixel $x_{i}$ into an $l$-dimensional vector $\tau(x_{i})$; for $l=10$, $\tau(0.66)=1111110000$. A DNN is then trained on these encoded vectors. Other methods include cropping, compression, and total-variance minimization. All of these **break the smooth mapping between model input and output**, making it hard for an attacker to obtain the gradient $\partial F(x)/\partial x$.
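A minimal sketch of the thermometer encoding $\tau$, using a thresholding convention chosen to match the $\tau(0.66)=1111110000$ example above (the exact convention in the original paper may differ):

```python
def thermometer(x, l=10):
    # Discretize a pixel value x in [0, 1] into an l-bit thermometer code.
    # Bit i is 1 iff x >= (i+1)/l, so x = 0.66 gives six leading ones.
    # The encoding is piecewise constant, hence non-differentiable in x.
    return [1 if x >= (i + 1) / l else 0 for i in range(l)]

print(''.join(str(b) for b in thermometer(0.66)))  # -> 1111110000
```

Because the code is a step function of $x$, an infinitesimal perturbation of the pixel usually leaves the input to the DNN unchanged, so $\partial F(x)/\partial x$ is zero almost everywhere.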

## 2.3 Stochastic/randomized gradients

**Confuse the attacker by randomizing the DNN**. For example, train a set of classifiers $s=\{F_{t}: t=1,2,\dots,k\}$ and, at evaluation time, randomly select one model from $s$ to predict the label $y$. Since the attacker does not know which classifier will be used, the probability of a successful attack is reduced. Other operations include randomly dropping some nodes of the network, and resizing the image with zero padding.
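The random model selection can be sketched as follows (the toy linear "classifiers" are assumptions, standing in for $k$ trained DNNs):

```python
import random
import numpy as np

# Hypothetical ensemble s = {F_t : t = 1..k}: each F_t is a linear scorer
# mapping a 4-dim input to 3 class scores.
rng = np.random.default_rng(0)
k = 5
S = [rng.normal(size=(3, 4)) for _ in range(k)]

def predict(x):
    # At evaluation time, draw one model from s uniformly at random;
    # the attacker cannot know which F_t produced the prediction,
    # so it cannot query the gradient of the deployed classifier.
    W = random.choice(S)
    return int(np.argmax(W @ x))

x = np.array([0.5, -1.0, 2.0, 0.1])
print(predict(x))  # the label may differ from call to call
```

An attacker optimizing against any single $F_{t}$ has no guarantee that the perturbation transfers to whichever model is sampled at inference time.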

## 2.4 Exploding and vanishing gradients

PixelDefend and Defense-GAN use a generative model to project a potential adversarial example onto the benign data manifold before classification, which makes the end-to-end pipeline a very deep neural network. This method works because the cumulative product of each layer's partial derivatives makes the gradient $\partial L(x)/\partial x$ extremely small or extremely large, preventing the attacker from accurately estimating the perturbation needed to craft an adversarial example.
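The depth effect can be illustrated with a toy calculation (the per-layer factor $a$ is an assumed stand-in for the typical magnitude of one layer's partial derivative):

```python
def gradient_scale(a, depth):
    # If every layer contributes a derivative factor a, the chain rule
    # makes the input gradient of a depth-layer stack scale like a**depth.
    return a ** depth

print(gradient_scale(0.9, 200))   # vanishing: on the order of 1e-9
print(gradient_scale(1.1, 200))   # exploding: on the order of 1e8
```

Stacking a generative projector in front of the classifier pushes the effective depth into this regime, so numerically the attacker sees either a useless near-zero gradient or an unstable, exploding one.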

## 2.5 Gradient obfuscation and masking methods are not secure

The drawback of these methods is that they can only confuse the attacker; they do not eliminate adversarial examples. For example, the C&W attack broke through defensive distillation, and the methods of Sections 2.2-2.4 have also been broken one after another.

# 3 Robustness optimization

Robust optimization improves robustness by changing the way the DNN learns its parameters: it studies how to learn model parameters that yield the desired predictions on potential adversarial examples. The main **concerns** of this class of methods are:

1) Learn model parameters $\theta^{*}$ that minimize the average adversarial loss:

$\theta^{*}=\arg\min_{\theta\in\Theta}\,\mathbb{E}_{x\sim\mathcal{D}}\,\max_{\|x'-x\|\le\epsilon}L(\theta,x',y); \quad (2)$

2) Learn model parameters $\theta^{*}$ that maximize the average minimal perturbation distance:

$\theta^{*}=\arg\max_{\theta\in\Theta}\,\mathbb{E}_{x\sim\mathcal{D}}\,\min_{C(x')\ne y}\|x'-x\|. \quad (3)$

A robust optimization algorithm should have prior knowledge of its potential threat, i.e., the adversarial space $D$, so that the defender can build a classifier targeted at those attack means. Most related work aims to defend against adversarial examples generated by small $l_{p}$ (especially $l_{\infty}$ and $l_{2}$) norm perturbations, which is also the focus of this section.
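For a linear model, the inner maximization of Eq. (2) over an $l_{\infty}$ ball has the closed-form FGSM solution $x'=x+\epsilon\,\mathrm{sign}(\nabla_{x}L)$. A minimal sketch with an assumed toy logistic loss:

```python
import numpy as np

def loss(theta, x, y):
    # Logistic loss L(theta, x, y) = log(1 + exp(-y * theta.x))
    return np.log1p(np.exp(-y * (theta @ x)))

def grad_x(theta, x, y):
    # Gradient of the logistic loss with respect to the input x
    return -y * theta / (1.0 + np.exp(y * (theta @ x)))

def worst_case_loss(theta, x, y, eps):
    # Inner maximization of Eq. (2) over {x' : ||x'-x||_inf <= eps},
    # solved exactly for a linear model by one FGSM step.
    x_adv = x + eps * np.sign(grad_x(theta, x, y))
    return loss(theta, x_adv, y)

theta = np.array([1.0, -2.0])
x, y = np.array([0.5, 0.2]), 1.0
print(loss(theta, x, y), worst_case_loss(theta, x, y, eps=0.1))
```

The outer minimization of Eq. (2) then trains $\theta$ against this worst-case loss rather than the clean loss, which is the basis of the adversarial training in Section 3.2.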

## 3.1 Regularization methods

Some early research on adversarial defense focused on making a robust DNN possess certain properties that resist adversarial examples. For example, Szegedy et al. argued that a robust model should remain stable when its input is distorted, i.e., that constraining the Lipschitz constant imposes "stability" on the model's output. **Training with such regularizers can sometimes heuristically make the model more robust**:

1) **Penalize the layer's Lipschitz constant**: when Szegedy et al. first discovered the vulnerability of DNNs to adversarial examples, they also suggested adding regularization to make the model more stable, by constraining the Lipschitz constant $L_{k}$ of each layer:

$\forall x,\delta,\quad \|h_{k}(x;W_{k})-h_{k}(x+\delta;W_{k})\|\le L_{k}\|\delta\|. \quad (4)$

In this way, the network's output is not easily affected by small perturbations of the input. Parseval networks show that the model's adversarial risk depends on the $L_{k}$:

$\mathbb{E}_{x\sim\mathcal{D}}\,L_{adv}(x)\le \mathbb{E}_{x\sim\mathcal{D}}\,L(x)+\mathbb{E}_{x\sim\mathcal{D}}\left[\max_{\|x'-x\|\le\epsilon}|L(F(x'),y)-L(F(x),y)|\right]\le \mathbb{E}_{x\sim\mathcal{D}}\,L(x)+\lambda_{p}\prod_{k=1}^{K}L_{k}, \quad (5)$

where $\lambda_{p}$ is the Lipschitz constant of the loss function. The formula shows that, during training, penalizing each hidden layer's $L_{k}$ reduces the model's adversarial risk and steadily increases its robustness. This approach has since been extended to semi-supervised and unsupervised defenses.
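For a purely linear layer $h_{k}(x)=W_{k}x$, the constant $L_{k}$ of Eq. (4) is the spectral norm of $W_{k}$, and the bound of Eq. (5) involves the product of these norms. A small sketch with assumed random toy weights:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = [rng.normal(size=(8, 8)) * 0.2 for _ in range(3)]  # toy W_k

# For a linear layer h_k(x) = W_k x, the Lipschitz constant L_k in
# Eq. (4) is the spectral norm of W_k (its largest singular value).
L_ks = [np.linalg.norm(W, 2) for W in weights]

# Eq. (5) bounds the extra adversarial risk by lambda_p * prod_k L_k,
# so a Parseval-style penalty keeps this product small (each L_k ~ 1).
bound_factor = np.prod(L_ks)
print(L_ks, bound_factor)
```

Because the bound is multiplicative in depth, even modest per-layer norms compound quickly, which is why Parseval networks constrain every layer rather than only the last one.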

2) **Penalize the layer's partial derivatives**: for example, the deep contractive network introduces a regularized training procedure. It adds a penalty on the partial derivatives of each layer within the standard backpropagation framework, so that small changes in the input do not cause large changes in the output of any layer. As a result, it is difficult to make the classifier give a different prediction for a slightly perturbed sample.
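A sketch of such a contractive-style penalty for a single assumed $\tanh$ layer: the squared Frobenius norm of the layer's Jacobian with respect to its input, which a deep-contractive-style regularizer would add to the training loss.

```python
import numpy as np

def layer(W, x):
    # A single tanh layer h(x) = tanh(W x)
    return np.tanh(W @ x)

def contractive_penalty(W, x):
    # Squared Frobenius norm of the layer Jacobian dh/dx = diag(1 - h^2) W.
    # Adding this term to the training loss discourages large output
    # changes for small input changes.
    h = layer(W, x)
    J = np.diag(1.0 - h ** 2) @ W
    return float(np.sum(J ** 2))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # toy weights, standing in for a trained layer
x = rng.normal(size=4)
print(contractive_penalty(W, x))
```

Driving this penalty toward zero at every layer is the layer-wise analogue of the Lipschitz constraint above: the composition of near-contractive layers cannot amplify an input perturbation.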

## 3.2 Adversarial training

1) Adversarial training based on FGSM:
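A minimal sketch of FGSM-based adversarial training on an assumed toy logistic-regression model (standing in for a DNN): each step first crafts FGSM examples against the current parameters, then updates the parameters on those adversarial examples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, lr = 200, 5, 0.1, 0.5
theta_true = rng.normal(size=d)          # toy ground-truth separator
X = rng.normal(size=(n, d))
y = np.sign(X @ theta_true)              # labels in {-1, +1}

def grads(theta, X, y):
    # Logistic loss L = mean log(1 + exp(-y * X.theta))
    s = 1.0 / (1.0 + np.exp(y * (X @ theta)))   # sigmoid(-y * z)
    g_theta = -(s * y) @ X / len(y)             # dL/dtheta
    g_x = -(s * y)[:, None] * theta             # dL/dx, one row per sample
    return g_theta, g_x

theta = np.zeros(d)
for _ in range(100):
    _, g_x = grads(theta, X, y)
    X_adv = X + eps * np.sign(g_x)       # FGSM step: approximate inner max
    g_theta, _ = grads(theta, X_adv, y)  # update on the adversarial batch
    theta -= lr * g_theta

print(np.mean(np.sign(X @ theta) == y))  # clean accuracy after training
```

The loop is a direct instance of Eq. (2) with the inner maximization approximated by a single FGSM step; stronger variants replace it with multi-step PGD.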


# 4 Adversarial example detection

# References

【1】**Adversarial Attacks and Defenses in Images, Graphs and Text: A Review**
