1.1 Project source code 【https://github.com/haoyye/OFDM_DNN】
1.2 Simplified code 【https://github.com/TianLin0509/DNN_detection_via_keras】
1.3 A blogger's article explaining the code 【https://zhuanlan.zhihu.com/p/166159478】
A neural network built entirely from fully connected layers is used to implement the receiver of a traditional OFDM communication system. The receiver covers both channel estimation and signal detection, and the whole OFDM receiver is treated as a single black box. This network is called FC-DNN. This differs from the design in another paper, ComNet: Combination of Deep Learning and Expert Knowledge in OFDM Receivers. ComNet separates channel estimation and signal detection into two neural-network sub-blocks as an improvement over FC-DNN, and it does indeed improve performance across the board.
（1）The neural network in this paper has five layers, three of them hidden, with 256, 500, 250, 120, and 16 neurons respectively. The transmitted bits are grouped 16 at a time; each group is predicted by an independently trained model, and the outputs are then concatenated to produce the final result. Except for the last layer, which uses the Sigmoid function to map outputs into the interval [0,1], the layers use the ReLU activation function.
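As a rough sketch of this architecture (my own toy NumPy forward pass with random weights, not the authors' Keras code), the 256-500-250-120-16 stack with ReLU hidden layers and a Sigmoid output looks like:

```python
import numpy as np

# layer widths from the paper: input 256, three hidden layers, 16 outputs
sizes = [256, 500, 250, 120, 16]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * 0.01 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # ReLU on every layer except the last, which uses Sigmoid -> [0, 1]
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)
    z = x @ weights[-1] + biases[-1]
    return 1.0 / (1.0 + np.exp(-z))   # 16 soft bit estimates

y = forward(rng.standard_normal(256))
```

Each such model recovers one 16-bit group; the full receiver concatenates the outputs of several of these models.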
（2）Structure of the OFDM communication system
The OFDM communication system is essentially transmitter + channel + receiver. The transmitter is implemented as: incoming bit stream → serial-to-parallel conversion (S/P) → insert pilots → inverse Fourier transform (IDFT) → add cyclic prefix → parallel-to-serial conversion (P/S) → into the channel. The receiver is implemented as: serial-to-parallel conversion (S/P) → remove cyclic prefix → Fourier transform (DFT) → parallel-to-serial conversion → channel estimation → signal detection.
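A minimal NumPy sketch of this chain (an ideal noiseless channel, QPSK mapping, my own toy implementation rather than the paper's code):

```python
import numpy as np

K, CP = 64, 16          # subcarriers and cyclic prefix length

def qpsk_mod(bits):
    # map bit pairs to unit-energy QPSK symbols
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def ofdm_tx(bits):
    x = np.fft.ifft(qpsk_mod(bits))        # frequency -> time domain (IDFT)
    return np.concatenate([x[-CP:], x])    # prepend the cyclic prefix

def ofdm_rx(y):
    return np.fft.fft(y[CP:])              # drop CP, DFT back to frequency domain

bits = np.random.randint(0, 2, 2 * K)
sym_hat = ofdm_rx(ofdm_tx(bits))           # over an ideal channel, equals qpsk_mod(bits)
```

Over a real channel the received symbols would be distorted per subcarrier, which is what channel estimation and signal detection then have to undo.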
To obtain a DNN model for efficient joint channel estimation and symbol detection, there are two stages. The first is the offline training stage: OFDM samples are generated from random information sequences under different channel conditions, such as typical-urban or hilly-terrain delay profiles, and the model is trained on them. The second is the online deployment stage: the DNN model directly outputs the recovered transmitted data, without explicitly estimating the wireless channel.
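A toy version of the offline data generation (my own sketch: the standardized delay profiles are replaced here by random complex taps, and one OFDM symbol per sample, so the shapes do not match the paper exactly):

```python
import numpy as np

K, CP = 64, 16

def qpsk(bits):
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def training_sample(snr_db=20, n_taps=3):
    bits = np.random.randint(0, 2, 2 * K)            # label: transmitted bits
    x = np.fft.ifft(qpsk(bits))                      # OFDM modulate
    tx = np.concatenate([x[-CP:], x])                # add cyclic prefix
    # random multipath channel (stand-in for a TU / hilly-terrain profile)
    h = (np.random.randn(n_taps) + 1j * np.random.randn(n_taps)) / np.sqrt(2 * n_taps)
    y = np.convolve(tx, h)[:len(tx)]
    noise_p = np.mean(np.abs(y) ** 2) / 10 ** (snr_db / 10)
    y = y + np.sqrt(noise_p / 2) * (np.random.randn(len(y)) + 1j * np.random.randn(len(y)))
    rx = y[CP:]                                       # strip CP at the receiver
    feature = np.concatenate([rx.real, rx.imag])      # real-valued network input
    return feature, bits
```

The offline stage collects many such (feature, bits) pairs; the online stage just runs the trained network on new features.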
The loss function used to train the model is the L2 loss:

L2 = (1/N) Σ_k ( X̂(k) − X(k) )²

where X̂(k) is the prediction and X(k) is the supervision message, i.e. the bits actually transmitted in that message.
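In code this loss is just a mean squared difference (a trivial NumPy rendering of the formula above):

```python
import numpy as np

def l2_loss(x_hat, x):
    # mean squared difference between prediction and supervision message
    return np.mean((x_hat - x) ** 2)

# e.g. predicting soft bits [0.9, 0.1] for transmitted bits [1, 0]
loss = l2_loss(np.array([0.9, 0.1]), np.array([1.0, 0.0]))   # 0.01
```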
The system is an OFDM setup with 64 subcarriers and a cyclic prefix of length 16. The carrier frequency is 2.6 GHz, and a typical urban channel with 24 paths and a maximum delay of 16 samples is used. QPSK is the modulation scheme.
The deep-learning method is compared with two channel estimation and detection methods, LS and MMSE, where each frame uses 64 pilots for channel estimation.
The LS method performs worst, because it uses no prior statistical information about the channel for detection. Conversely, the MMSE method performs best, because the second-order statistics of the channel are assumed known and exploited for symbol detection. The deep-learning-based method outperforms the LS method and is comparable to MMSE.
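The LS estimator is simple enough to show in two lines: on each pilot subcarrier it divides the received pilot by the known transmitted pilot, with no channel statistics involved (this small demo is my own, not from the repo):

```python
import numpy as np

def ls_estimate(Y_p, X_p):
    # least-squares channel estimate: H_ls = Y_p / X_p per pilot subcarrier
    return Y_p / X_p

# noiseless demo: LS recovers the channel exactly
H = np.array([1.0 + 0.5j, 0.3 - 0.2j])          # true channel on two pilot subcarriers
X_p = np.array([1 + 1j, 1 - 1j]) / np.sqrt(2)   # known pilot symbols
Y_p = H * X_p                                    # received pilots
H_ls = ls_estimate(Y_p, X_p)
```

With noise present, LS amplifies it; MMSE improves on this by additionally weighting the estimate with the channel's second-order statistics and the noise power, which is exactly the prior information LS lacks.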
The cyclic prefix reduces the bit error rate. Figure 4 gives the BER curves of an OFDM system without CP. As the figure shows, MMSE and LS can no longer estimate the channel effectively, and when the SNR exceeds 15 dB their accuracy saturates. The deep-learning method, however, still works.
OFDM has one significant drawback: a high peak-to-average power ratio (PAPR). To lower the PAPR, clipping and filtering are applied. The experiments show that, in the presence of clipping noise, the detection performance of the deep-learning method is better than that of MMSE; deep learning copes with the clipping-and-filtering distortion better.
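A minimal sketch of amplitude clipping (my own illustration; the threshold convention, a clipping ratio times the RMS amplitude, is one common choice and is an assumption here):

```python
import numpy as np

def clip_signal(x, cr):
    # limit the amplitude to cr * sqrt(mean power), keeping the phase
    a = np.abs(x)
    thresh = cr * np.sqrt(np.mean(a ** 2))
    scale = np.minimum(1.0, thresh / np.maximum(a, 1e-12))
    return x * scale

rng = np.random.default_rng(0)
x = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)   # toy OFDM time signal
y = clip_signal(x, cr=1.0)   # peaks above the RMS level are cut off
```

Clipping lowers the PAPR but distorts the signal nonlinearly, which is the "clipping noise" the detectors then have to live with.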
Figure 6 compares the DNN and MMSE methods when all three impairments are combined: only 8 pilots, no CP, and clipping noise. As the figure shows, the DNN is much better than MMSE, although compared with the ideal case there is still a noticeable gap in the DNN's detection performance.
Because the training data are generated by simulation, they generally do not match real-world data in practical applications. Mismatch means the simulated data differ from the real data: the trained model has only learned the statistics of the simulation. In a real deployment, the model therefore needs to be retrained with data collected from the real environment.
Figure 7 shows the BER curves when the maximum delay and the number of paths in the test stage differ from the parameters used in the training stage described at the beginning of this section.
Source download https://github.com/TianLin0509/DNN_detection_via_keras
Since my computer is an ordinary laptop without a GPU, the program runs very slowly: after more than ten hours it had only reached epoch 400 out of 10000. I had never used Python before and could not run the program to completion, so I will just slowly work through the following questions; once they are solved, I will add the answers here.
- The size of the input data matrix 【height × width × channels】
input_bits = Input(shape=(payloadBits_per_OFDM * 2,))
How big is this input shape? A matrix of how many rows by how many columns? I didn't understand this.
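My reading of this shape (an assumption based on the variable names in the simplified repo, not something I have verified by running it): the input is not a height × width × channels image but a flat vector.

```python
K = 64                        # OFDM subcarriers
mu = 2                        # QPSK carries 2 bits per symbol
payloadBits_per_OFDM = K * mu           # 128 payload bits per data symbol

# The network input appears to be the received complex baseband samples of
# one pilot symbol and one data symbol (64 + 64 = 128 complex values),
# split into real and imaginary parts:
input_dim = payloadBits_per_OFDM * 2    # 128 complex -> 256 real numbers
```

So `Input(shape=(payloadBits_per_OFDM * 2,))` would declare a length-256 real-valued vector per training sample, not a 2-D matrix.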
- The structure of the neural network: why does the last layer have 16 neurons? How big is one frame of data, and how long is one symbol? How big is the dataset?
- The training hyperparameters: learning rate, batch size, number of epochs. What do the corresponding miniBatch and MaxEpochs settings mean, and what are their values?
- What does the Model on line 31 mean? What does the code on lines 31–34 do? I didn't understand who actually starts the training.