Modeling of cushioning characteristics of honeycomb paperboard based on BP neural network

Honeycomb paperboard is a green packaging material that has gained ground worldwide in recent years. It is light, strong, resistant to deformation, and offers good cushioning, thermal insulation, and sound insulation, while remaining environmentally friendly. With suitable process treatment it can also be made flame-retardant, moisture-proof, mildew-proof, and waterproof, so it has broad prospects for development and application.

When packaging products with honeycomb paperboard, the cushioning characteristics of the board must first be understood. At present, the main way to establish its cushioning-characteristic model is to collect data from dynamic compression experiments and plot the maximum acceleration-static stress curve. The honeycomb-paperboard cushioning system is a typical nonlinear system. Previous work simplified it into a linear system and fitted the experimental data with polynomial fitting and other methods suited to linear systems to obtain the curve. Analysis shows, however, that the accuracy of a model built by polynomial fitting is low; if high model accuracy is required, other methods must be sought. This paper uses a neural network to establish the cushioning-characteristic model of honeycomb paperboard. Because neural networks have a particular advantage with nonlinearity, they are well suited to nonlinear cushioning-packaging problems such as this one.

1. BP neural network

In 1989, Robert Hecht-Nielsen and others proved that any continuous function on a closed interval can be approximated by a three-layer BP neural network (one containing a single hidden layer), so a three-layer BP network can realize any mapping from n dimensions to m dimensions. This is the theoretical basis for using BP neural networks to model nonlinear systems. A BP network is a neural network trained with the back-propagation algorithm; it is mainly used in function approximation, pattern recognition, classification, and data compression. The basic idea of the BP network is that learning consists of two phases: forward propagation of the input samples and back propagation of the errors. Input samples enter at the input layer, are processed layer by layer through the hidden layer, and reach the output layer. If the error between the actual output and the expected output of the output layer does not meet the predetermined requirement, the error back-propagation phase begins: the error is passed back along the original connection paths, and the connection weights of the neurons in each layer are adjusted to reduce it step by step. Forward propagation of samples and back propagation of errors alternate repeatedly until the error meets the predetermined requirement.
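The two alternating phases described above can be sketched in a minimal two-layer network (one tanh hidden layer, one linear output). This is an illustrative sketch only: the network size, learning rate, and data are assumptions, not values from the paper.

```python
import math
import random

random.seed(0)
N_HID = 4    # hidden neurons (illustrative, not the paper's choice)
LR = 0.1     # learning rate (illustrative)

# weights/biases: input -> hidden (tanh), hidden -> output (linear)
w1 = [random.uniform(-0.5, 0.5) for _ in range(N_HID)]
b1 = [0.0] * N_HID
w2 = [random.uniform(-0.5, 0.5) for _ in range(N_HID)]
b2 = 0.0

def forward(x):
    """Forward propagation: input layer -> hidden layer -> output layer."""
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(N_HID)]
    y = sum(w2[i] * h[i] for i in range(N_HID)) + b2
    return h, y

def train_step(x, t):
    """One forward pass, then back propagation of the output error."""
    global b2
    h, y = forward(x)
    err = y - t                                  # output-layer error
    for i in range(N_HID):
        # gradient through the linear output, then through the tanh unit
        gh = err * w2[i] * (1.0 - h[i] ** 2)
        w2[i] -= LR * err * h[i]
        w1[i] -= LR * gh * x
        b1[i] -= LR * gh
    b2 -= LR * err
    return 0.5 * err ** 2                        # squared error for this sample
```

Repeating `train_step` over the sample set is the alternation of forward and backward passes the text describes; training stops once the accumulated error is small enough.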

2. Simulation of the BP network in the neural network toolbox

This paper uses the neural network toolbox of MATLAB 6.5 and, as an example, builds a network model from the dynamic impact experimental data of honeycomb paperboard with a thickness of 50 mm and a drop height of 40 cm. There are 13 groups of experimental data: 10 groups that largely determine the shape of the curve are used as training data for the network, and the remaining 3 groups are used as test data to verify the network's prediction performance.
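The 10/3 split described above can be sketched as follows. All numeric values here are hypothetical placeholders, not the paper's measurements.

```python
# 13 hypothetical (static stress, peak acceleration) pairs standing in
# for the paper's dynamic-impact measurements.
points = [(0.1 * k, 50.0 + 5.0 * k) for k in range(1, 14)]

train_set = points[:10]   # 10 groups used to train the network
test_set = points[10:]    # 3 groups held out to test prediction
```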

① Establishment of the BP network

When establishing a BP neural network, the network structure must first be determined for the problem at hand, that is, the number of layers and of hidden-layer nodes must be chosen. Because this example has few experimental data, the most basic two-layer network can approximate the unknown function well. Choosing the number of hidden-layer nodes has always been a difficult point in applying neural networks: too many hidden nodes weaken the network's predictive ability and make it prone to falling into local minima that are hard to escape; too few, and the network cannot be trained adequately, fails to recognize samples it has not seen before, and tolerates faults poorly. In practice, a common approach is to train networks with different numbers of neurons, compare them, and keep the node count that gives the best result. In this example, after extensive training and comparison, the number of hidden-layer nodes was finally set to 10. The hidden-layer transfer function is the tangent sigmoid tansig, which can approximate any nonlinear function; the output-layer neuron uses the linear function purelin, which lets the output take any value. With this, the neural network model is established.
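The train-and-compare selection of the hidden-layer size might be sketched as below: the same two-layer network (tanh hidden layer, linear output) is trained with several candidate node counts and the best is kept. Candidate sizes, data, epoch count, and learning rate are all illustrative assumptions.

```python
import math
import random

def train_mse(n_hid, data, epochs=300, lr=0.05, seed=0):
    """Train a 1-n_hid-1 network by gradient descent; return final MSE."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(n_hid)]
    b1 = [0.0] * n_hid
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hid)]
    b2 = 0.0
    for _ in range(epochs):
        for x, t in data:
            h = [math.tanh(w1[i] * x + b1[i]) for i in range(n_hid)]
            y = sum(w2[i] * h[i] for i in range(n_hid)) + b2
            err = y - t
            for i in range(n_hid):
                gh = err * w2[i] * (1.0 - h[i] ** 2)
                w2[i] -= lr * err * h[i]
                w1[i] -= lr * gh * x
                b1[i] -= lr * gh
            b2 -= lr * err
    # mean squared error over the data after training
    tot = 0.0
    for x, t in data:
        h = [math.tanh(w1[i] * x + b1[i]) for i in range(n_hid)]
        y = sum(w2[i] * h[i] for i in range(n_hid)) + b2
        tot += (y - t) ** 2
    return tot / len(data)

data = [(x / 5.0, math.sin(x / 5.0)) for x in range(-10, 11)]  # toy curve
best = min([2, 5, 10], key=lambda n: train_mse(n, data))       # keep best size
```

Each candidate count is trained under identical conditions, so the comparison isolates the effect of the hidden-layer size.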

② Training of the BP network

The MATLAB neural network toolbox provides three training functions for BP networks: trainbp, trainbpx, and trainlm. They are used in a similar way but implement different learning rules. The training function trainlm uses the Levenberg-Marquardt algorithm, which needs the fewest iterations and trains fastest of the three. Its drawback is that each iteration costs more computation than the other algorithms and demands a great deal of storage, so it is impractical for problems with many parameters. Since the problem here has few parameters, trainlm is used. The target error is set to 0.01 and the maximum number of training epochs to 10000. With the parameters set, network training begins. The training results show that the network reaches the 0.01 target error after 32 epochs, at which point training stops.

③ Testing the BP network

Because the initial weights are random, each training run gives different results. After many runs, the best result was kept and its weights and thresholds recorded. This fixed network can then predict the maximum acceleration-static stress values at non-experimental points. To test whether the network has good predictive ability, the three groups of test data were fed into the network for prediction. The results show an average relative error of 3.2726% between the predicted and the original data, indicating a quite accurate fit. A cushioning-characteristic model of honeycomb paperboard with a thickness of 50 mm and a drop height of 40 cm has thus been successfully established with a BP network.
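The average-relative-error check used above might be computed as follows. The sample values here are hypothetical, not the paper's measurements or its 3.2726% result.

```python
def avg_relative_error(predicted, measured):
    """Mean of |pred - true| / |true| over all points, as a percentage."""
    errs = [abs(p - m) / abs(m) for p, m in zip(predicted, measured)]
    return 100.0 * sum(errs) / len(errs)

measured = [52.0, 61.5, 70.2]    # hypothetical test-point accelerations
predicted = [53.1, 60.0, 72.4]   # hypothetical network outputs
err_pct = avg_relative_error(predicted, measured)
```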

This paper exploits the particular strength of BP networks in nonlinear modeling, taking honeycomb paperboard with a thickness of 50 mm and a drop height of 40 cm as an example to establish a neural network model. Testing shows that the model predicts non-experimental data with high accuracy. Two problems also surfaced while building the model. First, the number of samples is too small to reflect the characteristics of the modeled system accurately, and the network easily falls into local minima during learning; the remedy is to add experimental points and thereby enlarge the training set. Second, the BP network itself has shortcomings, chiefly slow convergence, and it sometimes settles at a local minimum and cannot find the global minimum. In that case, other algorithms such as simulated annealing or genetic algorithms can be considered to help ensure that the network converges to the global minimum.

(author/Luo Guanglin, Wang Tiantian)

Guangdong packaging magazine
