Modeling of Buffering Characteristics of Honeycomb Paperboard Based on BP Neural Network

Honeycomb paperboard is a green packaging material that has become popular worldwide in recent years. It is light, strong, and resistant to deformation, and it offers good cushioning, sound insulation, and environmental friendliness. With appropriate processing it can also be made flame-retardant, moisture-resistant, mildew-proof, and waterproof, so it has broad prospects for development and application.

Before packaging a product with honeycomb paperboard, one must first understand the paperboard's cushioning properties. At present, the usual way to establish a cushioning model is to collect data from dynamic compression experiments and plot the maximum acceleration versus static stress curve. A honeycomb paperboard cushioning system is a typical nonlinear system. In the past it was often simplified to a linear system, and the experimental curve was fitted with methods suited to linear systems, such as polynomial fitting. Analysis shows, however, that models obtained by polynomial fitting have low accuracy; if high accuracy is required, other methods must be considered. In this paper, a neural network is used to model the cushioning characteristics of honeycomb paperboard. Because neural networks are particularly well suited to nonlinearity, they are a natural choice for nonlinear problems such as honeycomb paperboard and other cushioning packaging.

I. The BP Neural Network

In 1989, Robert Hecht-Nielsen and others showed that any continuous function on a closed interval can be approximated by a three-layer BP neural network (one containing a single hidden layer), so a three-layer BP network can realize any mapping from n dimensions to m dimensions. This is the theoretical basis for using BP neural networks to model nonlinear systems. A BP network is a neural network trained with the error back-propagation algorithm; it is used mainly in function approximation, pattern recognition, classification, and data compression. The basic idea of the BP network is that learning consists of two processes: forward propagation of the input samples and back propagation of the error. An input sample enters at the input layer, is processed layer by layer through the hidden layers, and is passed to the output layer. If the error between the actual output of the output layer and the expected output does not meet the preset requirement, the error is propagated backwards: it is returned along the original connection paths, and the connection weights of the neurons in each layer are adjusted so that the error gradually decreases. This cycle of forward propagation of the samples and back propagation of the error is repeated until the error meets the preset requirement.
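
As an illustration of the two-phase learning process described above, the following NumPy sketch performs one forward pass and one error back-propagation step for a network with a single hidden layer. The layer sizes, learning rate, and sample values are arbitrary placeholders; the article itself uses the MATLAB Neural Network Toolbox rather than hand-written code.

```python
# Minimal sketch of one BP learning step: forward propagation of a sample,
# then back propagation of the output error to adjust the weights.
import numpy as np

rng = np.random.default_rng(0)

# 1 input -> 4 hidden -> 1 output (sizes chosen only for illustration)
W1, b1 = rng.normal(size=(4, 1)), np.zeros((4, 1))
W2, b2 = rng.normal(size=(1, 4)), np.zeros((1, 1))
lr = 0.05  # learning rate

def step(x, t):
    """One forward/backward pass for a single sample x with target t."""
    global W1, b1, W2, b2
    # Forward propagation: input layer -> hidden layer -> output layer
    a1 = np.tanh(W1 @ x + b1)   # hidden activation
    y = W2 @ a1 + b2            # linear output
    err = y - t                 # error at the output layer

    # Back propagation: return the error along the original connections
    # and adjust each layer's weights to reduce it.
    dW2 = err @ a1.T
    db2 = err
    d1 = (W2.T @ err) * (1.0 - a1**2)   # derivative of tanh
    dW1 = d1 @ x.T
    db1 = d1

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return 0.5 * err.item()**2          # squared error for this sample

# Repeat forward + backward passes until the error meets a preset requirement
x, t = np.array([[0.3]]), np.array([[1.2]])
for epoch in range(1000):
    if step(x, t) < 1e-4:
        break
```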

II. Simulating the BP Network with the Neural Network Toolbox

In this paper, the MATLAB 6.5 Neural Network Toolbox is used to build the model, taking as an example the dynamic impact test data of honeycomb paperboard with a thickness of 50 mm at a drop height of 40 cm. There are 13 groups of experimental data. The 10 groups that have a key influence on the shape of the curve are used as the training data for the network; the other 3 groups are used as test data to verify the network's prediction performance.
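
The arrangement of the data could be sketched as follows. The numerical values are placeholders rather than the measured data, and the choice of which 3 points serve as the test set is hypothetical.

```python
# Hypothetical layout of the 13 experimental points: static stress (inputs)
# vs. peak acceleration (targets), split into 10 training and 3 test points.
import numpy as np

static_stress = np.linspace(1.0, 13.0, 13)                 # placeholder inputs
peak_accel = 40 + 10 * (static_stress - 7.0)**2 / 36.0     # placeholder targets

test_idx = np.array([3, 7, 11])                            # hypothetical test points
train_idx = np.setdiff1d(np.arange(13), test_idx)          # remaining 10 for training

x_train, y_train = static_stress[train_idx], peak_accel[train_idx]
x_test,  y_test  = static_stress[test_idx],  peak_accel[test_idx]
```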

1. Establishing the BP network

When building a BP neural network, the network structure must first be determined according to the problem at hand, that is, the number of layers and the number of hidden-layer nodes must be chosen. Since the experimental data in this case are few, the most basic network with a single hidden layer can approximate the unknown function well. Choosing the number of hidden-layer nodes has always been a difficult problem in neural network applications: too many hidden nodes weaken the network's ability to generalize and make it easy for the network to fall into a local minimum that is hard to escape; too few hidden nodes make the network hard to train, leave it unable to recognize samples it has not seen before, and give it poor fault tolerance. In practice, a common approach is to train and compare networks with different numbers of neurons and select the number of hidden-layer nodes that gives the best performance. In this example, after extensive training and comparison, the number of hidden-layer nodes was finally set to 10. The hidden-layer transfer function is the tangent sigmoid function tansig, which can approximate arbitrary nonlinear functions; the output-layer neuron uses the linear function purelin, so the output can take any value. A 1-10-1 neural network model was thus established.
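
The 1-10-1 structure with a tansig hidden layer and a purelin output layer could be written out as in the sketch below. This is a NumPy illustration that mirrors the tansig/purelin pairing, not the toolbox code itself; the random initial weights stand in for the toolbox's own initialization.

```python
# Sketch of the 1-10-1 structure: one input (static stress), 10 hidden neurons
# with a tansig-like activation, and one linear output (maximum acceleration).
import numpy as np

rng = np.random.default_rng(1)

class BP_1_10_1:
    def __init__(self, n_hidden=10):
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, 1))
        self.b1 = np.zeros((n_hidden, 1))
        self.W2 = rng.normal(scale=0.5, size=(1, n_hidden))
        self.b2 = np.zeros((1, 1))

    def forward(self, x):
        """x: array of shape (1, n_samples); returns output of shape (1, n_samples)."""
        a1 = np.tanh(self.W1 @ x + self.b1)   # hidden layer, tansig equivalent
        return self.W2 @ a1 + self.b2         # output layer, purelin equivalent

net = BP_1_10_1()
print(net.forward(np.array([[0.5, 1.0, 1.5]])).shape)   # -> (1, 3)
```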

2. Training the BP network

The MATLAB Neural Network Toolbox provides three training functions that can be used with BP networks: trainbp, trainbpx, and trainlm. They are used in a similar way but employ different learning rules. The trainlm training function uses the Levenberg-Marquardt algorithm, which of the three requires the fewest iterations and trains fastest. Its disadvantage is that each iteration is more computationally expensive than with the other algorithms and therefore requires a large amount of storage, so it is impractical for problems with many parameters. Since the problem treated here has few parameters, the trainlm training function is adopted. The target error is set to 0.01 and the maximum number of training epochs to 10 000. After these parameters are set, training begins. The results show that the network reaches the target error of 0.01 after 32 training epochs, at which point training stops.
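
A training loop with the same stopping criteria (error goal 0.01, at most 10 000 epochs) might look like the sketch below. For brevity it uses plain batch gradient descent rather than the Levenberg-Marquardt rule of trainlm, and the training pairs are placeholders, not the measured 50 mm / 40 cm data.

```python
# Training-loop sketch: stop when the mean squared error reaches the goal
# or when the maximum number of epochs is exhausted.
import numpy as np

rng = np.random.default_rng(2)

# Placeholder training set: 10 (static stress, peak acceleration) pairs, scaled to ~[0, 1]
x = np.linspace(0.1, 1.0, 10).reshape(1, -1)
t = 0.4 + 2.5 * (x - 0.55)**2          # hypothetical target curve

# 1-10-1 network: hidden- and output-layer weights
W1, b1 = rng.normal(scale=0.5, size=(10, 1)), np.zeros((10, 1))
W2, b2 = rng.normal(scale=0.5, size=(1, 10)), np.zeros((1, 1))

goal, max_epochs, lr = 0.01, 10_000, 0.05
n = x.shape[1]

for epoch in range(max_epochs):
    # Forward pass over the whole training set
    a1 = np.tanh(W1 @ x + b1)
    y = W2 @ a1 + b2
    err = y - t
    mse = float(np.mean(err**2))
    if mse <= goal:                     # training stops once the error goal is met
        print(f"goal reached after {epoch} epochs, mse={mse:.4f}")
        break

    # Backward pass: gradients of the mean squared error
    dW2 = (err @ a1.T) / n
    db2 = err.mean(axis=1, keepdims=True)
    d1 = (W2.T @ err) * (1.0 - a1**2)
    dW1 = (d1 @ x.T) / n
    db1 = d1.mean(axis=1, keepdims=True)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```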

3. Testing the BP network

Because the initial weights are random, the result of each training run is different. After many training runs the best result is selected, and the weights and thresholds at that point are recorded. This fixed network can then be used to predict the maximum acceleration-static stress values at points not covered by the experiments. To check whether the network has good predictive ability, the three groups of test data were fed into the network. The results show that the average relative error between the predicted and measured data is 3.2766%, so the fit is quite accurate. A cushioning-characteristic model of honeycomb paperboard with a thickness of 50 mm at a drop height of 40 cm has thus been successfully established with a BP network.
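
The test step amounts to computing the average relative error between the network's predictions and the measured values at the three test points, as in the sketch below. The numbers are placeholders, not the article's data (which gave an average relative error of 3.2766%).

```python
# Average relative error between predictions and measurements at the test points.
import numpy as np

y_measured = np.array([52.0, 47.5, 61.0])    # hypothetical test measurements
y_predicted = np.array([50.8, 48.9, 62.7])   # hypothetical network outputs

relative_errors = np.abs(y_predicted - y_measured) / np.abs(y_measured)
print(f"average relative error: {100 * relative_errors.mean():.4f}%")
```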

This paper exploits the particular strengths of BP neural networks in nonlinear modeling, taking honeycomb paperboard with a thickness of 50 mm and a drop height of 40 cm as an example to build a neural network model. Tests show that the model can predict non-experimental data with high accuracy. However, two main problems were found while building the model. First, the samples are too few, which makes it difficult to reflect the characteristics of the system being modeled accurately and makes it easy for the network to become trapped in a local minimum during learning; the remedy is to add experimental points so as to increase the number of training samples. Second, the BP network itself has shortcomings: convergence can be very slow, and the network sometimes converges to a local minimum and cannot find the global minimum. In such cases other algorithms, such as simulated annealing or genetic algorithms, may be considered to help the network converge to a global minimum.

(Authors: Luo Guanglin, Wang Tiantian)

Source: Guangdong Packaging Magazine
