Optimization of the Backpropagation Method with Nguyen-Widrow in Face Image Classification

ABSTRACT
In this study, it is proven that the Nguyen-Widrow algorithm can optimize the Backpropagation method in terms of initializing the weights and biases. With the Nguyen-Widrow algorithm, the Backpropagation method can recognize facial images faster and with better accuracy. In testing with a hidden layer of 6 neurons, at a target error of 0.01, the standard Backpropagation method obtained an accuracy of 96%, while the optimized Backpropagation method obtained a higher accuracy of 98%. The same holds for hidden layers of 7, 8, 9, and 10 neurons. In other words, this research can be used to advance human technology in facilitating many aspects of life. With accurate and fast recognition of facial images, these results can support applications such as surveillance cameras, traffic protection, and many other uses in social life.

learning procedures. This algorithm takes advantage of error signals: the neural network modifies the weights of its synaptic connections to improve system performance (2014). The purpose of backpropagation is to modify the weights to train the neural network to correctly map inputs to outputs (Amrutha & Remya, 2018).
Neupane & Shakya, in their research detecting network intrusions with the Backpropagation method, explained that neural networks can characterize examples, and the results can be used as part of an intrusion detection system for attack classification. Backpropagation is well suited for training neural networks and is a promising element of such systems (Neupane & Shakya, 2017). Meanwhile, Kharola & Kumar explained that the Backpropagation method can learn very quickly and produce more accurate weather predictions (Kharola & Kumar, 2014).
The Backpropagation method requires initializing the initial weights for the training and testing process. Usually, the initial weights are determined randomly. This makes the Backpropagation method less efficient because training can take a very long time; therefore, the algorithm proposed for weight initialization in this study is Nguyen-Widrow. Nguyen-Widrow is a neural network weight initialization algorithm designed to reduce training time. During initialization, a small set of random values is assigned to the weights of the Backpropagation network (Wayahdi, Zarlis & Putra, 2019).
Wayahdi et al., in their research on temperature data classification, used the Nguyen-Widrow algorithm to initialize the weights in the Backpropagation method and found that Nguyen-Widrow weight initialization can speed up training by about a factor of two without reducing accuracy (Wayahdi, Zarlis & Putra, 2019). Meanwhile, Masood et al. explained that experimental statistical analysis strongly suggests that the Nguyen-Widrow weight initialization algorithm produces a minimum test error value and thus obtains the best results in most cases (Masood, Doja & Chandra, 2016).
Based on the explanations above, the Backpropagation neural network method will be optimized with the Nguyen-Widrow algorithm for weight initialization. The method will then be analyzed to determine whether it increases the accuracy and decreases the squared error of the facial image classification process. Accurate facial image classification can help improve accuracy in monitoring human faces and detecting data quickly. This research can also contribute to technological progress and provide benefits for social life.

RESEARCH METHODS
At this stage, an analysis of image extraction is carried out to gain knowledge about the algorithm and method being analyzed, namely the Nguyen-Widrow algorithm as a weight initialization for the Backpropagation neural network method. The process flow of this research is as follows:

Image Processing
Image processing begins by taking a sample of 150 face images. Each image is cropped to the main part of the face, resized to 100 x 100 pixels, and cut into several parts such as the eyes, nose, and lips (mouth). The parts are then extracted to produce mean values for the input variables (xi) and the target (y) as a dataset. An illustration of image processing, from taking pictures to producing a dataset, can be seen in the figure.
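The region-mean feature extraction described above can be sketched as follows. The crop coordinates are assumptions for illustration only; the paper specifies the 100 x 100 size and the facial regions but not exact pixel boxes.

```python
import numpy as np

# Illustrative crop boxes (top, bottom, left, right) on a 100x100 face image.
# The paper crops eyes/eyebrows (right and left), nose, and lips; these exact
# coordinates are assumptions, not values from the paper.
REGIONS = {
    "x1_right_eye": (25, 45, 5, 45),
    "x2_left_eye":  (25, 45, 55, 95),
    "x3_nose":      (40, 70, 35, 65),
    "x4_lips":      (70, 90, 25, 75),
}

def extract_features(face):
    """Mean grayscale intensity of each facial region -> [x1, x2, x3, x4]."""
    assert face.shape == (100, 100), "face must already be resized to 100x100"
    return [float(face[t:b, l:r].mean()) for (t, b, l, r) in REGIONS.values()]

# Demo on a synthetic image; a real pipeline would first load and resize a
# photo, e.g. with Pillow: np.array(Image.open(path).convert("L").resize((100, 100))).
face = np.linspace(0, 255, 100 * 100).reshape(100, 100)
x = extract_features(face)
print(x)  # four mean intensities, one per region
```

Each image thus collapses to a four-value feature vector, which is what keeps the network's input layer at 4 neurons.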

Data Cleaning & Preparation
There is a saying, "Garbage In, Garbage Out", which means that the resulting model will be bad if the input is bad. The data obtained still has many shortcomings, so it needs to be cleaned first. Typical tasks at the data cleaning stage include enforcing format consistency and data scale, and handling data duplication, missing values, and skewness. Data preparation follows once data cleaning is complete.
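A minimal sketch of the cleaning tasks listed above (duplicates, missing values, format consistency), using pandas on a toy feature table; the column names x1..x4 and y follow the paper, while the sample values and the mean-imputation choice are assumptions.

```python
import pandas as pd

# Toy extracted-feature table: one duplicated row and one missing value.
df = pd.DataFrame({
    "x1": [120.0, 120.0, None, 95.5],
    "x2": [110.0, 110.0, 88.0, 91.0],
    "x3": [130.0, 130.0, 99.0, 97.0],
    "x4": [101.0, 101.0, 85.0, 90.0],
    "y":  [1, 1, 2, 2],
})

df = df.drop_duplicates()                    # remove duplicated rows
df = df.fillna(df.mean(numeric_only=True))   # impute missing values with the column mean
df = df.astype({"y": int})                   # enforce a consistent target format
print(df.shape)
```

After cleaning, every row is unique and complete, so the table is ready for storage and normalization.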

Data Storage
Processed data is entered into a data store so that it can be processed again later. The tool used to support this is the Relational Database Management System (RDBMS) concept.
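As an illustration of this storage step, the sketch below uses SQLite; the paper only names the RDBMS concept, so the specific engine, table name, and schema here are assumptions.

```python
import sqlite3

# Store the prepared feature vectors (x1..x4) and target (y) in a relational table.
conn = sqlite3.connect(":memory:")  # a real deployment would use a file or server DB
conn.execute("""CREATE TABLE faces (
    id INTEGER PRIMARY KEY,
    x1 REAL, x2 REAL, x3 REAL, x4 REAL,  -- region mean intensities
    y  REAL                              -- classification target
)""")
rows = [(120.0, 110.0, 130.0, 101.0, 1.0),
        (95.5,  91.0,  97.0,  90.0,  2.0)]
conn.executemany("INSERT INTO faces (x1, x2, x3, x4, y) VALUES (?, ?, ?, ?, ?)", rows)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM faces").fetchone()[0])
```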

Data Classification
Before classification, the dataset is normalized first so that the distances between values are not significantly different. The normalization process is carried out using the formula

x' = (x - a) / (b - a)

where a is the minimum value of the dataset, b is the maximum value of the dataset, x is the data to be normalized, and x' is the result of normalization.
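The min-max normalization above maps the dataset minimum a to 0 and the maximum b to 1:

```python
def normalize(x, a, b):
    """Min-max normalization x' = (x - a) / (b - a), mapping [a, b] to [0, 1].
    a is the dataset minimum and b the dataset maximum, as in the paper."""
    return (x - a) / (b - a)

data = [12.0, 30.0, 48.0]
a, b = min(data), max(data)
print([normalize(x, a, b) for x in data])  # [0.0, 0.5, 1.0]
```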
Initial values are then set: the number of hidden layer neurons, the initial input-to-hidden-layer weights, the initial bias-to-hidden-layer weights, the initial hidden-layer-to-output-layer weights, the initial bias-to-output-layer weights, the learning rate, the maximum epoch, and the target error, with the weights chosen randomly for the Backpropagation classification process. The Nguyen-Widrow algorithm is then applied for weight initialization as an effort to optimize the Backpropagation method in the image classification process. In this study, the dataset is divided into training data and testing data. The artificial neural network model used in this study can be seen in Figure 2. As shown in Figure 2, the model is built with 4 neurons in the input layer, 6 neurons in the hidden layer, and 1 neuron in the output layer. The number of neurons in the hidden layer can be modified or varied; testing will be carried out with various numbers of hidden layer neurons to obtain the best model.
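A sketch of the Nguyen-Widrow initialization applied to the input-to-hidden weights of the 4-6-1 network in Figure 2: small random weights are drawn first, then each hidden neuron's weight vector is rescaled to the norm beta = 0.7 * h^(1/n) (h hidden neurons, n inputs), with biases drawn uniformly from [-beta, beta].

```python
import numpy as np

def nguyen_widrow(n_inputs, n_hidden, rng):
    """Nguyen-Widrow initialization of input-to-hidden weights and biases."""
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)              # scale factor
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))  # small random weights
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    w = beta * w / norms                                   # rescale each neuron's weight vector to norm beta
    b = rng.uniform(-beta, beta, size=n_hidden)            # biases in [-beta, beta]
    return w, b

# Network shape from Figure 2: 4 inputs, 6 hidden neurons.
w, b = nguyen_widrow(4, 6, np.random.default_rng(0))
print(np.linalg.norm(w, axis=1))  # every row norm equals beta = 0.7 * 6**(1/4)
```

This spreads the hidden neurons' active regions over the input space, which is what shortens training compared with purely random weights.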

Analysis
After the classification process, the next stage is analysis, in which the image classification results of the Backpropagation method with random weights are compared with those of the Backpropagation method with weights initialized using the Nguyen-Widrow algorithm. At this stage, it can be seen whether Nguyen-Widrow weight initialization can speed up the training process, improve accuracy, and reduce the squared error.

RESULTS AND DISCUSSION
This section describes the results and discussion of optimizing the Backpropagation neural network method with the Nguyen-Widrow algorithm in facial image classification. The analysis process begins by selecting the training images in the dataset. The dataset is then trained according to the initial values, such as the weights and biases. Training is repeated for the specified number of epochs or until the specified target error is reached, and the new weight and bias values are saved for testing.

The test uses 10 test data with a varied number of hidden layer neurons, namely 6, 7, 8, 9, and 10 neurons, and target errors of 0.05, 0.04, 0.03, 0.02, and 0.01. This applies to both random weight initialization and Nguyen-Widrow initialization; in total, the test data were tested 500 times. The values used in the input layer are the eyes and eyebrows (right and left), the nose, and the lips (mouth), denoted x1, x2, x3, and x4, while the output layer (the target) is denoted y.

The results of training with learning rate = 1, max. epoch = 10,000, target error = 0.05 to 0.01, and hidden layers of 6 to 10 neurons can be seen in Table 1. Table 1 compares the number of epochs of the standard Backpropagation method and of Backpropagation optimized with Nguyen-Widrow for hidden layers of 6 to 10 neurons. The training showed that the smaller the target error, the greater the number of epochs. It can also be seen that Backpropagation with Nguyen-Widrow requires fewer epochs than the standard Backpropagation method. After the training process is complete, the resulting weights are saved and used as the initial weights in the testing process. The comparison of the number of epochs in the training process is visualized in graphical form in Figure 3.
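The training loop described above (train until the mean squared error drops below the target error or the maximum epoch is reached) can be sketched as a single-hidden-layer sigmoid network. The paper's settings are lr = 1 and max_epoch = 10,000; the tiny logical-OR dataset below stands in for the face features, so the epoch counts it produces are purely illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, n_hidden=6, lr=1.0, max_epoch=10000, target_error=0.01, seed=0):
    """Batch backpropagation with random initial weights; returns (epochs, mse)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.uniform(-0.5, 0.5, (n_in, n_hidden)); b1 = rng.uniform(-0.5, 0.5, n_hidden)
    W2 = rng.uniform(-0.5, 0.5, (n_hidden, 1));    b2 = rng.uniform(-0.5, 0.5, 1)
    for epoch in range(1, max_epoch + 1):
        h = sigmoid(X @ W1 + b1)               # forward pass
        out = sigmoid(h @ W2 + b2)
        err = y - out
        mse = float((err ** 2).mean())
        if mse <= target_error:                # stop at the target error
            return epoch, mse
        d_out = err * out * (1 - out)          # backward pass (sigmoid derivative)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 += lr * h.T @ d_out; b2 += lr * d_out.sum(0)
        W1 += lr * X.T @ d_h;   b1 += lr * d_h.sum(0)
    return max_epoch, mse

# Toy stand-in dataset (logical OR) instead of the 150-image face dataset.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [1.0]])
epochs, mse = train(X, y, n_hidden=6)
print(epochs, mse)
```

Swapping the random `W1`/`b1` above for Nguyen-Widrow-initialized values is the optimization the paper evaluates; the rest of the loop is unchanged.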

Figure 3. Visualization of Comparison of Epoch and Error
Figure 3 compares the number of epochs and the errors of the standard Backpropagation method and of the Backpropagation method optimized with the Nguyen-Widrow algorithm. The number of epochs obtained by the optimized method is smaller than that of the standard method. The test results for hidden layers of 6 to 10 neurons can be seen in Figure 4. Figure 4 compares the accuracy of the standard Backpropagation method with that of the optimized Backpropagation (with Nguyen-Widrow). Backpropagation with Nguyen-Widrow achieves a higher accuracy rate over the 500 tests performed. With a hidden layer of 10 neurons, the Backpropagation method with Nguyen-Widrow obtains an accuracy of 100%. This shows that the Nguyen-Widrow algorithm is highly recommended for weight initialization in the Backpropagation neural network method.

CONCLUSION
Based on the results of testing and analysis of the optimization of the Backpropagation neural network method with the Nguyen-Widrow algorithm in facial image classification, it is proven that the Backpropagation method can recognize facial patterns quite well. The Nguyen-Widrow algorithm can optimize the Backpropagation method in terms of initializing the weights and biases. With the Nguyen-Widrow algorithm, the Backpropagation method can recognize facial images faster and with better accuracy.
With a hidden layer of 6 neurons, at a target error of 0.01, the standard Backpropagation method recognized the training patterns at epoch 2,213, while the optimized Backpropagation method recognized them at epoch 909. In testing with a 6-neuron hidden layer, at a target error of 0.01, the standard Backpropagation method obtained an accuracy of 96%, while the optimized Backpropagation method obtained a higher accuracy of 98%. Likewise, with hidden layers of 7, 8, 9, and 10 neurons, the optimized Backpropagation method recognized the training patterns more quickly and yielded higher testing accuracy.
Thus, an accuracy of 98% is high enough to capture and monitor facial images with minor error, so the method can be used in security cameras to help people, such as security cameras for traffic protection, CCTV, and others.