AN IMPROVED DIFFERENTIAL EVOLUTION ALGORITHM WITH ADAPTIVE WEIGHT BOUNDS FOR EFFICIENT TRAINING OF NEURAL NETWORKS

Saithip Limtrakul


Khon Kaen University, Faculty of Science, Department of Mathematics (Thailand)
http://orcid.org/0000-0002-7207-6640

Jeerayut Wetweerapong

wjeera@kku.ac.th
Khon Kaen University, Faculty of Science, Department of Mathematics (Thailand)
http://orcid.org/0000-0001-5053-3989

Abstract

Artificial neural networks are essential intelligent tools for various learning tasks. Training them is challenging because of the nature of the data set and the many training weights and their dependencies, which give rise to a complicated, high-dimensional error function to be minimized. Global optimization methods have therefore become an alternative approach. Many variants of differential evolution (DE) have been applied as training methods to approximate the weights of a neural network. However, empirical studies show that they suffer from generally fixed weight bounds. In this study, we propose an improved differential evolution algorithm with adaptive weight-bound adjustment (DEAW) for the efficient training of neural networks. The DEAW algorithm uses small initial weight bounds and adapts them during the mutation process: it gradually extends the bounds whenever a component of a mutant vector reaches its limits. We also experiment with several scales of the activation function in combination with the DEAW algorithm. We then apply the proposed method, with its suitable settings, to solve function approximation problems. DEAW achieves satisfactory results compared with the exact solutions.
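The abstract outlines the core mechanism: start the search from small weight bounds and widen them adaptively whenever a mutant component reaches a bound. The Python sketch below illustrates that idea for a standard DE/rand/1 mutation; the parameter names (`F`, `expand`), the population layout, and the symmetric initial bounds are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def deaw_mutation(pop, lo, hi, F=0.5, expand=2.0):
    """DE/rand/1 mutation with adaptive weight-bound expansion (sketch).

    pop: (n_pop, dim) array of weight vectors (candidate networks).
    lo, hi: (dim,) arrays of current weight bounds, assumed to start
    small and symmetric around zero, e.g. [-1, 1]. They are widened
    in place whenever a mutant component escapes them.
    """
    n_pop, dim = pop.shape
    mutants = np.empty_like(pop)
    rng = np.random.default_rng()
    for i in range(n_pop):
        # pick three distinct individuals, all different from i
        choices = [j for j in range(n_pop) if j != i]
        r1, r2, r3 = rng.choice(choices, size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # adaptive step: widen the bounds where a component hits them
        lo[v < lo] *= expand   # lo < 0, so multiplying widens downward
        hi[v > hi] *= expand
        mutants[i] = np.clip(v, lo, hi)
    return mutants
```

In a complete trainer, these mutant vectors would then pass through the usual DE crossover and selection steps, scored by a network error measure such as the mean squared error on the training set; the gradual expansion lets the search begin in a small, well-conditioned weight region and grow it only where the optimization demands larger weights.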


Keywords:

neural network, differential evolution, neural network training, function approximation



Published
2023-03-31


Limtrakul, S., & Wetweerapong, J. (2023). An improved differential evolution algorithm with adaptive weight bounds for efficient training of neural networks. Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska, 13(1), 4–13. https://doi.org/10.35784/iapgos.3366
