Abstract
Neuro-Evolution is a field of study that has recently gained significant traction in the deep learning community. It combines deep neural networks with evolutionary algorithms to improve and/or automate the construction of neural networks. Recent Neuro-Evolution approaches have shown promising results, rivaling hand-crafted neural networks in terms of accuracy.
A two-step approach is introduced where a convolutional autoencoder is created that efficiently compresses the input data in the first step, and a convolutional neural network is created to classify the compressed data in the second step. The creation of networks in both steps is guided by an evolutionary process, where new networks are continually generated by mutating members of a collection of existing networks. Additionally, a method is introduced that considers the trade-off between compression and information loss of different convolutional autoencoders. This is used to select the optimal convolutional autoencoder from among those evolved to compress the data for the second step.
The complete framework is implemented, tested on the popular CIFAR-10 data set, and the results are discussed. Finally, a number of possible directions for future work on this framework are considered, including opportunities to improve its efficiency and to apply it in particular areas.
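As a rough illustration of the evolutionary loop and the compression/information-loss trade-off described above, the following Python sketch evolves toy autoencoder "genomes" (lists of layer widths) by mutation and keeps the individuals with the best weighted trade-off score. The genome encoding, the placeholder evaluation, and the weighting parameter `alpha` are illustrative assumptions, not the method used in the paper.

```python
import random

def mutate(genome):
    """Randomly perturb one layer width of an autoencoder genome (assumed encoding)."""
    child = genome[:]
    i = random.randrange(len(child))
    child[i] = max(4, child[i] + random.choice([-8, 8]))
    return child

def evaluate(genome, input_dim=3072):
    """Placeholder evaluation: input_dim matches a flattened CIFAR-10 image (32*32*3).
    A smaller bottleneck means better compression but, in this toy model, more
    assumed information loss; a real framework would train the autoencoder and
    measure reconstruction error instead."""
    bottleneck = min(genome)
    compression = bottleneck / input_dim      # fraction of the original size
    info_loss = 1.0 / (1.0 + bottleneck)      # stand-in for reconstruction error
    return compression, info_loss

def tradeoff_score(compression, info_loss, alpha=0.5):
    """Lower is better: weighted sum of compression ratio and information loss."""
    return alpha * compression + (1 - alpha) * info_loss

# Start from identical genomes; mutation diversifies the collection over time.
population = [[256, 128, 64] for _ in range(8)]
for generation in range(20):
    parent = random.choice(population)
    population.append(mutate(parent))
    # Keep the collection bounded by dropping the worst-scoring individual.
    population.sort(key=lambda g: tradeoff_score(*evaluate(g)))
    population = population[:8]

best = population[0]
print("selected autoencoder genome:", best)
```

The selected genome would then define the encoder that compresses the data fed to the second, classification step; in the paper this selection is driven by the evolved autoencoders' actual compression and information loss rather than the toy surrogate used here.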
Original language | English |
---|---|
Title of host publication | Machine Learning, Optimization, and Data Science - 4th International Conference, LOD 2018, Revised Selected Papers |
Editors | Giuseppe Nicosia, Giovanni Giuffrida, Panos Pardalos, Vincenzo Sciacca, Renato Umeton |
Place of Publication | Cham |
Publisher | Springer |
Pages | 293-304 |
Number of pages | 12 |
ISBN (Electronic) | 978-3-030-13709-0 |
ISBN (Print) | 978-3-030-13708-3 |
DOIs | |
Publication status | Published - Feb 2019 |
Event | 4th International Conference on Machine Learning, Optimization, and Data Science, LOD 2018 - Volterra, Italy |
Duration | 13 Sep 2018 → 16 Sep 2018 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 11331 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 4th International Conference on Machine Learning, Optimization, and Data Science, LOD 2018 |
---|---|
Country | Italy |
City | Volterra |
Period | 13/09/18 → 16/09/18 |
Keywords
- Convolutional autoencoders
- Convolutional neural networks
- Genetic algorithms
- Neuro-evolution