TY - GEN
T1 - Methods for Prediction Optimization of the Constrained State-Preserved Extreme Learning Machine
AU - Goodman, Garrett
AU - Hirt, Quinn
AU - Shimizu, Cogan
AU - Ktistakis, Iosif Papadakis
AU - Alamaniotis, Miltiadis
AU - Bourbakis, Nikolaos
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11
Y1 - 2020/11
N2 - Finding the maximum testing accuracy in Machine Learning has been the goal since its conception. From this goal, neural networks have been the primary source of continual improvements in prediction performance. Traditionally, backpropagation has been the primary way of training neural networks and the Levenberg-Marquardt (LM) backpropagation has become the fastest method. Recently, the Extreme Learning Machine was introduced which randomizes weights and biases of hidden layers and uses the Moore-Penrose generalized inverse of a matrix to calculate the output weights and biases, providing competitive results at significantly faster training times. In this study, we continue our work on the Constrained State-Preserved Extreme Learning Machine (CSPELM) with a Forest optimization (CSPELMF) and ε-constraint Rangefinder (CSPELMR). Furthermore, we provide hyper-parameter settings for the CSPELM to optimize accuracy over training time. Our results show that our methods outperformed the LM backpropagation in a majority of the 13 tested datasets and that the CSPELMF and CSPELMR matched or outperformed the CSPELM in all classification datasets.
AB - Finding the maximum testing accuracy in Machine Learning has been the goal since its conception. From this goal, neural networks have been the primary source of continual improvements in prediction performance. Traditionally, backpropagation has been the primary way of training neural networks and the Levenberg-Marquardt (LM) backpropagation has become the fastest method. Recently, the Extreme Learning Machine was introduced which randomizes weights and biases of hidden layers and uses the Moore-Penrose generalized inverse of a matrix to calculate the output weights and biases, providing competitive results at significantly faster training times. In this study, we continue our work on the Constrained State-Preserved Extreme Learning Machine (CSPELM) with a Forest optimization (CSPELMF) and ε-constraint Rangefinder (CSPELMR). Furthermore, we provide hyper-parameter settings for the CSPELM to optimize accuracy over training time. Our results show that our methods outperformed the LM backpropagation in a majority of the 13 tested datasets and that the CSPELMF and CSPELMR matched or outperformed the CSPELM in all classification datasets.
KW - Accuracy Optimization
KW - Constrained State-Preserved Extreme Learning Machine
KW - Extreme Learning Machine
KW - Machine Learning
KW - Neural Networks
KW - Training
UR - https://corescholar.libraries.wright.edu/cse/738
UR - https://www.scopus.com/pages/publications/85098785437
U2 - 10.1109/ICTAI50040.2020.00103
DO - 10.1109/ICTAI50040.2020.00103
M3 - Conference contribution
AN - SCOPUS:85098785437
T3 - Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
SP - 639
EP - 646
BT - Proceedings - IEEE 32nd International Conference on Tools with Artificial Intelligence, ICTAI 2020
A2 - Alamaniotis, Miltos
A2 - Pan, Shimei
PB - IEEE Computer Society
T2 - 32nd IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2020
Y2 - 9 November 2020 through 11 November 2020
ER -