TY - GEN
T1 - Constrained State-Preserved Extreme Learning Machine
AU - Goodman, Garrett
AU - Shimizu, Cogan
AU - Papadakis Ktistakis, Iosif
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Reducing the training time of neural networks is a primary focus of research in the field of machine learning. Currently, the Levenberg-Marquardt (LM) method is one of the fastest backpropagation methods. An increasingly popular alternative to LM backpropagation is the Extreme Learning Machine (ELM), which produces a closed-form optimization of a Single Layer Feedforward Network (SLFN) for an initially randomized input weight matrix. In this study, we further extend the performance of the ELM by building incrementally on the state of the art, the State-Preserved ELM (SPELM), to produce a Constrained SPELM (CSPELM). To do so, we introduce a constraint that randomly perturbs the input weight matrix after each training cycle, providing a honing mechanism in the search for a better local optimum. We evaluated CSPELM on 13 benchmark datasets, both categorical and continuous. On 8 of the 13 datasets, CSPELM was the best-performing model with respect to average accuracy and RMSE, outperforming the ELM, SPELM, and LM methods, while reaching a maximum total training time of only 195.10 seconds in a single case. Overall, CSPELM achieved more consistent and higher accuracy than the ELM and SPELM and results competitive with or better than LM, with a training time of only approximately 10% of that of traditional LM backpropagation.
AB - Reducing the training time of neural networks is a primary focus of research in the field of machine learning. Currently, the Levenberg-Marquardt (LM) method is one of the fastest backpropagation methods. An increasingly popular alternative to LM backpropagation is the Extreme Learning Machine (ELM), which produces a closed-form optimization of a Single Layer Feedforward Network (SLFN) for an initially randomized input weight matrix. In this study, we further extend the performance of the ELM by building incrementally on the state of the art, the State-Preserved ELM (SPELM), to produce a Constrained SPELM (CSPELM). To do so, we introduce a constraint that randomly perturbs the input weight matrix after each training cycle, providing a honing mechanism in the search for a better local optimum. We evaluated CSPELM on 13 benchmark datasets, both categorical and continuous. On 8 of the 13 datasets, CSPELM was the best-performing model with respect to average accuracy and RMSE, outperforming the ELM, SPELM, and LM methods, while reaching a maximum total training time of only 195.10 seconds in a single case. Overall, CSPELM achieved more consistent and higher accuracy than the ELM and SPELM and results competitive with or better than LM, with a training time of only approximately 10% of that of traditional LM backpropagation.
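N1 - The mechanism the abstract describes can be sketched in a few lines of NumPy: a closed-form ELM solve, state preservation of the best input weights seen so far, and a random perturbation of that preserved state each cycle. This is a minimal illustrative sketch, not the authors' implementation; the tanh activation, the perturbation scale sigma, and the cycle count are assumptions for the example.

    import numpy as np

    def elm_fit(X, T, W, b):
        # Hidden-layer activations for fixed random input weights W and biases b.
        H = np.tanh(X @ W + b)
        # Closed-form least-squares solve for the output weights (the ELM step).
        beta = np.linalg.pinv(H) @ T
        return beta

    def cspelm_train(X, T, n_hidden, cycles=50, sigma=0.05, seed=None):
        # Sketch of the CSPELM loop as described in the abstract: preserve the
        # best-performing input weight matrix (SPELM) and perturb it after each
        # training cycle (the constraint). cycles and sigma are assumed values.
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        best_W, best_err = W, np.inf
        for _ in range(cycles):
            beta = elm_fit(X, T, W, b)
            pred = np.tanh(X @ W + b) @ beta
            err = np.sqrt(np.mean((pred - T) ** 2))  # RMSE on the training data
            if err < best_err:
                best_W, best_err = W.copy(), err  # state preservation
            # Constraint: small random perturbation of the preserved best state,
            # honing the search for a better local optimum.
            W = best_W + sigma * rng.standard_normal(best_W.shape)
        return best_W, b, elm_fit(X, T, best_W, b)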
KW - Extreme Learning Machine (ELM)
KW - Machine Learning (ML)
KW - Neural Networks (NN)
KW - Single Layer Feedforward Network (SLFN)
KW - Training
UR - https://www.scopus.com/pages/publications/85081082604
UR - https://www.scopus.com/pages/publications/85081082604#tab=citedBy
UR - https://corescholar.libraries.wright.edu/cse/747
U2 - 10.1109/ICTAI.2019.00109
DO - 10.1109/ICTAI.2019.00109
M3 - Conference contribution
AN - SCOPUS:85081082604
T3 - Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
SP - 752
EP - 759
BT - Proceedings - IEEE 31st International Conference on Tools with Artificial Intelligence, ICTAI 2019
PB - IEEE Computer Society
T2 - 31st IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2019
Y2 - 4 November 2019 through 6 November 2019
ER -