A Novel Algorithm to Train Multilayer Hardlimit Neural Networks Based on a Mixed Integer Linear Program Model

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

In a previous work we showed that hardlimit multilayer neural networks have more computational power than sigmoidal multilayer neural networks [1]. In 1969 Minsky and Papert showed the limitations of the single perceptron, which can only solve linearly separable classification problems, and since at that time there was no algorithm to find the weights of a multilayer hardlimit perceptron, research on neural networks stagnated until the early eighties, when the Backpropagation algorithm was invented [2]. Nevertheless, since the sixties there have been proposals of algorithms to implement logical functions with threshold elements, or hardlimit neurons, that could have been adapted to classification problems with multilayer hardlimit perceptrons; in this way the stagnation of research on neural networks could have been avoided. Although the problem of training a hardlimit neural network is NP-complete, our algorithm, based on mathematical programming, namely a mixed integer linear programming (MILP) model, takes only a few seconds to train the two-input XOR function and a simple logical function of three variables with two minterms. Since any linearly separable logical function can be implemented by a perceptron with integer weights, by varying them between -1 and 1 we found all 10 possible solutions for the implementation of the two-input XOR function, and all 14 and 18 possible solutions, respectively, for the implementation of two logical functions of three variables, using a two-layer architecture with two neurons in the first layer. We describe our MILP model and show why it consumes a lot of computational resources: even a small hardlimit neural network translates into a MILP model larger than 1 GB, implying the use of a more powerful computer than a common 32-bit PC.
We consider the reduction of computational resources the main objective of near-future work to improve our novel MILP model, and we will also try a nonlinear version of our algorithm, based on a MINLP model, which should consume less memory.
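The exhaustive search over integer weights described in the abstract can be sketched directly. The code below is an illustration, not the paper's MILP model: it brute-forces all integer weight and bias assignments in {-1, 0, 1} for a 2-2-1 hardlimit network and keeps those that reproduce XOR. The threshold convention (fire on a non-negative weighted sum) and the inclusion of biases as free parameters are assumptions, so the raw count includes symmetric duplicates (e.g. swapping the two hidden neurons) and need not match the 10 solutions the paper reports.

```python
from itertools import product

def step(s):
    """Hardlimit activation: fires when the weighted sum is non-negative
    (one common threshold convention; the paper's exact convention may differ)."""
    return 1 if s >= 0 else 0

def network(params, x1, x2):
    """Two-layer hardlimit network: 2 inputs -> 2 hidden neurons -> 1 output.
    params = (w11, w12, b1, w21, w22, b2, v1, v2, c), all integers in {-1, 0, 1}."""
    w11, w12, b1, w21, w22, b2, v1, v2, c = params
    h1 = step(w11 * x1 + w12 * x2 + b1)
    h2 = step(w21 * x1 + w22 * x2 + b2)
    return step(v1 * h1 + v2 * h2 + c)

# Truth table of the two-input XOR function.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Enumerate all 3^9 integer weight/bias assignments and keep those
# that reproduce XOR on every input pattern.
solutions = [p for p in product((-1, 0, 1), repeat=9)
             if all(network(p, x1, x2) == y for (x1, x2), y in XOR.items())]

print(f"{len(solutions)} weight assignments implement XOR")
```

Enumeration is feasible here only because the network is tiny (3^9 = 19683 candidates); the MILP formulation is what makes the search tractable for larger networks, at the cost of the model size discussed above.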
Original language: English
Title of host publication: ADVANCES IN COMPUTATIONAL INTELLIGENCE, PT II
Editors: I. Rojas, G. Joya, A. Catala
Place of Publication: Germany
Publisher: Springer-Verlag
Pages: 477-487
ISBN (Electronic): 978-3-319-19221-5
ISBN (Print): 978-3-319-19222-2
Publication status: Published - Jun 2015
Event: 13th International Work-Conference on Artificial Neural Networks, IWANN 2015 - Palma de Mallorca, Spain
Duration: 10 Jun 2015 - 12 Jun 2015
Conference number: 13th

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer-Verlag
Volume: 9095
ISSN (Print): 0302-9743

Conference

Conference: 13th International Work-Conference on Artificial Neural Networks, IWANN 2015
Abbreviated title: IWANN 2015
Country: Spain
City: Palma de Mallorca
Period: 10/06/15 - 12/06/15

Keywords

  • Hardlimit neural networks
  • Mixed integer linear programming
  • Training a hardlimit neural network with a MILP model
  • Solving a MILP model with the CPLEX solver

