Every algorithm runs 50 times; each test uses a random split of the data, and the average values are recorded and listed in Table 2.

Table 2: The comparison of the performance of each algorithm on the wine data set.

4.4. Results

Tables 1 and 2 illustrate that, in terms of training success rate (the number of successful runs out of 50 training runs), the GA-optimized RBF algorithms are superior to the traditional RBF algorithm. In terms of training error and test error, the RBF and GA-RBF-L algorithms are equivalent to, or slightly better than, the GA-RBF algorithm. In terms of operation time, the GA-optimized RBF algorithms take slightly longer, because running the genetic algorithm requires extra time. In terms of recognition precision, the GA-RBF-L algorithm achieves the best classification precision.
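As a concrete illustration of this evaluation protocol, the following is a minimal sketch in Python/NumPy. The 70/30 train/test split, the metric names, and the train_and_test callback are assumptions made for illustration, not details taken from the experiments above.

    import numpy as np

    def evaluate(train_and_test, data, labels, runs=50, seed=0):
        """Run an algorithm `runs` times on random splits and average the metrics.

        `train_and_test(X_tr, y_tr, X_te, y_te)` is a hypothetical callback
        assumed to return a dict with keys 'train_err', 'test_err',
        'precision', and 'success' (whether training converged).
        """
        rng = np.random.default_rng(seed)
        records = []
        for _ in range(runs):
            idx = rng.permutation(len(data))          # random split each run
            cut = int(0.7 * len(data))                # assumed 70/30 split
            tr, te = idx[:cut], idx[cut:]
            records.append(train_and_test(data[tr], labels[tr], data[te], labels[te]))
        return {
            "success_count": sum(r["success"] for r in records),
            "avg_train_err": float(np.mean([r["train_err"] for r in records])),
            "avg_test_err": float(np.mean([r["test_err"] for r in records])),
            "avg_precision": float(np.mean([r["precision"] for r in records])),
        }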
5. Conclusion and Discussion

In this paper, we propose a new algorithm that uses a GA to optimize the structure of the RBF neural network (the hidden-layer neurons) and the connection weights simultaneously, and then uses the LMS method to adjust the network further. The new algorithm optimizes the number of hidden neurons and, at the same time, fully optimizes the connection weights. It spends more running time on the genetic optimization, but it reduces the time spent constructing the network. The analysis of the two experiments shows that the new algorithm greatly improves the generalization capability, operational efficiency, and classification precision of the RBF neural network.
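To make the overall procedure concrete, here is a minimal sketch of the GA stage in Python/NumPy. It assumes a shared Gaussian width, centers drawn at random from the training samples, and a chromosome that concatenates a binary neuron mask with real-valued output weights; the tournament selection, single-point crossover, and Gaussian mutation shown here are generic stand-ins rather than the exact operators used in our experiments.

    import numpy as np

    rng = np.random.default_rng(0)

    MAX_HIDDEN = 20  # assumed upper bound on hidden neurons
    SIGMA = 1.0      # assumed shared Gaussian width

    def rbf_features(X, centers):
        # Gaussian activations: phi[i, j] = exp(-||x_i - c_j||^2 / (2 * sigma^2))
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * SIGMA ** 2))

    def decode(chrom):
        # First MAX_HIDDEN genes: mask selecting the active hidden neurons;
        # remaining genes: output weights for every candidate neuron.
        mask = chrom[:MAX_HIDDEN] > 0.5
        weights = chrom[MAX_HIDDEN:].reshape(MAX_HIDDEN, -1)
        return mask, weights

    def fitness(chrom, centers, X, Y):
        mask, W = decode(chrom)
        if not mask.any():
            return -np.inf  # a network with no hidden neurons is invalid
        Phi = rbf_features(X, centers[mask])
        return -((Phi @ W[mask] - Y) ** 2).mean()  # lower error = higher fitness

    def ga_optimize(X, Y, pop=30, gens=500, pc=0.8, pm=0.05):
        n_out = Y.shape[1]
        centers = X[rng.choice(len(X), MAX_HIDDEN, replace=False)]
        dim = MAX_HIDDEN + MAX_HIDDEN * n_out
        P = rng.uniform(-1.0, 1.0, (pop, dim))
        for _ in range(gens):
            f = np.array([fitness(c, centers, X, Y) for c in P])
            # binary tournament selection
            pairs = rng.integers(0, pop, (pop, 2))
            P = P[np.where(f[pairs[:, 0]] > f[pairs[:, 1]], pairs[:, 0], pairs[:, 1])]
            # single-point crossover on consecutive pairs
            for i in range(0, pop - 1, 2):
                if rng.random() < pc:
                    cut = rng.integers(1, dim)
                    P[i, cut:], P[i + 1, cut:] = P[i + 1, cut:].copy(), P[i, cut:].copy()
            # Gaussian mutation
            mut = rng.random(P.shape) < pm
            P[mut] += rng.normal(0.0, 0.1, mut.sum())
        f = np.array([fitness(c, centers, X, Y) for c in P])
        mask, W = decode(P[f.argmax()])
        return centers[mask], W[mask]  # the evolved structure and weights

The returned centers and weights define a GA-RBF network; GA-RBF-L then refines the weights with LMS, as discussed below.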
The network structure affects the generalization capability of the algorithm. Comparing RBF, GA-RBF, and GA-RBF-L, although the RBF algorithm obtains a small training error, its recognition precision is not as good as that of the GA-RBF-L algorithm, even though the latter uses fewer hidden-layer neurons. The genetic algorithm is effective for evolving the network structure: it can find a better structure, but it is not good at optimizing the connection weights. After 500 generations of iteration, the downtrend of the training error slows, so we use the LMS method to adjust the weights further and thereby obtain the optimal algorithm, as sketched below. The new algorithm is self-adaptive and intelligent, yields a precise model, and is worthy of further promotion.
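A minimal, self-contained sketch of this LMS refinement step follows, again in Python/NumPy; the learning rate, epoch count, and per-sample delta-rule update are illustrative assumptions, not the tuned values from the experiments.

    import numpy as np

    def lms_refine(centers, W, X, Y, sigma=1.0, eta=0.01, epochs=200):
        # Delta-rule (LMS) refinement of the output weights found by the GA:
        # one gradient step on the squared output error per training sample.
        for _ in range(epochs):
            for x, y in zip(X, Y):
                # Gaussian hidden-layer activations for one sample
                phi = np.exp(-((centers - x) ** 2).sum(axis=1) / (2 * sigma ** 2))
                e = y - phi @ W              # output error
                W += eta * np.outer(phi, e)  # LMS weight update
        return W

Applying lms_refine to the GA output corresponds to the GA-RBF-L variant evaluated above.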
Acknowledgments

This work is supported by the National Nature Science Foundation of China (nos. 60875052, 61203014, and 61379101); the Priority Academic Program Development of Jiangsu Higher Education Institutions; Major Projects in the National Science & Technology Pillar Program during the Twelfth Five-Year Plan Period (no. 2011BAD20B06); the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20133227110024); and the Ordinary University Graduate Student Research Innovation Projects of Jiangsu Province (no. KYLX 14_1062).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
