Using Neural Networks in Reliability Prediction. NACHIMUTHU KARUNANITHI, DARRELL WHITLEY, and YASHWANT K. MALAIYA, Colorado State University


The neural-network model requires only failure history as input and predicts future failures more accurately than some analytic models. But the approach is very new.

In software-reliability research, the concern is how to develop general prediction models. Existing models typically rely on assumptions about development environments, the nature of software failures, and the probability of individual failures occurring. Because all these assumptions must be made before the project begins, and because many projects are unique, the best you can hope for is statistical techniques that predict failure on the basis of failure data from similar projects. These models are called reliability-growth models because they predict when reliability has grown enough to warrant product release.

Because reliability-growth models exhibit different predictive capabilities at different testing phases, both within a project and across projects, researchers are finding it nearly impossible to develop a universal model that will provide accurate predictions under all circumstances. A possible solution is to develop models that don't require making assumptions about either the development environment or external parameters.

Recent advances in neural networks show that they can be used in applications that involve predictions. An interesting and difficult application is time-series prediction, which predicts a complex sequential process like reliability growth. One drawback of neural networks is that you can't interpret the knowledge stored in their weights in simple terms that are directly related to software metrics, which is something you can do with some analytic models. Neural-network models have a significant advantage over analytic models, though, because they require only failure history as input, no assumptions. Using that input, the neural-network model automatically develops its own internal model of the failure process and predicts

future failures. Because it adjusts model complexity to match the complexity of the failure history, it can be more accurate than some commonly used analytic models. In our experiments, we found this to be true.

TAILORING NEURAL NETWORKS FOR PREDICTION

Reliability prediction can be stated in the following way. Given a sequence of cumulative execution times (i_1, ..., i_k) ∈ I_k(t) and the corresponding observed accumulated faults (o_1, ..., o_k) ∈ O_k(t) up to the present time t, and the cumulative execution time i_{k+h}(t+Δ) at the end of a future test session k+h, predict the corresponding cumulative faults o_{k+h}(t+Δ). For the prediction horizon h=1, the prediction is called the next-step prediction (also known as short-term prediction); for h = n (n ≥ 2) consecutive test intervals, it is known as the n-step-ahead prediction, or long-term prediction. A type of long-term prediction is endpoint prediction, which involves predicting an output for some future fixed point in time. In endpoint prediction, the prediction window becomes shorter as you approach the fixed point of interest. Here

Δ = Σ_{j=k+1}^{k+h} δ_j

where δ_j is the execution time of test session j, represents the cumulative execution time of h consecutive future test sessions. You can use Δ to predict the number of accumulated faults after some specified amount of testing. From the predicted accumulated faults, you can infer both the current reliability and how much testing may be needed to meet a particular reliability criterion.

This reliability-prediction problem can be stated in terms of a neural-network mapping:

p: {(I_k(t), O_k(t)), i_{k+h}(t+Δ)} → o_{k+h}(t+Δ)

where (I_k(t), O_k(t)) represents the failure history of the software system at time t used in training the network, and o_{k+h}(t+Δ) is the network's prediction. Training the network is the process of adjusting the interconnection strengths of the neurons (defined in the box below) using part of the software's failure history. After a neural network is trained, you can use it to predict the total number of faults to be detected at the end of a future test session k+h by inputting i_{k+h}(t+Δ).

The three steps of developing a neural network for reliability prediction are specifying a suitable network architecture, choosing the training data, and training the network.

Specifying an architecture. Both prediction accuracy and the resources allocated to simulation can be compromised if the architecture is not suitable. Many of the algorithms used to train neural networks require you to decide the network architecture ahead of time or by trial and error. To provide a more suitable means of selecting the appropriate network architecture for a project, Scott Fahlman and colleagues developed the cascade-correlation learning algorithm.[1]

WHAT ARE NEURAL NETWORKS?

Neural networks are a computational metaphor inspired by studies of the brain and nervous system in biological organisms. They are highly idealized mathematical models of how we understand the essence of these simple nervous systems. The basic characteristics of a neural network are:

+ It consists of many simple processing units, called neurons, that perform a local computation on their input to produce an output.
+ Many weighted neuron interconnections encode the knowledge of the network.
+ The network has a learning algorithm that lets it automatically develop internal representations.

One of the most widely used processing-unit models is based on the logistic function. The resulting transfer function is given by

output = 1 / (1 + e^{-Sum})

where Sum is the aggregate of the weighted inputs.
Figure A shows the actual I/O response of this unit model, where Sum is computed as a weighted sum of the inputs, Sum = w_0 x_0 + w_1 x_1 + ... + w_n x_n. The unit is nonlinear and continuous.

Richard Lippmann describes many neural-network models and learning procedures.[1] Two well-known classes suitable for prediction applications are feedforward networks and recurrent networks. In the main text of the article, we are concerned with feedforward networks and a variant class of recurrent networks, called Jordan networks. We selected these two model classes because we found them to be more accurate in reliability predictions than other network models.[2,3]

REFERENCES
1. R. Lippmann, "An Introduction to Computing with Neural Nets," IEEE Acoustics, Speech, and Signal Processing Magazine, Apr. 1987, pp. 4-22.
2. N. Karunanithi, Y. Malaiya, and D. Whitley, "Prediction of Software Reliability Using Neural Networks," Proc. Int'l Symp. Software Reliability Eng., May 1991, pp. 124-130.
3. N. Karunanithi, D. Whitley, and Y. Malaiya, "Prediction of Software Reliability Using Connectionist Approaches," IEEE Trans. Software Eng. (to appear).
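To make the box's unit model concrete, here is a minimal sketch in Python (ours; the article presents no code, and every name and weight value below is an illustrative assumption) of the logistic transfer function and a one-input, one-hidden-unit feedforward pass like the networks used in the experiments:

```python
import math

def logistic(total: float) -> float:
    """Logistic transfer function: output = 1 / (1 + e^-Sum)."""
    return 1.0 / (1.0 + math.exp(-total))

def unit_output(inputs, weights, bias):
    """One processing unit: a weighted sum of its inputs squashed by the logistic."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return logistic(total)

def forward(t, w_ih=3.0, b_h=-1.5, w_ho=2.0, b_o=-1.0):
    """Forward pass of a 1-input, 1-hidden-unit, 1-output network:
    normalized execution time in, normalized cumulative faults out.
    The weights here are placeholders, not trained values."""
    h = unit_output([t], [w_ih], b_h)     # hidden-unit activation
    return unit_output([h], [w_ho], b_o)  # network output, in (0, 1)

print(forward(0.5))
```

Because every unit is logistic, the network's output always falls between 0 and 1, which is one reason the I/O variables are scaled into that range before training (see the main text).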

The algorithm dynamically constructs feedforward neural networks, combining the ideas of incremental architecture and learning in one training algorithm. It starts with a minimal network (consisting of an input and an output layer) and dynamically trains and adds hidden units one by one, until it builds a suitable multilayer architecture.

As the box describes, we chose feedforward and Jordan networks as the two classes of models most suitable for our prediction experiments. Figure 1a shows a typical three-layer feedforward network; Figure 1b shows a Jordan network. A typical feedforward neural network comprises an input layer, one or more hidden layers, and an output layer. Input-layer neurons do not perform any computation; they merely copy the input values and, through weighted connections, feed the neurons in the (first) hidden layer. Feedforward networks can propagate activations only in the forward direction; Jordan networks, on the other hand, have both forward and feedback connections. The feedback connection in the Jordan network in Figure 1b is from the output layer to the hidden layer through a recurrent input unit. At time t, the recurrent unit receives as input the output unit's output at time t-1. That is, the output of the additional input unit is the same as the output of the network that corresponds to the previous input pattern. In Figure 1b, the dashed line represents a fixed connection with a weight of 1.0. This weight copies the output to the additional recurrent input unit and is not adjusted during training.

We used the cascade-correlation algorithm to construct both feedforward and Jordan networks. Figure 2 shows a typical feedforward network developed by the cascade-correlation algorithm. The cascade network differs from the feedforward network in Figure 1a because it has feedforward connections between I/O layers, not just among hidden units. In our experiments, all neural networks use one output unit. On the input layer, the feedforward nets use one input unit; the Jordan networks use two units: the normal input unit and the recurrent input unit.

Figure 1. (A) A standard feedforward network and (B) a Jordan network. Both map execution time (input layer) to cumulative faults (output layer).

Figure 2. A feedforward network developed by the cascade-correlation algorithm.

Choosing training data. A neural network's predictive ability can be affected by what it learns and in what sequence. Figure 3 shows two reliability-prediction training regimes: generalization training and prediction training. Generalization training is the standard way of training feedforward networks. During training, each input i_t at time t is associated with the corresponding output o_t. Thus the network learns to model the actual functionality between the independent (or input) variable and the dependent (or output) variable. Prediction training, on the other hand, is the general approach for training recurrent networks. Under this training, the value of the input variable i_t at time t is associated with the actual value of the output variable at time t+1. Here, the network learns to predict outputs anticipated at the next time step. Thus if you combine these two training regimes with the feedforward network and the Jordan network, you get four neural-network prediction models: FFN generalization, FFN prediction, JN generalization, and JN prediction.
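To make the two regimes concrete, the sketch below (Python; the helper names and sample values are ours, not the authors') builds both kinds of training pairs and shows how a Jordan network's recurrent input unit receives the previous network output through the fixed connection of weight 1.0:

```python
def generalization_pairs(times, faults):
    """Generalization training: pair each input i_t with the output o_t
    observed at the same time t."""
    return list(zip(times, faults))

def prediction_pairs(times, faults):
    """Prediction training: pair each input i_t with the output o_{t+1}
    observed at the next time step."""
    return [(times[t], faults[t + 1]) for t in range(len(times) - 1)]

def jordan_inputs(times, prev_outputs):
    """Jordan-network input vectors: the normal input unit plus a recurrent
    input unit carrying the previous network output, copied through the
    fixed weight-1.0 connection (the dashed line in Figure 1b); the first
    pattern has no predecessor, so we assume a recurrent input of 0.0."""
    return list(zip(times, [0.0] + list(prev_outputs[:-1])))

times = [0.1, 0.2, 0.3, 0.4]     # normalized cumulative execution times
faults = [0.2, 0.35, 0.45, 0.5]  # normalized cumulative faults
print(generalization_pairs(times, faults))  # (i_t, o_t)
print(prediction_pairs(times, faults))      # (i_t, o_{t+1})
```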

Figure 3. Two network-training regimes: (A) generalization training and (B) prediction training.

Before you attempt to use a neural network, you may have to represent the problem's I/O variables in a range suitable for the network. In the simplest representation, you can use a direct scaling, which scales execution time and cumulative faults to values between 0.0 and 1.0. We did not use this simple representation.

Figure 4. Endpoint predictions of neural-network models, plotted as percentage prediction error against percentage normalized execution time.

Training the network. Most feedforward networks and Jordan networks are trained using a supervised learning algorithm. Under supervised learning, the algorithm adjusts the network weights using quantified error feedback. There are several supervised learning algorithms, but one of the most widely used is back-propagation, an iterative procedure that adjusts network weights by propagating the error back into the network.[2]

Typically, training a neural network involves several iterations (also known as epochs). At the beginning of training, the algorithm initializes the network weights with a set of small random values (between +1.0 and -1.0). During each epoch, the algorithm presents the network with a sequence of training pairs. We used cumulative execution time as input and the corresponding cumulative faults as the desired output to form a training pair. The algorithm then calculates a sum-squared error between the desired outputs and the network's actual outputs. It uses the gradient of the sum-squared error (with respect to the weights) to adapt the network weights so that the error measure is smaller in future epochs. Training terminates when the sum-squared error is below a specified tolerance limit.

PREDICTION EXPERIMENT

We used the testing and debugging data from an actual project described by Yoshiro Tohma and colleagues to illustrate the prediction accuracy of neural networks.[3] In this data (Tohma's Table 4), execution time was reported in terms of days.

Method. Most training methods initialize neural-network weights with random values at the beginning of training, which causes the network to converge to a different weight set at the end of each training session. You can thus get different prediction results at the end of each training session. To compensate for these prediction variations, you can take an average over a large number of trials. In our experiment, we trained the network with 50 random seeds for each training-set size and averaged their predictions.
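As a hedged illustration of this procedure, the sketch below trains the zero-hidden-unit case, a single logistic unit, by gradient descent on the sum-squared error, with weights initialized randomly between -1.0 and +1.0, a tolerance-based stopping test, and predictions averaged over 50 random seeds as in our method. The learning rate, epoch limit, and data values are placeholder assumptions, and the code is a simplified stand-in for, not a reproduction of, the cascade-correlation training actually used:

```python
import math
import random

def logistic(s):
    return 1.0 / (1.0 + math.exp(-s))

def train(pairs, lr=0.5, tol=1e-3, max_epochs=5000, seed=0):
    """Gradient descent on sum-squared error for o = logistic(w0 + w1 * t)."""
    rng = random.Random(seed)
    w0, w1 = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)  # small random init
    for _ in range(max_epochs):
        sse = 0.0
        for t, target in pairs:
            out = logistic(w0 + w1 * t)
            err = target - out
            sse += err * err
            delta = err * out * (1.0 - out)  # error times logistic derivative
            w0 += lr * delta                 # bias-weight update
            w1 += lr * delta * t             # input-weight update
        if sse < tol:                        # stop below the tolerance limit
            break
    return w0, w1

def averaged_prediction(pairs, t_future, n_seeds=50):
    """Average the prediction over many random initializations (seeds)."""
    preds = []
    for seed in range(n_seeds):
        w0, w1 = train(pairs, seed=seed)
        preds.append(logistic(w0 + w1 * t_future))
    return sum(preds) / len(preds)

# Training pairs: (normalized execution time, normalized cumulative faults).
pairs = [(0.1, 0.2), (0.2, 0.35), (0.3, 0.45), (0.4, 0.5)]
print(averaged_prediction(pairs, t_future=1.0))
```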

Table 1. Summary of endpoint predictions (Figures 4 and 5): average and maximum prediction errors, in percent.

                         Average error                Maximum error
Model                1st half  2nd half  Overall  1st half  2nd half  Overall
Neural-net models
 FFN generalization     7.34      0.9      3.36     10.48      2.85    10.48
 FFN prediction         6.25      1.0      2.92      8.69      3.8      8.69
 JN generalization      4.26      3.03     3.47     11.00      3.97    11.00
 JN prediction          5.43      2.08     3.26      7.76      3.48     7.76
Analytic models
 Logarithmic           12.59      6.6      8.6      35.75     13.48    35.75
 Inverse polynomial    11.97      5.65     7.88     20.36     11.65    20.36
 Exponential           23.8       6.88    12.85     40.85     15.25    40.85
 Power                 38.30      6.39    17.66     76.52     15.64    76.52
 Delayed S-shape       43.0       7.1     19.78     54.52     22.38    54.52

Results. After training the neural network with a failure history up to time t (where t is less than the total testing and debugging time of 46 days), you can use the network to predict the cumulative faults at the end of a future testing and debugging session. To evaluate the neural networks, you can use the following extreme prediction horizons: the next-step prediction (at t+1) and the endpoint prediction (at t=46). Since you already know the actual cumulative faults for those two future testing and debugging sessions, you can compute the network's prediction error at t. The relative prediction error is given by (predicted faults - actual faults)/actual faults.[4]

Figures 4 and 6 show the relative-prediction-error curves of the neural-network models. In these figures, the percentage prediction error is plotted against the percentage normalized execution time t/T. Figures 4 and 5 show the relative error curves for endpoint predictions of the neural networks and five well-known analytic models. Results from the analytic models are included because they provide a better basis for evaluating the neural networks. Yashwant Malaiya and colleagues give details about the analytic models and their fitting.[5,6]

The graphs suggest that neural networks are more accurate than analytic models. Table 1 summarizes Figures 4 and 5 in terms of average and maximum error measures. The columns under "Average error" represent the following:

+ 1st half is the model's average prediction error in the first half of the testing and debugging session.
+ 2nd half is the model's average prediction error in the second half of the testing and debugging session.
+ Overall is the model's average prediction error for the entire testing and debugging session.

These average error measures also suggest that neural networks are more accurate than analytic models. First-half results are interesting because the neural-network models' average prediction errors are less than eight percent of the total defects disclosed at the end of the testing and debugging session. This result is significant because such reliable predictions at early stages of testing can be valuable in long-term planning. Among the neural-network models, the difference in accuracy is not significant, whereas the analytic models exhibit considerable variation. Among the analytic models, the inverse-polynomial model and the logarithmic model seem to perform reasonably well.

The maximum prediction errors in the table show how unrealistic a model can be. These values also suggest that the neural-network models make fewer worst-case prediction errors than the analytic models at various phases of testing and debugging.

Figure 5. Endpoint predictions of analytic models, plotted as percentage prediction error against percentage normalized execution time.

Figure 6 represents the next-step predictions of both the neural networks and the analytic models. These graphs suggest that the neural-network models have only slightly less next-step prediction accuracy than the analytic models.
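The error measures in Tables 1 and 2 are mechanical to compute from these curves; the small sketch below (helper names ours) applies the relative-prediction-error formula above and produces the first-half, second-half, and overall summaries:

```python
def relative_error(predicted, actual):
    """Relative prediction error: (predicted faults - actual faults) / actual faults."""
    return (predicted - actual) / actual

def summarize(errors):
    """Average and maximum of |relative error|, in percent, over the first
    half, second half, and whole testing-and-debugging session."""
    mid = len(errors) // 2
    spans = {"1st half": errors[:mid], "2nd half": errors[mid:], "overall": errors}
    return {name: {"average": 100.0 * sum(abs(e) for e in es) / len(es),
                   "maximum": 100.0 * max(abs(e) for e in es)}
            for name, es in spans.items()}

# Illustrative (predicted, actual) fault counts at successive sessions:
errs = [relative_error(p, a) for p, a in [(95, 100), (104, 110), (118, 120), (129, 130)]]
print(summarize(errs))
```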

Figure 6. Next-step predictions of neural-network models and analytic models, plotted as percentage prediction error against percentage normalized execution time.

Table 2 shows the summary of Figure 6 in terms of average and maximum errors. Since the neural-network models' average errors are above the analytic models' in the first half by only two to four percent, and the difference in the second half is less than two percent, these two approaches don't appear to be that different. Worst-case prediction errors may suggest that the analytic models have a slight edge over the neural-network models. However, the difference in overall average errors is less than two percent, which suggests that the neural-network models and the analytic models have a similar next-step prediction accuracy.

Table 2. Summary of next-step predictions (Figure 6): average and maximum prediction errors, in percent, with the same columns as Table 1.

NEURAL NETWORKS VS. ANALYTIC MODELS

In comparing the five analytic models and the neural networks in our experiment, we used the number of parameters as a measure of complexity; the more parameters, the more complex the model. Since we used the cascade-correlation algorithm for evolving the network architecture, the number of hidden units used to learn the problem varied, depending on the size of the training set. On average, the neural networks used one hidden unit when the normalized execution time was below 60 to 75 percent and zero hidden units afterward. However, occasionally two or three hidden units were used before training was complete.

Though we have not shown a similar comparison between Jordan-network models and equivalent analytic models, extending the feedforward-network comparison is straightforward. However, the models developed by the Jordan network can be more complex because of the additional feedback connection and the weights from the additional input unit.

FFN generalization. In this method, with no hidden unit, the network's actual computation is the same as a simple logistic expression:

o = 1 / (1 + e^{-(w_0 + w_1 t_i)})

where w_0 and w_1 are the weights from the bias unit and the input unit, respectively, and t_i is the cumulative execution time at the end of the ith test session. This expression is equivalent to a two-parameter logistic-function model, whose μ(t_i) is given by

μ(t_i) = 1 / (1 + e^{-(β_0 + β_1 t_i)})

where β_0 and β_1 are parameters. It is easy to see that β_0 = w_0 and β_1 = w_1. Thus, training the neural network (finding weights) is the same as estimating these parameters. If the network uses one hidden unit, the model it develops is the same as a three-parameter model:

μ(t_i) = 1 / (1 + e^{-(β_0 + β_1 t_i + β_2 h_i)})

where β_0, β_1, and β_2 are the model parameters, which are determined by the weights feeding the output unit. In this model, β_0 = w_0, β_1 = w_1, and β_2 = w_h (the weight from the hidden unit). However, h_i is an intermediate value computed using another two-parameter logistic-function expression:

h_i = 1 / (1 + e^{-(w_3 + w_4 t_i)})

Thus, the model has five parameters that correspond to the five weights in the network.
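To illustrate this correspondence, the sketch below evaluates the two-parameter and five-parameter logistic models exactly as written above; the parameter values are placeholders, not weights fitted to any data:

```python
import math

def logistic(s):
    return 1.0 / (1.0 + math.exp(-s))

def mu_two_param(t, b0, b1):
    """Two-parameter model: a net with no hidden unit (b0 = w0, b1 = w1)."""
    return logistic(b0 + b1 * t)

def mu_five_param(t, b0, b1, b2, b3, b4):
    """Five-parameter model: a net with one hidden unit. b0, b1, b2 are the
    weights feeding the output unit; b3, b4 define the hidden unit's own
    two-parameter logistic, h = logistic(b3 + b4 * t)."""
    h = logistic(b3 + b4 * t)
    return logistic(b0 + b1 * t + b2 * h)

# Placeholder parameter values, purely illustrative:
print(mu_two_param(0.5, b0=-1.85, b1=4.6))
print(mu_five_param(0.5, b0=-1.0, b1=2.0, b2=1.5, b3=-0.5, b4=4.0))
```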

FFN prediction. In this model, for the network with no hidden unit, the equivalent two-parameter model is

μ(t_i) = 1 / (1 + e^{-(β_0 + β_1 t_{i-1})})

where t_{i-1} is the cumulative execution time at the (i-1)th instant. For the network with one hidden unit, the equivalent five-parameter model is

μ(t_i) = 1 / (1 + e^{-(β_0 + β_1 t_{i-1} + β_2 h_i)})

Implications. These expressions imply that the neural-network approach develops models that can be relatively complex. They also suggest that neural networks use models of varying complexity at different phases of testing. In contrast, the analytic models have only two or three parameters, and their complexity remains static. Thus, the main advantage of neural-network models is that model complexity is automatically adjusted to the complexity of the failure history.

We have demonstrated how you can use neural-network models and training regimes for reliability prediction. Results with actual testing and debugging data suggest that neural-network models are better at endpoint predictions than analytic models. Though the results presented here are for only one data set, they are consistent with 13 other data sets we tested. The major advantages of the neural-network approach are:

+ It is a black-box approach; the user need not know much about the underlying failure process of the project.
+ It is easy to adapt models of varying complexity at different phases of testing, within a project as well as across projects.
+ You can simultaneously construct a model and estimate its parameters if you use a training algorithm like cascade correlation.

We recognize that our experiments are only beginning to tap the potential of neural-network models in reliability, but we believe that this class of models will eventually offer significant benefits. We also recognize that our approach is very new and still needs research to demonstrate its practicality on a broad range of software projects.

ACKNOWLEDGMENTS

We thank the IEEE Software reviewers for their useful comments and suggestions. We also thank Scott Fahlman for providing the code for his cascade-correlation algorithm. This research was supported in part by NSF grant N900546 and in part by a project funded by the SDIO/IST and monitored by the Office of Naval Research.

REFERENCES
1. S. Fahlman and C. Lebiere, "The Cascade-Correlation Learning Architecture," Tech. Report CMU-CS-90-100, CS Dept., Carnegie Mellon Univ., Pittsburgh, Feb. 1990.
2. D. Rumelhart, G. Hinton, and R. Williams, "Learning Internal Representations by Error Propagation," in Parallel Distributed Processing, Vol. 1, MIT Press, Cambridge, Mass., 1986, pp. 318-362.
3. Y. Tohma et al., "Parameter Estimation of the Hyper-Geometric Distribution Model for Real Test/Debug Data," Tech. Report 90002, CS Dept., Tokyo Inst. of Technology, 1990.
4. J. Musa, A. Iannino, and K. Okumoto, Software Reliability: Measurement, Prediction, Application, McGraw-Hill, New York, 1987.
5. Y. Malaiya, N. Karunanithi, and P. Verma, "Predictability Measures for Software Reliability Models," IEEE Trans. Reliability (to appear).
6. Software Reliability Models: Theoretical Developments, Evaluation and Applications, Y. Malaiya and P. Srimani, eds., IEEE CS Press, Los Alamitos, Calif., 1990.

Nachimuthu Karunanithi is a PhD candidate in computer science at Colorado State University. His research interests are neural networks, genetic algorithms, and software-reliability modeling.
Karunanithi received a BE in electrical engineering from PSG Tech., Madras University, in 1982 and an ME in computer science from Anna University, Madras, in 1984. He is a member of the subcommittee on software-reliability engineering of the IEEE Computer Society's Technical Committee on Software Engineering.

Darrell Whitley is an associate professor of computer science at Colorado State University. He has published more than 30 papers on neural networks and genetic algorithms. Whitley received an MS in computer science and a PhD in anthropology, both from Southern Illinois University. He serves on the Governing Board of the International Society for Genetic Algorithms and is program chair of both the 1992 Workshop on Combinations of Genetic Algorithms and Neural Networks and the 1992 Foundations of Genetic Algorithms Workshop.

Yashwant K. Malaiya is a guest editor of this special issue. His photograph and biography appear elsewhere in this issue.

Address questions about this article to Karunanithi at CS Dept., Colorado State University, Fort Collins, CO 80523; Internet karunani@cs.colostate.edu.