Multi-objective Optimization of Parallel Machine Scheduling Using Neural Networks


A. Muralidhar, Department of Mechanical Engineering, Thanthai Periyar Government Institute of Technology, Vellore
T. Alwarsamy, Liaison Officer, Directorate of Technical Education, Chennai

Abstract - This paper considers the problem of scheduling jobs on parallel machines with the combined objective of minimizing the makespan, the total tardiness and the total earliness. A neural network is a technique that can deal with imprecision and nonlinearity in problem solving. The neural network technique was found to be effective and is used to select the best schedule with respect to makespan, total tardiness and total earliness.

Keywords: Parallel Machines, Combinatorial Optimization, Neural Network, Optimal Schedule

Notation
SPT - Shortest Processing Time
LPT - Longest Processing Time
EDD - Earliest Due Date
WSPT - Weighted Shortest Processing Time
WLPT - Weighted Longest Processing Time
SA - Simulated Annealing
NN - Neural Network

I. INTRODUCTION

The case of identical jobs within a batch is common in manufacturing systems, where products or jobs have identical processing requirements. Individual products may be subject to different constraints, while all units of a product require equal processing times on the same machine [6]. Research on identical parallel machine scheduling problems has predominantly been concerned with minimizing the makespan or the total completion time. Many researchers have applied genetic algorithms to solve scheduling problems. Hybrid methods capable of providing a better solution in less time have been obtained by combining genetic algorithms with other soft-computing techniques such as fuzzy logic and memetic algorithms. The key feature of these evolutionary algorithms is that they exploit available knowledge about the problem, and they have proved to be among the most successful approximation techniques for NP-hard optimization problems.

Parallel machine scheduling assigns jobs to a set of machines of the same function in order to achieve certain objective functions. The complexity usually grows with the number of machines, making the problem intractable. This problem, like all deterministic scheduling problems, belongs to the wide class of combinatorial optimization problems, which are known to be NP-hard [5]. This research effort addresses the parallel machine scheduling problem with multi-objective optimization using a neural network. Section 2 reviews the literature on multi-objective scheduling and neural networks. Sections 3 and 4 present the mathematical model and discuss the neural network approach. Section 5 presents the experimental results. Section 6 gives the conclusion and directions for future research.

II. LITERATURE REVIEW

Considerable research has already been conducted in the field of parallel machine scheduling. The following subsections give a brief survey of the literature on multi-objective scheduling and on applications of neural networks.

2.1 Multi-objective scheduling

Different approaches to multi-objective scheduling problems can be found in the literature, for example in [3] and [4]. The main approaches are the following.

Simultaneous methods aim to generate the complete Pareto set or to approximate a set of efficient solutions.

The weighting-objectives method creates a weighted linear combination of the objectives to obtain a single function, which can then be solved by any single-objective optimization method.

Hierarchical optimization allows the decision maker to rank the objectives in descending order of importance. Each objective function is then minimized individually, subject to a constraint that the minimum of the new function must not exceed a prescribed fraction of the minimum of the previous function.

Goal programming turns the objectives into constraints that express satisfying goals. The aim is to find a solution that achieves good values of the predefined goals for each objective; the relative goodness of a selected solution is measured by comparing it with other solutions in the feasible region.

K. Raja et al. [12] showed that conventional methods are not efficient in handling multiple objectives and applied a GA technique to generate multiple schedules with multiple objectives; fuzzy logic was then applied to select the best schedule satisfying the multiple objectives. Fuzzy logic has likewise been applied to generate multiple schedules when the available data are insufficient and imprecise, and simulated annealing has been found very useful for problems with multiple objectives [10].

2.2 Neural networks

Neural network computing is an approach that attempts to mimic certain processing capabilities of the brain. This machine learning technology can represent knowledge through massively parallel processing and recognize patterns based on experience. Since the 1980s, breakthroughs in computing technology have led to an increasing amount of neural network research across a wide variety of applications. In recent years there has been growing interest in applying neural networks in fields such as robotics, optimization, and linear and non-linear programming. A great advantage of the neural network approach is that most of the intense computation takes place during the training process; once the network is trained for a particular task, operation is relatively fast and unknown samples can be identified rapidly.

An artificial neural network is a collection of highly interconnected processing units that can learn and store patterns as well as generalize when presented with new patterns. The learnt information is stored as numerical values, called weights, assigned to the connections between the processing units of the network. Data presented at the input layer of a trained network produce values at the output layer consistent with the relationship learnt by the network from the training examples. The neural network proposed for the scheduling problem is organized into three layers of processing units.
There is an input layer of 10 units, a hidden layer, and an output layer with a single unit. The number of units in the input and output layers is dictated by the representation adopted for the scheduling problem. In the proposed representation, the input layer contains the information describing the problem as a vector of continuous values. The 10 input units hold the following information for each of the n jobs to be scheduled:

Input 1 =        (1)
Input 2 =        (2)
Input 3 =        (3)
Input 4 =        (4)
Input 5 =        (5)
Input 6 =        (6)
Input 7 = 1      (7)
Input 8 =        (8)
Input 9 =        (9)
Input 10 =       (10)

where the quantities entering equations (1)-(10) are the slack of each job (taken as d_i - P_i), the longest processing time among the n jobs (max_i P_i), and the largest slack among the n jobs.

Thus, each job is represented by a 10-element vector, which holds information specific to that job as well as information relating it to the other jobs in the problem. The output unit takes values in the range 0.1-0.9, the magnitude indicating where the job represented at the input layer should desirably lie in the schedule: high values suggest a leading position, while low values indicate lower priority and hence a position towards the end of the schedule. The number of units in the hidden layer is selected by trial and error during the training phase.

III. PROBLEM DESCRIPTION

The problem is to schedule all the jobs such that the makespan, the total tardiness and the total earliness are minimized.

Data of the problem:
Number of identical machines: 3
Number of jobs: 10
Working hours per day: 8

Notation:
Job J_i
Processing time P_i
Due date d_i
Completion time C_i

The data of the problem are summarized in Table 1, given after the illustrative sketch below.
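For concreteness, the following minimal sketch shows how such a job-scoring network could be wired up and used to order jobs. The feature functions below (normalized processing time, normalized due date, normalized slack, and a few problem-level values) are hypothetical placeholders standing in for equations (1)-(10), whose exact forms are given by the paper; the 10-8-1 architecture with sigmoid units follows the description above, but the weights shown are untrained, so this is an illustration of the mechanism rather than the trained network used in the study.

```python
import numpy as np

# Hypothetical stand-ins for equations (1)-(10): each job is mapped to a
# 10-element feature vector built from its processing time, due date and slack,
# normalized by the corresponding maxima over all n jobs (an assumption).
def job_features(p, d):
    p = np.asarray(p, dtype=float)            # processing times P_i
    d = np.asarray(d, dtype=float)            # due dates d_i
    slack = d - p                             # slack of each job (assumed d_i - P_i)
    return np.column_stack([
        p / p.max(),                          # relative processing time
        d / d.max(),                          # relative due date
        slack / max(slack.max(), 1e-9),       # relative slack
        np.full_like(p, p.mean() / p.max()),  # problem-level features (placeholders)
        np.full_like(p, d.mean() / d.max()),
        np.full_like(p, slack.mean() / max(slack.max(), 1e-9)),
        np.ones_like(p),                      # Input 7 = 1, as in the paper
        p / p.sum(),
        d / d.sum(),
        np.full_like(p, p.max() / d.max()),
    ])                                        # shape (n_jobs, 10)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A 10-8-1 feedforward network with sigmoid units, as described in the text.
# In the paper the weights are learnt by backpropagation; here they are random.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def score_jobs(p, d):
    h = sigmoid(job_features(p, d) @ W1 + b1)
    return sigmoid(h @ W2 + b2).ravel()       # one priority score in (0, 1) per job

# Jobs with higher output scores are placed earlier in the schedule.
p = [2, 1, 7, 6, 4, 3, 1, 9, 2, 1]            # processing times (minutes) from Table 1
d = [11, 7, 12, 13, 3, 12, 3, 7, 8, 10]       # due dates (days) from Table 1
sequence = np.argsort(-score_jobs(p, d))
print("schedule order:", sequence.tolist())
```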

Table 1: Data of the problem

Job number   Processing time (minutes)   Due date (days)   Batch quantity
0            2                           11                218
1            1                            7                112
2            7                           12                711
3            6                           13                655
4            4                            3                419
5            3                           12                354
6            1                            3                174
7            9                            7                910
8            2                            8                 76
9            1                           10                249

Earliness of job J_i: E_i = max{0, (d_i - C_i)}
Tardiness of job J_i: T_i = max{0, (C_i - d_i)}

The fitness function considered in this study is the combined objective function (COF), given by

COF = W_m (makespan) + W_e (total earliness) + W_t (total tardiness)

The weight given to the makespan is W_m = 0.4, and the weights for tardiness and earliness are W_t = W_e = 0.3 each.

3.1 Neural network approach

To illustrate how the neural network is trained, Table 2 shows a 6-job problem that serves as a training example. The 6 jobs are first converted into their vector representations using equations (1) to (10); the result of this pre-processing stage is presented in Table 3. To train the neural network, each vector and its desired output are presented at the input and output layers of the network. Training is completed after an average of 500 cycles using a 10-8-1 configuration; a cycle is concluded once the network has been exposed, in the course of the backpropagation algorithm, to each of the available training patterns. The trained neural network is then used to find the job schedule for our problem.

Table 2: 6-job training problem

Job J_i          1      2      3      4      5      6
P_i (minutes)    3655   4977   1680   6317   214    962
d_i (days)       7      9      3      10     12     6

Table 3: Problem representation (inputs and output) for the example in Table 2

            Job 1     Job 2     Job 3     Job 4     Job 5     Job 6
Input 1     0.5715    0.7878    0.2659    1         0.03387   0.15228
Input 2     0.1008    0.1296    0.0432    0.144     0.1728    0.0864
Input 3     0.1008    0.467     0.1546    0.4736    1         0.449
Input 4     0.03      0.03      0.03      0.03      0.03      0.03
Input 5     0.03      0.03      0.03      0.03      0.03      0.03
Input 6     0.28185   0.28185   0.28185   0.28185   0.28185   0.28185
Input 7     1         1         1         1         1         1
Input 8     0.29224   0.29224   0.29224   0.29224   0.29224   0.29224
Input 9     1.403     1.403     1.403     1.403     1.403     1.403
Input 10    1.09593   1.09593   1.09593   1.09593   1.09593   1.09593
Output      0.89996   0.21786   0.9       0.10084   0.10012   0.17529

Table 4: Network outputs for the 10-job problem of Table 1

Job      0         1         2         3         4        5         6         7         8         9
Output   0.10012   0.10034   0.10013   0.10012   0.9      0.10012   0.89158   0.88364   0.10012   0.10013

IV. RESULTS AND DISCUSSION

The sequences obtained by the various methods, together with the resulting total earliness, total tardiness and COF, are summarized in Table 5.

Table 5: Comparison of scheduling methods

Method   Sequence               Total earliness   Total tardiness   COF
SPT      1-8-6-9-0-5-4-3-2-7    84531             10878             37006
LPT      7-2-3-4-5-0-9-6-8-1    7626              64137             29912
EDD      4-6-1-7-8-9-0-2-5-3    24712             2310              16492
WSPT     1-8-9-0-6-5-3-2-4-7    87087             19323             40307
WLPT     7-4-2-3-5-6-0-9-8-1    54946             4327              26165
SA       4-6-7-1-2-9-0-8-5-3    8583              7714              13278
Fuzzy    6-4-1-8-9-0-5-3-2-7    78078             10878             35070
NN       4-6-7-1-2-9-8-0-5-3    7557              7424              12888

Figure 1: Comparison of total earliness, total tardiness and COF across the methods (bar chart; values as in Table 5).
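As an illustration of how COF values such as those in Table 5 could be obtained, the sketch below evaluates a given job sequence on the three identical machines. It assumes list scheduling (each job in the sequence is assigned to the machine that becomes free earliest) and converts batch processing times and due dates to a common time unit via the 8-hour working day; these modelling choices are assumptions, so the numbers it produces are not expected to reproduce Table 5 exactly.

```python
import heapq

def evaluate_sequence(seq, proc, due, machines=3, w_m=0.4, w_e=0.3, w_t=0.3):
    """Evaluate a job sequence with list scheduling on identical parallel machines.

    seq  : order in which jobs are dispatched (e.g. the NN sequence)
    proc : processing time of each job (already in a common time unit)
    due  : due date of each job (same unit as proc)
    Returns (makespan, total_earliness, total_tardiness, COF).
    """
    free_at = [0.0] * machines            # time at which each machine becomes free
    heapq.heapify(free_at)
    completion = {}
    for j in seq:                          # assign next job to the earliest-free machine
        start = heapq.heappop(free_at)
        finish = start + proc[j]
        completion[j] = finish
        heapq.heappush(free_at, finish)
    makespan = max(completion.values())
    total_e = sum(max(0.0, due[j] - completion[j]) for j in seq)
    total_t = sum(max(0.0, completion[j] - due[j]) for j in seq)
    cof = w_m * makespan + w_e * total_e + w_t * total_t
    return makespan, total_e, total_t, cof

# Example: the NN sequence from Table 5, with per-job times taken as
# unit processing time (minutes) x batch quantity - an assumption.
proc_min = [2, 1, 7, 6, 4, 3, 1, 9, 2, 1]
batch    = [218, 112, 711, 655, 419, 354, 174, 910, 76, 249]
due_days = [11, 7, 12, 13, 3, 12, 3, 7, 8, 10]
minutes_per_day = 8 * 60                  # 8 working hours per day
proc = [p * b for p, b in zip(proc_min, batch)]
due  = [d * minutes_per_day for d in due_days]

nn_sequence = [4, 6, 7, 1, 2, 9, 8, 0, 5, 3]
print(evaluate_sequence(nn_sequence, proc, due))
```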

It can be seen from Table 5 and Figure 1 that the total earliness, the total tardiness and the combined objective function are all minimized by applying the neural network technique. This method also reduces the chance of converging to a local optimum, as can happen with the simulated annealing technique.

V. CONCLUSION

A neural network technique was applied to the parallel machine scheduling problem, and the schedule obtained was compared with those produced by other techniques. The procedure adopted is simpler than other hybrid or heuristic methods and can be applied to schedule a large number of jobs without retraining the network. The same procedure can be extended to optimize other performance measures as well.

REFERENCES
[1] Abdelaziz Hamad, Bahrom Sanugi and Shaharuddin Salleh, "Single Machine Common Due Date Scheduling Problems Using Neural Network", Jurnal Teknologi, 36(C), June 2002, pp. 75-82.
[2] Abdelaziz Hamad, Bahrom Sanugi and Shaharuddin Salleh, "A Neural Network for Common Due Date Job Scheduling Problem on Parallel Unrelated Machines", Matematika, Jilid 17, Bil. 2, 2001, pp. 63-70, Jabatan Matematik, UTM.
[3] Bo Chen, Handbook of Scheduling, CRC Press, 2004.
[4] Gyula Kulcsar and Ferenc Erdelyi, "A New Approach to Solve Multi-Objective Scheduling and Rescheduling Tasks", International Journal of Computational Intelligence Research, Vol. 2, No. 4, 2007, pp. 343-351.
[5] Jeng, A.A.K. and Lin, B.M.T., "A Note on Parallel Machine Scheduling with Deteriorating Jobs", Journal of the Operational Research Society, 58, 2007, pp. 824-826.
[6] Jones, A. and Rabelo, L., "Survey of Job Shop Scheduling Techniques", NISTIR, National Institute of Standards and Technology, Gaithersburg, MD, 1998.
[7] Marc Sevaux and Kenneth Sorensen, "VNS/TS for a Parallel Machine Scheduling Problem", MEC-VNS: 18th Mini EURO Conference on VNS, 2005.
[8] Matlab 6.5 Neural Network Toolbox Help, The MathWorks Inc., 2002.
[9] Michele Pfund, John W. Fowler and Jatinder Gupta, "A Survey of Algorithms for Single and Multi-Objective Unrelated Parallel-Machine Deterministic Scheduling Problems", Journal of the Chinese Institute of Industrial Engineers, Vol. 21, No. 3, 2004, pp. 230-241.
[10] Muralidhar, A. and Alwarsamy, T., "Multi-objective Optimization of Parallel Machine Scheduling Using Fuzzy Logic and Simulated Annealing", International Journal of Applied Engineering Research, Vol. 4, No. 11, 2009, pp. 2141-2148.
[11] Pradhan, S. and Lam, S.S.Y., "Minimizing Makespan During Environmental Stress Screening Using a Genetic Algorithm and an Ant Colony Optimization", International Journal of Advanced Manufacturing Technology, 32, 2007, pp. 571-577.
[12] Raja, K., Saravanan, R. and Selladurai, V., "Multi-objective Parallel Machine Scheduling Using Genetic Algorithm and Fuzzy Logic", Institution of Engineers (India) Journal-PR, Vol. 87, September 2006, pp. 26-31.
[13] Yuan, J.J., Cheng, T.C.E. and Ng, C.T., "NP-hardness of the Single Variable Resource Scheduling Problem to Minimize the Total Weighted Completion Time", European Journal of Operational Research, 178, 2007, pp. 631-633.