Control chart pattern recognition using semi-supervised learning


7th WSEAS International Conference on APPLIED COMPUTER SCIENCE, Venice, Italy, November 21-23, 2007

Control chart pattern recognition using semi-supervised learning

Miin-Shen Yang* and Jenn-Hwai Yang
Department of Applied Mathematics
Chung Yuan Christian University
Chung-Li 32023
TAIWAN

Abstract: - This paper presents a semi-supervised learning algorithm for a control chart pattern recognition system. A learning neural network is trained with labeled control chart patterns based on unsupervised learning. We then use a classification method based on a statistical correlation coefficient approach to test patterns. Numerical comparisons show that the proposed semi-supervised learning algorithm is effective.

Key-Words: - Control chart; Pattern recognition; Semi-supervised learning; Labeled pattern; Recognition rate.

1 Introduction

There is growing interest among researchers in control chart pattern recognition. Although Shewhart control charts [15] are the most popular charts, widely used in industry to detect abnormal process behavior, they provide no pattern-related information because they focus only on the latest plotted data points. In general, there are six unnatural patterns in control charts: upward trend, downward trend, upward shift, downward shift, cyclic, and systematic (see [8]). These patterns reflect the long-term behavior of a process, and control chart pattern recognition can detect such unnatural patterns by examining the process over time.

Since neural networks have achieved human-like performance in speech and image recognition, they have been widely applied to many kinds of pattern recognition. In recent years, research on control chart pattern recognition using neural network approaches has grown considerably. Neural networks are generally classified into two categories: supervised and unsupervised. Supervised learning uses a teacher during the learning stage to guide the response to a given input. Back-propagation and learning vector quantization are supervised methods.
Unsupervised learning, in contrast, learns by updating the one or more weights that are most similar to the input data. The self-organizing feature map and adaptive resonance theory (ART) networks are of this type. Supervised neural networks have been widely used for control chart pattern recognition (see [5], [6], [10], [11], [13], [14], [16]). Although supervised neural learning techniques give good recognition accuracy, they have limitations: they lack adaptiveness without retraining and suffer from a slow learning process (see [1]). Thus, Al-Ghanim [1] first presented an unsupervised learning neural network for control charts based on ART networks, feeding in several kinds of unnatural patterns with different disturbance levels, for example different shift magnitudes for shift patterns and different slope values for trend patterns. However, it could only detect unnatural pattern behavior; it could not identify which unnatural pattern had occurred, because unsupervised learning cannot label the final output neurons with the unnatural pattern they belong to. Although monitoring an out-of-control signal in an X-bar chart and detecting irregular unnatural patterns are important in a production process, identifying the unnatural patterns helps us improve the process, so labeling the output neurons with a pattern is necessary.

In this paper, we present a semi-supervised learning algorithm for a control chart pattern recognition system. A learning neural network is trained with labeled control chart patterns based on unsupervised learning. In this way we retain the essence of the unsupervised learning scheme, but can also label each output neuron with a certain unnatural pattern. According to numerical comparisons, the proposed semi-supervised learning algorithm is effective for control chart pattern recognition.

2 A semi-supervised learning algorithm

In recent years there have been many studies applying neural learning networks to control chart pattern recognition. Most of them use various kinds of supervised learning networks to generate prototypes for the patterns, and these prototypes are then used to identify control charts. But the optimal number of prototypes for each pattern is difficult to decide. On the other hand, unsupervised competitive learning networks applied to pattern recognition usually need an a priori number of output neurons. The purpose of the learning (or training) stage is to update the winner neuron weights until the network stabilizes. To stabilize the network, a common approach is to set a learning rate that is always decreasing in time. But such learning rules usually suffer from the stability-plasticity dilemma [9]. ART neural networks were proposed to solve this dilemma (see [3], [4]) and give good results in clustering. Although ART presents a good learning mechanism, unsupervised learning makes such networks unsuitable for control chart pattern recognition, because an unsupervised learning network cannot label its output neurons. In control chart pattern recognition it is necessary to label the output neuron weights so that we can clearly indicate what kind of abnormal control chart pattern is present. We therefore propose a semi-supervised learning algorithm: we use unsupervised competitive learning rules, but labeled data are used for learning (or training). This is why we call it a semi-supervised learning algorithm. We point out that our semi-supervised learning method is different from partially supervised [2], [12] or semi-supervised [7] clustering, where both labeled and unlabeled data are used in the clustering algorithms. The proposed method is based on the statistical correlation coefficient as a similarity measure.
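As a minimal sketch (assuming NumPy; not part of the original paper), the correlation similarity between two pattern vectors can be computed as:

```python
import numpy as np

def correlation(x, y):
    """Sample correlation coefficient between two pattern vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()  # center both vectors
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))
```

Two vectors with the same shape (e.g. both trending upward) correlate near +1 regardless of their absolute level, which is why this measure suits shape-based pattern matching.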
We mention that the statistical correlation coefficient was used for control chart pattern recognition with good results by Yang and Yang [17], where it was used in the identification (or classification) stage via

r = \frac{\sum_{i=1}^{n}(x_i - \bar{X})(y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{X})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{Y})^2}}    (1)

A higher correlation coefficient between two pattern vectors indicates higher similarity. However, Yang and Yang [17] simply used the sample average as the only prototype for each control chart pattern in the training stage. Here, we use semi-supervised learning to find more prototypes to represent each control chart pattern in the training stage.

To label all trained prototypes, we need labeled training samples for each pattern. These training data sets can be provided by experience or by simulation. We have six different unnatural patterns, named upward shift, downward shift, upward trend (or increasing trend), downward trend (or decreasing trend), cyclic, and systematic. All of them can be divided into a normal part and a disturbance part. In general, we normalize all data points of the control chart so that the normal pattern n(t) follows a standard normal distribution in the regular situation. The pattern sample generators are defined as follows:

(a) Upward and downward shift patterns:
x(t) = n(t) + u \cdot d    (2)
where u = 0 before the shift and u = 1 after the shift, and d is the shift magnitude randomly taken from 1 to 2.5 for an upward shift and from -1 to -2.5 for a downward shift.

(b) Upward and downward trend patterns:
x(t) = n(t) \pm d \cdot t    (3)
where d is the trend slope randomly selected from 0.05 to 0.1 for an upward trend and from -0.05 to -0.1 for a downward trend.

(c) Cyclic pattern:
x(t) = n(t) + d \sin(2\pi t / \Omega)    (4)
where d is the amplitude randomly selected from 0.5 to 2.5 and Ω is the cycle length, taken as Ω = 8 here.

(d) Systematic pattern:
x(t) = n(t) + (-1)^t d    (5)
where d is the amplitude randomly selected from 0.5 to 2.5.

Similar to ART, we do not fix the number of prototypes. A threshold value, called the vigilance parameter, determines whether the similarity is sufficient. If the correlation coefficient reaches the vigilance parameter, we update the weights.
Otherwise, a new neuron is activated. The vigilance parameter in this stage, called h, needs to be determined before training. In the traditional competitive

learning network, only one winner neuron updates its weight, the so-called winner-take-all rule. But this approach easily lets one neuron win too often while others get no opportunity to learn. Here, every neuron whose degree of similarity to the input data is greater than h is allowed to update its weight as follows:

W_j(t+1) = W_j(t) + \lambda_j(t)\,(X_t - W_j(t))    (6)

where X_t is the input data at time t and λ_j(t) is the learning rate for neuron j, which decreases monotonically. The meaning of (6) is that matching neuron weights are updated to be closer to the input data, while the others keep their original status. The learning rate is defined for all neurons as

\lambda_j(t) = 1 / t_j    (7)

where t_j is the number of updates of neuron j. The proposed learning algorithm is then as follows:

The semi-supervised learning algorithm
Step 1: Choose the threshold h, the pattern length n, and the number N of training samples per pattern.
Step 2: Select a pattern sample generator from (2)-(5). Generate N pattern vectors X_1, X_2, ..., X_N with pattern length n and a different disturbance level d for each sample.
Step 3: Set W_1 = X_1, c = 1, and t_j = 0 for j = 1, 2, ..., N.
  DO t = 2 to N:
    Input X_t and set I = 0.
    DO j = 1 to c:
      Evaluate the correlation coefficient γ_j between X_t and the neuron weight W_j(t) using (1).
      IF γ_j > h THEN t_j = t_j + 1, W_j(t+1) = W_j(t) + (1/t_j)(X_t - W_j(t)), and I = I + 1
      ELSE W_j(t+1) = W_j(t).
    END DO
    IF I = 0 THEN c = c + 1 and W_c(t+1) = X_t.
  END DO
Step 4: Output all activated neuron weights W_1, ..., W_c and regard them as the prototypes representing the current pattern.
Step 5: Change to another pattern generator and repeat Steps 2 and 3 until the prototypes of all patterns are generated.

Supervised learning always penalizes the winner neuron weights when they produce incorrect output labels and rewards them when they produce correct output labels according to the labeled training data. Although the proposed algorithm uses labeled training data, the unsupervised learning equation (6) is used to update the neuron weights, which is why we call it semi-supervised learning.
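To make the training stage concrete, the sketch below generates labeled samples with generators of the form (2)-(5) and grows prototypes per Steps 1-5. It is an illustrative sketch, not the authors' code: the placement of the shift point at mid-window and the use of NumPy's random generator are assumptions the paper does not fix.

```python
import numpy as np

rng = np.random.default_rng(7)

def correlation(x, y):
    # Statistical correlation coefficient, Eq. (1)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

def generate_pattern(kind, n=30):
    """One sample of an unnatural pattern, Eqs. (2)-(5); n(t) ~ N(0, 1)."""
    t = np.arange(1, n + 1)
    base = rng.standard_normal(n)                    # normal part n(t)
    if kind in ("upward shift", "downward shift"):   # Eq. (2)
        d = rng.uniform(1.0, 2.5)
        u = (t > n // 2).astype(float)               # shift point: assumed mid-window
        return base + u * d if kind == "upward shift" else base - u * d
    if kind in ("upward trend", "downward trend"):   # Eq. (3)
        d = rng.uniform(0.05, 0.1)
        return base + d * t if kind == "upward trend" else base - d * t
    if kind == "cyclic":                             # Eq. (4), cycle length Ω = 8
        return base + rng.uniform(0.5, 2.5) * np.sin(2 * np.pi * t / 8)
    if kind == "systematic":                         # Eq. (5)
        return base + rng.uniform(0.5, 2.5) * (-1.0) ** t
    raise ValueError(kind)

def train_prototypes(samples, h=0.5):
    """Steps 3-4 for one labeled pattern class: grow prototypes by the
    vigilance test and update every matching neuron with Eqs. (6)-(7)."""
    W = [np.asarray(samples[0], dtype=float)]  # W_1 = X_1
    counts = [0]                               # t_j, per-neuron update count
    for X in samples[1:]:
        matched = False
        for j in range(len(W)):
            if correlation(X, W[j]) > h:       # vigilance test via Eq. (1)
                counts[j] += 1
                W[j] = W[j] + (X - W[j]) / counts[j]   # Eqs. (6)-(7)
                matched = True
        if not matched:                        # no neuron matched: activate one
            W.append(np.asarray(X, dtype=float))
            counts.append(0)
    return W

# Step 5: repeat per pattern, keeping the label with each prototype set
kinds = ("upward shift", "downward shift", "upward trend",
         "downward trend", "cyclic", "systematic")
prototypes = {k: train_prototypes([generate_pattern(k) for _ in range(300)])
              for k in kinds}
```

Because t_j starts at 0, the first match of a neuron overwrites its seed weight with the input (λ = 1); later matches move the weight by smaller and smaller steps, which is what stabilizes the prototypes.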
All control chart patterns are thus learned into several, possibly different, numbers of prototypes representing each pattern.

3 Experimental results and comparisons

The pattern length n is also a parameter of this prototype-generating technique, and it generally depends on the classification approach. We adopt n = 30 and h = 0.5 here, following a similar discussion in Yang and Yang [17], and take N = 300 training samples for each pattern. The proposed semi-supervised learning algorithm gives a major improvement, especially when the training data are collected from experience. Yang and Yang [17] condense all samples of each pattern into a single prototype by taking their average. However, there may be one or more distinct clusters among these samples, in which case it is better to have several prototypes as representatives of the pattern. Table 1 shows the number of activated neurons for each pattern (upward shift, downward shift, upward trend, downward trend, cyclic, and systematic) after training, for the different thresholds h = 0.3, 0.4, 0.5, and 0.6. It is clear that a larger h activates more neurons.

Table 1. Number of activated neurons per pattern for h = 0.3, 0.4, 0.5, and 0.6.

We combine all trained neuron weights as prototypes for testing, and all the weights are labeled because each pattern is trained individually. Equations (2)-(5) are also used to generate samples for the testing stage. A threshold h qualifies whether the winner matches well enough. If the similarity measure is smaller than h, the winner is not similar enough, and we will

classify the input as a normal pattern. This mechanism helps us identify normal conditions and continue until an unnatural pattern is recognized. If a normal pattern is present but recognized as an unnatural pattern, a false alarm occurs (a Type I error). Conversely, the Type II error measures the capability of classifying unnatural patterns. Clearly, a larger threshold h decreases the Type I error but increases the Type II error. We take h = 0.5 here, similar to Yang and Yang [17]. The classification algorithm is then as follows:

The classification algorithm
Step 1: Choose a threshold h.
Step 2: Take a processing data sequence containing the most recent n points as the pattern to be recognized.
Step 3: Input the data sequence to the recognizer and calculate its statistical correlation coefficient (1) with all labeled neuron weights.
Step 4: Choose the maximum value among all outputs to determine the winner and classify the sequence with the winner's label. If the maximum value is smaller than the threshold h, classify it as a normal pattern.

Recall that Yang and Yang [17] simply took the sample average as the prototype of each control chart pattern in the training stage, so only one prototype is evaluated per abnormal pattern, whereas our proposed semi-supervised learning generates several prototypes for each abnormal pattern (see Table 1). Based on the same testing samples, we compare the two methods by their correct classification rates, computed as the number of correct classifications over the number of testing samples. The correct classification rates of the testing samples for the two methods are shown in Table 2. The proposed semi-supervised learning algorithm is considerably better than Yang and Yang for the shift patterns, slightly worse for the trend patterns, and slightly better for the cyclic and systematic patterns.
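The classification stage amounts to a nearest-prototype rule under the correlation measure, with a normal-pattern fallback. A sketch (the dictionary layout for the labeled prototypes is an assumption, and the correlation function from Eq. (1) is repeated for self-containment):

```python
import numpy as np

def correlation(x, y):
    # Statistical correlation coefficient, Eq. (1)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

def classify(window, prototypes, h=0.5):
    """Steps 1-4: label the recent n-point window by its best-correlated
    labeled prototype, or as 'normal' when no correlation reaches h."""
    window = np.asarray(window, dtype=float)
    best_label, best_r = "normal", h           # anything below h stays "normal"
    for label, protos in prototypes.items():
        for w in protos:
            r = correlation(window, np.asarray(w, dtype=float))
            if r > best_r:
                best_label, best_r = label, r
    return best_label
```

Raising h here lowers the Type I error (fewer false alarms on normal data) at the cost of a higher Type II error, matching the trade-off described above.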
Overall, the proposed method gives better results than Yang and Yang [17].

Table 2. Correct classification rates for the two methods

Method                               Up shift  Down shift  Up trend  Down trend  Cyclic  Systematic  Normal
Proposed semi-supervised learning    97.5      98          9?.5      94.5        97      96.5        100
Yang & Yang (2005)                   94.5      93.5        94        94.5        96      96          100

References:
[1] A. Al-Ghanim, 1997, An unsupervised learning neural algorithm for identifying process behavior on control charts and a comparison with supervised learning approaches. Computers and Industrial Engineering, 32(3), 627-639.
[2] A.M. Bensaid, L.O. Hall, J.C. Bezdek and L.P. Clarke, 1996, Partially supervised clustering for image segmentation. Pattern Recognition, 29(5), 859-871.
[3] G.A. Carpenter and S. Grossberg, 1987, A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics, and Image Processing, 37, 54-115.
[4] G.A. Carpenter and S. Grossberg, 1988, The ART of adaptive pattern recognition by a self-organizing neural network. Computer, 21, 77-88.
[5] S.I. Chang and C.A. Aw, 1996, A neural fuzzy control chart for detecting and classifying process mean shifts. International Journal of Production Research, 34, 2265-2278.
[6] C.S. Cheng, 1997, A neural network approach for the analysis of control chart patterns. International Journal of Production Research, 35, 667-697.
[7] B. Gabrys and L. Petrakieva, 2004, Combining labelled and unlabelled data in the design of pattern classification systems. International Journal of Approximate Reasoning, 35, 251-273.
[8] E.E. Grant and R.S. Leavenworth, 1996, Statistical Quality Control. New York: McGraw-Hill.
[9] S. Grossberg, 1976, Adaptive pattern classification and universal recoding, I: parallel development and coding of neural feature detectors. Biological Cybernetics, 23, 121-134.
[10] R.S. Guh and J.D.T. Tannock, 1999, Recognition of control chart concurrent patterns using a neural network approach. International Journal of Production Research, 37, 1743-1765.
[11] H. Hwarng and N. Hubele, 1993, Back-propagation pattern recognizers for X-bar control charts: methodology and performance. Computers and Industrial Engineering, 24(2), 219-235.
[12] W. Pedrycz and J. Waletzky, 1997, Fuzzy clustering with partial supervision. IEEE Transactions on Systems, Man, and Cybernetics - Part B, 27(5), 787-795.
[13] M.B. Perry, J.K. Spoerre and T. Velasco, 2001, Control chart pattern recognition using back propagation artificial neural networks. International Journal of Production Research, 39, 3399-3418.

[14] D.T. Pham and E. Oztemel, 1994, Control chart pattern recognition using learning vector quantization networks. International Journal of Production Research, 32, 721-729.
[15] W.A. Shewhart, 1931, Economic Control of Quality of Manufactured Product. New York: Van Nostrand.
[16] M.S. Yang and J.H. Yang, 2002, A fuzzy-soft learning vector quantization for control chart pattern recognition. International Journal of Production Research, 40, 2721-2731.
[17] J.H. Yang and M.S. Yang, 2005, A control chart pattern recognition using statistical correlation coefficient method. Computers & Industrial Engineering, 48, 205-221.