A Novel Dynamic Target Tracking Algorithm for Image Based on Two-step Reinforcement Learning
Sensors & Transducers, Vol. 166, Issue 3, March 2014, pp. 23-28
Sensors & Transducers © 2014 by IFSA Publishing, S. L.

Xiaokun WANG *, Jinhua YANG, Lijuan ZHANG, Chenghao JIANG
1 Changchun University of Science and Technology, Changchun, Jilin 130022, China
2 Aviation University of Air Force, Changchun, Jilin 130022, China
* E-mail: wangxkcust@163.com

Received: 17 December 2013 / Accepted: 28 February 2014 / Published: 31 March 2014

Abstract: In this article, we model image target tracking in a reinforcement learning framework and propose a two-step reinforcement learning algorithm for target tracking. In this algorithm, we set multiple tracker agents to track the pixel of the target; the intention of the reinforcement learning is to obtain the tracking strategy of every tracker agent. We divide each learning step of a tracker into two parts: one learns the task-division strategy, the other learns the action strategy, and every tracker agent shares the experience it has learned. Simulation results illustrate the feasibility and effectiveness of the algorithm. Copyright © 2014 IFSA Publishing, S. L.

Keywords: Target tracking, Reinforcement learning, Image processing, Machine learning.

1. Introduction

With the rapid development of computer vision and image processing techniques, target tracking has been widely used in military areas, such as military satellites, theater missile defense, reconnaissance aircraft and missile guidance, and it is also widely used in civil areas, such as video monitoring. Target tracking in images is the core technology in the research of target motion analysis, and it combines image processing, pattern recognition, computer vision, automation and other disciplines. It analyzes and tracks moving objects in image sequences, and then estimates the motion parameters of the target in each frame of the image, such as its two-dimensional coordinates. Behavior understanding of the moving object can then be implemented by further processing and analysis of the motion parameters.
In a word, target tracking has become one of the key research areas in the field of image processing. There are many traditional approaches to target tracking. Gomez et al. [1] developed an aerial vehicle stabilization system based on computer vision to track a moving target on the ground. Xu et al. [2] proposed a min-max approximation method to estimate the target location for tracking. Granstrom et al. [3] presented a random-set-based approach to track an unknown number of extended targets. Amodeo et al. [4] addressed the issue of single target tracking in controlled mobility sensor networks and proposed a method for estimating the current position of a single target. Liu et al. [5] proposed a reinforcement learning based feature selection method for target tracking. Wang et al. [6] proposed a novel distributed Kalman filter to estimate the target position and avoid collisions. Di Caterina et al. [7] proposed the concept of a matrix of pixel weights to preserve the structure of the target template. (Article number P_929)
Reinforcement learning is a powerful machine learning approach; it is essentially dynamic programming based on Markov decision processes, and it has become one of the most important techniques for constructing agents [8, 9]. In reinforcement learning, the agents perceive the states of the environment, choose proper actions and receive rewards with uncertainty; based on these, the agents learn an optimal action strategy [10]. Reinforcement learning has solved the problem of a single agent choosing optimal behavior strategies in a Markov decision process environment [11].

In this paper, we model the target tracking problem as a reinforcement learning problem and propose a two-step reinforcement learning algorithm for target tracking in images. In the algorithm, we set multiple tracker agents to track the target in the image. First the algorithm makes task assignments to the tracker agents dynamically, that is, it assigns a sub-goal to each tracker agent; then each tracker agent chooses an action to move toward its sub-goal. After learning, the tracker agents will have learned the optimal action strategies, so they can move to the target quickly; when all tracker agents have moved to (caught) the target, target tracking is complete.

The remainder of this paper is organized as follows. Section 2 describes how to convert the target tracking problem into a reinforcement learning problem. Section 3 offers brief background knowledge about reinforcement learning. We propose the new two-step reinforcement learning algorithm for target tracking in Section 4, and Section 5 presents experimental results. Finally, conclusions and recommendations for future work are summarized in Section 6.

2. Problem Modeling

Taking military aircraft tracking as an example, the aim of target tracking is, for each frame of the image, to calculate the two-dimensional coordinates of the target, as shown in Fig. 1(a). We abstract and simplify the problem: find the coordinate of the central pixel of the target in the image composed of pixels, as shown in Fig. 1(b).
Furthermore, we can represent the image as a two-dimensional grid, in which each element of the grid represents one or more pixels of the image (this can be set flexibly according to the actual situation), and the target is represented as one cell of the grid. Based on this setting, target tracking can be completed by the following steps. First, we set 4 tracker agents, as shown in Fig. 2(a); each tracker agent occupies a cell in the image grid, so the 4 tracker agents compose a collaborative team whose aim is to track the target pixel. The initial states of the tracker agents can be set to the four corner pixels of the image, as shown in Fig. 2(a). Trackers and the target can move at most one cell distance each time, and there are 5 actions: Up, Down, Left, Right and Standstill. Trackers can thus move in the image grid, but no two trackers can occupy the same cell at the same time. When the 4 tracker agents have moved to the cells adjacent to the target pixel, tracking is successfully complete; for example, Fig. 2(b) is a successful finish status, in which the pixel in the red frame is the target pixel.

Fig. 1. Abstraction of image target tracking.

Fig. 2. The image target tracking problem is modeled as a reinforcement learning problem.

According to the above modeling approach, target tracking is actually a procedure in which four tracker agents pursue the target agent through teamwork. One of the most critical issues is how to design the tracking strategy of each tracker agent, so that each tracker agent can select the appropriate action to move to the target based on the current state and its own knowledge. Moreover, target tracking requires cooperation among the tracker agents. This transforms the problem into a multi-agent reinforcement learning problem: we can learn the tracking strategy of each tracker agent, as well as the collaboration of multiple tracker agents, via a reinforcement learning approach.
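The grid abstraction above can be sketched in a few lines of code. This is a minimal sketch, assuming a 10x10 grid (the size used later in Section 5); the helper names `step` and `tracking_complete` are ours, and only the five actions and the adjacency condition come from the paper.

```python
# Sketch of the grid abstraction from Section 2. The paper's coordinate
# system has the y-axis pointing down, so "Up" decreases y.
GRID = 10
ACTIONS = {"Up": (0, -1), "Down": (0, 1), "Left": (-1, 0),
           "Right": (1, 0), "Standstill": (0, 0)}

def step(pos, action, occupied):
    """Move at most one cell; stay put if the move leaves the grid
    or the destination cell is already occupied by another tracker."""
    dx, dy = ACTIONS[action]
    nx, ny = pos[0] + dx, pos[1] + dy
    if not (0 <= nx < GRID and 0 <= ny < GRID) or (nx, ny) in occupied:
        return pos
    return (nx, ny)

def tracking_complete(trackers, target):
    """Tracking succeeds when the four trackers occupy the four
    cells adjacent to the target pixel (up, down, left, right)."""
    tx, ty = target
    adjacent = {(tx, ty - 1), (tx, ty + 1), (tx - 1, ty), (tx + 1, ty)}
    return set(trackers) == adjacent

# Example: four trackers surrounding a target at (5, 5).
print(tracking_complete([(5, 4), (5, 6), (4, 5), (6, 5)], (5, 5)))  # True
```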
3. Reinforcement Learning

The research of reinforcement learning focuses on how to make agents perceive and act in an environment, and select the optimal sequence of actions to achieve their goals [12]. During the process of learning, every action of the agent in the environment receives a reward or punishment. The agent can learn to choose a series of actions that obtains the maximum cumulative reward. The Q-learning algorithm [13] is currently the most widely used reinforcement learning algorithm; it can
learn the optimal action strategy by sampling from the environment. Q(s, a) is defined as the maximum reward the agent can obtain by using action a as the first action from state s. Thus, the optimal strategy of the agent is to choose the action with the maximum Q(s, a). In order to learn the Q function, the agent repeatedly observes the current state s, chooses and executes an action a, and then considers the reward r = r(s, a) and the new state s'. The agent updates each entry of Q(s, a) with the following rule:

Q(s, a) ← r + γ max_{a'} Q(s', a')

The Q-learning algorithm in effect modifies the action strategy with the experience gained by trial and error, in order to obtain the strategy with the maximum reward. In the initial steps of learning, the agent does not have any experience or knowledge, so it can only rely on trial and error; during learning, the agent acquires knowledge and uses it to modify its action strategy [14].

A multi-agent system is composed of multiple autonomous agents; the system completes complex tasks and solves complex problems through collaboration among the agents [15, 16]. If a traditional single-agent reinforcement learning algorithm is applied to a multi-agent environment, it takes a long time to converge because of the exponential size of the state and action space. In order to reduce the scale of the state and action space, distributed independent reinforcement learning methods are generally used, but they have difficulty converging to the globally optimal strategy. One reason is that traditional methods either do not divide the task among the agents, or divide it only once with a fixed strategy before learning, i.e. each agent is assigned a fixed sub-task that stays constant during learning. Each agent therefore cannot take the team's global benefit into account: the learning result is only each agent's locally optimal strategy for its own sub-task, and it cannot adapt to dynamic changes of the environment.
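The Q-learning update above can be sketched as a few lines of tabular code. This is our illustration on a toy 6-cell chain, not the paper's environment; it uses the paper's deterministic update form Q(s, a) ← r + γ max Q(s', a') (no learning rate) together with epsilon-greedy exploration, which is one common way to realize the "trial and error" the text describes.

```python
import random
from collections import defaultdict

GAMMA, EPSILON = 0.9, 0.1
Q = defaultdict(float)        # tabular Q-function: Q[(state, action)]
ACTIONS = [+1, -1]            # move right / left on a 6-cell chain

def choose_action(state):
    if random.random() < EPSILON:          # explore by trial and error
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # the paper's deterministic rule: Q(s, a) <- r + gamma * max_a' Q(s', a')
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] = reward + GAMMA * best_next

# Train: start at cell 0, reaching cell 5 yields reward 1 and ends an episode.
for _ in range(500):
    s = 0
    while s != 5:
        a = choose_action(s)
        s2 = min(max(s + a, 0), 5)
        r = 1.0 if s2 == 5 else 0.0
        update(s, a, r, s2)
        s = s2

print(round(Q[(4, +1)], 3))   # 1.0
```

The learned values decay geometrically with distance from the goal (Q at cell 3 converges to γ·1 = 0.9, and so on), which is exactly the backward propagation of reward the text describes.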
4. The Proposed Approach for Image Target Tracking

Aiming at the characteristics of the image target tracking problem, and to overcome the shortcomings of traditional reinforcement learning, in this paper we propose a two-step reinforcement learning algorithm for target tracking in images. We divide the learning procedure into two stages. One is "learning the strategy of task division", which makes a proper division of work among the four tracker agents so that they move to the target efficiently. The other stage is "learning the strategy of action choice", i.e. performing reinforcement learning on the action strategy of each tracker agent, so that each tracker agent can choose the most appropriate behavior (action) to move to the target based on the current state. We now describe the two stages of the algorithm separately.

4.1. Learning the Strategy for Task Division

The condition of target tracking is that the four tracker agents simultaneously occupy the four grid cells adjacent to the target agent. Existing approaches fall into two categories. The first is to learn the tracking strategy directly, without any task division; these approaches have large blindness and low convergence speed. The second is to make the task division only once, before learning starts. In such approaches, the overall goal is decomposed into four sub-goals: occupying the up, down, left and right grid cells of the target agent. The four tracker agents need to complete their sub-goals to achieve the overall goal. Each tracker agent is assigned a fixed sub-goal before the learning process, and always tracks that fixed sub-goal during the whole learning process. So if the tracker agent whose sub-goal is "occupy the left grid of the target" (hereafter tracker agent A) moves below the target agent, while the tracker agent whose sub-goal is "occupy the underside grid of the target" (hereafter tracker agent B) has reached the left of the target agent, the original learning strategy can still achieve the local optimum of individuals A and B, but this is obviously not the globally optimal tracking strategy, i.e.
it has low tracking speed. At this time, if the sub-goal of tracker agent A becomes "occupy the underside grid of the target", and the sub-goal of tracker agent B becomes "occupy the left grid of the target", then the target will be tracked more quickly. So in this article we propose to distribute the sub-goals dynamically: at each step, before each tracker agent chooses an action to move, the sub-goals are redistributed, and each tracker agent then tracks the target according to its new sub-goal. The strategy for redistributing sub-goals can also be obtained by reinforcement learning.

In order to reduce the state-action space, the state is represented by the relative position of the target agent to the tracker agent. We establish a coordinate system in the two-dimensional grid: the downward direction is the y-axis direction, the direction from left to right is the x-axis direction, and the side length of a grid cell equals one unit in the coordinate system. Each tracker agent has a sensing radius; only when the target agent is within the sensing range can the tracker agent perceive it. This can be understood as the view of the tracker agent: a tracker agent can only perceive the things within its range of view. When a tracker agent perceives the target agent, the relative position of tracker and target agent can be represented as a tuple (x_Tracker − x_Target, y_Tracker − y_Target), in which (x_Tracker, y_Tracker) represents the current
coordinate of the tracker agent, and (x_Target, y_Target) represents the current coordinate of the target agent. Because one sub-goal cannot be distributed to two tracker agents at the same time, a four-bit binary number is used as a mask to represent the distribution of the sub-goals Right, Down, Left, Up. For example, 1001 = 9 represents that the sub-goals Right and Up have been distributed to some trackers, while the sub-goals Down and Left have not. The range of the mask is from 0000 to 1111, corresponding to the decimal numbers 0 to 15, so the mask bits can be represented as an integer.

During the process of learning the strategy of task division (i.e. sub-goal distribution), the state of each tracker agent can be represented as S1 = {x_Tracker − x_Target, y_Tracker − y_Target, mask}, and the action space is the selection among the four sub-goals, represented as a1 = {Right, Down, Left, Up}. The Q-value of each tracker agent is updated according to the following rule:

Q1(S1_t, a1_t) = r1 + γ max_{a1} Q1(S1_{t+1}, a1),

where r1 is the reward. For each tracker agent, if the chosen sub-goal has already been distributed to another tracker, the distribution is unreasonable and the tracker gets a negative reward. Otherwise, the distance between the tracker and the sub-goal cell, (x_Tracker − x_Subgoal)^2 + (y_Tracker − y_Subgoal)^2, is used to measure the quality of the distribution: if the distance between the tracker and the grid cell of its sub-goal is lower than the distance between the tracker and the other sub-goals, i.e. the tracker agent is closer to this sub-goal than to the others, then a big positive reward is given; otherwise a little positive reward is given. Because the tracking strategies of the four tracker agents can be shared, sharing the Q-value among the tracker agents enhances the learning performance.
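The mask encoding and the distance-based task-division reward might look like the following sketch. The concrete reward magnitudes (−1.0, 0.1, 1.0) are our assumption; the paper only fixes their ordering (negative, little positive, big positive).

```python
# Sketch of the Section 4.1 state encoding and reward.
SUBGOALS = ["Right", "Down", "Left", "Up"]    # bit order of the 4-bit mask
OFFSETS = {"Right": (1, 0), "Down": (0, 1), "Left": (-1, 0), "Up": (0, -1)}

def mask_to_int(assigned):
    """Encode the set of already-distributed sub-goals as an integer,
    e.g. {'Right', 'Up'} -> 0b1001 = 9, as in the paper's example."""
    return sum(1 << (3 - SUBGOALS.index(g)) for g in assigned)

def division_reward(tracker, target, subgoal, assigned):
    """Negative reward for picking an already-taken sub-goal; otherwise a
    big reward if this sub-goal's cell is the nearest free one (squared
    Euclidean distance), and a little positive reward otherwise."""
    if subgoal in assigned:
        return -1.0
    def dist2(g):
        gx, gy = target[0] + OFFSETS[g][0], target[1] + OFFSETS[g][1]
        return (tracker[0] - gx) ** 2 + (tracker[1] - gy) ** 2
    free = [g for g in SUBGOALS if g not in assigned]
    return 1.0 if dist2(subgoal) == min(dist2(g) for g in free) else 0.1

print(mask_to_int({"Right", "Up"}))                    # 9
print(division_reward((2, 5), (5, 5), "Left", set()))  # 1.0 (Left cell is nearest)
```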
4.2. Learning the Strategy for Action Selection

For each tracker agent, the aim of this stage is to learn a strategy according to which the tracker agent can complete its own sub-goal. Using the relative coordinate of the tracker to its sub-goal to represent the state, the state can be represented as S2 = {x_Tracker − x_Subgoal, y_Tracker − y_Subgoal}. Fig. 3 shows the state space of a tracker agent at each location when the sensing radius of the tracker agent is 2 and its sub-goal is to occupy the left grid cell of the target agent. The action space is a2 = {Up, Down, Left, Right, Standstill}, and the update rule of the Q-value for each tracker agent is as follows:

Q2(S2_t, a2_t) = r2 + γ max_{a2} Q2(S2_{t+1}, a2),

where r2 is the reward. After a tracker agent chooses an action, if it reaches its sub-goal, it gets the maximum reward; if it reduces the distance to the sub-goal, it gets the second highest reward; if the distance to the sub-goal is unchanged, it gets 0 reward; and if the distance to the sub-goal increases, it gets a negative reward.

Fig. 3. The state space for learning the strategy for action selection.

The states of the tracker agents have a corresponding relationship. From Fig. 3, it can be seen that the states for the other three sub-goals can be converted to the state space of Fig. 3 by rotating around the target agent. For example, the state of a tracker agent whose sub-goal is occupying the cell above the target agent, with state (0, −1) and action Down, is equivalent to the state of a tracker agent whose sub-goal is occupying the cell to the left of the target agent, with state (−1, 0) and action Right, and the corresponding Q-values are also equivalent. So their behavioral strategies can be shared, and sharing the Q-function of the tracker agents improves the learning efficiency.
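The rotation-based sharing of Q2 can be sketched as follows. This is our construction of the paper's idea: states are taken relative to the target, the "occupy the left grid" frame is used as the canonical one, and the rotation counts per sub-goal are chosen so that each sub-goal's adjacent cell maps onto the Left cell (−1, 0).

```python
# Sketch of the Section 4.2 state/action sharing: states and actions under
# the Up, Down and Right sub-goals are rotated around the target into the
# canonical "Left" frame, so all four trackers can share one Q2 table.
ROT_STEPS = {"Left": 0, "Up": 1, "Right": 2, "Down": 3}
ACTION_VECS = {"Up": (0, -1), "Down": (0, 1), "Left": (-1, 0),
               "Right": (1, 0), "Standstill": (0, 0)}

def rotate(v, steps):
    """Rotate a relative position or action vector by 90-degree steps,
    using (x, y) -> (y, -x) in the paper's y-points-down coordinates."""
    x, y = v
    for _ in range(steps):
        x, y = y, -x
    return (x, y)

def canonical(state, action, subgoal):
    """Map a (state, action) pair under any sub-goal into the Left frame."""
    k = ROT_STEPS[subgoal]
    vec = rotate(ACTION_VECS[action], k)
    name = next(a for a, v in ACTION_VECS.items() if v == vec)
    return rotate(state, k), name

# The paper's example: sub-goal Up, state (0, -1), action Down is
# equivalent to sub-goal Left, state (-1, 0), action Right.
print(canonical((0, -1), "Down", "Up"))  # ((-1, 0), 'Right')
```

All four trackers then index the shared Q2 table with `canonical(state, action, subgoal)` instead of the raw pair, which is what makes the Q-function sharing described above possible.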
In summary, the overall framework of the algorithm for each tracker agent is as follows:

Algorithm: Target tracking based on reinforcement learning
    /* Initialization */
    Q1 ← 0, Q2 ← 0
    while not convergence do
        /* Learning the strategy for task division */
        Get current state S1_t for task division;
        Choose an action a1_t according to S1_t and Q1;
        Execute action a1_t;
        Get sub-goal, reward r1 and next state S1_{t+1};
        Update Q1 with the rule:
            Q1(S1_t, a1_t) = r1 + γ max_{a1} Q1(S1_{t+1}, a1)
        /* Learning the strategy for action selection */
        Get state S2_t according to the current sub-goal;
        Choose an action a2_t according to S2_t and Q2;
        Execute action a2_t;
        Get reward r2 and next state S2_{t+1};
        Update Q2 with the rule:
            Q2(S2_t, a2_t) = r2 + γ max_{a2} Q2(S2_{t+1}, a2)
    end while
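A runnable reduction of the loop above is sketched below for a single tracker with a fixed target (the full method runs four trackers sharing Q1 and Q2, and the mask component of S1 disappears with a single tracker). The reward values, epsilon-greedy choice, Manhattan distances and episode lengths are our assumptions; only the two-step structure and the two update rules come from the algorithm box.

```python
import random
from collections import defaultdict

random.seed(0)
GAMMA, EPS, GRID = 0.9, 0.2, 10
MOVES = {"Up": (0, -1), "Down": (0, 1), "Left": (-1, 0),
         "Right": (1, 0), "Standstill": (0, 0)}
SUBGOALS = {"Right": (1, 0), "Down": (0, 1), "Left": (-1, 0), "Up": (0, -1)}
Q1 = defaultdict(float)   # task-division table:  ((dx, dy), subgoal) -> value
Q2 = defaultdict(float)   # action-choice table:  ((dx, dy), move)    -> value

def egreedy(Q, state, actions):
    # trial-and-error exploration, greedy exploitation otherwise
    if random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def train(episodes=300, target=(5, 5)):
    for _ in range(episodes):
        pos = (random.randrange(GRID), random.randrange(GRID))
        for _ in range(40):
            # step 1: learn the strategy for task division
            s1 = (pos[0] - target[0], pos[1] - target[1])
            g = egreedy(Q1, s1, list(SUBGOALS))
            goal = (target[0] + SUBGOALS[g][0], target[1] + SUBGOALS[g][1])
            # step 2: learn the strategy for action selection
            s2 = (pos[0] - goal[0], pos[1] - goal[1])
            m = egreedy(Q2, s2, list(MOVES))
            npos = (min(max(pos[0] + MOVES[m][0], 0), GRID - 1),
                    min(max(pos[1] + MOVES[m][1], 0), GRID - 1))
            # rewards follow the orderings described in Sections 4.1 and 4.2
            d_old = abs(s2[0]) + abs(s2[1])
            d_new = abs(npos[0] - goal[0]) + abs(npos[1] - goal[1])
            r2 = 1.0 if npos == goal else (0.5 if d_new < d_old else
                                           0.0 if d_new == d_old else -0.5)
            r1 = 1.0 if npos == goal else 0.1
            ns1 = (npos[0] - target[0], npos[1] - target[1])
            ns2 = (npos[0] - goal[0], npos[1] - goal[1])
            Q1[(s1, g)] = r1 + GAMMA * max(Q1[(ns1, a)] for a in SUBGOALS)
            Q2[(s2, m)] = r2 + GAMMA * max(Q2[(ns2, a)] for a in MOVES)
            if npos == goal:
                break
            pos = npos

train()
print(len(Q1) > 0 and len(Q2) > 0)  # True
```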
5. Simulated Experiment and Results

In this paper we adopt reinforcement learning for target tracking, so in the simulated experiment we use a 10*10 two-dimensional grid to simulate the image; each grid cell represents one or more pixels of the image. The tracker and target agents can only move within the 10*10 grid. The initial positions of the tracker agents are (0,0), (0,9), (9,0) and (9,9), and the initial position of the target agent is random. We carried out 6 experiments comparing the method with dynamic task division against the method without dynamic task division. In each experiment, 10 frames of simulated images are given and target tracking is performed 10 times; the number of steps at which tracking terminates is recorded, every 10 trackings form one group and the average number of steps of the group is taken, so each experiment has 10 groups of frame images. The number of tracking steps reflects the speed of the tracking algorithm. Fig. 4 shows the comparison of the number of steps for the two methods in the 6 experiments, in which the abscissa represents the group number of the image frames, and the ordinate represents the average number of steps for tracking.

[Fig. 4. Comparison of the proposed method and the traditional method: six panels (a)-(f), each plotting the number of tracking steps against the number of the simulated image frame for the proposed method and the traditional method.]
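The grouping-and-averaging protocol above can be expressed compactly. The helper and the recorded step counts below are hypothetical, for illustration only; the paper does not give its raw measurements.

```python
# Sketch of the Section 5 evaluation protocol: recorded step counts are
# collected into consecutive groups and each group is reported as its
# average number of tracking steps (len(steps) assumed a multiple of 10).
def group_averages(steps, group_size=10):
    return [sum(steps[i:i + group_size]) / group_size
            for i in range(0, len(steps), group_size)]

# e.g. 20 hypothetical recorded trackings -> 2 groups
recorded = [12, 14, 11, 15, 13, 12, 16, 11, 14, 12,
            9, 10, 8, 11, 9, 10, 12, 9, 10, 12]
print(group_averages(recorded))  # [13.0, 10.0]
```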
6. Conclusions and Future Work

In this article we converted target tracking in images into the reinforcement learning domain: we set multiple tracker agents to track the target agent, and presented a two-step reinforcement learning algorithm for target tracking. In the algorithm, at each step, dynamic task division is first performed for the tracker agents, i.e. a sub-goal is assigned to each tracker agent; then each tracker agent chooses its action according to its current sub-goal. The learning algorithm divides the learning procedure into two parts: one learns the strategy for task division, and the other learns the strategy for action selection; the tracker agents share the Q-function to enhance efficiency. The method improves efficiency to some extent, but each tracker agent still performs distributed learning separately, so it is not a thoroughly globally optimal method. In future work, we will concentrate on achieving further cooperation among the tracker agents through more interaction, so as to obtain a more globally optimal solution. Furthermore, we will apply the method to real application domains for validation. Zhu proposed some Bayesian network oriented machine learning methods [17-20]; it is promising to use these strategies to enhance the action selection for target tracking, and this is also work we will concentrate on in future.

References

[1]. J. E. Gomez-Balderas, G. Flores, L. R. Garcia Carrillo, Tracking a ground moving target with a quadrotor using switching control, Journal of Intelligent & Robotic Systems, Issue 1-4, 2013.
[2]. Xu Enyang, Ding Zhi, Dasgupta Soura, Target tracking and mobile sensor navigation in wireless sensor networks, IEEE Transactions on Mobile Computing, Vol. 12, Issue 1, 2013.
[3]. Granstrom Karl, Orguner Umut, A PHD filter for tracking multiple extended targets using random matrices, IEEE Transactions on Signal Processing, 2012.
[4].
Lionel Amodeo, Mourad Farah, Chehade Hicham, Snoussi Hichem, Controlled mobility sensor networks for target tracking using ant colony optimization, IEEE Transactions on Mobile Computing, Vol. 11, Issue 8, 2012.
[5]. Fang Liu, Jianbo Su, Reinforcement learning-based feature learning for object tracking, in Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), 2004.
[6]. Wang Zongyao, Gu Dongbing, Cooperative target tracking control of multiple robots, IEEE Transactions on Industrial Electronics, Vol. 59, Issue 8, 2012.
[7]. G. Di Caterina, J. J. Soraghan, Robust complete occlusion handling in adaptive template matching target tracking, Electronics Letters, Vol. 48, Issue 4, 2012.
[8]. T. M. Mitchell, Machine Learning, McGraw-Hill, 1997.
[9]. Changying Wang, Xiaohu Yin, Yiping Bao, Li Yao, A shared experience tuples multi-agent cooperative reinforcement learning algorithm, Pattern Recognition and Artificial Intelligence, Vol. 18, Issue 2, 2005.
[10]. Xiaohu Yin, Changying Wang, Multi-agent reinforcement learning algorithm based on decomposition, in Proceedings of the 10th CAAI Conference, November 2003.
[11]. Bo Fan, Quan Pan, Hongcai Zhang, A method for multi-agent coordination based on distributed reinforcement learning, Computer Simulation, Vol. 22, Issue 6, 2005.
[12]. Xiao Dan, Tan Ah-Hwee, Cooperative reinforcement learning in topology-based multi-agent systems, Autonomous Agents and Multi-Agent Systems, Vol. 26, Issue 1, 2013.
[13]. Changying Wang, Bo Zhang, An agent team based reinforcement learning model and its application, Journal of Computer Research and Development, Vol. 37, Issue 9, 2000.
[14]. Sharma Rajneesh, Matthijs T. J. Spaan, Bayesian-game-based fuzzy reinforcement learning control for decentralized POMDPs, IEEE Transactions on Computational Intelligence and AI in Games, Vol. 4, Issue 4, 2012.
[15]. Freek Stulp, Evangelos A. Theodorou, Stefan Schaal, Reinforcement learning with sequences of motion primitives for robust manipulation, IEEE Transactions on Robotics, Vol. 28, Issue 6, 2012.
[16].
Yong Duan, Baoxia Cui, Xinhe Xu, A multi-agent reinforcement learning approach to robot soccer, Artificial Intelligence Review, Vol. 38, Issue 3, 2012.
[17]. Yungang Zhu, Dayou Liu, Haiyang Jia, Yuxiao Huang, Structure learning of Bayesian network with bee triple-population evolution strategies, International Journal of Advancements in Computing Technology, Vol. 3, Issue 10, 2011.
[18]. Yungang Zhu, Dayou Liu, Haiyang Jia, A new evolutionary computation based approach for learning Bayesian network, Procedia Engineering, No. 15, 2011.
[19]. Yungang Zhu, Dayou Liu, Haiyang Jia, D. Trinugroho, Incremental learning of Bayesian networks based on chaotic dual-population evolution strategies and its application to nanoelectronics, Journal of Nanoelectronics and Optoelectronics, Vol. 7, Issue 2, 2012.
[20]. Yungang Zhu, Dayou Liu, Guifen Chen, Haiyang Jia, Helong Yu, Mathematical modeling for active and dynamic diagnosis of crop diseases based on Bayesian networks and incremental learning, Mathematical and Computer Modelling, Vol. 58, Issue 3-4, 2013.

Copyright © 2014, International Frequency Sensor Association (IFSA) Publishing, S. L. All rights reserved.
More informationUnvoiced Landmark Detection for Segment-based Mandarin Continuous Speech Recognition
Unvoiced Landmark Detection for Segment-based Mandarin Continuous Speech Recognition Hua Zhang, Yun Tang, Wenju Liu and Bo Xu National Laboratory of Pattern Recognition Institute of Automation, Chinese
More informationUsing Proportions to Solve Percentage Problems I
RP7-1 Using Proportions to Solve Percentage Problems I Pages 46 48 Standards: 7.RP.A. Goals: Students will write equivalent statements for proportions by keeping track of the part and the whole, and by
More informationThe Learning Model S2P: a formal and a personal dimension
The Learning Model S2P: a formal and a personal dimension Salah Eddine BAHJI, Youssef LEFDAOUI, and Jamila EL ALAMI Abstract The S2P Learning Model was originally designed to try to understand the Game-based
More informationThe Use of Statistical, Computational and Modelling Tools in Higher Learning Institutions: A Case Study of the University of Dodoma
International Journal of Computer Applications (975 8887) The Use of Statistical, Computational and Modelling Tools in Higher Learning Institutions: A Case Study of the University of Dodoma Gilbert M.
More informationAMULTIAGENT system [1] can be defined as a group of
156 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART C: APPLICATIONS AND REVIEWS, VOL. 38, NO. 2, MARCH 2008 A Comprehensive Survey of Multiagent Reinforcement Learning Lucian Buşoniu, Robert Babuška,
More informationA Case-Based Approach To Imitation Learning in Robotic Agents
A Case-Based Approach To Imitation Learning in Robotic Agents Tesca Fitzgerald, Ashok Goel School of Interactive Computing Georgia Institute of Technology, Atlanta, GA 30332, USA {tesca.fitzgerald,goel}@cc.gatech.edu
More informationRobust Speech Recognition using DNN-HMM Acoustic Model Combining Noise-aware training with Spectral Subtraction
INTERSPEECH 2015 Robust Speech Recognition using DNN-HMM Acoustic Model Combining Noise-aware training with Spectral Subtraction Akihiro Abe, Kazumasa Yamamoto, Seiichi Nakagawa Department of Computer
More informationChen Zhou. June Room 492, Darla Moore School of Business Office: (803) University of South Carolina 1014 Greene Street
Chen Zhou June 2017 Room 492, Darla Moore School of Business Office: (803) 777-4914 University of South Carolina 1014 Greene Street Email: chen.zhou@moore.sc.edu Columbia, SC, 29201 USA ACADEMIC APPOINTMENT
More informationMYCIN. The MYCIN Task
MYCIN Developed at Stanford University in 1972 Regarded as the first true expert system Assists physicians in the treatment of blood infections Many revisions and extensions over the years The MYCIN Task
More informationMandarin Lexical Tone Recognition: The Gating Paradigm
Kansas Working Papers in Linguistics, Vol. 0 (008), p. 8 Abstract Mandarin Lexical Tone Recognition: The Gating Paradigm Yuwen Lai and Jie Zhang University of Kansas Research on spoken word recognition
More informationDefragmenting Textual Data by Leveraging the Syntactic Structure of the English Language
Defragmenting Textual Data by Leveraging the Syntactic Structure of the English Language Nathaniel Hayes Department of Computer Science Simpson College 701 N. C. St. Indianola, IA, 50125 nate.hayes@my.simpson.edu
More informationHuman Factors Computer Based Training in Air Traffic Control
Paper presented at Ninth International Symposium on Aviation Psychology, Columbus, Ohio, USA, April 28th to May 1st 1997. Human Factors Computer Based Training in Air Traffic Control A. Bellorini 1, P.
More informationCircuit Simulators: A Revolutionary E-Learning Platform
Circuit Simulators: A Revolutionary E-Learning Platform Mahi Itagi Padre Conceicao College of Engineering, Verna, Goa, India. itagimahi@gmail.com Akhil Deshpande Gogte Institute of Technology, Udyambag,
More informationLecture 1: Machine Learning Basics
1/69 Lecture 1: Machine Learning Basics Ali Harakeh University of Waterloo WAVE Lab ali.harakeh@uwaterloo.ca May 1, 2017 2/69 Overview 1 Learning Algorithms 2 Capacity, Overfitting, and Underfitting 3
More informationThought and Suggestions on Teaching Material Management Job in Colleges and Universities Based on Improvement of Innovation Capacity
Thought and Suggestions on Teaching Material Management Job in Colleges and Universities Based on Improvement of Innovation Capacity Lihua Geng 1 & Bingjun Yao 1 1 Changchun University of Science and Technology,
More informationEmpirical research on implementation of full English teaching mode in the professional courses of the engineering doctoral students
Empirical research on implementation of full English teaching mode in the professional courses of the engineering doctoral students Yunxia Zhang & Li Li College of Electronics and Information Engineering,
More informationarxiv: v2 [cs.ro] 3 Mar 2017
Learning Feedback Terms for Reactive Planning and Control Akshara Rai 2,3,, Giovanni Sutanto 1,2,, Stefan Schaal 1,2 and Franziska Meier 1,2 arxiv:1610.03557v2 [cs.ro] 3 Mar 2017 Abstract With the advancement
More informationClass-Discriminative Weighted Distortion Measure for VQ-Based Speaker Identification
Class-Discriminative Weighted Distortion Measure for VQ-Based Speaker Identification Tomi Kinnunen and Ismo Kärkkäinen University of Joensuu, Department of Computer Science, P.O. Box 111, 80101 JOENSUU,
More informationContinual Curiosity-Driven Skill Acquisition from High-Dimensional Video Inputs for Humanoid Robots
Continual Curiosity-Driven Skill Acquisition from High-Dimensional Video Inputs for Humanoid Robots Varun Raj Kompella, Marijn Stollenga, Matthew Luciw, Juergen Schmidhuber The Swiss AI Lab IDSIA, USI
More informationMULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question.
Ch 2 Test Remediation Work Name MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question. Provide an appropriate response. 1) High temperatures in a certain
More informationLearning Human Utility from Video Demonstrations for Deductive Planning in Robotics
Learning Human Utility from Video Demonstrations for Deductive Planning in Robotics Nishant Shukla, Yunzhong He, Frank Chen, and Song-Chun Zhu Center for Vision, Cognition, Learning, and Autonomy University
More informationReducing Features to Improve Bug Prediction
Reducing Features to Improve Bug Prediction Shivkumar Shivaji, E. James Whitehead, Jr., Ram Akella University of California Santa Cruz {shiv,ejw,ram}@soe.ucsc.edu Sunghun Kim Hong Kong University of Science
More informationGeorgetown University at TREC 2017 Dynamic Domain Track
Georgetown University at TREC 2017 Dynamic Domain Track Zhiwen Tang Georgetown University zt79@georgetown.edu Grace Hui Yang Georgetown University huiyang@cs.georgetown.edu Abstract TREC Dynamic Domain
More informationSupervised Agriculture Experience Suffield Regional 2013
Name Chapter Mailing address Home phone Email address: Cell phone Date of Birth Present Age Years of Ag. Ed. completed as of Year in school or year of graduation Year Greenhand Degree awarded Total active
More informationStrategy Study on Primary School English Game Teaching
6th International Conference on Electronic, Mechanical, Information and Management (EMIM 2016) Strategy Study on Primary School English Game Teaching Feng He Primary Education College, Linyi University
More informationAnswer Key Applied Calculus 4
Answer Key Applied Calculus 4 Free PDF ebook Download: Answer Key 4 Download or Read Online ebook answer key applied calculus 4 in PDF Format From The Best User Guide Database CALCULUS. FOR THE for the
More informationApplication of Multimedia Technology in Vocabulary Learning for Engineering Students
Application of Multimedia Technology in Vocabulary Learning for Engineering Students https://doi.org/10.3991/ijet.v12i01.6153 Xue Shi Luoyang Institute of Science and Technology, Luoyang, China xuewonder@aliyun.com
More informationDEVELOPMENT OF AN INTELLIGENT MAINTENANCE SYSTEM FOR ELECTRONIC VALVES
DEVELOPMENT OF AN INTELLIGENT MAINTENANCE SYSTEM FOR ELECTRONIC VALVES Luiz Fernando Gonçalves, luizfg@ece.ufrgs.br Marcelo Soares Lubaszewski, luba@ece.ufrgs.br Carlos Eduardo Pereira, cpereira@ece.ufrgs.br
More informationProcedia - Social and Behavioral Sciences 191 ( 2015 ) WCES 2014
Available online at www.sciencedirect.com ScienceDirect Procedia - Social and Behavioral Sciences 191 ( 2015 ) 323 329 WCES 2014 Assessing Students Perception Of E-Learning In Blended Environment: An Experimental
More informationProFusion2 Sensor Data Fusion for Multiple Active Safety Applications
ProFusion2 Sensor Data Fusion for Multiple Active Safety Applications S.-B. Park 1, F. Tango 2, O. Aycard 3, A. Polychronopoulos 4, U. Scheunert 5, T. Tatschke 6 1 DELPHI, Electronics & Safety, 42119 Wuppertal,
More informationHow to read a Paper ISMLL. Dr. Josif Grabocka, Carlotta Schatten
How to read a Paper ISMLL Dr. Josif Grabocka, Carlotta Schatten Hildesheim, April 2017 1 / 30 Outline How to read a paper Finding additional material Hildesheim, April 2017 2 / 30 How to read a paper How
More informationUsing EEG to Improve Massive Open Online Courses Feedback Interaction
Using EEG to Improve Massive Open Online Courses Feedback Interaction Haohan Wang, Yiwei Li, Xiaobo Hu, Yucong Yang, Zhu Meng, Kai-min Chang Language Technologies Institute School of Computer Science Carnegie
More informationLEGO MINDSTORMS Education EV3 Coding Activities
LEGO MINDSTORMS Education EV3 Coding Activities s t e e h s k r o W t n e d Stu LEGOeducation.com/MINDSTORMS Contents ACTIVITY 1 Performing a Three Point Turn 3-6 ACTIVITY 2 Written Instructions for a
More informationA student diagnosing and evaluation system for laboratory-based academic exercises
A student diagnosing and evaluation system for laboratory-based academic exercises Maria Samarakou, Emmanouil Fylladitakis and Pantelis Prentakis Technological Educational Institute (T.E.I.) of Athens
More informationRegret-based Reward Elicitation for Markov Decision Processes
444 REGAN & BOUTILIER UAI 2009 Regret-based Reward Elicitation for Markov Decision Processes Kevin Regan Department of Computer Science University of Toronto Toronto, ON, CANADA kmregan@cs.toronto.edu
More informationFEIRONG YUAN, PH.D. Updated: April 15, 2016
FEIRONG YUAN, PH.D. Assistant Professor The University of Texas at Arlington College of Business Department of Management Box 19467 701 S. West Street, Suite 226 Arlington, TX 76019-0467 Phone: 817-272-3863
More informationTime series prediction
Chapter 13 Time series prediction Amaury Lendasse, Timo Honkela, Federico Pouzols, Antti Sorjamaa, Yoan Miche, Qi Yu, Eric Severin, Mark van Heeswijk, Erkki Oja, Francesco Corona, Elia Liitiäinen, Zhanxing
More informationHuman Emotion Recognition From Speech
RESEARCH ARTICLE OPEN ACCESS Human Emotion Recognition From Speech Miss. Aparna P. Wanare*, Prof. Shankar N. Dandare *(Department of Electronics & Telecommunication Engineering, Sant Gadge Baba Amravati
More informationSemi-Supervised Face Detection
Semi-Supervised Face Detection Nicu Sebe, Ira Cohen 2, Thomas S. Huang 3, Theo Gevers Faculty of Science, University of Amsterdam, The Netherlands 2 HP Research Labs, USA 3 Beckman Institute, University
More informationUNIVERSITY OF CALIFORNIA SANTA CRUZ TOWARDS A UNIVERSAL PARAMETRIC PLAYER MODEL
UNIVERSITY OF CALIFORNIA SANTA CRUZ TOWARDS A UNIVERSAL PARAMETRIC PLAYER MODEL A thesis submitted in partial satisfaction of the requirements for the degree of DOCTOR OF PHILOSOPHY in COMPUTER SCIENCE
More informationWenguang Sun CAREER Award. National Science Foundation
Wenguang Sun Address: 401W Bridge Hall Department of Data Sciences and Operations Marshall School of Business University of Southern California Los Angeles, CA 90089-0809 Phone: (213) 740-0093 Fax: (213)
More informationHigh-level Reinforcement Learning in Strategy Games
High-level Reinforcement Learning in Strategy Games Christopher Amato Department of Computer Science University of Massachusetts Amherst, MA 01003 USA camato@cs.umass.edu Guy Shani Department of Computer
More informationEileen Bau CIE/USA-DFW 2014
Eileen Bau Frisco Liberty High School, 10 th Grade DECA International Development Career Conference (2013 and 2014) 1 st Place Editor/Head of Communications (LHS Key Club) Grand Champion at International
More informationThe Effect of Explicit Vocabulary Application (EVA) on Students Achievement and Acceptance in Learning Explicit English Vocabulary
The Effect of Explicit Vocabulary Application (EVA) on Students Achievement and Acceptance in Learning Explicit English Vocabulary Z. Zakaria *, A. N. Che Pee Che Hanapi, M. H. Zakaria and I. Ahmad Faculty
More informationLevel 1 Mathematics and Statistics, 2015
91037 910370 1SUPERVISOR S Level 1 Mathematics and Statistics, 2015 91037 Demonstrate understanding of chance and data 9.30 a.m. Monday 9 November 2015 Credits: Four Achievement Achievement with Merit
More informationWhat s in a Step? Toward General, Abstract Representations of Tutoring System Log Data
What s in a Step? Toward General, Abstract Representations of Tutoring System Log Data Kurt VanLehn 1, Kenneth R. Koedinger 2, Alida Skogsholm 2, Adaeze Nwaigwe 2, Robert G.M. Hausmann 1, Anders Weinstein
More informationDOES RETELLING TECHNIQUE IMPROVE SPEAKING FLUENCY?
DOES RETELLING TECHNIQUE IMPROVE SPEAKING FLUENCY? Noor Rachmawaty (itaw75123@yahoo.com) Istanti Hermagustiana (dulcemaria_81@yahoo.com) Universitas Mulawarman, Indonesia Abstract: This paper is based
More informationOCR for Arabic using SIFT Descriptors With Online Failure Prediction
OCR for Arabic using SIFT Descriptors With Online Failure Prediction Andrey Stolyarenko, Nachum Dershowitz The Blavatnik School of Computer Science Tel Aviv University Tel Aviv, Israel Email: stloyare@tau.ac.il,
More informationDistributed Weather Net: Wireless Sensor Network Supported Inquiry-Based Learning
Distributed Weather Net: Wireless Sensor Network Supported Inquiry-Based Learning Ben Chang, Department of E-Learning Design and Management, National Chiayi University, 85 Wenlong, Mingsuin, Chiayi County
More informationData Modeling and Databases II Entity-Relationship (ER) Model. Gustavo Alonso, Ce Zhang Systems Group Department of Computer Science ETH Zürich
Data Modeling and Databases II Entity-Relationship (ER) Model Gustavo Alonso, Ce Zhang Systems Group Department of Computer Science ETH Zürich Database design Information Requirements Requirements Engineering
More informationPaper 2. Mathematics test. Calculator allowed. First name. Last name. School KEY STAGE TIER
259574_P2 5-7_KS3_Ma.qxd 1/4/04 4:14 PM Page 1 Ma KEY STAGE 3 TIER 5 7 2004 Mathematics test Paper 2 Calculator allowed Please read this page, but do not open your booklet until your teacher tells you
More informationAnalyzing the Usage of IT in SMEs
IBIMA Publishing Communications of the IBIMA http://www.ibimapublishing.com/journals/cibima/cibima.html Vol. 2010 (2010), Article ID 208609, 10 pages DOI: 10.5171/2010.208609 Analyzing the Usage of IT
More informationAUTOMATED TROUBLESHOOTING OF MOBILE NETWORKS USING BAYESIAN NETWORKS
AUTOMATED TROUBLESHOOTING OF MOBILE NETWORKS USING BAYESIAN NETWORKS R.Barco 1, R.Guerrero 2, G.Hylander 2, L.Nielsen 3, M.Partanen 2, S.Patel 4 1 Dpt. Ingeniería de Comunicaciones. Universidad de Málaga.
More informationDIRECT ADAPTATION OF HYBRID DNN/HMM MODEL FOR FAST SPEAKER ADAPTATION IN LVCSR BASED ON SPEAKER CODE
2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP) DIRECT ADAPTATION OF HYBRID DNN/HMM MODEL FOR FAST SPEAKER ADAPTATION IN LVCSR BASED ON SPEAKER CODE Shaofei Xue 1
More informationRule Learning with Negation: Issues Regarding Effectiveness
Rule Learning with Negation: Issues Regarding Effectiveness Stephanie Chua, Frans Coenen, and Grant Malcolm University of Liverpool Department of Computer Science, Ashton Building, Ashton Street, L69 3BX
More informationNational Taiwan Normal University - List of Presidents
National Taiwan Normal University - List of Presidents 1st Chancellor Li Ji-gu (Term of Office: 1946.5 ~1948.6) Chancellor Li Ji-gu (1895-1968), former name Zong Wu, from Zhejiang, Shaoxing. Graduated
More informationPOS tagging of Chinese Buddhist texts using Recurrent Neural Networks
POS tagging of Chinese Buddhist texts using Recurrent Neural Networks Longlu Qin Department of East Asian Languages and Cultures longlu@stanford.edu Abstract Chinese POS tagging, as one of the most important
More informationA Model to Detect Problems on Scrum-based Software Development Projects
A Model to Detect Problems on Scrum-based Software Development Projects ABSTRACT There is a high rate of software development projects that fails. Whenever problems can be detected ahead of time, software
More informationCLASSIFICATION OF TEXT DOCUMENTS USING INTEGER REPRESENTATION AND REGRESSION: AN INTEGRATED APPROACH
ISSN: 0976-3104 Danti and Bhushan. ARTICLE OPEN ACCESS CLASSIFICATION OF TEXT DOCUMENTS USING INTEGER REPRESENTATION AND REGRESSION: AN INTEGRATED APPROACH Ajit Danti 1 and SN Bharath Bhushan 2* 1 Department
More informationOperational Knowledge Management: a way to manage competence
Operational Knowledge Management: a way to manage competence Giulio Valente Dipartimento di Informatica Universita di Torino Torino (ITALY) e-mail: valenteg@di.unito.it Alessandro Rigallo Telecom Italia
More informationRole of Blackboard Platform in Undergraduate Education A case study on physiology learning in nurse major
I.J. Education and Management Engineering 2012, 5, 31-36 Published Online May 2012 in MECS (http://www.mecs-press.net) DOI: 10.5815/ijeme.2012.05.05 Available online at http://www.mecs-press.net/ijeme
More informationCurriculum Vitae of Chiang-Ju Chien
Contact Information Curriculum Vitae of Chiang-Ju Chien Affiliation : Department of Electronic Engineering, Huafan University, Taiwan Address : Department of Electronic Engineering, Huafan University,
More informationFF+FPG: Guiding a Policy-Gradient Planner
FF+FPG: Guiding a Policy-Gradient Planner Olivier Buffet LAAS-CNRS University of Toulouse Toulouse, France firstname.lastname@laas.fr Douglas Aberdeen National ICT australia & The Australian National University
More information