Sequential Decision Making Predictions under the Influence of Observational Learning
Sequential Decision Making Predictions under the Influence of Observational Learning
Shing H. DOONG
Department of Information Management, ShuTe University
YanChao District, KaoHsiung City, Taiwan 82445

ABSTRACT
Today's corporate managers face challenges in information technology (IT) adoption with great stakes. The waiting period for the returns on an IT investment, if any, to be realized could be long; thus word-of-mouth information propagation may not help them to make wise decisions. Though the information used by early adopters to make their decisions may not be available to the public, late adopters can often observe the decisions made by the early adopters and infer hidden information to supplement their own private information. Observational learning theory applies when a person uses observed behavior from others to infer something about the usefulness of the observed behavior. Walden and Browne proposed a simulation procedure to model the influence of observational learning in sequential decision making. Previously, we proposed a dynamic Bayesian network (DBN) to model sequential decision making under the influence of observational learning. In the present study, we show how to infer a DBN model from simulated data. Hidden Markov models and artificial neural networks were used to infer the DBN model. Their performance will be discussed.

Keywords: Sequential Decision Making, Observational Learning, Dynamic Bayesian Network, Hidden Markov Model, Artificial Neural Networks.

1. INTRODUCTION
Today's corporate managers face challenges in information technology (IT) adoption with great stakes. IT, together with telecommunication, has been considered the main driver of economic growth in many countries in the new economy era since the 2000s. To many companies, IT has become an indispensable part of their core competence, with several characteristics. First, IT is becoming so powerful and complex that a fair assessment of its merits is difficult. Second, capital investments in IT are substantial, yet returns on investments often take time to materialize.
Brooks has argued that software and other technological components are among the most complex artifacts ever built by human beings [1]. In some cases, the impacts of new technologies may take years to be realized [2]. Owing to these reasons, corporate managers need different kinds of tools and practices to help them make wise decisions in IT adoption. When people make decisions with limited or asymmetric information, they use different practices to correct this information deficiency. Observational learning occurs when one person observes the behavior of another person and infers something about the usefulness of that behavior based on the observation [3]. Research shows that, due to information asymmetry, people use what they observe from others to update their own private information or belief about a decision [4]. Observational learning often leads to an interesting phenomenon called informational cascades [5]. An informational cascade occurs when an individual's action does not depend on his private information signal [5]. Walden and Browne [3] developed a theoretical extension of the observational learning model in [5], where a binary private signal is generated for each decision maker who chooses to adopt or reject an action. In [3], a continuous private signal is issued to each individual who also chooses to adopt or reject an action. Changing the private information signal from binary to continuous has produced many interesting results. For example, unlike the easy informational cascading in the case of binary signals, Walden and Browne showed that there are always late decision reversals in a sequence of decisions. That is, informational cascades do not occur in the case of continuous signals. A simulation procedure was used to investigate the extended observational learning theory [3]. In a later study, we showed that the Walden and Browne (W-B) model can also be investigated from the perspective of a dynamic Bayesian network (DBN) [6]. In the present study, we consider the problem of inferring the DBN from simulated data.
Hidden Markov models (HMM) and artificial neural networks (ANN) are used to infer the DBN model. This paper is organized as follows. We briefly discuss the W-B model and our DBN perspective first. Then, HMM and ANN are introduced to learn the DBN, given simulated data. Experimental results will be presented next, followed by discussions and conclusions.

2. MATERIALS AND METHODS

Observational learning with continuous signals
Walden and Browne used a continuous signal to denote the private information received by an individual [3]. A sequence of individuals will each make a decision of choosing technology A (e.g., adopt the cloud computing IT) or technology B (e.g., reject the cloud computing IT). Assume that technologies A and B emit signals from the normal distributions N(µ_A, σ²) and N(µ_B, σ²) respectively, with µ_A > µ_B. An individual chooses A if the following condition is satisfied:

    p(s | µ_A) / p(s | µ_B) ≥ β_t    (1)

Here s is the private signal received by the individual, and p(s | µ_A) and p(s | µ_B) are the probability distribution functions (pdfs) of N(µ_A, σ²) and N(µ_B, σ²), respectively. Plugging the pdfs into Eq. (1) and solving for s, we obtain the following decision rule:

    choose A if s ≥ s*_t, choose B if s < s*_t    (2)

where the signal breakpoint s*_t is determined by the threshold β_t:

    s*_t = σ² ln(β_t) / (µ_A − µ_B) + (µ_A + µ_B) / 2    (3)

Using signal detection theory [7], Walden and Browne set the decision threshold β_t as follows:

    β_t = k · Pr_t(µ_B) / Pr_t(µ_A)    (4)

Here k is common to all individuals and involves the relative benefit of B to A. For the first individual, we can assume technologies A and B are equally good, thus Pr_1(µ_A) = Pr_1(µ_B). For the remaining decision makers, these terms are posterior probabilities after observing previous decisions:

    Pr_{t+1}(µ_A) = Pr(µ_A | D_1, ..., D_t),
    Pr_{t+1}(µ_B) = Pr(µ_B | D_1, ..., D_t)    (5)

In the above equation, D_t denotes the decision made by the t-th individual. Using the chain rule of conditional probability, Walden and Browne deduce the following rule for updating decision thresholds:

    β_{t+1} = β_t · Pr(D_t | µ_B, H_t) / Pr(D_t | µ_A, H_t)    (6)

where D_t = a_t or b_t when the t-th individual chooses technology A or B, and H_t consists of the decisions made by all t−1 previous individuals. The decisions in H_t together determine the signal breakpoint s*_t for the t-th individual via the threshold β_t and Eq. (3). Consequently, we have the following identities when A is the true technology emitting private signals:

    Pr(a_t | µ_A, H_t) = ∫_{s*_t}^{∞} p(s | µ_A) ds,
    Pr(b_t | µ_A, H_t) = ∫_{−∞}^{s*_t} p(s | µ_A) ds

Similar identities can be derived when B is the true technology emitting private signals.

Dynamic Bayesian Network
In order to model the W-B model as a DBN, we need two sequences of random variables to describe the dynamics involved in a sequential decision model [6]. The variable X_t represents a decision threshold β_t, thus P(X_t = β_t) = 1, and the variable Y_t represents the outcome of a decision. Assuming that signals are drawn from technology A, P(Y_t | X_t) is given by:

    P(a_t | β_t) = ∫_{s*_t}^{∞} p(s | µ_A) ds,
    P(b_t | β_t) = ∫_{−∞}^{s*_t} p(s | µ_A) ds    (7)

If the t-th individual chooses technology A (i.e., D_t = a_t), β_t will be scaled down to form β_{t+1} because Pr(a_t | µ_B, H_t) < Pr(a_t | µ_A, H_t). Thus, the breakpoint s*_t for the signal s moves leftwards, and there is more space for the next individual to choose technology A.
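As a concrete sketch of the decision rule (Eqs. (2)-(3)) and the threshold update (Eq. (6)), the following Python fragment uses example parameter values µ_A = 1, µ_B = 0, σ = 1; the function names are ours, not from the original paper:

```python
import math

MU_A, MU_B, SIGMA = 1.0, 0.0, 1.0   # assumed example parameters

def breakpoint(beta):
    # Eq. (3): signal breakpoint s* for a given threshold beta
    return SIGMA**2 * math.log(beta) / (MU_A - MU_B) + (MU_A + MU_B) / 2.0

def choose(signal, beta):
    # Eq. (2): choose technology A iff the private signal reaches the breakpoint
    return 'A' if signal >= breakpoint(beta) else 'B'

def norm_cdf(x, mu):
    # standard normal CDF shifted by mu, via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (SIGMA * math.sqrt(2.0))))

def update_beta(beta, decision):
    # Eq. (6): scale beta by the likelihood ratio of the observed decision
    s = breakpoint(beta)
    if decision == 'A':   # Pr(a | mu) is the upper tail above s*
        return beta * (1.0 - norm_cdf(s, MU_B)) / (1.0 - norm_cdf(s, MU_A))
    return beta * norm_cdf(s, MU_B) / norm_cdf(s, MU_A)

# With beta = 1 the breakpoint is the midpoint 0.5; an 'A' decision
# scales beta down, a 'B' decision scales it up.
```

Under these symmetric parameters, `breakpoint(1.0)` is 0.5, an observed A decision shrinks the threshold, and an observed B decision raises it, which is exactly the leftward/rightward breakpoint movement described in the text.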
On the other hand, if this individual chooses B, then β_t is scaled up, the breakpoint moves rightwards, and there is less space for the next individual to choose technology A. In order to describe the fact that β scales up or down depending on the decision of the t-th individual, a causal link from Y_t to X_{t+1} must be established.

[Figure 1. DBN for observational learning: links X_t → Y_t, X_t → X_{t+1}, and Y_t → X_{t+1} at each time step t.]

The X_{t+1} variable depends on X_t and Y_t by the rules in Table 1. This random variable has a discrete distribution, and its value depends on the previous hidden state (X_t) and the previous decision (Y_t). The W-B model is now converted to the DBN in Figure 1, with the relevant pdfs given in Eq. (7) and Table 1.

Table 1. Conditional probability P(X_{t+1} | X_t, Y_t)

    value of X_{t+1}                                    (β_t, a_t)   (β_t, b_t)
    β_t · Pr(a_t | µ_B, H_t) / Pr(a_t | µ_A, H_t)           1            0
    β_t · Pr(b_t | µ_B, H_t) / Pr(b_t | µ_A, H_t)           0            1

The DBN perspective has a few advantages over the original W-B model. First, the signal receiving and decision making step has been simplified to a binomial sampling step. Second, the dynamic updating of decision thresholds is replaced by the clear rules in Table 1. The DBN in Figure 1 is also easy to interpret: it shows the causal relationships among all relevant variables and how the system evolves as time moves ahead.

Hidden Markov Models
A hidden Markov model (HMM) is represented by a 5-tuple (S, V, A, B, P), where S = {s_1, s_2, ..., s_N} consists of N states that are not directly observable, V = {v_1, v_2, ..., v_M} denotes M observable outcomes emitted by a state, A = ((a_ij)), a_ij = P(s_j | s_i), represents the transition probabilities between states, B = ((b_jk)), b_jk = P(v_k | s_j), represents the emitting probabilities of outcomes by states, and P = ((p_i)) represents the initial probabilities of states [8]. Given an HMM with all needed components, a sequence of outcomes can be generated by (1) choosing an initial state according to the initial probability vector P; (2) emitting an outcome from this state by using the emitting probability matrix
B; (3) transiting to a next state by following the transition matrix A; and (4) repeating steps (2) and (3). This is a data deduction procedure commonly used in simulation studies. On the other hand, given sequences of outcomes (data), an HMM may be learned from the data and used for future predictions. This is a pattern induction procedure used in most data mining algorithms.

Artificial Neural Networks
Artificial neural networks (ANN) have been successfully applied to solve many function approximation problems in engineering and the social sciences. An ANN simulates the neural system of a brain to learn patterns from examples and uses the learned knowledge to make predictions for new data [9]. A basic data processing unit in a neural net is called a neuron, which is connected to other neurons via synapses. The structure of an ANN refers to the number of neurons and the way they are distributed and connected. To simplify the computation, neurons are organized into layers and information is transferred from layer to layer. The input layer represents the independent variables in a function approximation problem. The output layer corresponds to the dependent variable(s). Layers between the input and output layers are called hidden layers. An ANN with hidden layers is also called a multilayer perceptron (MLP). Without a hidden layer, a simple perceptron has limited learning capability [10]. It has been shown that an MLP can approximate arbitrarily well any continuous decision region provided that there are enough layers and neurons [11]. Learning an ANN from data means finding optimal synaptic weights to fit training data with known input-output pairs. Attention must be paid to the network structure so that we do not overfit the model to the data. A trained ANN can be used to predict output values for new input values.

3. SIMULATION STUDY
To examine informational cascades in sequential decision making under the influence of observational learning, both [3] and [6] presented a simulation study. Assume that two alternative technologies A and B are to be selected by a sequence of individuals.
Suppose A is the better technology; thus all private signals will be emitted by its pdf, which is assumed to be the normal distribution N(µ_A, σ²). We assume that µ_A = 1, µ_B = 0, σ = 1, and k = 1 in the notation of the previous section on observational learning with continuous signals. A simulation run consists of 100 sequential decisions as explained previously. For the W-B approach, this includes (1) drawing a signal from the pdf of technology A; (2) making a decision based on the signal, Eq. (2) and Eq. (3); (3) updating the new threshold according to the decision made and Eq. (6); and (4) continuing the process until the 100th decision is made. On the other hand, for the DBN approach, this includes (1) drawing a sample from the uniform distribution on (0, 1) to decide technology A or B according to Eq. (7); (2) updating the conditional probability P(X_{t+1} | X_t, Y_t) in Table 1; and (3) continuing the process until the 100th decision is made. Because the simulation is based on probabilistic sampling, one run of simulation can differ from another substantially. Thus, a total of 1000 runs of simulation are conducted to smooth out fluctuations between runs. At the end, the average correct decision rate for each decision position (from 1 to 100) is reported. The average correct decision rate at a position is the number of correct decisions (i.e., choosing A) at that position out of all runs, divided by 1000. Figure 2 shows that both approaches yield very similar curves of average correct decision rate. Both approaches have an average correct rate curve that starts low at around .70 and increases to around .95 at the later stage. The correlation between these two sequences of average correct decision rates is .994 and the mean absolute error (MAE) is .03. Other simulation types, including random updating of decision thresholds and cases of tertiary decisions, can be considered with the DBN approach.

[Figure 2. Comparison of the W-B and DBN approaches: average correct rate vs. position number.]

4. LEARNING PATTERNS OF SEQUENTIAL DECISIONS
The last section presented a simulation study based on the W-B and DBN models.
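That W-B simulation procedure can be sketched as follows (a minimal Python sketch assuming µ_A = 1, µ_B = 0, σ = 1, k = 1 as in the text, with fewer runs than the 1000 used in the paper):

```python
import math
import random

MU_A, MU_B, SIGMA = 1.0, 0.0, 1.0   # parameters assumed in the text

def norm_cdf(x, mu):
    return 0.5 * (1.0 + math.erf((x - mu) / (SIGMA * math.sqrt(2.0))))

def run_once(n=100, rng=random):
    beta, correct = 1.0, []          # k = 1 gives an initial threshold of 1
    for _ in range(n):
        # Eq. (3): breakpoint for the current threshold
        s_star = SIGMA**2 * math.log(beta) / (MU_A - MU_B) + (MU_A + MU_B) / 2.0
        signal = rng.gauss(MU_A, SIGMA)      # A is the true technology
        chose_a = signal >= s_star           # Eq. (2)
        correct.append(1 if chose_a else 0)
        if chose_a:                          # Eq. (6): threshold update
            beta *= (1.0 - norm_cdf(s_star, MU_B)) / (1.0 - norm_cdf(s_star, MU_A))
        else:
            beta *= norm_cdf(s_star, MU_B) / norm_cdf(s_star, MU_A)
    return correct

rng = random.Random(0)
runs = [run_once(rng=rng) for _ in range(200)]   # the paper averages 1000 runs
avg_rate = [sum(r[t] for r in runs) / len(runs) for t in range(100)]
```

With these parameters the first-position rate is near .70 (the probability that a single N(1, 1) signal exceeds the initial breakpoint 0.5) and the curve rises toward the later positions, matching the behavior reported above.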
We now consider the reverse process of discovering models from data. Since real sequential decisions are hard, if not impossible, to obtain in business, we use simulated data from the DBN approach to learn patterns of sequential decisions under the influence of observational learning.

Training samples
The DBN approach is used to generate training samples for learning patterns of sequential decisions. In total, 1000 observation sequences are output from the simulations. Each observation sequence consists of 100 sequential decisions of 1 (for choosing A) or 0 (for choosing B).

Using HMM as a learning tool
An HMM is a special DBN when it is spread out over time steps. In order to use an HMM, we need to decide the number of hidden states and the number of observable outcomes. Since there are only two possible decisions (1 or 0), we choose 2 hidden states and 2 emitted outcomes. The 1000 observation sequences of training samples are fed into a Baum-Welch (also called forward-backward) learning algorithm to learn the parameters of an HMM [8]. These parameters include the initial probability of each state, the outcome emitting probabilities, and the state transition probabilities. The trained HMM is then used to generate 1000 sequences of simulated outcomes; this generation is straightforward, given the full parameters of an HMM. Each sequence consists of 100 sequential decisions. The average correct decision rate is computed as before and compared with that from the DBN approach.
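The paper does not name an HMM library, so as an illustration the following is our own minimal NumPy Baum-Welch sketch for a single 2-state, 2-outcome sequence, with per-step scaling so the log-likelihood can be tracked:

```python
import numpy as np

def baum_welch(obs, n_states=2, n_symbols=2, n_iter=20, seed=0):
    # Minimal single-sequence Baum-Welch for a discrete HMM (a sketch).
    obs = np.asarray(obs)
    T = len(obs)
    rng = np.random.default_rng(seed)
    pi = np.full(n_states, 1.0 / n_states)            # initial state probabilities
    A = rng.dirichlet(np.ones(n_states), n_states)    # transition matrix
    B = rng.dirichlet(np.ones(n_symbols), n_states)   # emission matrix
    log_liks = []
    for _ in range(n_iter):
        # forward pass with per-step scaling
        alpha = np.zeros((T, n_states))
        scale = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
        log_liks.append(np.log(scale).sum())
        # backward pass with the same scaling
        beta = np.zeros((T, n_states))
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
        # E-step: state occupancy and expected transition counts
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = np.zeros((n_states, n_states))
        for t in range(T - 1):
            xi += alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :] / scale[t + 1]
        # M-step: re-estimate pi, A, B
        pi = gamma[0]
        A = xi / gamma[:-1].sum(axis=0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= B.sum(axis=1, keepdims=True)
    return pi, A, B, log_liks

# toy observation sequence standing in for one simulated decision sequence
pi_hat, A_hat, B_hat, ll = baum_welch([1] * 5 + [0] * 5 + [1] * 5 + [0] * 5)
```

In the study proper, all 1000 simulated decision sequences would be pooled in the re-estimation step; EM guarantees the per-iteration log-likelihood is non-decreasing, which is a useful sanity check on any implementation.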
Using ANN as a learning tool
In order to use an ANN as a learning tool, we need to set up an input-output correspondence, i.e., input variables and output variables. Using the DBN perspective (Figure 1) as a guideline, we can set up a correspondence as

    β_{t+1} = f(β_t, D_t)

Since β_t determines the distribution of D_t according to Eq. (7), we will use the probability of choosing A as a surrogate variable. Let p_t denote the probability of choosing A at the t-th position; p_t can be determined from Eq. (7). Then, we will approximate the following function with an ANN:

    p_{t+1} = f(p_t, D_t)    (8)

To prepare training samples for the ANN learning, we use the average correct decision rate from the DBN simulations to denote p_t, i.e., p_t = (number of decisions a_t at position t) / 1000. The D_t variable is extracted from the 1000 observation sequences of the DBN simulations. Instead of using the full set of 1000 observation sequences to train a single ANN model, we train 10 ANN models with smaller data sets and average the outputs from these 10 ANN models to make predictions. More specifically, we randomly choose 100 observation sequences from the DBN simulation to train an ANN model. This procedure is repeated 10 times to get 10 ANN models, which are bagged to get the final predictor. The idea is similar to a bagging predictor [12]. Since our model in Eq. (8) has only two inputs and one output, we do not need a complicated network structure; one or two hidden layers will suffice for our data set. Though our data set may be potentially large (e.g., 100 observation sequences with 100 sequential decisions produce 9900 input-output pairs), many of these pairs are simply duplicates. After using a trial-and-error approach with test data, we decided to use a two-hidden-layer structure: the first hidden layer has 4 neurons and the second hidden layer has 2 neurons. Our final MLP has 2, 4, 2, 1 neurons in the respective layers. The sigmoid function was chosen as the activation function. After the bagging aggregator is trained, it is used to predict the probability p_t in a simulation of sequential decisions. The first decision is simulated by using the average correct rate at position 1 from the DBN simulation.
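A bagged ensemble of small MLPs with the 2-4-2-1 structure described above can be sketched as follows. This is our own illustration: scikit-learn is our tooling choice (the paper names no library), and the training pairs here are a synthetic stand-in for the (p_t, D_t) → p_{t+1} pairs that would come from the DBN simulation sequences:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic surrogate for the (p_t, D_t) -> p_{t+1} training pairs.
X = rng.uniform(size=(2000, 2))
X[:, 1] = (X[:, 1] > 0.5).astype(float)                # D_t is binary
y = np.clip(X[:, 0] + 0.05 * (2.0 * X[:, 1] - 1.0), 0.0, 1.0)

models = []
for i in range(10):                                    # 10 bagged MLPs, as in the text
    idx = rng.choice(len(X), size=200, replace=False)  # smaller subsample per model
    m = MLPRegressor(hidden_layer_sizes=(4, 2),        # the 2-4-2-1 structure
                     activation='logistic',            # sigmoid activation
                     solver='lbfgs', max_iter=1000, random_state=i)
    m.fit(X[idx], y[idx])
    models.append(m)

def predict_next_p(p, d):
    # bagging aggregator: average the 10 MLP outputs
    x = np.array([[p, d]])
    return float(np.mean([m.predict(x)[0] for m in models]))
```

The ensemble average plays the role of the "bagging aggregator" in the text; each component model sees only a random subsample, mirroring the 100-sequences-per-model scheme.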
A random sample is drawn from the uniform distribution on (0, 1) and compared with this average correct rate to choose technology A or B. After the decision is made, it is plugged into Eq. (8) with the learned bagging ANN predictor to predict the next probability of choosing A. This process continues until the 100th decision is made. Again, 1000 runs of simulation are conducted to calculate the average correct decision rate from the learned ANN model.

5. EXPERIMENTAL RESULTS
In this section, we present the experimental results from different simulation scenarios.

The standard case
In the standard case, we assume k = 1; thus the relative benefit of choosing A or B is equal to one. The previous simulation study has shown that the average correct decision rate increases from around .70 at position 1 to around .95 at position 100. Figure 3 shows the average correct decision rate curves from DBN, HMM and ANN. The DBN simulation was used to generate training samples for the other two to learn. Both HMM and ANN learn their models from the training samples, and then use the learned models to simulate sequential decisions. The average correct decision rate curve reports the simulation results using the trained model.

[Figure 3. Comparison of DBN, HMM and ANN (k = 1): average correct rate vs. position number.]

The MAE between DBN and HMM is .03 and the same measure for DBN and ANN is .008. On the other hand, the correlation between the DBN sequence and the HMM sequence is .969, and the same measure for DBN and ANN is .98. Thus, ANN has learned a better prediction model for this standard case.

Technology B has a higher relative benefit
In this case, we assume k = 10; thus technology B has a higher relative benefit than technology A. This gives individuals less incentive to choose technology A.

[Figure 4. Comparison of DBN, HMM and ANN (k = 10): average correct rate vs. position number.]

Figure 4 shows that the rate of choosing technology A is substantially smaller than that in the standard case. This is reasonable; because of a higher relative benefit for choosing B over A, an individual must have received a very strong signal in
order to make a decision of choosing A. The probability of choosing technology A is small at the beginning. When more individuals select technology A, later individuals increase their belief in technology A through observational learning. The simulations show that the average correct decision rate increases from less than .10 at position 1 to around .60 at position 100. The MAE between DBN and HMM and between DBN and ANN is .08 and .03 respectively. The correlation between the DBN sequence and the HMM sequence is .995, while the same measure for DBN and ANN is .99. Therefore, HMM is the better prediction model in this case.

Technology B has a higher relative benefit and only partial sequences are used
In this case, we assume that technology B has a higher relative benefit (k = 5), and only partial sequences from the DBN simulations are used to train the HMM and ANN models. We assume that only the first 50 decisions in an observation sequence are used to train the prediction models.

[Figure 5. Comparison of models (k = 5, only 50 decisions used): average correct rate vs. position number.]

Since the relative benefit of B to A is not as big as in the previous case, we expect individuals to have more incentive to choose technology A. Figure 5 verifies this, with an initial average correct decision rate of around .15 rising to a final rate of around .80 at position 100. Since we only use the first 50 decisions to train the HMM and ANN, their performance on the second half of the decision sequences is of particular interest. Figure 5 shows that the HMM model performs better than the ANN model for this part of the decision sequences. Overall, the HMM model also produces a smaller MAE (.06 vs. .030) and a higher correlation (.993 vs. .970) than the ANN model.

6. DISCUSSIONS
The experimental results show that HMM has a better capability than ANN in learning patterns of sequential decisions. When k is big, the average correct decision rate curve resulting from the ANN model is much more jagged than that from the HMM model. This is interesting if we consider the fact that the HMM model has no causal links between D_t and β_{t+1}. The DBN model in Figure 1 is our basis for constructing the ANN model in Eq. (8).
That is, the current probability of choosing A and the current decision outcome should together decide the next probability of choosing A. On the other hand, an HMM has causal links between hidden states only. Using transition probabilities, the next state is sampled based on the current state only; outcomes from the current state have no effect on the sampling of the next state in an HMM. This seems to contradict the causal model explained by the DBN perspective of observational learning. The jaggedness of the ANN average correct decision rate curve may come from an over-fitted neural network. Because we have a small network structure with a large amount of data, much of it duplicated, we may over-fit the network and produce an overly sensitive predictor. The bagging procedure does not seem to overcome this difficulty. Other prediction algorithms, such as support vector regression with its known capability for overfitting control, may be considered in the future.

7. CONCLUSIONS
Today's corporate managers face challenges in IT adoption with great stakes. Corporate IT has become so powerful and complex that a fair assessment of its merits is difficult. Capital investments in IT are substantial, yet returns on investments often take time to materialize. Conventional word-of-mouth information propagation may work for consumer IT decisions, but not for corporate IT decisions. Though it is usually difficult to obtain the private information that other companies use to make their IT adoption decisions, it is possible to observe what those companies have decided in their IT adoption. Observational learning theory applies when a person uses observed behavior from others to infer something about the usefulness of the observed behavior. Corporate managers may practice observational learning to help them make better IT adoption decisions. Observational learning is known to create informational cascades, a phenomenon in which an individual's action does not depend on his private information signal.
When informational cascades occur, belief inferred from observational learning has overshadowed the private information signal that an individual uses to make his decision. Walden and Browne [3] proposed a simulation model to show that informational cascades do not occur when the private information signal is continuous. We presented a DBN perspective of the W-B model in [6]; the DBN approach demonstrated simulation results similar to those of the W-B approach. This study is focused on learning the DBN model resulting from sequential decisions impacted by observational learning. Two machine learning tools are used to learn the DBN. The first one, the hidden Markov model, is itself a special case of a DBN. The second one, the artificial neural network, is a popular learning algorithm in artificial intelligence. The HMM learning approach does not consider the impact of the current decision (D_t) on the sampling of the next state. It also uses a limited number of hidden states to represent continuous information signals. On the other hand, the ANN learning approach uses the DBN perspective to model a functional form for approximation. Its continuous output variable matches the type of private information signals in Walden and Browne [3]. The experimental results show that HMM has a better learning capability than ANN in our study. In the future, we plan to run
more tests with different learning algorithms and diverse training samples. Learning patterns of sequential decisions constitutes the reverse process of the simulation studies presented in [3, 6]. Together, simulation studies and pattern learning can help us better understand how observational learning impacts sequential decisions.

Acknowledgements: This research has been supported in part by a grant from the National Science Council of Taiwan under contract number NSC99-40-H MY.

8. REFERENCES
[1] F.P. Brooks, The Mythical Man-Month: Essays on Software Engineering, Reading, MA: Addison-Wesley, 1975.
[2] E. Brynjolfsson and L. Hitt, "Paradox Lost? Firm-level Evidence on the Returns to Information Systems," Management Science, Vol. 42, No. 4, 1996.
[3] E.A. Walden and G.J. Browne, "Sequential Adoption Theory: A Theory for Understanding Herding Behavior in Early Adoption of Novel Technologies," Journal of the Association for Information Systems, Vol. 10, No. 1, 2009.
[4] A. Bandura, Social Learning Theory, New York: General Learning Press, 1977.
[5] S. Bikhchandani, D. Hirshleifer, and I. Welch, "A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades," The Journal of Political Economy, Vol. 100, No. 5, 1992.
[6] S. Doong and S. Ho, "Construct a Sequential Decision Model: A Dynamic Bayesian Network Perspective," Proceedings of the 44th Annual Hawaii International Conference on System Sciences (HICSS), 2011.
[7] D. Green and J. Swets, Signal Detection Theory and Psychophysics, New York: Wiley, 1966.
[8] L.R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proceedings of the IEEE, Vol. 77, No. 2, 1989.
[9] I.H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, San Francisco: Morgan Kaufmann, 2005.
[10] M.L. Minsky and S.A. Papert, Perceptrons, Cambridge, MA: MIT Press, 1969.
[11] A.R. Gallant and H. White, "On Learning the Derivatives of an Unknown Mapping with Multilayer Feedforward Networks," Neural Networks, Vol. 5, No. 1, 1992.
[12] L. Breiman, "Bagging Predictors," Machine Learning, Vol. 24, No. 2, 1996.
More informationEvolutive Neural Net Fuzzy Filtering: Basic Description
Journal of Intelligent Learning Systems and Applications, 2010, 2: 12-18 doi:10.4236/jilsa.2010.21002 Published Online February 2010 (http://www.scirp.org/journal/jilsa) Evolutive Neural Net Fuzzy Filtering:
More informationLahore University of Management Sciences. FINN 321 Econometrics Fall Semester 2017
Instructor Syed Zahid Ali Room No. 247 Economics Wing First Floor Office Hours Email szahid@lums.edu.pk Telephone Ext. 8074 Secretary/TA TA Office Hours Course URL (if any) Suraj.lums.edu.pk FINN 321 Econometrics
More informationIntroduction to Ensemble Learning Featuring Successes in the Netflix Prize Competition
Introduction to Ensemble Learning Featuring Successes in the Netflix Prize Competition Todd Holloway Two Lecture Series for B551 November 20 & 27, 2007 Indiana University Outline Introduction Bias and
More informationAxiom 2013 Team Description Paper
Axiom 2013 Team Description Paper Mohammad Ghazanfari, S Omid Shirkhorshidi, Farbod Samsamipour, Hossein Rahmatizadeh Zagheli, Mohammad Mahdavi, Payam Mohajeri, S Abbas Alamolhoda Robotics Scientific Association
More informationKnowledge Transfer in Deep Convolutional Neural Nets
Knowledge Transfer in Deep Convolutional Neural Nets Steven Gutstein, Olac Fuentes and Eric Freudenthal Computer Science Department University of Texas at El Paso El Paso, Texas, 79968, U.S.A. Abstract
More informationRule Learning with Negation: Issues Regarding Effectiveness
Rule Learning with Negation: Issues Regarding Effectiveness Stephanie Chua, Frans Coenen, and Grant Malcolm University of Liverpool Department of Computer Science, Ashton Building, Ashton Street, L69 3BX
More informationAmerican Journal of Business Education October 2009 Volume 2, Number 7
Factors Affecting Students Grades In Principles Of Economics Orhan Kara, West Chester University, USA Fathollah Bagheri, University of North Dakota, USA Thomas Tolin, West Chester University, USA ABSTRACT
More informationThe Use of Statistical, Computational and Modelling Tools in Higher Learning Institutions: A Case Study of the University of Dodoma
International Journal of Computer Applications (975 8887) The Use of Statistical, Computational and Modelling Tools in Higher Learning Institutions: A Case Study of the University of Dodoma Gilbert M.
More informationPredicting Student Attrition in MOOCs using Sentiment Analysis and Neural Networks
Predicting Student Attrition in MOOCs using Sentiment Analysis and Neural Networks Devendra Singh Chaplot, Eunhee Rhim, and Jihie Kim Samsung Electronics Co., Ltd. Seoul, South Korea {dev.chaplot,eunhee.rhim,jihie.kim}@samsung.com
More informationSystem Implementation for SemEval-2017 Task 4 Subtask A Based on Interpolated Deep Neural Networks
System Implementation for SemEval-2017 Task 4 Subtask A Based on Interpolated Deep Neural Networks 1 Tzu-Hsuan Yang, 2 Tzu-Hsuan Tseng, and 3 Chia-Ping Chen Department of Computer Science and Engineering
More informationLearning From the Past with Experiment Databases
Learning From the Past with Experiment Databases Joaquin Vanschoren 1, Bernhard Pfahringer 2, and Geoff Holmes 2 1 Computer Science Dept., K.U.Leuven, Leuven, Belgium 2 Computer Science Dept., University
More informationHow People Learn Physics
How People Learn Physics Edward F. (Joe) Redish Dept. Of Physics University Of Maryland AAPM, Houston TX, Work supported in part by NSF grants DUE #04-4-0113 and #05-2-4987 Teaching complex subjects 2
More informationADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY DOWNLOAD EBOOK : ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY PDF
Read Online and Download Ebook ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY DOWNLOAD EBOOK : ADVANCED MACHINE LEARNING WITH PYTHON BY JOHN HEARTY PDF Click link bellow and free register to download
More informationTest Effort Estimation Using Neural Network
J. Software Engineering & Applications, 2010, 3: 331-340 doi:10.4236/jsea.2010.34038 Published Online April 2010 (http://www.scirp.org/journal/jsea) 331 Chintala Abhishek*, Veginati Pavan Kumar, Harish
More informationSpeech Emotion Recognition Using Support Vector Machine
Speech Emotion Recognition Using Support Vector Machine Yixiong Pan, Peipei Shen and Liping Shen Department of Computer Technology Shanghai JiaoTong University, Shanghai, China panyixiong@sjtu.edu.cn,
More informationThe 9 th International Scientific Conference elearning and software for Education Bucharest, April 25-26, / X
The 9 th International Scientific Conference elearning and software for Education Bucharest, April 25-26, 2013 10.12753/2066-026X-13-154 DATA MINING SOLUTIONS FOR DETERMINING STUDENT'S PROFILE Adela BÂRA,
More informationCapturing and Organizing Prior Student Learning with the OCW Backpack
Capturing and Organizing Prior Student Learning with the OCW Backpack Brian Ouellette,* Elena Gitin,** Justin Prost,*** Peter Smith**** * Vice President, KNEXT, Kaplan University Group ** Senior Research
More informationChapters 1-5 Cumulative Assessment AP Statistics November 2008 Gillespie, Block 4
Chapters 1-5 Cumulative Assessment AP Statistics Name: November 2008 Gillespie, Block 4 Part I: Multiple Choice This portion of the test will determine 60% of your overall test grade. Each question is
More informationIntegration of ICT in Teaching and Learning
Integration of ICT in Teaching and Learning Dr. Pooja Malhotra Assistant Professor, Dept of Commerce, Dyal Singh College, Karnal, India Email: pkwatra@gmail.com. INTRODUCTION 2 st century is an era of
More informationMachine Learning and Data Mining. Ensembles of Learners. Prof. Alexander Ihler
Machine Learning and Data Mining Ensembles of Learners Prof. Alexander Ihler Ensemble methods Why learn one classifier when you can learn many? Ensemble: combine many predictors (Weighted) combina
More informationProposal of Pattern Recognition as a necessary and sufficient principle to Cognitive Science
Proposal of Pattern Recognition as a necessary and sufficient principle to Cognitive Science Gilberto de Paiva Sao Paulo Brazil (May 2011) gilbertodpaiva@gmail.com Abstract. Despite the prevalence of the
More informationOCR for Arabic using SIFT Descriptors With Online Failure Prediction
OCR for Arabic using SIFT Descriptors With Online Failure Prediction Andrey Stolyarenko, Nachum Dershowitz The Blavatnik School of Computer Science Tel Aviv University Tel Aviv, Israel Email: stloyare@tau.ac.il,
More informationAnalysis of Hybrid Soft and Hard Computing Techniques for Forex Monitoring Systems
Analysis of Hybrid Soft and Hard Computing Techniques for Forex Monitoring Systems Ajith Abraham School of Business Systems, Monash University, Clayton, Victoria 3800, Australia. Email: ajith.abraham@ieee.org
More informationImproving software testing course experience with pair testing pattern. Iyad Alazzam* and Mohammed Akour
244 Int. J. Teaching and Case Studies, Vol. 6, No. 3, 2015 Improving software testing course experience with pair testing pattern Iyad lazzam* and Mohammed kour Department of Computer Information Systems,
More informationteaching essay writing presentation presentation essay presentations. presentation, presentations writing teaching essay essay writing
Teaching essay writing powerpoint presentation. In this powerpoi nt, I amgoing to use Gibbs (1988) Reflective Cycle, teaching essay. This writing presentation help inform the college as to your potential
More informationRule Learning With Negation: Issues Regarding Effectiveness
Rule Learning With Negation: Issues Regarding Effectiveness S. Chua, F. Coenen, G. Malcolm University of Liverpool Department of Computer Science, Ashton Building, Ashton Street, L69 3BX Liverpool, United
More informationSTT 231 Test 1. Fill in the Letter of Your Choice to Each Question in the Scantron. Each question is worth 2 point.
STT 231 Test 1 Fill in the Letter of Your Choice to Each Question in the Scantron. Each question is worth 2 point. 1. A professor has kept records on grades that students have earned in his class. If he
More informationSoft Computing based Learning for Cognitive Radio
Int. J. on Recent Trends in Engineering and Technology, Vol. 10, No. 1, Jan 2014 Soft Computing based Learning for Cognitive Radio Ms.Mithra Venkatesan 1, Dr.A.V.Kulkarni 2 1 Research Scholar, JSPM s RSCOE,Pune,India
More informationAn Empirical Analysis of the Effects of Mexican American Studies Participation on Student Achievement within Tucson Unified School District
An Empirical Analysis of the Effects of Mexican American Studies Participation on Student Achievement within Tucson Unified School District Report Submitted June 20, 2012, to Willis D. Hawley, Ph.D., Special
More informationFUZZY EXPERT. Dr. Kasim M. Al-Aubidy. Philadelphia University. Computer Eng. Dept February 2002 University of Damascus-Syria
FUZZY EXPERT SYSTEMS 16-18 18 February 2002 University of Damascus-Syria Dr. Kasim M. Al-Aubidy Computer Eng. Dept. Philadelphia University What is Expert Systems? ES are computer programs that emulate
More informationArtificial Neural Networks
Artificial Neural Networks Andres Chavez Math 382/L T/Th 2:00-3:40 April 13, 2010 Chavez2 Abstract The main interest of this paper is Artificial Neural Networks (ANNs). A brief history of the development
More informationAn Estimating Method for IT Project Expected Duration Oriented to GERT
An Estimating Method for IT Project Expected Duration Oriented to GERT Li Yu and Meiyun Zuo School of Information, Renmin University of China, Beijing 100872, P.R. China buaayuli@mc.e(iuxn zuomeiyun@263.nct
More informationIn Workflow. Viewing: Last edit: 10/27/15 1:51 pm. Approval Path. Date Submi ed: 10/09/15 2:47 pm. 6. Coordinator Curriculum Management
1 of 5 11/19/2015 8:10 AM Date Submi ed: 10/09/15 2:47 pm Viewing: Last edit: 10/27/15 1:51 pm Changes proposed by: GODWINH In Workflow 1. BUSI Editor 2. BUSI Chair 3. BU Associate Dean 4. Biggio Center
More informationMachine Learning and Development Policy
Machine Learning and Development Policy Sendhil Mullainathan (joint papers with Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, Ziad Obermeyer) Magic? Hard not to be wowed But what makes
More informationAbstractions and the Brain
Abstractions and the Brain Brian D. Josephson Department of Physics, University of Cambridge Cavendish Lab. Madingley Road Cambridge, UK. CB3 OHE bdj10@cam.ac.uk http://www.tcm.phy.cam.ac.uk/~bdj10 ABSTRACT
More informationSoftprop: Softmax Neural Network Backpropagation Learning
Softprop: Softmax Neural Networ Bacpropagation Learning Michael Rimer Computer Science Department Brigham Young University Provo, UT 84602, USA E-mail: mrimer@axon.cs.byu.edu Tony Martinez Computer Science
More informationFrom Empire to Twenty-First Century Britain: Economic and Political Development of Great Britain in the 19th and 20th Centuries 5HD391
Provisional list of courses for Exchange students Fall semester 2017: University of Economics, Prague Courses stated below are offered by particular departments and faculties at the University of Economics,
More informationProbabilistic Latent Semantic Analysis
Probabilistic Latent Semantic Analysis Thomas Hofmann Presentation by Ioannis Pavlopoulos & Andreas Damianou for the course of Data Mining & Exploration 1 Outline Latent Semantic Analysis o Need o Overview
More informationMathematical Induction Examples And Solutions
Examples And Free PDF ebook Download: Examples And Download or Read Online ebook mathematical induction examples and solutions in PDF Format From The Best User Guide Database Examples. Question 1. Prove
More informationTHE USE OF WEB-BLOG TO IMPROVE THE GRADE X STUDENTS MOTIVATION IN WRITING RECOUNT TEXTS AT SMAN 3 MALANG
THE USE OF WEB-BLOG TO IMPROVE THE GRADE X STUDENTS MOTIVATION IN WRITING RECOUNT TEXTS AT SMAN 3 MALANG Daristya Lyan R. D., Gunadi H. Sulistyo State University of Malang E-mail: daristya@yahoo.com ABSTRACT:
More informationAnalyzing the Usage of IT in SMEs
IBIMA Publishing Communications of the IBIMA http://www.ibimapublishing.com/journals/cibima/cibima.html Vol. 2010 (2010), Article ID 208609, 10 pages DOI: 10.5171/2010.208609 Analyzing the Usage of IT
More informationAnalysis of Speech Recognition Models for Real Time Captioning and Post Lecture Transcription
Analysis of Speech Recognition Models for Real Time Captioning and Post Lecture Transcription Wilny Wilson.P M.Tech Computer Science Student Thejus Engineering College Thrissur, India. Sindhu.S Computer
More informationTransfer Learning Action Models by Measuring the Similarity of Different Domains
Transfer Learning Action Models by Measuring the Similarity of Different Domains Hankui Zhuo 1, Qiang Yang 2, and Lei Li 1 1 Software Research Institute, Sun Yat-sen University, Guangzhou, China. zhuohank@gmail.com,lnslilei@mail.sysu.edu.cn
More informationOn-Line Data Analytics
International Journal of Computer Applications in Engineering Sciences [VOL I, ISSUE III, SEPTEMBER 2011] [ISSN: 2231-4946] On-Line Data Analytics Yugandhar Vemulapalli #, Devarapalli Raghu *, Raja Jacob
More informationDefragmenting Textual Data by Leveraging the Syntactic Structure of the English Language
Defragmenting Textual Data by Leveraging the Syntactic Structure of the English Language Nathaniel Hayes Department of Computer Science Simpson College 701 N. C. St. Indianola, IA, 50125 nate.hayes@my.simpson.edu
More information4-3 Basic Skills and Concepts
4-3 Basic Skills and Concepts Identifying Binomial Distributions. In Exercises 1 8, determine whether the given procedure results in a binomial distribution. For those that are not binomial, identify at
More informationThree Strategies for Open Source Deployment: Substitution, Innovation, and Knowledge Reuse
Three Strategies for Open Source Deployment: Substitution, Innovation, and Knowledge Reuse Jonathan P. Allen 1 1 University of San Francisco, 2130 Fulton St., CA 94117, USA, jpallen@usfca.edu Abstract.
More informationAn OO Framework for building Intelligence and Learning properties in Software Agents
An OO Framework for building Intelligence and Learning properties in Software Agents José A. R. P. Sardinha, Ruy L. Milidiú, Carlos J. P. Lucena, Patrick Paranhos Abstract Software agents are defined as
More informationVisual CP Representation of Knowledge
Visual CP Representation of Knowledge Heather D. Pfeiffer and Roger T. Hartley Department of Computer Science New Mexico State University Las Cruces, NM 88003-8001, USA email: hdp@cs.nmsu.edu and rth@cs.nmsu.edu
More informationImplementing a tool to Support KAOS-Beta Process Model Using EPF
Implementing a tool to Support KAOS-Beta Process Model Using EPF Malihe Tabatabaie Malihe.Tabatabaie@cs.york.ac.uk Department of Computer Science The University of York United Kingdom Eclipse Process Framework
More informationModule 12. Machine Learning. Version 2 CSE IIT, Kharagpur
Module 12 Machine Learning 12.1 Instructional Objective The students should understand the concept of learning systems Students should learn about different aspects of a learning system Students should
More informationLinking the Ohio State Assessments to NWEA MAP Growth Tests *
Linking the Ohio State Assessments to NWEA MAP Growth Tests * *As of June 2017 Measures of Academic Progress (MAP ) is known as MAP Growth. August 2016 Introduction Northwest Evaluation Association (NWEA
More informationImpact of Cluster Validity Measures on Performance of Hybrid Models Based on K-means and Decision Trees
Impact of Cluster Validity Measures on Performance of Hybrid Models Based on K-means and Decision Trees Mariusz Łapczy ski 1 and Bartłomiej Jefma ski 2 1 The Chair of Market Analysis and Marketing Research,
More information12- A whirlwind tour of statistics
CyLab HT 05-436 / 05-836 / 08-534 / 08-734 / 19-534 / 19-734 Usable Privacy and Security TP :// C DU February 22, 2016 y & Secu rivac rity P le ratory bo La Lujo Bauer, Nicolas Christin, and Abby Marsh
More informationOn the Formation of Phoneme Categories in DNN Acoustic Models
On the Formation of Phoneme Categories in DNN Acoustic Models Tasha Nagamine Department of Electrical Engineering, Columbia University T. Nagamine Motivation Large performance gap between humans and state-
More informationRobust Speech Recognition using DNN-HMM Acoustic Model Combining Noise-aware training with Spectral Subtraction
INTERSPEECH 2015 Robust Speech Recognition using DNN-HMM Acoustic Model Combining Noise-aware training with Spectral Subtraction Akihiro Abe, Kazumasa Yamamoto, Seiichi Nakagawa Department of Computer
More informationDEVELOPMENT OF AN INTELLIGENT MAINTENANCE SYSTEM FOR ELECTRONIC VALVES
DEVELOPMENT OF AN INTELLIGENT MAINTENANCE SYSTEM FOR ELECTRONIC VALVES Luiz Fernando Gonçalves, luizfg@ece.ufrgs.br Marcelo Soares Lubaszewski, luba@ece.ufrgs.br Carlos Eduardo Pereira, cpereira@ece.ufrgs.br
More informationNCU IISR English-Korean and English-Chinese Named Entity Transliteration Using Different Grapheme Segmentation Approaches
NCU IISR English-Korean and English-Chinese Named Entity Transliteration Using Different Grapheme Segmentation Approaches Yu-Chun Wang Chun-Kai Wu Richard Tzong-Han Tsai Department of Computer Science
More informationCompositional Semantics
Compositional Semantics CMSC 723 / LING 723 / INST 725 MARINE CARPUAT marine@cs.umd.edu Words, bag of words Sequences Trees Meaning Representing Meaning An important goal of NLP/AI: convert natural language
More informationVirtually Anywhere Episodes 1 and 2. Teacher s Notes
Virtually Anywhere Episodes 1 and 2 Geeta and Paul are final year Archaeology students who don t get along very well. They are working together on their final piece of coursework, and while arguing over
More informationQuickStroke: An Incremental On-line Chinese Handwriting Recognition System
QuickStroke: An Incremental On-line Chinese Handwriting Recognition System Nada P. Matić John C. Platt Λ Tony Wang y Synaptics, Inc. 2381 Bering Drive San Jose, CA 95131, USA Abstract This paper presents
More informationLikelihood-Maximizing Beamforming for Robust Hands-Free Speech Recognition
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Likelihood-Maximizing Beamforming for Robust Hands-Free Speech Recognition Seltzer, M.L.; Raj, B.; Stern, R.M. TR2004-088 December 2004 Abstract
More informationChapter 9 The Beginning Teacher Support Program
Chapter 9 The Beginning Teacher Support Program Background Initial, Standard Professional I (SP I) licenses are issued to teachers with fewer than three years of appropriate teaching experience (normally
More informationCONCEPT MAPS AS A DEVICE FOR LEARNING DATABASE CONCEPTS
CONCEPT MAPS AS A DEVICE FOR LEARNING DATABASE CONCEPTS Pirjo Moen Department of Computer Science P.O. Box 68 FI-00014 University of Helsinki pirjo.moen@cs.helsinki.fi http://www.cs.helsinki.fi/pirjo.moen
More informationAutoregressive product of multi-frame predictions can improve the accuracy of hybrid models
Autoregressive product of multi-frame predictions can improve the accuracy of hybrid models Navdeep Jaitly 1, Vincent Vanhoucke 2, Geoffrey Hinton 1,2 1 University of Toronto 2 Google Inc. ndjaitly@cs.toronto.edu,
More informationHow do adults reason about their opponent? Typologies of players in a turn-taking game
How do adults reason about their opponent? Typologies of players in a turn-taking game Tamoghna Halder (thaldera@gmail.com) Indian Statistical Institute, Kolkata, India Khyati Sharma (khyati.sharma27@gmail.com)
More informationLearning Methods in Multilingual Speech Recognition
Learning Methods in Multilingual Speech Recognition Hui Lin Department of Electrical Engineering University of Washington Seattle, WA 98125 linhui@u.washington.edu Li Deng, Jasha Droppo, Dong Yu, and Alex
More information