SEMANTIC MEMORY IN A COMPUTER: A SYNTHETIC MODEL PROPOSAL

Richard Veale & Richard Liston
Department of Mathematics and Computer Science, Ursinus College
richard.e.veale@gmail.com rliston@ursinus.edu

ABSTRACT
We propose a model of semantic memory, drawing upon ideas from various sources. The crux of the model is the concept system (CS), an associative network of nodes connected by weighted edges. We hypothesize that this structure, through its interplay with the closely related short-term memory (STM), can account for many interesting human-like properties of memory. In particular we are interested in the emergence and function of higher-level concepts and how they affect the agent's behavior. The CS is accessed through a type of spreading activation, in which stimulation spreads from one node to another based on several factors, such as the weight of the link between them. The CS contains both general and atomic nodes. Atomic nodes represent conceptions of idiosyncratic experience; general nodes may represent anything else. The general nodes offer insight into how higher-level concepts could be stored in memory. Lexical tags, names that may be fed to the agent simultaneously with its experience and that will be associated with nodes in the STM or the CS, will also be used in the concept-construction process. Pattern mining will be used to identify new concepts.

KEY WORDS
Concept, spreading activation, neural network, semantic memory, semantic network, memory model.

1. Introduction

Traditional computing methods leave much to be desired in areas where humans seem to flourish. Many extensive models have been suggested (such as [1][2][3]) to reconcile this discrepancy, but as yet none have succeeded to a degree that would lead anyone to consider the enigma of modeling human general intelligence solved. We use the term semantic memory to refer to memory whose relation to things is based primarily on their semantic value or content.
Items in memory are categorized, conceptualized, and associated with one another such that they can be used in tasks requiring knowledge of such relations. In this paper we describe a model that accounts for some of the interesting properties that human-like semantic memory exhibits, in particular the extraction of higher-level concepts (represented in this research as the recognition of more complex patterns, and of similarities between such patterns, without explicitly recording them). This theory draws inspiration, ideas, and structures from various sources. We frame descriptions of the model using a very simple agent design, whose sole purpose is to allow direct interaction with the memory model. We refer to research in psychology and cognitive science to support decisions that we make about how to represent a particular task. We also draw upon research related to modeling human-like abilities in computers, particularly that involving artificial neural networks (ANNs).

2. Related Work

The seminal work of Quillian and colleagues [4][5][6] in semantic memory has guided much of the thought on the subject. He used semantic networks of nodes representing tokens and types, connected by links. The knowledge stored in these networks would be accessed through spreading activation: stimulating one node, then traveling to adjacent nodes and continuing outward. The theory did not explain how such a structure would come about in the first place, nor how it could be used for anything except simulating sentence comprehension under the narrow definition he provided. However, the idea of using semantic networks (indeed, the semantic network itself), and the idea that comprehension occurs as a result of spreading activation and the activation status of nodes in such a network, has remained central to the study and modeling of semantic memory.
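Quillian-style spreading activation can be sketched as attenuated propagation over a weighted graph. The sketch below is a minimal illustration of the mechanism only; the node names, weights, and threshold are invented for this example and are not taken from [4][5][6]:

```python
# Minimal spreading-activation sketch over a hypothetical semantic network.
# Each entry maps a node to its outgoing (neighbor, link-weight) pairs.
network = {
    "canary": [("bird", 0.9), ("yellow", 0.7)],
    "bird":   [("animal", 0.8), ("wings", 0.8)],
    "yellow": [],
    "animal": [],
    "wings":  [],
}

def spread(source, energy, threshold=0.1):
    """Propagate activation outward from `source`, attenuating it by each
    link's weight; propagation stops once activation falls below threshold."""
    activation = {source: energy}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbor, weight in network[node]:
                passed = activation[node] * weight   # attenuated by link weight
                if passed >= threshold and passed > activation.get(neighbor, 0.0):
                    activation[neighbor] = passed
                    nxt.append(neighbor)
        frontier = nxt
    return activation

acts = spread("canary", 1.0)
```

Stimulating "canary" activates its neighbors, and activation continues outward with diminishing strength, which is the essential retrieval behavior the model relies on.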
Anderson and Bower [7] propose a theory regarding the associative nature of human memory that recalls older associationist theories for support. They proposed a model of human memory (Human Associative Memory) involving the interplay of associated concepts and irreducible experiences, which they call sensations. The model was the basis for the memory component of Anderson's [1][2] architecture, ACT (and later ACT-R), which added production rules, among other things, to formulate a full-blown theory of cognition and to account for how such memory would interact with thought. Douglas Hofstadter and his Fluid Analogies Research Group [8] produced several programs (e.g. Copycat) based on a general model of how analogical reasoning, which they take to be the basis of human-like intelligence, works. The model has something similar to the CS (the "Slipnet"), though it contains all concepts from the very start and does not change, as well as an analogue to short-term memory (the "workspace"). They integrate some connectionist ideas, e.g. of overall system heat
("happiness"), and of stochastic elements that jog the system out of an entirely deterministic result. Overall, however, the model seems far too similar to limited symbolic models solving limited problems to be expandable to the extent necessary to support it as a valid model of how human analogy-making and memory work. Especially in question are aspects such as the static nature of their LTM analogue, which prevents the model from being easily extended into more complex environments. A more recent model is Novamente [3]. The memory section of this model seems related to ours. However, details about the deeper aspects of the design are unavailable, so it is difficult to understand precisely what decisions they have made. They describe it as "[resembling] classic semantic networks" but having dynamic aspects that are more similar to neural networks. This seems very much to be the same type of structure that we have produced in this research. The rest of the model (for theirs is a full-blown model of general AI and consciousness) is significantly different, however. In a slightly removed field, Burgess and Lund [9] have built a model (HAL, Hyperspace Analogue to Language) that constructs models of language (and so a certain type of memory) by examining large corpora of text and building associative graphs to represent them. In addition to their use of weighted graph-type structures, some of their more theoretical observations are relevant, particularly their acknowledgement of the necessity of experience-grounding (as described in [10]). More purely connectionist work has been done in representing memory as well.
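The corpus-driven associative graphs just mentioned can be illustrated with a toy co-occurrence counter. This is only a sketch of the general idea; the tiny corpus, window size, and distance weighting below are our own placeholders, not HAL's actual parameters:

```python
from collections import defaultdict

def cooccurrence_graph(tokens, window=3):
    """Build a weighted associative graph from a token stream: each ordered
    pair within the window is weighted by proximity (closer words count more)."""
    graph = defaultdict(float)
    for i, w in enumerate(tokens):
        for d in range(1, window + 1):
            if i + d < len(tokens):
                # weight decreases linearly with distance d inside the window
                graph[(w, tokens[i + d])] += (window - d + 1) / window
    return graph

corpus = "the red car the red truck".split()
g = cooccurrence_graph(corpus)
```

Words that repeatedly occur near each other ("the" and "red" above) accumulate strong edge weights, yielding an associative structure without any hand-built semantics.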
The PDP group's [11][12] model of memory using distributed representations ([12], chapter 19) consists of a large number of modules, each itself an organized neural network, with a certain number of incoming connections from various other modules and outgoing connections leading to other modules. Signals are received on the input side, and the module-internal network then reaches a sufficiently stable state that signals are transmitted via some of its outgoing connections. This model is similar to the ones described above in that it stipulates that memory emerges through the interplay of a large number of nodes that do not inherently have much (if any) meaning in themselves. Functioning together, however, they seem able to encode interesting properties, properties strikingly similar to many of those of human-like memory. The primary difference is that this model's native functions (such as content-directed retrieval) differ as a result of its using distributed representations ([11]). (A network using local representations uses at least one node of its network as the identifier for a given feature; in other words, there is at least a one-to-one correspondence between the number of non-relational things represented by the network and the number of nodes. In a network using distributed representations, this is not true.) Other systems, building off of these ideas, have been implemented and tweaked, such as that described in [13] (a system that uses combinations of several attractor-type neural networks for semantic fact retrieval). Finally, making the connection between concept theory and models of memory, Van Loocke [14] has compiled a book relating concepts and connectionism. He examines how certain connectionist theories support theories of the nature of concepts in philosophy (prototype theory, etc.), analyzes many in detail, and considers how such models can be made to account for research done in psychology.
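The local-versus-distributed distinction can be made concrete with two toy encodings. The feature sets and vectors below are invented examples for illustration, not the PDP group's actual representations:

```python
# Local representation: one dedicated unit per concept (one-hot).
local = {
    "dog": [1, 0, 0],
    "cat": [0, 1, 0],
    "car": [0, 0, 1],
}

# Distributed representation: each concept is a pattern over shared feature
# units, e.g. (animate, has_fur, barks, has_wheels). Similar concepts share units.
distributed = {
    "dog": [1, 1, 1, 0],
    "cat": [1, 1, 0, 0],
    "car": [0, 0, 0, 1],
}

def overlap(a, b):
    """Count units active in both patterns: a crude similarity measure."""
    return sum(x & y for x, y in zip(a, b))
```

In the local scheme any two distinct concepts are fully disjoint, whereas in the distributed scheme similarity ("dog" vs. "cat") is encoded directly in the overlap of their patterns, which is why functions like content-directed retrieval come naturally to distributed models.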
Like [15] and [16], he uses reaction-time data in recall to support the model.

3. Description of Model

Throughout this description we will be using a very simple environment. "Environment" refers to the world external to the agent, from which its experiences come. The primary purpose of this simple environment is to make implementation and debugging simpler for the eventual implementation. Expanding to more complex environments will then be only a matter of appropriately modifying the transducers (a.k.a. "sensors") and input modules (similar to those described in [17]) so that the agent can appropriately handle the richer set of experience. We describe a model that accounts for interesting properties of memory, all the while constraining ourselves to using only functions or abilities that seem native, or likely to exist, in a real memory of this type. At times, since we are using local representations, computation may become extensive (such as searching for a certain sub-graph within a large graph). It must be remembered that even with distributed representations, though the computation may be parallel in theory, in simulation it is serial, and so may fall prey to combinatorial explosion before its truly parallel implementation would. This does not change for our model. Indeed, it does pose a major problem, the solution to which might be found in switching to distributed representations (discussed in the conclusion).

Simple Environment (Three Dots)

The environment we have chosen as the realm of possible experience for our agent is a world of two dimensions. One axis, called the width, is segmented into three columns. The other, called the length, is continuous, though in practice it will always be segmented into very small moments. A given scroll (Figure 1) can contain any number of these moments (Figure 2). Any column can, at any moment, be either filled or empty. As such, there are eight possible configurations of a single moment, represented as a three-by-one matrix.
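The moment representation just described can be sketched directly: a moment is a three-by-one binary matrix (here a 3-tuple), giving 2^3 = 8 configurations, and a scroll is a sequence of moments. The names below are our own illustration, not part of the model specification:

```python
from itertools import product

# A moment: one boolean per column (filled / empty). 2**3 = 8 configurations.
CONFIGURATIONS = list(product([0, 1], repeat=3))

# A scroll: a sequence of moments, experienced top to bottom, one per tick.
scroll = [
    (1, 0, 0),
    (0, 1, 0),
    (0, 0, 1),   # a simple diagonal pattern the agent might learn as a concept
]

def transduce(moment):
    """The agent's transducers check each of the three squares every moment
    and convert them to the (identical) internal form."""
    return tuple(1 if filled else 0 for filled in moment)
```

Because the internal form is identical to the scroll form, transduction in this simple environment is essentially the identity function; in richer environments it would be the job of more elaborate sensors and input modules.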
Experience moves from the top to the bottom, one moment at a time. As such, the movement of the agent down the scroll (or rather, the movement of the scroll upward past the agent's transducers) represents the passage of time. This
is useful because the relation between time and the amount of possible experience is strictly controlled. The scrolls will contain patterns, and it is these patterns that we expect to arise as higher-level concepts in the memory. The reason for choosing this type of experience, rather than some sort of linguistic one such as natural language or a limited subset of it, is that we wish to work in an environment more akin to physical experience, or at least one that is idiosyncratic and irreducible. Others have observed that attempting to teach an agent language without some grounding on which to base it is difficult ([9][10]). In addition, one can avoid all the complexities of grammar and other uncertain baggage that language brings with it. Interesting properties might be built up from experience, whereas an agent whose realm is exclusively language is limited in precisely that way. Language, a tool used to communicate in an environment, would be the environment to such an agent, and such an agent would be unable to use language in the normal way. We assume that the agent has transducers that can recognize whether a given square of its current experience scroll is filled or not, and that it checks every moment for each of the three squares of its experience. This is converted to a simple internal form, which can be (conveniently) represented in exactly the same way that a moment is represented on the scroll. The internal organization of the model is divided into several main static parts (parts that can be recognized in a snapshot taken at any given moment), described in greater detail in later sections: the Concept System (CS), Short-Term Memory (STM), and Experiential Memory. In addition to these internal pieces, there are the pieces that interact with the environment (the pieces that make up our agent, minus the memory model). The actual implementation of these will change with the environment.
These pieces are: a lexical tag receiver, a command receiver, an output location for lexical tags, and an output location for actuation (scroll etc.). Our agent has two primary modes. In training mode, experience will be fed to the agent and it will modify itself, but output nothing. In testing mode, it may output things, but will not modify itself permanently. Later on, these two can be combined; for the moment, however, it is simpler to separate them. These modes will be specified to the agent using commands, which are not represented in the model per se but can be thought of as stimulation coming from another part of the brain, or perhaps as changes in the ambient environment (such as hormones being introduced into the bloodstream). We have chosen to include them for ease of testing, even if perhaps they are not necessary and a memory of this type could exist free of any such goal-directing external entity.

Figure 1: An experience scroll. The large arrow indicates the order of experience; the smaller arrow points to a single moment.

Figure 2: Example of a single moment. In our simple environment, such a 3x1 matrix can also be seen as the internal representation of the agent.

Static Aspects

I: The Concept System (Figure 3)

The Concept System (CS) is a directed, weighted graph connecting nodes (when referring to the CS we will sometimes call a single node a concept). Each node is one of three types: atomic, general, or lexical. These nodes have some special properties. Atomic nodes are associated with a single feature of experience (blueness, redness, softness, etc.; in the three-dot environment these are features of a moment, i.e. one of the squares being turned on). These nodes are trained to react whenever something with the feature they are trained to recognize is presented. One could think of these nodes as recognizing when the pattern of neurons fires that results in the experience of that feature.
In a more complex environment these could be represented by embedding a recognizer neural network and training it through experience to recognize that particular feature. Initially,
the same feature might be mistakenly categorized into two or more nodes, but as experience accrues and more sensations are experienced, these will be reconciled. In the simple three-dot environment, this functionality is realized by simply checking for an exact match. This may have the undesirable result of subtracting from the fuzzy aspects that the esoteric structures are supposed to add, but since the environment is so simple this cannot be helped. General nodes are simply nodes in the graph that can be stimulated and have links leading to and from other nodes in the graph. More qualitatively, they will represent the higher-level, emergent concepts. Atomic and general nodes may both be associated with a lexical tag. A lexical tag could be anything, a word or a symbol, though throughout we shall assume that it is an English string. The purpose of this tag is to allow selective access into the network by stimulating just the node associated with a given lexical tag. Functionally, this serves as a sort of proto-language, though in theory it will primarily encode static concepts. Each of these nodes has the ability to become activated with a certain amount of potential (energy). This activation will spread along links to other nodes, modulated by the weight of the link between them. The maximum weight a link can have is positive one (weights are in the range [0, 1]). This, combined with the fact that each node's activation will be decreased by a set amount each tick, ensures that activation will be exhausted after a certain number of hops. How many is determined by 1) how strong the issuing node's activation was and 2) the weights and number of links over which it has been transmitted. When activation energy gets very close to zero it will have no noticeable effect and so is said to be expended. Another feature of the CS is that a node may have a link to a link between two other nodes, instead of to another single node as is normal (Figure 4).
This type of connection would function either to additionally stimulate/inhibit the connection (if it is directed at the link), or to modulate the activation of the node when potential flows across the link (if it is directed from the link to the node). In addition to drastically increasing the number of activation patterns achievable with a given number of nodes, this type of connection gives us the ability to represent some interesting relations between concepts.

Figure 3: Graphic of a portion of a concept system. General nodes associated with lexical tags (shown as boxes over the nodes) are represented in the lightest shade, atomic nodes (with their associated pattern inside) a little darker, and general nodes without tags in the darkest shade. Floating edges lead to other nodes/links not in the picture.

To use an intuitive real-world example, a node representing "fire" may become more active when activation travels strongly from "red" to "hot," but the link between "red" and, say, "car" would be inhibited (preventing "car" from becoming too
activated, even if "car" is normally associated strongly with "red"). One can imagine many situations in which this would be useful for directing, even just a little more precisely, which concepts should be active and which should not. Whether or not this type of connection functionally increases the domain expressible by the system is uncertain. Testing an actual implementation will be the best method for determining whether these types of connection are needed.

Figure 4: Detail of different types of connections. Each edge has a weight. Activation is inhibited if the weight is near zero, and strengthened if it is nearer to one.

II: Short-Term Memory (Figure 5)

Figure 5: Example graphic of the STM. Same as the CS, except for the difference in its dynamics over time. All types of nodes may exist.

There has been much research into Short-Term Memory (STM) in the psychology and cognitive science literature [18]. It is characterized as having relatively small capacity, being quickly searchable (at least as compared to Long-Term Memory), and being volatile (prone to be overwritten by some new, more immediately relevant, young upstart memory). Though STM might be seen as playing a similar functional role in this model, it also takes a very different form than is normally attributed to it. Instead of being serial and encoding chunks of information in linguistic form, as many models assume, the STM of this model can be succinctly described as a volatile, limited version of the CS. In other words, the structure is the same as the CS, though the rules for how it changes dynamically over time are modified to make it far more transient. The rules for modifying the weights of links call for more modification than in the CS, resulting in a quickly changing scene as time advances. Nodes that were strongly relevant one moment may disappear entirely in the next. Two concepts that were very positively related in one moment may have nothing in common in the next.
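The shared structure of the CS and STM can be sketched as a small set of types. This is a minimal sketch under our own assumptions: the field names, the default weight, and the decay constant are placeholders, not values specified by the model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    kind: str                      # "atomic", "general", or "lexical"
    activation: float = 0.0
    tag: Optional[str] = None      # optional lexical tag (a "handle" into the network)

@dataclass
class Link:
    src: str
    dst: str          # a node name, or a link id: links may target other links (Figure 4)
    weight: float = 0.5

    def clamp(self):
        """Weights are confined to the range [0, 1]."""
        self.weight = min(1.0, max(0.0, self.weight))

def tick(nodes, decay=0.05):
    """Ambient rule: each node's activation decreases by a set amount every
    tick, so spreading activation is eventually expended."""
    for n in nodes.values():
        n.activation = max(0.0, n.activation - decay)

nodes = {"red": Node("atomic", activation=0.2), "fire": Node("general")}
link = Link("red", "fire", weight=1.3)
link.clamp()
tick(nodes)
```

The STM would reuse exactly these types, differing only in its dynamics: a larger decay and faster weight modification, making it the volatile, quickly changing version of the CS described above.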
Since the STM is more immediately connected to experience, it arranges itself to reflect trends from only a relatively narrow window of time and experience.

III: Experiential Memory

In addition to the STM, which encodes processed information, there is also the need for a memory that stores experience in raw form. This results from the fact that by the time something is ready to be processed, it will already have passed. In our simple environment this will be represented as a sliding window over the scroll, storing only a few moments prior to the current one.

Figure 6: Sliding window on the experience scroll.

Dynamic Aspects (Figure 7)

This section describes the high-level interplay between the parts described above from which the proposed properties, hypothesized to be similar to those of semantic memory in humans, will emerge. The way the agent acts at any given moment will be a function of:
- The tendencies of the system (long-term memory, represented by the CS assuming its current form through the history of the agent).
- The recent experiences of the system (short-term memory, being modified by recent experience and modulated by queries to the CS).
- The current experience of the system (which modifies short-term memory by adding new information to it and beginning the process that will alter much of the system).
- The goal of the agent (in our case determined by commands given to it manually).

Let us follow the activity of the system during a short period of time when an experience is presented to it (in training mode).

1) The experience is first appended to the end of the experiential memory. The window will be only one moment long for this example.

2) The representation of the experience is presented to the STM. A lexical tag may also be presented simultaneously.

3) If an atomic node in the STM corresponds to the experience, it will be stimulated with some amount of activation. The amount that works best will be determined from empirical testing, and may be modulated depending on the number of nodes given activation this moment, among other factors.

4) If a node in the STM is associated with the same lexical tag as the one presented, it will also be given some amount of activation.

5) If any of these nodes do not already exist, they will be created on the fly. Links will be extended to other nodes that are currently very active. The reasoning behind this is that strength of connection should vary with temporal co-occurrence; a strong STM activation level means that a node was used recently.

6) Once the two nodes exist, a search begins for the corresponding nodes in the CS. This search starts from the nodes corresponding to strongly activated/associated nodes in the STM. A certain amount of energy is expended in the spreading activation from these nodes. When a node is activated past some threshold, this event is reported to the STM.
If the content matches, that node (in the CS) is given a large amount of activation. Nodes around it that are activated will also be returned to the STM. The STM will be modified based on the tendencies of the system (represented as the CS). If the energy is expended and no appropriate node has been found, it is assumed to have degraded past the point of recall, at least using the current cues. A new node is created in the CS and links are extended to other nodes active in the STM, in the same fashion as before.

7) While these actions occur in reaction to new experience being presented, ambient actions will also occur. All weights will degrade by a fixed amount once every moment. Currently activated nodes' activation will decrease slightly. Activation will spread until energy is expended. Weights between activated nodes will increase by a rule similar to the delta rule, mutatis mutandis. Activated nodes in the STM will give activation to corresponding nodes in the CS. Sufficiently active nodes in the CS will be passed up to the STM, where modification of weights will occur to a higher degree than in the CS. Patterns will be recognized in the STM, and new concepts created in the STM and activated.

Figure 7: Modular overview of the model and exchange of information between parts. Experience is initially recorded in experiential memory, which elicits a reaction in the STM. Interaction occurs both within and between the CS and STM, causing activation to spread and nodes to be created. Finally, the STM is checked for patterns, and the moment ends.

(The Novamente [3] engine seems to use a similar process of separating out
new nodes in their memory structure.) The above process repeats until energy is expended (functionally it will continue until slightly after the end of experiential input). The result of these modifications and activations is that temporally co-occurring phenomena (i.e. experiences that tend to occur in close proximity in time, or situations where one tends always to occur near another in time) will be associated unless other evidence inhibits this. Slowly, over time, patterns will be built in the CS, and so mirrored in the STM. These patterns will be extracted and labeled as new concepts, to be activated and utilized later. Complexity will gradually emerge in the system, giving rise to interesting pattern-recognition abilities (e.g. categorization, conceptualization).

Recalling

What follows is an example demonstrating how the recall of a simple episodic memory might take place (see Figure 8). For this task, we will assume that only a lexical tag is presented to the agent and that it has the goal of recalling whatever information it deems most pertinent to the presented lexical tag. In the STM, the node corresponding to the queried lexical tag will be activated. Activation will spread to the nodes most strongly associated with it, modulated by weights. In this example, the node receiving the strongest activation would be the atomic node representing the first array of notes of the pattern. All these nodes would be copied into the STM with appropriate levels of activation and connected by their weights. Experimentation will determine whether there would need to be inhibitory connections between the configurations already played and the nodes representing later configurations in the pattern, such that they are not activated too strongly and so output preemptively. When an atomic node became sufficiently stimulated, it would be output. The result would be the sequence of three-by-one moments (configurations) that constitute the pattern being played.
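The recall walk-through above can be sketched minimally. Everything below (the node names "tag" and "m1"–"m3", the weights, and the output threshold) is a hypothetical placeholder chosen for illustration, not part of the model specification:

```python
# Minimal recall sketch: a lexical tag node stimulates its associated pattern,
# and atomic nodes are output (then inhibited) once they cross a threshold.
edges = {                       # weighted links out of each node
    "tag": [("m1", 0.9), ("m2", 0.6), ("m3", 0.4)],
    "m1":  [("m2", 0.9)],       # each moment of the pattern primes the next
    "m2":  [("m3", 0.9)],
    "m3":  [],
}
moments = {"m1": (1, 0, 0), "m2": (0, 1, 0), "m3": (0, 0, 1)}

def recall(tag, steps=6, threshold=0.7):
    activation = {n: 0.0 for n in edges}
    played = []
    for _ in range(steps):
        activation[tag] = 1.0                # the tag is the constant "handle"
        for src, outs in edges.items():      # spread, attenuated by weights
            for dst, w in outs:
                activation[dst] = max(activation[dst], activation[src] * w)
        for m in moments:
            if m not in played and activation[m] >= threshold:
                played.append(m)             # output the configuration...
                activation[m] = 0.0          # ...then suppress re-output
    return [moments[m] for m in played]
```

In a fuller implementation, per-tick decay and the inhibitory connections discussed above would space the outputs over time rather than letting them fire in a single step; this sketch only shows the tag acting as a constant source of activation and atomic nodes being emitted once sufficiently stimulated.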
If too much similar experience were gained, the agent might have trouble recalling each pattern accurately. We argue that this very phenomenon occurs in humans as well, and so the model is similar even in its ostensible drawbacks.

Figure 8: A possible (high-level) snapshot from the recall task being discussed (recalling a pattern tagged "Beethoven's 9th"). At this point in time the second atomic node has been recalled (and output) and the system is moving on to the next. Dotted connections are inhibitory; full lines are stimulatory. We have marked each node's relative activation by putting rings of different thicknesses about them; thicker rings represent higher levels of activation. Note that the lexical tag node acting as the handle into the memory is constantly receiving strong activation, which spreads to adjacent nodes, its strength modulated by the character and strength of the connection between the two.
A more abstract (and anthropomorphic) description of what is happening is that, by holding in mind that it wants only things related to the pattern (represented by the lexical-tag-associated node), along with the slowly-degrading-over-time previous pieces of the pattern, each subsequent piece of the pattern can be recalled. Much of the research in neural networks has been devoted to proofs that there exists an assignment of weights that can satisfy such a result, and we are sufficiently convinced that such an assignment exists. A few problems arise, however, such as how the agent knows when to stop, or how to handle the playing of the same configuration more than once within a single pattern.

Discussion

As motivation for abilities such as recognizing and separating out constituents of atomic sensations, Kuhl's research ([19][20]) shows that untrained babies, and even many higher-order mammals, can differentiate between many different phonemes. Kuhl also, in the same paper ([19]), speaks strongly to the function of "motherese" in children learning language, which supports the system of lexical tags: it helps guide the initial creation of higher-level concepts by providing handles that can be called on to selectively activate certain concepts. Regarding temporal co-occurrence being indispensable in learning patterns, Baddeley and Weiskrantz present evidence ([21]) that even in humans, a certain degree of proximity or order in the constituent events is required for any sort of concept or rule to be formed in the mind of the subject.

Relation to Concept Theory

Our model's approach to concepts can be explained by several different theories of the structure of concepts and how they interact with human memory. Particularly relevant to all representation with neural networks are the prototype and exemplar hypotheses, which are in practice similar.
Under prototype theory, there exists a prototypical example of the concept, to which other non-core features may be appended in recognition. Under exemplar theory, there exists some exemplar of the class of that object, to which candidates for inclusion in that category will be compared; only those that satisfy it to a sufficient degree will be included ([22][23]). Indeed, Van Loocke [14] refers to prototype theory in his analysis of how certain aspects of distributed representations can be mapped to aspects used in the categorization of concepts, since it seems that networks will tend to settle on a given memory if something sufficiently close to it is excited. In our model, concepts are represented as a single node, and the part that node will play in any action undertaken is defined by the weights of afferent and efferent connections to other nodes in the concept system. In other words, a concept is really only defined insofar as are the other concepts that create its identity, ultimately grounded in physical, idiosyncratic, irreducible experience. One might immediately regard this as answering to (neo-)classical concept theory (that higher-level concepts are fuzzily constructed of lower-level ones), i.e. a car is only a car so much as it is made up of wheels (80% of cars have them), a chassis (85%), and an engine (96%) ([22][23]), which is not a mistake. However, at the atomic level, it seems to conform to prototype theory, in that the atomic nodes contain esoteric structures (neural networks) trained to recognize a certain type of experience. So it is possible to argue that, perhaps even more clearly than wholly distributed models do, our model adopts the synthetic view: at a lower level, prototype theory holds true, but as more complex, composite concepts are brought into being, they are defined by (as the name suggests) composing pre-existing concepts.
So the concept "bird" is composed of pre-existing concepts for wings, a body, feathers, etc.

4. Conclusion

The advantages of such a semantic memory are many. There are many tasks that are very difficult using traditional computational methods, and yet at which humans excel. Many of these seem to involve understanding the semantic value of a thing. This can be achieved by having the right types of structures and rules, and by exercising a correct regimen of training so that the agent accrues a history of experience and its structures become functionally formed similarly to a human's. The necessity of this training is somewhat of a drawback, though no more so than the necessity of raising a human to develop as one wishes. Development of an appropriate training regimen, or of sufficiently complex and appropriate self-directing rules, is another problem entirely, and one that only becomes more complex as the environment does. In this paper a model has been outlined for how such a semantic memory might work. The environment we selected is extremely limited, so as to demonstrate viability. Expansion to other environments is intuitively not difficult. However, because of the requirement that the memory/agent have access to transducers and input modules that can convert experience into an appropriate internal representation, and that certain aspects of the system (such as the recognition capabilities of atomic nodes) be scaled up to account for the increased complexity of such an environment, the problem is realistically probably very difficult. If the environment could be extended far enough, semantic search (online, for example) would become possible. If the agent had a concept system organized similarly to the searcher's, it could automatically make the assumptions that the user unconsciously makes. Recently it has become the practice for search engines to automatically append similar items to one's query to narrow down the search.
Making such items more accurate with regard to the searcher's intentions would make the returned items more relevant. Even with scarce information, relatively high accuracy could perhaps be achieved.

Applications also seem possible in the field of Natural Language Processing (NLP), especially in uses such as Machine Translation. With the aid of a correctly constructed semantic memory model, ambiguous word senses could be decoded more accurately, since word senses often depend on their relation to other words, on context, and on background knowledge. As such, a more natural translation would be possible.

Regarding cognitive science, any model that contributes to our understanding of how the human mind works offers a springboard for discussion. One of the major problems involved in creating such a model is that there is little research that can be concretely applied to it. It is likewise difficult to make predictions about its behavior short of actually building and testing it. The questions raised in the creation of this model are of precisely the variety that need to be asked, in the hope that later model-builders will have something useful to go on. We hope to elicit discussion, whether for or against the model.

Future Directions

The immediate direction this research will take is an implementation using the simple environment we have described in this paper, followed by extensive testing. Throughout this research we have encountered many problems that seem to be reconcilable if the model is converted to use distributed representations instead of local ones. That seems the logical direction in which to proceed (though it brings its own host of problems), keeping in mind data gathered from the implementation and testing. Finally, in the farther future, we hope to expand the model to deal with more complex environments.

References:

[1] J.R. Anderson, The architecture of cognition (Cambridge, MA: Harvard University Press, 1983).
[2] J.R. Anderson & C. Lebiere, The atomic components of thought (Mahwah, NJ: Erlbaum, 1998).
[3] B. Goertzel, M. Looks & C. Pennachin, Novamente: an integrative architecture for artificial general intelligence, Proc. of AAAI Symposium on Achieving Human-Level Intelligence through Integrated Systems and Research, Washington DC, August.
[4] M.R. Quillian, Semantic memory. In: M. Minsky (ed.), Semantic information processing (Cambridge, MA: The MIT Press, 1968).
[5] A.M. Collins & E.F. Loftus, A spreading-activation theory of semantic processing, Psychological Review, 82(6), 1975.
[6] A. Bell & M.R. Quillian, Capturing concepts in a semantic net. In: E.L. Jacks (ed.), Associative Information Techniques (New York, NY: American Elsevier, 1971).
[7] J.R. Anderson & G.H. Bower, Human associative memory (Washington DC: V.H. Winston and Sons, 1973).
[8] D. Hofstadter and the Fluid Analogies Research Group, Fluid concepts and creative analogies: computer models of the fundamental mechanisms of thought (New York, NY: BasicBooks, 1995).
[9] C. Burgess & K. Lund, The dynamics of meaning in memory. In: E. Dietrich & A. Markman (eds.), Cognitive dynamics (Mahwah, NJ: Erlbaum, 2000).
[10] S. Harnad, The symbol-grounding problem, Physica D, 42, 1990.
[11] D.E. Rumelhart, J.L. McClelland, & The PDP Research Group, Parallel distributed processing: explorations in the microstructure of cognition. Volume 1: foundations (Cambridge, MA: The MIT Press, 1986).
[12] D.E. Rumelhart, J.L. McClelland, & The PDP Research Group, Parallel distributed processing: explorations in the microstructure of cognition. Volume 2: psychological and biological models (Cambridge, MA: The MIT Press, 1986).
[13] E. Ruppin & M. Usher, An attractor neural network model of semantic fact retrieval, International Joint Conference on Neural Networks, 3, 1990.
[14] P.R. Van Loocke, The dynamics of concepts, a connectionist model (New York, NY: Springer-Verlag, 1994).
[15] A.M. Collins & M.R. Quillian, Retrieval time from semantic memory, Journal of Verbal Learning and Verbal Behavior, 8, 1969.
[16] L.J. Rips, E.J. Shoben, & E.E. Smith, Structure and process in semantic memory: a featural model for semantic decisions, Psychological Review, 81(3), 1974.
[17] J.A. Fodor, The modularity of mind: an essay on faculty psychology (Cambridge, MA: The MIT Press, 1983).
[18] M.I. Posner (ed.), Foundations of cognitive science (Cambridge, MA: The MIT Press, 1989).
[19] P.K. Kuhl, A new view of language acquisition, Proc. National Academy of Sciences USA, 97(22), 2000.
[20] F. Ramus, M.D. Hauser, C. Miller, D. Morris & J. Mehler, Language discrimination by human newborns and by cotton-top tamarin monkeys, Science, 288(5464), 2000.
[21] A. Baddeley & L. Weiskrantz (eds.), Attention: selection, awareness, and control: a tribute to Donald Broadbent (Oxford: Oxford University Press, 1993).
[22] E. Margolis & S. Laurence (eds.), Concepts: core readings (Cambridge, MA: The MIT Press, 1999).
[23] E. Margolis & S. Laurence, Concepts. In: E.N. Zalta (ed.), The Stanford encyclopedia of philosophy (Stanford, CA: The Metaphysics Research Lab, Center for the Study of Language and Information, 2006).