
From: AAAI Technical Report FS-94-01. Compilation copyright 1994, AAAI (www.aaai.org). All rights reserved.

Search Space Characterization for a Telescope Scheduling Application

John Bresina (Recom Technologies), Mark Drummond (Recom Technologies), Keith Swanson (NASA)
AI Research Branch, Mail Stop 269-2, NASA Ames Research Center, Moffett Field, CA 94035-1000 USA
e-mail: {bresina, drummond, swanson}@ptolemy.arc.nasa.gov

Abstract

This paper presents a technique for statistically characterizing a search space and demonstrates the use of this technique within a practical telescope scheduling application. The characterization provides the following: (i) an estimate of the search space size, (ii) a scaling technique for multi-attribute objective functions and search heuristics, (iii) a "quality density function" for schedules in a search space, (iv) a measure of a scheduler's performance, and (v) support for constructing and tuning search heuristics. This paper describes the random sampling algorithm used to construct this characterization and explains how it can be used to produce this information. As an example, we include a comparative analysis of an heuristic dispatch scheduler and a look-ahead scheduler that performs greedy search.

Introduction

This paper presents a technique for statistically characterizing a search space using a random sampling algorithm. The characterization technique is demonstrated with a problem instance from a practical telescope scheduling application. One of the uses of this characterization is to provide a means for "calibrating" a given scheduler on a given scheduling problem. Too often, one is told that some particular scheduler achieves some particular score on a given scheduling problem. For instance, we might be told that a particular scheduler achieves a score of 67 on a specific job shop scheduling problem, but we are not given a means of interpreting this score. If the particular job shop scheduling problem is a benchmark, then we might have access to "the best score so far". If so, then we might be impressed if 67 is better than any other score to date. But what sort of a score should one reasonably expect? Perhaps the only schedulers tried on the problem to date have not been well-suited to the problem, and with a different scheduling approach, a score of 67 could look rather paltry. Additionally, for many problems of practical interest, the "best score so far" is not available. Even if no one else has worked on the problem at hand, one would still like to evaluate how well some proposed technique is faring. It is not practical to compare a given technique against every other known scheduling technique, but one would like to have some basis for claiming that a proposed technique is actually performing well.

Theoretical analyses of problem difficulty have little bearing on particular problem instances. Most interesting classes of scheduling problems are NP-hard, and the theory of computational complexity provides little further insight into the sorts of scores that one might expect from any particular scheduler. Sometimes, one can examine the mathematics of the objective function and derive bounds on the range of possible scores. However, such bounds do not provide insight into how likely any given value is. To obtain such insight, we suggest an empirical analysis based on statistical sampling of a problem's search space. The basic idea behind the approach presented in this paper is as follows.
Randomly sample the solutions in the scheduler's search space and collect statistics that describe a probability density function of solution quality. Against this (information-free) background quality density, we can measure the (informed) performance of any given scheduler. While this technique does not actually tell us how hard a given problem is, it does tell us how well some particular scheduler performs. We have also found other uses for the information, as outlined later in the paper.

The paper is organized as follows. In the next section, we briefly describe our telescope scheduling application, define our formulation of the search space, define the "iterative sampling" algorithm, and describe our multi-attribute objective function and attribute scaling. Then, in the subsequent section, we characterize the search space in terms of size and solution quality, compare two scheduling techniques (heuristic dispatch and greedy look-ahead search), and discuss search heuristics. The final section summarizes and briefly discusses "self-calibration".

Scheduling Application

Our application involves the management and scheduling of fully automatic, ground-based telescopes. This section only briefly describes the domain; for more details, see Bresina et al. (1994).

Fully automatic operation allows an astronomer to be removed from a telescope both temporally and spatially, and makes it possible for a remotely located telescope to operate unattended for weeks or months. (See Genet and Hayes (1989) for details on automatic photoelectric telescopes.) While the majority of existing ground-based automated telescopes are used for aperture photometry, automation support for spectroscopy and imaging has been increasing. The language used to define observation requests is the Automatic Telescope Instruction Set, or ATIS (Boyd et al., 1993). In ATIS, a group is the primitive unit to be scheduled and executed. A group is a sequence of telescope commands and instrument commands defined by an astronomer which typically takes two to ten minutes to execute. Observation requests contain "hard" constraints, defined by basic physics, and a number of "soft" preferences. Each observation request can be executed only in a specific time window (typically between one and eight hours) which is defined by the astronomer who submitted the request. New requests can arrive daily, and once submitted, an observation request can be active for weeks or months. A schedule is a sequence of groups, and schedule quality is defined with respect to a given domain-specific objective function.

Search Space Formulation

We have formulated the search space as a space of world model states. For our application, the state of the world includes the state of the telescope, observatory, environment, and the current time. The alternative arcs out of any given state represent the groups that are "enabled" in that state. We say that a group is enabled in a state if all of its hard constraints (i.e., preconditions) are satisfied in that state. The branching indicates an exclusive-or choice -- one and only one of the groups can be chosen to be part of a given schedule. The search space is organized chronologically as a tree, where the root of the tree is the state describing the start of the observation night. Each trajectory through the tree defines a different possible schedule; schedules that are identical up to a given branching point share a common prefix. The number of trajectories is exponential in the number of ATIS groups, but finite. Since groups cannot be executed after the observation night ends, each trajectory has finite length.

Iterative Sampling

The basis for our characterization is a technique called iterative sampling (Minton et al., 1992; Langley, 1992; Chen, 1989; Knuth, 1975). Iterative sampling is a type of Monte Carlo method that randomly selects trajectories. Each trajectory is selected by starting at the initial (root) state and randomly choosing one of the groups that are enabled in that state. The selected group is applied, producing a new state, and the process of random selection and application continues until a state is reached in which no groups are enabled. Some of the numerous ways that this sampling technique can be utilized are described in the next section.
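As a concrete illustration, the following Python sketch shows one way the iterative sampling loop could be written. It is a minimal sketch, not the authors' implementation; the callables enabled_groups(state) and apply_group(state, group) are hypothetical stand-ins for the world-model operations described above.

import random

def iterative_sample(root_state, enabled_groups, apply_group, rng=random):
    # Walk one random trajectory from the root: repeatedly pick a uniformly
    # random enabled group, apply it, and stop when no group is enabled.
    state, schedule = root_state, []
    while True:
        choices = enabled_groups(state)
        if not choices:
            return schedule, state
        group = rng.choice(choices)
        schedule.append(group)
        state = apply_group(state, group)

def sample_schedules(root_state, enabled_groups, apply_group, n=1000, seed=0):
    # Repeat the random walk n times to build a background sample of schedules.
    rng = random.Random(seed)
    return [iterative_sample(root_state, enabled_groups, apply_group, rng)[0]
            for _ in range(n)]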
Objective Function

For the experiments presented in this paper, we have constructed a simple but representative objective function based on comments we have received from astronomers. The objective function is a weighted combination of three attributes: priority, fairness, and airmass. For a given schedule, the first attribute is computed as the average group priority. In ATIS, a higher priority is indicated by a lower number; hence, a lower average is better. The second attribute attempts to measure how fair the schedule is in terms of the time allocated to each user. Since each user can request a different amount of observation time, the fairness measure is computed as the sum of the differences between the amount of time requested in the ATIS file and that allocated in a given schedule. Hence, smaller fairness scores are better. The third attribute attempts to improve the quality of observations by reducing the amount of airmass (atmosphere) through which observations are made. For a celestial object of a given declination, airmass is minimal when the telescope is pointing on the meridian. We approximate airmass as the average deviation from the meridian. (Airmass is non-linearly related to local hour angle.)

When constructing such a multi-attribute objective function, the scores of the different attributes need to be scaled so that they are composable. This scaling was accomplished via the iterative sampling technique, scoring each sample according to each of the three attributes. From these scores, we determined that each attribute had approximately a normal distribution and calculated the mean and standard deviation for each attribute. These statistics were used in the composite objective function to transform the attribute scores such that each transformed attribute had a mean of zero and a standard deviation of one. Hence, all the attributes were easily comparable. For these experiments, we wanted an objective function that placed equal importance on each attribute, so each transformed attribute was simply added to form the composite score.
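A minimal sketch of this scaling step follows, under the same assumptions as the sketch above; the attribute functions (priority, fairness, airmass) are hypothetical callables that score a schedule, and lower raw scores are better for each attribute.

from statistics import mean, stdev

def fit_attribute_scaling(sampled_schedules, attribute_fns):
    # attribute_fns: dict mapping an attribute name to a function(schedule) -> raw score.
    # Estimate each attribute's mean and standard deviation from the random sample.
    raw = {name: [fn(s) for s in sampled_schedules] for name, fn in attribute_fns.items()}
    return {name: (mean(scores), stdev(scores)) for name, scores in raw.items()}

def composite_score(schedule, attribute_fns, scaling, weights=None):
    # Standardize each attribute to zero mean and unit standard deviation with
    # respect to the sample, then take an (equally) weighted sum; lower is better.
    total = 0.0
    for name, fn in attribute_fns.items():
        mu, sigma = scaling[name]
        z = (fn(schedule) - mu) / sigma
        total += (weights.get(name, 1.0) if weights else 1.0) * z
    return total

With equal weights this corresponds to the composite objective used in the experiments below; supplying a weights dict with all but one weight zeroed gives the single-attribute heuristics discussed later under Search Heuristics.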

Search Space Characterization

This section presents a search space characterization for a particular problem instance from the telescope scheduling domain. The results presented are only illustrative; they are based on a single, but real, ATIS input file. This file contains 194 groups which represent the combined observation requests of three astronomers.

Figure 1: Average branching factor as a function of search tree depth. Results are based on 100 samples; the error bars represent the 95% confidence interval.

Figure 2: Composite objective function: Quality density function and the scores obtained by the two scheduling techniques.

The search space characterization provides information that can be used to answer the following questions. What is the size of the search space? What is the probability density function for schedule quality for the given problem? How well does the ATIS heuristic dispatch perform? What is the performance of a look-ahead scheduler that performs a greedy search using the objective function as a local search heuristic? How well does each attribute of the multi-attribute objective function perform as a search heuristic for the greedy look-ahead technique?

Search Space Size

One of the primary determinants of problem difficulty is the size of the search space. While it is not practical to enumerate all states in the space, the overall size can be estimated using iterative sampling. The size of the search tree is determined by its depth and branching factor. These two factors are estimated from the set of randomly selected trajectories. To our knowledge, Knuth (1975) was the first person to use this approach to estimate the size of a search space. Chen (1989) refined, extended, and analyzed the technique. Figure 1 shows the results of 100 samples with error bars representing the 95% confidence interval. The branching factor is history-dependent; i.e., the number of enabled groups decreases through the night. The primary reason for this is that as groups are selected for execution, the number of unscheduled groups decreases. This data suggests that the number of schedules in the search space is between 10^56 and 10^57.
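One way to turn sampled trajectories into a size estimate is the probe-product estimator usually credited to Knuth (1975): the product of the branching factors observed along a single random trajectory is an unbiased estimate of the number of leaves (complete schedules), and averaging over many probes reduces the variance. The sketch below assumes the same hypothetical enabled_groups/apply_group interface as the earlier sketches and is only an illustration of the idea, not the authors' code.

import math
import random

def estimate_schedule_count(root_state, enabled_groups, apply_group, probes=100, seed=0):
    # Each probe walks root-to-leaf with uniform random choices; the product of
    # the branching factors seen along the way estimates the number of leaves.
    rng = random.Random(seed)
    estimates = []
    for _ in range(probes):
        state, product = root_state, 1
        while True:
            choices = enabled_groups(state)
            if not choices:
                break
            product *= len(choices)
            state = apply_group(state, rng.choice(choices))
        estimates.append(product)
    estimate = sum(estimates) / len(estimates)   # counts of this size still fit in a float exponent
    return estimate, math.log10(estimate)        # also report the order of magnitude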
Search Space Quality

It is not solely the size of the search space that determines the difficulty of finding a good schedule; the density of high quality schedules is also important. The schedule produced should not only satisfy all hard constraints but, ideally, should also achieve an optimal score on all the soft constraints. (Another important consideration is the execution robustness of the schedule. However, this paper does not address schedule execution; see Drummond, Bresina, and Swanson (1994) for a discussion of this issue.) The technique we used to estimate the size of the search space can also be used to estimate the quality density function of the schedules in the search space. Evaluating the schedules found via iterative sampling yields a frequency distribution of scores. It is important, yet often non-trivial, to obtain an unbiased sample from the solution space. If the solution tree has a constant branching factor at every (internal) node, then iterative sampling will produce an unbiased sample. However, constant branching is not a necessary condition for unbiased sampling; it can be weakened as follows. If, for every depth, all the nodes at that depth have the same branching factor, then iterative sampling will be unbiased. As can be seen in Figure 1, the branching factor changes from depth to depth; however, the error bars indicate that the branching factor is nearly constant for nodes at the same depth. In our formulation, the scheduling search space includes only feasible schedules, i.e., schedules that satisfy all the hard constraints. Hence, in this case, the search space is equivalent to the solution space. For formulations in which this equivalence does not hold, iterative sampling in the search tree is not guaranteed to return a schedule.

In this case, the above condition in terms of branching factor is not sufficient to ensure that iterative sampling in the search tree will produce an unbiased sample of solutions. However, the branching condition can be generalized as follows. Note that each internal node in the search tree is the root of a subtree which contains some number of solutions. If, for every depth, all the subtrees have the same number of solutions, then iterative sampling will be unbiased.

In our experiments, we performed 1000 iterative samples and computed the composite objective function score, as well as the attribute scores for priority, fairness, and airmass. From this we constructed a quality density function with respect to the composite objective, as well as with respect to each attribute. The resulting four density functions are shown in Figures 2, 3, 4, and 5. The scores from iterative sampling are grouped into 100 "score buckets" of equal size. For each point, the x-coordinate is the mid-point of a score bucket and the y-coordinate is the number of samples that obtained a score in that bucket (i.e., the relative frequency).

Figure 3: Priority attribute: Quality density function and the scores obtained by the two scheduling techniques.

Figure 4: Fairness attribute: Quality density function and the scores obtained by the two scheduling techniques.

Comparison of Schedulers

In this section, we briefly describe two techniques for searching the scheduling space and describe a comparative analysis of these two scheduling techniques. The first technique is based on a set of group selection rules that are defined by the ATIS standard. The selection rules reduce the set of currently enabled groups to a single group to be executed next. In scheduling parlance, this scheme is often called heuristic dispatch, since at any point in time, some task is "dispatched" for execution, and the selection of a task is determined purely locally (without look-ahead) by the application of domain-specific heuristics. There are four heuristic group selection rules specified in the ATIS standard: priority, number-of-observations-remaining, nearest-to-end-window, and file-position. The rules are applied in the sequence given; each rule is used to break ties that remain from application of the preceding rules. If the result of applying any rule is that there is only one group remaining, that group is selected for execution and no further rules are applied. Hence, the rules are used to impose an hierarchical sort on the groups. Since there can be no file-position ties, application of the group selection rules deterministically makes a unique selection. The group selection rules can be viewed as a search heuristic that, for each state, deterministically recommends an arc to follow. Hence, starting from the root of the search tree, this search heuristic deterministically selects one trajectory; in other words, the heuristic admits a single solution.

The second technique performs a type of look-ahead search, generating and evaluating alternative schedules. At each state, all the enabled groups are applied to generate a set of new states, each of which is scored by an heuristic evaluation function. The arc leading to the best-scoring state is then followed, and the process repeats from that state. This search technique performs a one-step look-ahead and is a type of greedy search. This search technique is (non)deterministic if ties during state evaluation are broken (non)deterministically.
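A minimal sketch of the greedy one-step look-ahead, again using the hypothetical enabled_groups/apply_group interface from the earlier sketches; evaluate(state) is an assumed heuristic evaluation function where lower scores are better. Ties are broken at random here, which makes this particular variant nondeterministic.

import random

def greedy_lookahead(root_state, enabled_groups, apply_group, evaluate, rng=random):
    # At each state, expand every enabled group, score the successor states,
    # and follow the arc to the best-scoring one (random tie-breaking).
    state, schedule = root_state, []
    while True:
        choices = enabled_groups(state)
        if not choices:
            return schedule, state
        scored = []
        for group in choices:
            successor = apply_group(state, group)
            scored.append((evaluate(successor), group, successor))
        best = min(score for score, _, _ in scored)
        _, group, state = rng.choice([entry for entry in scored if entry[0] == best])
        schedule.append(group)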
In each of the four plots, in addition to the quality density functions, we also illustrate a comparison of the two scheduling techniques. The single score obtained by each technique is shown by a dashed vertical line (the height of which is immaterial).

Figure 5: Airmass attribute: Quality density function and the scores obtained by the two scheduling techniques.

Figure 6: Comparison of the composite objective function scores obtained by greedy look-ahead with the three single-attribute search heuristics.

It is interesting that, with respect to the objective function, heuristic dispatch was no better than iterative (random) sampling (Figure 2). In contrast, the score obtained by greedy look-ahead is much better than both the majority of scores obtained by iterative sampling and the single score obtained by heuristic dispatch (recall that lower scores are better). Notice that heuristic dispatch obtains the best score for priority (Figure 3), as might be expected. This is a natural result of the fact that group priority is the key determinant of which group gets selected by the dispatcher.

The scores produced by iterative sampling provide a feeling for the expected density of possible scores in the solution space. Figure 2 shows that greedy look-ahead obtained a composite score of -11.56 and the ATIS heuristic dispatch obtained a composite score of +0.14. The difference between these two composite scores is 11.7. Without knowledge of the distribution of scores, we would not know how significant a difference this represents. However, our sampling technique enables this difference to be interpreted in terms of standard deviation. The standard deviation of the composite objective function sample was 1.3. (The standard deviation for the objective function is not 1, as might be expected, because the three attributes did not have true standard normal distributions.) The look-ahead score is 8.89 standard deviations better than the mean, whereas the dispatch score was 0.11 standard deviations worse than the mean.
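The calibration just described amounts to standardizing a scheduler's single score against the background sample. A small sketch (the function name is ours, not from the paper):

from statistics import mean, stdev

def calibrate(score, background_scores):
    # Express a scheduler's score in standard deviations from the mean of the
    # iterative-sampling background sample; negative values are better than the
    # random-sampling average, since lower scores are better.
    mu, sigma = mean(background_scores), stdev(background_scores)
    return (score - mu) / sigma

With the sample statistics reported above (a standard deviation of about 1.3 and a mean near zero), this puts the look-ahead score of -11.56 roughly 8.9 standard deviations better than the mean, and the dispatch score of +0.14 about 0.1 standard deviations worse, matching the figures in the text.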
Search Heuristics

In the above comparative analysis, the entire composite objective function was used as a local search heuristic in the greedy search. However, this is not necessarily a good idea; it may be better to use only a subset of the attributes. This decision can be based on an empirical evaluation of how well each attribute performs as a local search heuristic. Using the technique discussed above, we were able to carry out such an evaluation as follows. For each attribute, a greedy look-ahead search was performed using a search heuristic based only on the single attribute (this is equivalent to zeroing the weights of the other two attributes in the composite search heuristic). For each greedy search process, the best schedule found was evaluated in terms of the (original) composite objective function. Figure 6 shows the three objective function scores obtained by each single-attribute search heuristic against the same background random sample as in Figure 2. These results indicate that airmass is the best single-attribute local heuristic. The results also indicate that fairness is the worst, which makes sense since it is the "most global" attribute in the objective function. Priority is not a very good local heuristic either, which explains why ATIS dispatch did not perform well.

We could also use sampling to estimate the average cost of evaluating each objective attribute. This information, along with the above analysis, could then be used to determine which attributes yield the most cost-effective search heuristic. The statistical sampling process is also a good basis for determining what weighting factor to apply to each attribute in the heuristic. The weights given in the objective function may not be the same weights that best focus the search for high-scoring schedules. Tuning the heuristic weights based on feedback obtained from statistical sampling could be done either manually or automatically using machine learning techniques.

Concluding Remarks

Our goal in this paper has been to define and illustrate a statistical sampling technique for characterizing search spaces. We have demonstrated the characterization technique on a practical telescope scheduling problem. The characterization provides the following: (i) an estimate of the search space size, (ii) a scaling technique for multi-attribute objective functions and search heuristics, (iii) a "quality density function" for schedules in a search space, (iv) a measure of a scheduler's performance, and (v) support for constructing and tuning search heuristics.

The experiments reported above used a Lisp-based scheduling engine. However, in order to make the system useful to astronomers, it must be written in such a way that they themselves can extend and support it. To make this possible, we are in the process of implementing a new "C" language version. This new system will provide a "self-calibration" facility which will automatically perform the search space characterization experiments upon request. The final version of the system will be connected to the Internet and will accept new groups on a daily basis. Thus, the definition of the scheduling problem could change frequently. We expect that a telescope manager will be able to use the self-calibration facility to track the changing characterization of the search space. Based on the current characterization, a telescope manager could choose the best scheduling method and search heuristic for the current mix of groups. It might also be possible for the system itself to make these choices.

Acknowledgements

We would like to acknowledge the contributions of the most recent person to join the telescope management project team, Will Edgington.

References

Boyd, L., Epand, D., Bresina, J., Drummond, M., Swanson, K., Crawford, D., Genet, D., Genet, R., Henry, G., McCook, G., Neely, W., Schmidtke, P., Smith, D., and Trublood, M. 1993. Automatic Telescope Instruction Set 1993. In International Amateur-Professional Photoelectric Photometry (I.A.P.P.P.) Communications, No. 52, T. Oswalt (ed.).

Bresina, J., Drummond, M., Swanson, K., and Edgington, W. 1994. Automated Management and Scheduling of Remote Automatic Telescopes. In Optical Astronomy from the Earth and Moon, ASP Conference Series, Vol. 55, D.M. Pyper and R.J. Angione (eds.).

Chen, P.C. 1989. Heuristic Sampling on Backtrack Trees. Ph.D. dissertation, Dept. of Computer Science, Stanford University. Report No. STAN-CS-89-1258.

Drummond, M., Bresina, J., and Swanson, K. 1994. Just-In-Case Scheduling. In Proceedings of the Twelfth National Conference on Artificial Intelligence. Seattle, WA. AAAI Press / The MIT Press.

Genet, R.M., and Hayes, D.S. 1989. Robotic Observatories: A Handbook of Remote-Access Personal-Computer Astronomy. Published by the AutoScope Corporation, Ft. Collins, CO.

Knuth, D.E. 1975. Estimating the Efficiency of Backtrack Programs. Mathematics of Computation, 29:121-136.

Langley, P. 1992. Systematic and Nonsystematic Search. In Proceedings of the First International Conference on Artificial Intelligence Planning Systems. College Park, MD. Morgan Kaufmann Publishers, Inc.

Minton, S., Drummond, M., Bresina, J., and Philips, A.B. 1992. Total Order vs. Partial Order Planning: Factors Influencing Performance. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning. Boston, MA. Morgan Kaufmann Publishers, Inc.