WE GAVE A LAWYER BASIC MATH SKILLS, AND YOU WON'T BELIEVE WHAT HAPPENED NEXT

PRACTICAL APPLICATIONS OF RANDOM SAMPLING IN EDISCOVERY

By Matthew Verga, J.D.

INTRODUCTION

Anyone who spends ample time working in ediscovery knows that the topic of sampling comes up constantly in discussions of collections, early case assessment, and review, both human and technology-assisted. Long before modern review tools incorporated sophisticated sampling calculators, attorneys were manually taking samples, perhaps re-reviewing every 10th document. Sampling is name-checked repeatedly in ediscovery orders and decisions. Despite the ubiquity of the concept, most discussions do not stop to explain the basic mechanics of sampling, the basic calculations used to leverage it, or the practical, step-by-step ways to apply it in your ediscovery projects. Terms that I do not recall hearing in any of my law school classes are peppered liberally throughout the discourse.

WOULD YOU LIKE A SAMPLE?

Having always favored words over numbers (like most attorneys and paralegals), I became involved in ediscovery 7½ years ago with little to no knowledge of sampling techniques, confidence levels, or recall and precision. The best wisdom of the day:

- Included iterative testing of search strings by partners or senior attorneys, who would informally sample the results of each revised search string to inform their next revision.
- Suggested employing a 3-pass document review process with successively more senior attorneys performing each pass:
  o The first pass reviewed everything
  o The second pass re-reviewed a random 10% sample
  o And the third pass re-reviewed a random 5% sample

SOUNDS REASONABLE, RIGHT?

It did to me too, until I started asking myself and others why. Why is a search that returns more documents than expected invalid? How many search results are enough to sample? Why re-review 10% and 5%? What basis do we have to believe these processes are sufficient or reliable? I keenly felt a gap in my knowledge, and in the knowledge of my peers. Surely there were better ways to accomplish these goals and more defensible bases for decision making of this kind. Surely other professionals in other fields addressed these questions all the time and dispatched them using something more than their guts.

FOUR PRACTICAL APPLICATIONS

I found that the solution lies in the basic statistics course that some of you took, and that the rest of us should have taken. As it turns out, there is math for that. Some of it is moderately complicated and requires a specialized calculator. Some of it is very complicated and requires an expert with different letters after their name than mine. And some of it is so simple you can do it with pen and paper.

The purpose of this white paper is to illustrate a few practical applications of random sampling in ediscovery, including key concepts, key vocabulary, and illustrations of the basic math. At the very least, this information will equip you to have a more productive conversation with your service providers about what you want to accomplish and how they can help. It may even provide you with the confidence to begin experimenting with some of these more concrete methods in your own ediscovery projects. The four practical applications I will cover are:

1. Estimating Prevalence - finding out what's in a new, unknown dataset
2. Testing Classifiers - finding out how good a search string is
3. Quality Control of Human Document Review - finding out how good your reviewers are
4. Elusion and Overall Completeness - finding out how much stuff you missed

ESTIMATING PREVALENCE - FINDING OUT WHAT'S IN A NEW, UNKNOWN DATASET

The first important application of simple random sampling in ediscovery is the estimation of prevalence. Prevalence is the portion of a dataset that is relevant to a particular information need. For example, if one third of a dataset was relevant in a case, the prevalence of relevant materials would be 33%. Prevalence is always known by the end of a document review project; hindsight is 20/20. But would there be value in knowing the prevalence at the start of a document review project? Certainly, there is:

- Knowing the prevalence of relevant materials can guide the selection of culling and review techniques to be employed and other next steps to be taken
  o It can also provide a measuring stick for overall progress
- Knowing the prevalence of different subclasses of materials can guide decisions about resource allocation (e.g., associates vs. contract attorneys vs. LPO) or prioritization
- Knowing the prevalence of specific features facilitates more accurate estimation of project costs:
  o How much material is likely to need to be reviewed
  o How much privilege quality control review and logging is likely to be needed
  o How much redaction is likely to be needed

In each of these examples, the application of this sampling process provides valuable discovery intelligence that can serve as the basis of data-driven decision making, replacing gut feelings with knowledge.

When utilizing simple random sampling to estimate prevalence, the first question is: from what pool of materials should the sample be taken? The answer to that question is dictated by the specific prevalence you are attempting to estimate. For the purposes of this discussion, let's assume we are simply trying to estimate the overall prevalence of relevant materials.

RANDOM SAMPLING

Since we are looking for potentially relevant materials, the pool from which the sample should be taken is the same as the pool that would normally be submitted for review:

- A pool with system files removed (de-NISTed)
- A pool with documents outside of any applicable date range removed
- A pool that has been de-duplicated
- A pool to which any other obvious, objective culling criteria have been applied
  o (e.g., court-mandated keyword or custodian filtering)

Once this pool of materials has been isolated, it will become your sampling frame. Your simple random sample will be taken from within this frame. A simple random sample is one in which every document has an equal chance of being selected. To accomplish this, a random number generator is used. [1] Most modern review programs have sampling tools built in, which will be based on an acceptable pseudo-random number generator, such as the one included in the Microsoft .NET development framework. If experimenting with simple random sampling manually, you can also utilize spreadsheet programs like Microsoft Excel to generate lists of random numbers (or a short script, as sketched below).

The size of the sample you should take is dictated by the strength of the measurement you want to achieve, the size of your dataset, and the expected prevalence of relevant material within the dataset.

[1] Technically, all software tools generate pseudo-random numbers. This means that if they were used to generate extremely large sets of random numbers, there would eventually be identifiable patterns or repetition, but for our purposes, we can treat them as random number generators.
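For those who prefer a script to a spreadsheet, here is a minimal Python sketch of drawing a simple random sample from a sampling frame. It is only an illustration, not a feature of any particular review tool, and the file name, sample size, and seed are hypothetical.

```python
import csv
import random

def draw_simple_random_sample(doc_ids, sample_size, seed=None):
    """Return a simple random sample (without replacement) of document IDs.

    Every document in the frame has an equal chance of selection; random.sample
    relies on Python's pseudo-random generator, which is adequate here.
    """
    rng = random.Random(seed)  # an optional seed makes the draw reproducible
    return rng.sample(doc_ids, sample_size)

if __name__ == "__main__":
    # Hypothetical input: one document ID per row in a CSV export of the frame.
    with open("sampling_frame.csv", newline="") as f:
        frame = [row[0] for row in csv.reader(f) if row]

    sample = draw_simple_random_sample(frame, sample_size=2396, seed=42)
    print(f"Drew {len(sample)} of {len(frame)} documents for review.")
```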

The strength of the measurement is expressed through two values: confidence level and confidence interval.

- Confidence Level is expressed as a percentage and is a measure of how certain you are about the results you get. Or, said another way: if you took the same size sample the same way 100 times, how many times out of 100 would you get the same results? Typically, you will be seeking a confidence level of 90%, 95%, or 99%.
- Confidence Interval is also expressed as a percentage and is a measure of how precise your results are, or, said differently, how much uncertainty there is in your results. Typically, you will be seeking a confidence interval between +/-2% (which is a total range of 4%) and +/-5% (which is a total range of 10%).
  o The term confidence interval is sometimes used interchangeably with the term margin of error. The margin of error, however, is stated as one half of the confidence interval, just as a radius is one half of a diameter. For example, a margin of error of 2% refers to a confidence interval of +/-2% (a 4% range).

For example, you might choose to take a measurement with a confidence level of 95% and a confidence interval of +/-2% to estimate prevalence. That measurement strength has been referenced in a variety of cases and articles as a potentially acceptable standard. [2] If review of your sample revealed a prevalence of 50%, you would know that if you repeated the test another 100 times, 95 of those tests would also have results that fall between 48% and 52% prevalence.

Strength of measurement affects sample sizes in two ways:

1. First, the higher the confidence level you desire, the larger the sample you will need to take.
2. Second, the lower the margin of error you desire, the larger the sample you will need to take.

See Figure 1 below, which illustrates how sample sizes increase with confidence level and interval.

[2] For example, in the widely read and discussed Monique da Silva Moore, et al. v. Publicis Groupe & MSL Group.

FIGURE 1 - SAMPLE SIZE VARIABILITY WITH CONFIDENCE LEVEL AND INTERVAL

Sample sizes also increase with the size of the sampling frame, but only up to a point. Beyond that point, the required sample size levels off. For example, the sample size needed for 100,000 documents is roughly the same as the sample size needed for 1,000,000 documents. Figure 2 illustrates how sample size increases with sampling frame size.

FIGURE 2 - SAMPLE SIZE VARIABILITY WITH SAMPLE FRAME SIZE

Understanding this can produce significant cost savings. A traditional 5% sample of 1,000,000 documents would be 50,000 documents, but a simple random sample of only about 2,400 documents is actually sufficient to estimate prevalence and accomplish other useful investigatory tasks.

PREVALENCE

Prevalence also affects the required sample size; however, it will not yet be known when prevalence itself is what you are sampling to estimate. In that case, you should use the most conservative value, the one resulting in the largest sample size. Assuming a prevalence of 50%, i.e. that half of the sampling frame is relevant and half is not, requires the largest sample size. Sample size decreases as prevalence increases or decreases from 50%. See Figure 3 for a visualization of how sample size fluctuates with prevalence.

FIGURE 3 - SAMPLE SIZE VARIABILITY WITH PREVALENCE

When estimating prevalence, there is no single correct strength of measurement to take. As noted above, several orders and articles have referenced a 95% confidence level and a +/-2% confidence interval, but that is persuasive authority at best. You may not feel comfortable with anything less than 99% +/-1%, or you may be fine at 90% +/-5%. It depends on your specific circumstances.

Assuming you have settled on a measurement strength of 95% +/-2%, how do you calculate your sample size? In most instances, your review tool will have a built-in sampling calculator that you can use. If not, sampling calculators are available online on a variety of websites. [3] In either case, you will input your desired confidence level (95%), your desired confidence interval or margin of error (+/-2% or 2%), your sampling frame size (e.g., 1,000,000), and, depending on the calculator, the expected prevalence (50%; some calculators always assume 50% by default and do not allow for customization of this variable).

If you were to enter this set of hypothetical variables into a sampling calculator, you would learn that a simple random sample of 2,396 documents from your sampling frame of 1,000,000 will allow you to estimate prevalence with a confidence level of 95% +/-2%. For example:

- If you took such a sample and reviewed it,
- And the review identified 599 relevant documents,
- You would have 95% confidence:
  o That the overall prevalence of relevant documents is between 23% and 27%,
  o Or between 230,000 and 270,000 of your 1,000,000 hypothetical documents

When reviewing random samples to estimate prevalence, whether of general relevance or of more specific features, it is important to ensure the highest quality review possible, as any errors in the review of the sample will be effectively amplified in the estimations based on that review. For this reason, reviews of such samples should be conducted by one or more members of the project team with direct, substantial knowledge of the matter.

Estimating prevalence in this way can reveal a variety of valuable, specific information about an unknown dataset, information that can be used to guide many critical project decisions. Moreover, this process can be completed using smaller numbers of documents than traditional methods. And, once completed, this reviewed sample can serve additional purposes as a control set for testing classifiers.

[3] For example,
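The arithmetic behind such a calculator is not mysterious. The sketch below is my own illustration in Python, assuming the standard normal-approximation sample size formula with a finite population correction; it reproduces the 2,396 figure used in the text and converts the 599 relevant documents found in the sample into the 230,000 to 270,000 estimate. The function names are mine, not those of any particular calculator.

```python
import math

# z-scores for common confidence levels (normal approximation)
Z = {90: 1.645, 95: 1.96, 99: 2.576}

def sample_size(frame_size, confidence=95, interval=0.02, prevalence=0.5):
    """Required simple random sample size, with finite population correction."""
    z = Z[confidence]
    n0 = (z ** 2) * prevalence * (1 - prevalence) / interval ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / frame_size))

def prevalence_range(relevant_in_sample, sample, frame_size, interval=0.02):
    """Point estimate of prevalence and the +/- interval, scaled to the full frame."""
    p = relevant_in_sample / sample
    low, high = max(0.0, p - interval), min(1.0, p + interval)
    return p, (round(low * frame_size), round(high * frame_size))

print(sample_size(1_000_000))    # -> 2396, matching the hypothetical in the text
print(sample_size(100_000))      # -> 2345, roughly the same despite the smaller frame
print(prevalence_range(599, 2396, 1_000_000))
# -> (0.25, (230000, 270000)): 25% prevalence, or 230,000-270,000 documents
```

Note how little the required sample size changes between a 100,000 document frame and a 1,000,000 document frame; that is the leveling-off effect illustrated in Figure 2.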

TESTING CLASSIFIERS - FINDING OUT HOW GOOD A SEARCH STRING IS

The second important application of simple random sampling in ediscovery is the testing of classifiers. In the context of ediscovery, classifiers are tools, mechanisms, or processes by which documents from a dataset are classified into categories like responsive and non-responsive or privileged and non-privileged. The tools, mechanisms, and processes employed could include:

- Keyword searching
- Individual human reviewers
- Overall human review processes
- Machine categorization by latent semantic indexing
- Predictive coding by probabilistic latent semantic analysis

Testing classifiers has significant value as a source of discovery intelligence to guide data-driven decision making about the methodologies employed on your matters:

- Search strings and other classifiers can be refined through iterative testing
- Testing provides strong bases to argue for or against particular classifiers during negotiations

When classifiers are tested, their efficacy is expressed through two values: recall and precision. The higher the recall, the more comprehensive a search's results will be; the higher the precision, the more efficient any subsequent review process will be.

- Recall is expressed as a percentage and is a measure of how much of the material sought was returned by the classifier. For example, if 250,000 relevant documents exist and a search returns 125,000 of them, it has a recall of 50%.
- Precision is also expressed as a percentage and is a measure of how much of the material returned by the classifier was actually the material sought, rather than unwanted material. For example, if a search returns 150,000 documents of which 75,000 are irrelevant, it has a precision of 50%.

Testing classifiers before applying them to a full dataset requires the creation of a control set against which they can be tested. A control set is a representative sample of the full dataset that has already been classified by the best reviewers possible, so that it can function as a gold standard. If you have already estimated prevalence, the sample reviewed for that estimation will generally also work as a gold standard control set for testing classifiers.

Assuming you estimated prevalence as described above, you would have a ready-made control set of 2,396 documents that could be used to test search strings for recall and precision. Search strings would be tested by running them against the 2,396 document sample and comparing the results of the search to the results of the prior review by subject matter experts. The comparison facilitates the calculation of recall and precision.

To demonstrate how this comparison is used to perform this calculation, we will use contingency tables (sometimes referred to as cross-tabulations). These tables provide an easy breakdown of the comparison between the results of a classifier being tested and the prior review by subject matter experts. The comparison breaks down into four categories:

1. True Positives
   a. Documents BOTH returned by the search AND previously reviewed as relevant
2. False Positives
   a. Documents returned by the search BUT previously reviewed as NOT relevant
3. False Negatives
   a. Documents NOT returned by the search BUT previously reviewed as relevant
4. True Negatives
   a. Documents BOTH NOT returned by the search AND previously reviewed as NOT relevant

For this example, let's assume that you tested a search string against your 2,396 document control set, and, to keep the math simple, let's assume that the comparison of the search string to the prior review resulted in an even split of 599 documents in each of those four categories.

Figure 4 shows what this contingency table would look like.

FIGURE 4 - RESULTS OF SEARCH STRING TESTED AGAINST CONTROL SET

                                      Relevant          Not Relevant
                                      (Prior Review)    (Prior Review)
Relevant/Returned (Search)            599               599
Not Relevant/Not Returned (Search)    599               599

On the contingency table in Figure 4:

- There are 599 True Positives
  o Top left box
  o Documents BOTH returned by the search AND previously reviewed as relevant
- There are 599 False Positives
  o Top right box
  o Documents returned by the search BUT previously reviewed as NOT relevant
- There are 599 False Negatives
  o Bottom left box
  o Documents NOT returned by the search BUT previously reviewed as relevant
- There are 599 True Negatives
  o Bottom right box
  o Documents BOTH NOT returned by the search AND previously reviewed as NOT relevant

With the results broken out in this way in a contingency table, it is straightforward to perform the calculations of recall and precision for the hypothetical search string being tested.

- As noted, recall is the percentage of all relevant documents returned by the tested classifier
  o In this hypothetical, the search string correctly returned 599 out of 1,198 relevant documents
  o 599 / 1,198 = 0.50, or 50%
  o The hypothetical search string has a recall of 50%
- As noted, precision is the percentage of documents returned by the classifier that are actually relevant
  o In this hypothetical, the search string returned a total of 1,198 documents, of which 599 were actually relevant
  o 599 / 1,198 = 0.50, or 50%
  o The hypothetical search string has a precision of 50%

Calculating recall and precision provides us with an excellent assessment of the strength and efficacy of a particular search string or other classifier, but it's important to remember that the same confidence level and interval do not automatically apply to these numbers. These numbers have not been calculated based on a sample size of 2,396. Rather:

- Recall has been calculated based on the total number of relevant documents in the sample of 2,396
  o In this hypothetical, that is 1,198 documents
  o 1,198 documents is, thus, the effective sample size for this calculation, with the sampling frame being the total universe of relevant documents
  o Of that sample of the total universe of relevant documents, this search can recall half
- Precision has been calculated based on the total number of returned documents
  o In this hypothetical, that is also 1,198 documents
  o 1,198 documents is, thus, the effective sample size for this calculation, with the sampling frame being the total universe of documents the search would return
  o Of this sample of the total universe of documents this search would return, half were relevant
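For readers who want the contingency table arithmetic in one place, here is a minimal Python sketch (my own illustration, not a feature of any review tool) that computes recall and precision from the four contingency counts, using the hypothetical even split above.

```python
def recall(true_pos, false_neg):
    """Share of all truly relevant documents that the classifier returned."""
    return true_pos / (true_pos + false_neg)

def precision(true_pos, false_pos):
    """Share of the documents the classifier returned that are truly relevant."""
    return true_pos / (true_pos + false_pos)

# Hypothetical even split from the 2,396-document control set
tp, fp, fn, tn = 599, 599, 599, 599

print(f"Recall:    {recall(tp, fn):.0%}")     # 50%
print(f"Precision: {precision(tp, fp):.0%}")  # 50%
```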

Some sampling calculators will allow you to input these variables and work backwards to determine the new confidence level/interval for these measurements; it will be somewhat lower/wider than for the original 2,396 document sample.

Testing classifiers in this manner, by calculating recall and precision (and determining the reliability of those calculations), offers an excellent return on effort, replacing blind sampling and anecdotal evidence with a repeatable process and reliable results: valuable discovery intelligence that can be leveraged for data-driven decision making.

QUALITY CONTROL OF HUMAN DOCUMENT REVIEW - FINDING OUT HOW GOOD YOUR REVIEWERS ARE

In addition to being used for estimating prevalence and testing classifiers, simple random sampling can also be leveraged for quality control of human document review. In the traditional approach to document review, quality control is maintained by a multi-pass system of review that includes extensive re-review of documents by successively more senior attorneys. Often these later passes will involve the review of a flat percentage of the documents from the pass below; sometimes important categories of materials will be entirely re-reviewed.

Instead of such extensive, brute-force re-review, simple random sampling can be employed to streamline the quality control process while simultaneously increasing its precision. In such a scenario, the reviewer doing the initial work is the classifier being tested, and the control set is the decisions of the more senior attorney reviewing the random sample and agreeing or disagreeing with the initial reviewer.

After the more senior attorney completes quality control review of an appropriately-sized random sample of the initial reviewer's work (or of a team's combined work), the differences between the initial reviewer's classifications and the more senior attorney's classifications can be used to create a contingency table like those discussed above. If an appropriate tagging palette is employed for documenting quality control decisions, this is a simple matter.

With such a contingency table, you can easily calculate the two values used to assess the performance of a reviewer: accuracy and error rate. As the names suggest, the higher a reviewer's (or a review team's) accuracy rate the better, and the lower their error rate the better.

- Accuracy is expressed as a percentage and is a measure of how many initial reviewer determinations were correct, out of all determinations made. The closer to 100%, the better your reviewers are doing.
- Error Rate is expressed as a percentage and is a measure of how many initial reviewer determinations were incorrect, out of all determinations made. The closer to 0%, the better your reviewers are doing.
  o Error rate and accuracy together should always total 100%.

To create a contingency table for this purpose, the four categories would be similar to those used above:

1. True Positives
   a. Documents deemed relevant by BOTH the initial reviewer AND the QC reviewer
2. False Positives
   a. Documents deemed relevant by the initial reviewer BUT NOT by the QC reviewer
3. False Negatives
   a. Documents deemed NOT relevant by the initial reviewer BUT relevant by the QC reviewer
4. True Negatives
   a. Documents deemed NOT relevant by BOTH the initial reviewer AND the QC reviewer

In Figure 5, you can see such a contingency table created for the quality control review of a random sample of 1,000 documents taken from the thousands completed by a particular, hypothetical reviewer.

FIGURE 5 - RESULTS OF INITIAL REVIEWER'S DETERMINATIONS TESTED AGAINST QC REVIEWER'S DETERMINATIONS

                                                Relevant         Not Relevant
                                                (QC Reviewer)    (QC Reviewer)
Relevant/Returned (Initial Reviewer)            250              200
Not Relevant/Not Returned (Initial Reviewer)    100              450

On the contingency table in Figure 5:

- There are 250 True Positives
  o Top left box
  o Documents deemed relevant by BOTH the initial reviewer AND the QC reviewer

- There are 200 False Positives
  o Top right box
  o Documents deemed relevant by the initial reviewer BUT NOT by the QC reviewer
- There are 100 False Negatives
  o Bottom left box
  o Documents deemed NOT relevant by the initial reviewer BUT relevant by the QC reviewer
- There are 450 True Negatives
  o Bottom right box
  o Documents deemed NOT relevant by BOTH the initial reviewer AND the QC reviewer

With the results broken out in this way in a contingency table, it is straightforward to perform the calculations of accuracy and error rate for the hypothetical reviewer being tested.

- As noted above, accuracy is the percentage of correct determinations out of all those made
  o In this hypothetical, the reviewer made 700 correct determinations (True Positives + True Negatives) out of 1,000 total
  o 700 / 1,000 = 0.70, or 70%
  o The hypothetical reviewer has 70% accuracy
- As noted above, error rate is the percentage of incorrect determinations out of all those made
  o In this hypothetical, the reviewer made 300 incorrect determinations (False Positives + False Negatives) out of 1,000 total
  o 300 / 1,000 = 0.30, or 30%
  o The hypothetical reviewer has a 30% error rate

As with estimations of prevalence and testing of classifiers, the reliability of these measurements will depend on the overall sampling frame (e.g., all of a reviewer's work) and the sample size taken, with larger samples giving more reliable results. For ongoing quality control review, it is not practical to attempt to attain the same confidence levels and intervals for these measurements that can be achieved for prevalence, recall, and precision. Most projects will not have sufficient scale for the sampling frame and sample sizes for individual reviewers to grow very large.
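The same style of sketch works for reviewer quality control. The counts below are the hypothetical 1,000 document QC sample described above; the function names are mine.

```python
def accuracy(tp, fp, fn, tn):
    """Share of the initial reviewer's determinations the QC reviewer agreed with."""
    return (tp + tn) / (tp + fp + fn + tn)

def error_rate(tp, fp, fn, tn):
    """Share of determinations the QC reviewer disagreed with; complements accuracy."""
    return (fp + fn) / (tp + fp + fn + tn)

# Hypothetical 1,000-document QC sample of one reviewer's work
tp, fp, fn, tn = 250, 200, 100, 450

print(f"Accuracy:   {accuracy(tp, fp, fn, tn):.0%}")    # 70%
print(f"Error rate: {error_rate(tp, fp, fn, tn):.0%}")  # 30%
```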

LOT ACCEPTANCE SAMPLING

Another approach that can be taken to offset this limitation is lot acceptance sampling. Lot acceptance sampling is a methodology employed in pharmaceutical manufacturing, military contract fulfillment, and many other high-volume, quality-focused processes. When employing lot acceptance sampling, a maximum acceptable error threshold is established, as is a sampling protocol. Each lot has a random sample taken from it for testing. If the established acceptable error rate is exceeded, the entire lot is rejected without further evaluation.

In ediscovery, the lot would correspond most readily to the individual review batch. In a high-volume document review project with a large review team, some form of batch acceptance sampling could present an efficient quality control solution. Each completed batch could be randomly sampled to test for batch acceptance. If the established maximum acceptable error rate is exceeded, the entire batch is rejected and sent back for re-review. Statistics on batch rejection could be tracked by reviewer, by source material, or by other useful properties. [4] A minimal sketch of this batch-acceptance logic follows below.

Simple random sampling can be leveraged in a variety of ways for the quality control of human document review and can be adapted for use in both small and large projects. Employing it gives you the ability to measure quality precisely and to speak with certainty about individual reviewers' relative performance, to once again replace gut feelings and anecdotal evidence with concrete measurements.

[4] Although some practitioners experience discomfort at the thought of positively identifying an acceptable error rate, it is important to remember two things: first, choosing not to acknowledge or measure the error rate in a document review project does not mean that it does not exist; and second, reasonableness is the standard, not perfection.
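Here is a minimal Python sketch of how such a batch acceptance check might be scripted. The 50 document sample size, the 5% maximum acceptable error rate, and the data layout are hypothetical choices for illustration, not prescribed values.

```python
import random

def qc_batch(batch, sample_size, is_error, max_error_rate=0.05, seed=None):
    """Sample a completed review batch and accept or reject the whole batch.

    batch: list of reviewed documents (any objects the checker understands)
    is_error: callable returning True when the QC reviewer disagrees with the
              initial determination for a sampled document
    """
    rng = random.Random(seed)
    sample = rng.sample(batch, min(sample_size, len(batch)))
    errors = sum(1 for doc in sample if is_error(doc))
    observed_rate = errors / len(sample)
    return observed_rate <= max_error_rate, observed_rate

# Hypothetical usage: each document records the initial call and the QC call
batch = [{"initial": "relevant", "qc": "relevant"} for _ in range(1000)]
accepted, rate = qc_batch(batch, sample_size=50,
                          is_error=lambda d: d["qc"] != d["initial"])
status = "accepted" if accepted else "rejected for re-review"
print(f"Observed error rate {rate:.0%}; batch {status}")
```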

ELUSION AND OVERALL COMPLETENESS - FINDING OUT HOW MUCH STUFF YOU MISSED

Finally, simple random sampling can be employed to ascertain the overall completeness of a review effort. When engaged in large-scale document review, some initial classifier, such as an iteratively-refined search string or a predictive coding tool, will almost certainly be used to cut down the total processed dataset to a smaller subset that will actually be reviewed for potential production. The remainder of the materials, those not returned by this classifier, often are not reviewed at all.

If the classifier used was court-ordered or was agreed to by the opposing party, then it is acceptable to take no further action with regard to that remainder. If, however, the classifier was of your own design or selection, you may want to validate that classification after the fact. If you employed a predictive coding solution as a classifier, you may want to validate the irrelevance of the excluded materials as part of ensuring the defensibility of your overall process.

Like the measurements of recall and precision above, this measurement is related to the performance of a classifier. In this scenario, the classifier is the one used to separate the total dataset into a pool to be reviewed and a pool to be ignored. The pool to be reviewed is composed entirely of True Positives and False Positives. The pool to be ignored is composed entirely of True Negatives and False Negatives. What you want to know is what percentage of the pool to be ignored are False Negatives that should have been included in review and production. This measurement is sometimes referred to as elusion.

Another way to frame this inquiry is as another estimation of prevalence, this time made regarding just the remainder, the pool to be ignored. Whether considered elusion or prevalence, the calculation is the same: a simple random sample of the remainder (at a size determined by desired measurement strength) is reviewed for remaining relevant documents, i.e. False Negatives.

There is no way to perfectly identify and produce all relevant materials in the age of high-volume ediscovery [5], but there can be great value in being able to say, for example, that you have 99% confidence that no more than 3-5% of the remainder is potentially relevant. The 3-5% of that hypothetical sample that was relevant could also then be used to illustrate the types of relevant materials remaining, and, knowing their prevalence and the size of the pool, you could also estimate with great accuracy the cost required to find each additional document, which is a powerful position from which to argue regarding reasonableness and proportionality as you near the end of a long ediscovery effort. A brief sketch of that arithmetic follows below.

[5] Even total human review has been shown, repeatedly, to be inconsistent and incomplete, typically achieving only 70-80% recall.
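As an illustration of that cost argument, here is a minimal Python sketch with entirely hypothetical numbers: a 500,000 document remainder, a 1,000 document elusion sample in which 40 relevant documents were found, and a $1.50 per-document review cost. None of these figures come from the paper.

```python
def elusion_estimate(relevant_in_sample, sample_size, remainder_size):
    """Estimated share and count of relevant documents left in the ignored pool."""
    rate = relevant_in_sample / sample_size
    return rate, rate * remainder_size

def cost_to_find_each(remainder_size, estimated_relevant, cost_per_doc_review):
    """Rough cost of reviewing the whole remainder to find one more relevant document."""
    return (remainder_size * cost_per_doc_review) / estimated_relevant

# Hypothetical numbers only
rate, remaining_relevant = elusion_estimate(40, 1_000, 500_000)
print(f"Estimated elusion: {rate:.0%} (~{remaining_relevant:,.0f} documents)")
print(f"Cost to find each: ${cost_to_find_each(500_000, remaining_relevant, 1.50):,.2f}")
```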

CONCLUSION - WE GAVE BASIC MATH SKILLS TO A LAWYER, AND IT MADE HIM A MORE EFFECTIVE EDISCOVERY PRACTITIONER

As noted in the subheading above, mastering math can do the same for you. Random sampling is an essential tool for many activities throughout the discovery lifecycle, and it can be done far more effectively than by relying on arbitrarily-selected percentages. As demonstrated throughout this paper, it can be used to find out, in a precise fashion, how much relevant material you have, how effective your searches will be, and how effective your reviewers are. Performing these calculations is not something all practitioners will want to do themselves, but all practitioners will benefit from a greater understanding of these concepts in their conversations with service providers, opposing parties, and other fellow practitioners.

ABOUT THE AUTHOR

Matthew Verga is an electronic discovery consultant and practitioner proficient at leveraging a combination of legal, technical, and logistical expertise to develop pragmatic solutions for electronic discovery problems. Matthew has spent the past seven years working in electronic discovery: four years as a practicing attorney with an AmLaw 100 firm and three years as a consultant with electronic discovery service providers. Matthew has personally designed and managed many large-scale electronic discovery efforts and has overseen the design and management of numerous other efforts as an attorney and a consultant. He has provided consultation and training for AmLaw 100 firms and Fortune 100 companies, as well as written and spoken widely on electronic discovery issues.

Matthew is currently the Director, Content Marketing and ediscovery Strategy, for Modus ediscovery Inc. Matthew is responsible for managing assessments of law firms' and corporations' electronic discovery systems and processes. In this role, he focuses his expertise on assessing organizations' readiness and capability to handle ediscovery matters across each segment of the EDRM. Additionally, Matthew is responsible for the creation of articles, white papers, presentations, and other substantive content in support of Modus' marketing, branding, and thought leadership efforts.


More information

November 2012 MUET (800)

November 2012 MUET (800) November 2012 MUET (800) OVERALL PERFORMANCE A total of 75 589 candidates took the November 2012 MUET. The performance of candidates for each paper, 800/1 Listening, 800/2 Speaking, 800/3 Reading and 800/4

More information

DOES RETELLING TECHNIQUE IMPROVE SPEAKING FLUENCY?

DOES RETELLING TECHNIQUE IMPROVE SPEAKING FLUENCY? DOES RETELLING TECHNIQUE IMPROVE SPEAKING FLUENCY? Noor Rachmawaty (itaw75123@yahoo.com) Istanti Hermagustiana (dulcemaria_81@yahoo.com) Universitas Mulawarman, Indonesia Abstract: This paper is based

More information

AGENDA LEARNING THEORIES LEARNING THEORIES. Advanced Learning Theories 2/22/2016

AGENDA LEARNING THEORIES LEARNING THEORIES. Advanced Learning Theories 2/22/2016 AGENDA Advanced Learning Theories Alejandra J. Magana, Ph.D. admagana@purdue.edu Introduction to Learning Theories Role of Learning Theories and Frameworks Learning Design Research Design Dual Coding Theory

More information

Successfully Flipping a Mathematics Classroom

Successfully Flipping a Mathematics Classroom 2014 Hawaii University International Conferences Science, Technology, Engineering, Math & Education June 16, 17, & 18 2014 Ala Moana Hotel, Honolulu, Hawaii Successfully Flipping a Mathematics Classroom

More information

GCSE English Language 2012 An investigation into the outcomes for candidates in Wales

GCSE English Language 2012 An investigation into the outcomes for candidates in Wales GCSE English Language 2012 An investigation into the outcomes for candidates in Wales Qualifications and Learning Division 10 September 2012 GCSE English Language 2012 An investigation into the outcomes

More information

The Common European Framework of Reference for Languages p. 58 to p. 82

The Common European Framework of Reference for Languages p. 58 to p. 82 The Common European Framework of Reference for Languages p. 58 to p. 82 -- Chapter 4 Language use and language user/learner in 4.1 «Communicative language activities and strategies» -- Oral Production

More information

A Pilot Study on Pearson s Interactive Science 2011 Program

A Pilot Study on Pearson s Interactive Science 2011 Program Final Report A Pilot Study on Pearson s Interactive Science 2011 Program Prepared by: Danielle DuBose, Research Associate Miriam Resendez, Senior Researcher Dr. Mariam Azin, President Submitted on August

More information

PROVIDING AND COMMUNICATING CLEAR LEARNING GOALS. Celebrating Success THE MARZANO COMPENDIUM OF INSTRUCTIONAL STRATEGIES

PROVIDING AND COMMUNICATING CLEAR LEARNING GOALS. Celebrating Success THE MARZANO COMPENDIUM OF INSTRUCTIONAL STRATEGIES PROVIDING AND COMMUNICATING CLEAR LEARNING GOALS Celebrating Success THE MARZANO COMPENDIUM OF INSTRUCTIONAL STRATEGIES Celebrating Success Copyright 2016 by Marzano Research Materials appearing here are

More information

Defragmenting Textual Data by Leveraging the Syntactic Structure of the English Language

Defragmenting Textual Data by Leveraging the Syntactic Structure of the English Language Defragmenting Textual Data by Leveraging the Syntactic Structure of the English Language Nathaniel Hayes Department of Computer Science Simpson College 701 N. C. St. Indianola, IA, 50125 nate.hayes@my.simpson.edu

More information

Author: Justyna Kowalczys Stowarzyszenie Angielski w Medycynie (PL) Feb 2015

Author: Justyna Kowalczys Stowarzyszenie Angielski w Medycynie (PL)  Feb 2015 Author: Justyna Kowalczys Stowarzyszenie Angielski w Medycynie (PL) www.angielskiwmedycynie.org.pl Feb 2015 Developing speaking abilities is a prerequisite for HELP in order to promote effective communication

More information

Total Knowledge Management. May 2002

Total Knowledge Management. May 2002 Total Knowledge Management May 2002 1 Tacit knowledge isn t captured. It s exchanged. It s about people sharing know-how in ways that help organizations succeed. Tacit knowledge is exchanged. It s about

More information

learning collegiate assessment]

learning collegiate assessment] [ collegiate learning assessment] INSTITUTIONAL REPORT 2005 2006 Kalamazoo College council for aid to education 215 lexington avenue floor 21 new york new york 10016-6023 p 212.217.0700 f 212.661.9766

More information

VIA ACTION. A Primer for I/O Psychologists. Robert B. Kaiser

VIA ACTION. A Primer for I/O Psychologists. Robert B. Kaiser DEVELOPING LEADERS VIA ACTION LEARNING A Primer for I/O Psychologists Robert B. Kaiser rkaiser@kaplandevries.com Practitioner Forum presented at the 20th Annual SIOP Conference Los Angeles, CA April 2005

More information

Probability estimates in a scenario tree

Probability estimates in a scenario tree 101 Chapter 11 Probability estimates in a scenario tree An expert is a person who has made all the mistakes that can be made in a very narrow field. Niels Bohr (1885 1962) Scenario trees require many numbers.

More information

Edexcel GCSE. Statistics 1389 Paper 1H. June Mark Scheme. Statistics Edexcel GCSE

Edexcel GCSE. Statistics 1389 Paper 1H. June Mark Scheme. Statistics Edexcel GCSE Edexcel GCSE Statistics 1389 Paper 1H June 2007 Mark Scheme Edexcel GCSE Statistics 1389 NOTES ON MARKING PRINCIPLES 1 Types of mark M marks: method marks A marks: accuracy marks B marks: unconditional

More information

Number of students enrolled in the program in Fall, 2011: 20. Faculty member completing template: Molly Dugan (Date: 1/26/2012)

Number of students enrolled in the program in Fall, 2011: 20. Faculty member completing template: Molly Dugan (Date: 1/26/2012) Program: Journalism Minor Department: Communication Studies Number of students enrolled in the program in Fall, 2011: 20 Faculty member completing template: Molly Dugan (Date: 1/26/2012) Period of reference

More information

Shank, Matthew D. (2009). Sports marketing: A strategic perspective (4th ed.). Upper Saddle River, NJ: Pearson/Prentice Hall.

Shank, Matthew D. (2009). Sports marketing: A strategic perspective (4th ed.). Upper Saddle River, NJ: Pearson/Prentice Hall. BSM 2801, Sport Marketing Course Syllabus Course Description Examines the theoretical and practical implications of marketing in the sports industry by presenting a framework to help explain and organize

More information

HDR Presentation of Thesis Procedures pro-030 Version: 2.01

HDR Presentation of Thesis Procedures pro-030 Version: 2.01 HDR Presentation of Thesis Procedures pro-030 To be read in conjunction with: Research Practice Policy Version: 2.01 Last amendment: 02 April 2014 Next Review: Apr 2016 Approved By: Academic Board Date:

More information

BSM 2801, Sport Marketing Course Syllabus. Course Description. Course Textbook. Course Learning Outcomes. Credits.

BSM 2801, Sport Marketing Course Syllabus. Course Description. Course Textbook. Course Learning Outcomes. Credits. BSM 2801, Sport Marketing Course Syllabus Course Description Examines the theoretical and practical implications of marketing in the sports industry by presenting a framework to help explain and organize

More information

SEMAFOR: Frame Argument Resolution with Log-Linear Models

SEMAFOR: Frame Argument Resolution with Log-Linear Models SEMAFOR: Frame Argument Resolution with Log-Linear Models Desai Chen or, The Case of the Missing Arguments Nathan Schneider SemEval July 16, 2010 Dipanjan Das School of Computer Science Carnegie Mellon

More information

School Year 2017/18. DDS MySped Application SPECIAL EDUCATION. Training Guide

School Year 2017/18. DDS MySped Application SPECIAL EDUCATION. Training Guide SPECIAL EDUCATION School Year 2017/18 DDS MySped Application SPECIAL EDUCATION Training Guide Revision: July, 2017 Table of Contents DDS Student Application Key Concepts and Understanding... 3 Access to

More information

Ohio s Learning Standards-Clear Learning Targets

Ohio s Learning Standards-Clear Learning Targets Ohio s Learning Standards-Clear Learning Targets Math Grade 1 Use addition and subtraction within 20 to solve word problems involving situations of 1.OA.1 adding to, taking from, putting together, taking

More information

Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of SENSEVAL-2

Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of SENSEVAL-2 Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of SENSEVAL-2 Ted Pedersen Department of Computer Science University of Minnesota Duluth, MN, 55812 USA tpederse@d.umn.edu

More information