OVERVIEW OF ASSESSMENT OF LEARNING: a summary by Faye
In classrooms where assessment for learning is practiced, students know at the outset of a unit of study what they are expected to learn. At the beginning of the unit, the teacher will work with the student to understand what she or he already knows about the topic as well as to identify any gaps or misconceptions. As the unit progresses, the teacher and student work together to assess the student’s knowledge, what she or he needs to learn to improve and extend this knowledge, and how the student can best get to that point (formative assessment). Assessment for learning occurs at all stages of the learning process.
Assessment for learning
- assessment can be based on a variety of information sources (e.g., portfolios, works in progress, teacher observation, conversation)
- verbal or written feedback to the student is primarily descriptive and emphasizes strengths, identifies challenges, and points to next steps
- as teachers check on understanding they adjust their instruction to keep students on track
- no grades or scores are given - record-keeping is primarily anecdotal and descriptive
- occurs throughout the learning process, from the outset of the course of study to the time of summative assessment
- begins as students become aware of the goals of instruction and the criteria for performance
- involves goal-setting, monitoring progress, and reflecting on results
- implies student ownership and responsibility for moving his or her thinking forward
Evaluation
- assessment that is accompanied by a number or letter grade (summative)
- compares one student's achievement with standards
- results can be communicated to the student and parents
- occurs at the end of the learning unit
- judgment made on the basis of a student's performance
Diagnostic assessment (now referred to more often as "pre-assessment")
- assessment made to determine what a student does and does not know about a topic
- assessment made to determine a student's learning style or preferences, or how well a student can perform a certain set of skills related to a particular subject or group of subjects
- occurs at the beginning of a unit of study
- used to inform instruction: makes up the initial phase of assessment for learning
Formative assessment
- assessment made to determine a student's knowledge and skills, including learning gaps, as the student progresses through a unit of study
- used to inform instruction and guide learning
- occurs during the course of a unit of study
- makes up the subsequent phase of assessment for learning
Summative assessment
- assessment made at the end of a unit of study to determine the level of understanding the student has achieved
- includes a mark or grade against an expected standard
ESTABLISHING LEARNING TARGETS : a summary by Josephine
Assessment of student learning can be made precise, accurate, and dependable only if what students are to achieve is clearly stated and feasible. The word learning conveys that targets emphasize how students will change.
A learning target is defined as a statement of student performance that includes both a description of what students should know and be able to do at the end of a unit of instruction and the criteria for judging the level of performance demonstrated. It is composed of content (what students should know and be able to do) and criteria (the basis for judging attainment of the performance). Learning targets thus cover the knowledge, reasoning, products, skills, and affective outcomes that students should demonstrate.
Establishing learning targets for students is closely related to behavioral objectives or instructional objectives, which emphasize student outcomes as manifested in performance. An instructional objective must be stated in terms of expected behavior, specify the conditions under which students are expected to perform, and specify the minimum acceptable level of performance.
Lastly, after a teacher has established learning targets, students are expected to achieve outcomes in three domains: the cognitive, psychomotor, and affective domains.
KEYS TO EFFECTIVE TESTING : a summary by Jhonna
A test is a systematic procedure for measuring an individual's behavior (Brown, 1991).
It has different uses. School administrators use test results to make decisions about students' performance, such as whether to promote or retain them, to improve or enrich the curriculum, and to gauge the performance of teachers as well. Teachers, on the other hand, use tests to determine the effectiveness of instruction and to gain feedback on students' progress. Testing is thus necessary in the educational assessment of an institution.
Constructing a test also requires preparation. Here are some of the general steps in preparing a test:
1. Identifying the instructional objectives and learning outcomes.
2. Listing the topics to be covered by the test.
3. Preparing the Table of Specifications (TOS).
4. Selecting the appropriate types of tests.
5. Writing the test items.
6. Sequencing the test items.
7. Writing the directions or instructions.
8. Preparing the answer sheets and scoring keys.
For tests to be useful, they must be conducted or administered; thus, testing occurs. Testing is the process of administering the test, scoring the test, and interpreting the test results. Effective testing is helpful in obtaining the useful and valid information needed by teachers.
Tests also have different classifications. They may be classified according to:
Mode of administration
a. Individual
b. Group
Scoring
a. Objective
b. Subjective
Sort of response being emphasized
a. Power
b. Speed
Mode of response
a. Oral
b. Written
c. Practical
Nature of being compared
a. Teacher-made test
b. Standardized test
What is measured
a. Sample test
b. Sign test
Mode of interpreting the result
a. Norm-reference
b. Criterion-reference
Nature of answer
a. Personality test
b. Intelligence test
c. Aptitude test
d. Achievement test
e. Summative test
f. Diagnostic test
g. Formative test
h. Socio-metric test
i. Trade test
j. Survey
Lastly, one of the most important things to do in preparing and constructing a test is the Table of Specifications, or TOS.
The TOS is a matrix where the rows consist of the specific topics or skills and the columns of the objectives cast in terms of Bloom's Taxonomy. It is sometimes called a test blueprint or content validity chart.
How to make it?
1. List down the topics covered for inclusion on the test.
2. Determine the percentage allocation of test items for each topic.
% for a topic = (number of days/hours spent teaching the topic ÷ total number of days/hours spent teaching all topics) × 100
The number of items for a topic is then this percentage multiplied by the total number of test items.
These are some of the steps that the teacher needs to observe in preparing the TOS.
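The percentage computation above can be sketched in code. This is a hypothetical example: the topic names, hours taught, and total item count are invented, and rounding conventions vary in practice.

```python
# Hypothetical sketch of allocating test items in a Table of
# Specifications (TOS): each topic's share of items is proportional
# to the time spent teaching it. Topics and hours are made up.
hours = {"Fractions": 4, "Decimals": 2, "Percentages": 2}
total_items = 40

total_hours = sum(hours.values())
allocation = {}
for topic, h in hours.items():
    pct = h / total_hours * 100                          # % for the topic
    allocation[topic] = round(total_items * pct / 100)   # items for the topic

print(allocation)  # {'Fractions': 20, 'Decimals': 10, 'Percentages': 10}
```

With 8 total hours, a topic taught for 4 hours gets 50% of the items, so 20 of the 40 items.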
DEVELOPMENT OF ASSESSMENT TOOLS : a summary by Leofer
Assessment tools are known to be one of the most effective means of gathering information about a student's performance for evaluation. That is why these tools should be well developed, in order to obtain the best quality of assessment and evaluation for effective teaching and learning.
A test, defined as a systematic procedure for measuring an individual's behavior, is the most common example of such a tool. There are several types of tests, including the multiple choice test, binary test, matching type test, cloze test, completion test, and essay test, each with different objectives and target assessment areas.
Multiple Choice Test is a test wherein the student is asked to select the correct answer out of the given
choices in the list. Multiple choice items consist of a stem and a set of options.
Stem - the beginning part of the item that presents the item as a problem to be solved
Options - the possible answers that the examinee can choose from
· key – the correct answer
· distractors – the incorrect answers
The multiple choice type of test may take several forms, such as the case-based set, the one-best-answer item, and the matching set, all constructed by following the guidelines for writing this type of test.
Binary Test
The binary-choice test (true or false) is used to assess a student's ability to recognize the accuracy of a declarative statement. This type of test requires the respondent to recognize and mark an item as true or false. Other possible option pairs are agree or disagree, yes or no, valid or invalid, fact or opinion, and cause or effect. Questions of this type are well suited for measuring the knowledge, comprehension, and application levels of understanding.
Types of Binary Test
1. Simple True or False
2. Modified True or False
3. True or False with Correction
4. Cluster True or False
5. True or False with Options
6. Fact or Opinion
7. Identifying Inconsistencies in the Paragraph
Matching Type Test is a test wherein the examinee associates an item in one column with a choice in the second column. This type of test is simple to construct and score and is well suited to measuring associations.
Types of Matching Type Test
1. Perfect Matching
2. Imperfect Matching
3. Sequencing Matching
4. Multiple Matching
Cloze Test
A cloze test measures a student's comprehension abilities through a short text with blanks where some of the words should be, asking the student to fill in the blanks. The test requires the respondent to build an internal representation of the text and to put the words together in a meaningful way, so that he or she can infer which words belong in the blanks.
Types of Cloze Test
1. Fixed-ratio Deletion
2. Selective Deletion (also known as rational cloze)
3. Multiple-choice Cloze
4. Cloze Elide and the C-Test
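Fixed-ratio deletion, the first type above, can be sketched as a small routine that blanks out every n-th word of a passage. The sample sentence and the deletion ratio here are invented for illustration.

```python
# A minimal sketch of fixed-ratio deletion for a cloze test:
# every n-th word is replaced with a blank, and the deleted
# words become the answer key.
def make_cloze(text, n=5):
    words = text.split()
    answers = []
    for i in range(len(words)):
        if (i + 1) % n == 0:       # delete every n-th word, counting from 1
            answers.append(words[i])
            words[i] = "_____"
    return " ".join(words), answers

passage = "The quick brown fox jumps over the lazy dog near the river"
cloze, answers = make_cloze(passage, n=4)
print(cloze)    # blanks replace every fourth word
print(answers)  # ['fox', 'lazy', 'river']
```

Selective (rational) deletion would instead choose which words to blank by hand, targeting the vocabulary or structures the teacher wants to assess.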
Completion Test is a test that requires the student to answer a question or finish an incomplete statement by filling in the blank with the correct word or phrase. In this type of test, the lower levels of cognitive ability of the student are efficiently measured.
Essay Test is a type of test which gives freedom to the students to respond within broad limits. It is a type of assessment used to comment on student’s progress, the quality of their thinking, the depth of their understanding, and the difficulties they may be having. (McKeachie, 1986)
Pointers in Writing Essay Test
· Specify limitations.
· Structure the task.
· Make each item relatively short and increase the number of items.
· Give all the students the same essay questions if the content is relevant.
· Ask questions in a direct manner.
Four Ways of Rating an Essay Test
1. Analytic or Point system
2. Universal or Holistic Approach
3. Sorting Method
4. Demerits
OTHER ASSESSMENT TOOLS AND TECHNIQUES
Rating scale is a tool often used in recording the results of observations. It is easily applied in collecting self-observation or self-report data. Under this category are numerical scales, graphic scales, and checklist scales, which are the most frequently used in educational settings.
Questionnaire is another tool, used for the purpose of gathering information from respondents. The gathered data suggest ways in which educational resources might be usefully directed to improve student learning.
Sociogram is used to provide additional information regarding students' social interactions. It is a helpful tool for teachers in determining the social makeup of the class, and is also known as a friendship chart.
Interest Inventory is used in career planning; it assesses one's likes and dislikes across a variety of activities, objects, and types of people. Its purpose is to give insight into a person's interests and to help one generate and maintain self-motivation.
Anecdotal Record is a written record kept in a positive tone of a child’s progress based on milestones particular to that child’s social, physical, aesthetic and cognitive development.
CHARACTERISTICS OF A GOOD TEST : a summary by Sheevie Marielle
A good test requires validity, reliability, objectivity, comprehensiveness, simplicity, and scorability. Constructing one is like taking different components and putting them together in a creative way.
This lesson presents the different characteristics of a good test and will guide future teachers in organizing and analyzing the different phases of a test. A good test is needed to sort knowledge from mere information; knowing the outcome of a test also tells us whether the lesson objectives were reached.
Validity refers to the degree to which a test measures what it is supposed to measure. Test validity, or the validation of a test, explicitly means validating the use of a test in a specific context. There are several types of validity. Content validity is usually established by matching the test items against the content covered in the syllabus, course of study, or textbook. Criterion-related validity makes a prediction about how the examinee will perform, based on our theory of the construct. Construct validity is the degree to which the test measures the attribute or quality it is supposed to measure; it depends on how the author constructs the test questions and the way they are rated.
Reliability is the degree of consistency of a test. Test reliability refers to the consistency of the scores obtained by the same person when retested with the same test or with an equivalent form of the test. Several factors affect the reliability of a test: adequacy, objectivity, testing conditions, test administration procedures, and the difficulty of the test. Reliability can be improved by increasing the number of test items, increasing the heterogeneity of the student group, making the test of average difficulty, scoring the test objectively, and setting time limits for completing the test. On the whole, the reliability and validity of a test must go hand in hand.
ANALYZING AND USING OF TEST ITEM DATA : a summary by Josephine
Constructing test items without analyzing the resulting data is a wasted effort. The factors involved in analyzing test item data must be considered so that the items become reliable and valid.
Module 6 tackles analyzing and using test item data. It illustrates what item analysis is, along with its benefits, purposes, elements, and procedures. It also discusses how to compute the index of difficulty (Df) and the index of discrimination (Di) measured for each test item. Based on these results, a test item may be interpreted as retained, revised, or rejected. Lastly, the module tackles interpreting students' test scores, whether through raw scores, percentile scores, transmutation, or ranking.
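The two indices can be sketched using the common upper-group/lower-group method. This is an assumed formulation: textbooks differ on the exact formulas and on the cut-offs for retaining, revising, or rejecting an item, and the counts below are invented.

```python
# A hedged sketch of the index of difficulty (Df) and the index of
# discrimination (Di) for one test item, using the upper-group/
# lower-group method.
def item_analysis(upper_correct, lower_correct, group_size):
    df = (upper_correct + lower_correct) / (2 * group_size)  # difficulty
    di = (upper_correct - lower_correct) / group_size        # discrimination
    return df, di

# Example: 10 students in each of the upper and lower groups;
# 8 upper-group and 4 lower-group students got the item right.
df, di = item_analysis(upper_correct=8, lower_correct=4, group_size=10)
print(df)  # 0.6 -> 60% of the two groups answered correctly
print(di)  # 0.4 -> the item separates high from low scorers
```

A Di near zero (or negative) usually signals an item to revise or reject, since low scorers answered it as often as high scorers did.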
EDUCATIONAL STATISTICS : a summary by Jhonna
Statistics as a discipline is the development and application of methods to collect, analyze and interpret data. Modern statistical methods involve the design and analysis of experiments and surveys, the quantification of biological, social and scientific phenomenon and the application of statistical principles to understand more about the world around us (http://statistics.unl.edu/whatis.shtml)
Descriptive statistics are used to describe the basic features of the data in a study. Descriptive statistics are typically distinguished from inferential statistics. With descriptive statistics you are simply describing what is or what the data shows. With inferential statistics, you are trying to reach conclusions that extend beyond the immediate data alone. For instance, we use inferential statistics to try to infer from the sample data what the population might think. (http://www.socialresearchmethods.net/kb/statdesc.php)
Definition of Measures of Central Tendency
- A measure of central tendency is a measure that tells us where the middle of a bunch of data lies.
- The three most common measures of central tendency are the mean, the median, and the mode.
- Mean: Mean is the most common measure of central tendency. It is simply the sum of the numbers divided by the number of numbers in a set of data. This is also known as average.
- Median: Median is the number present in the middle when the numbers in a set of data are arranged in ascending or descending order. If the number of numbers in a data set is even, then the median is the mean of the two middle numbers.
- Mode: Mode is the value that occurs most frequently in a set of data.
(www.icoachmath.com/math_dictionary/measures_of_central_tendency.html)
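The three measures above can be illustrated with Python's standard statistics module; the list of scores is invented.

```python
# A quick sketch of the three measures of central tendency
# described above, applied to a made-up set of test scores.
import statistics

scores = [85, 90, 78, 90, 88, 76, 90]

mean = statistics.mean(scores)      # sum of scores / number of scores
median = statistics.median(scores)  # middle value when sorted
mode = statistics.mode(scores)      # most frequent value

print(mean, median, mode)  # median is 88, mode is 90
```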
Correlation is a measure of the relation between two or more variables. The measurement scales used should be at least interval scales, but other correlation coefficients are available to handle other types of data. Correlation coefficients can range from -1.00 to +1.00. The value of -1.00 represents a perfect negative correlation while a value of +1.00 represents a perfect positive correlation. A value of 0.00 represents a lack of correlation.(www.statsoft.com/textbook/basic-statistics)
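Pearson's r, the most common correlation coefficient with the -1.00 to +1.00 range described above, can be computed by hand as a sketch. The paired data (hours studied versus quiz scores) are invented.

```python
# A hedged sketch of Pearson's correlation coefficient r,
# computed from deviations about each variable's mean.
import math

x = [1, 2, 3, 4, 5]   # hours studied (invented)
y = [2, 4, 5, 4, 5]   # quiz scores (invented)

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# covariance numerator and the two spread terms in the denominator
num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
den = math.sqrt(sum((xi - mean_x) ** 2 for xi in x)) * \
      math.sqrt(sum((yi - mean_y) ** 2 for yi in y))

r = num / den
print(round(r, 2))  # 0.77 -> a fairly strong positive correlation
```

A value near +1 would mean the two variables rise together almost perfectly; a value near 0 would mean no linear relationship.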
RUBRICS, PORTFOLIO AND PERFORMANCE BASED ASSESSMENT : a summary by Leofer
There are ample of assessment tools that are deemed helpful in the enhancement and improvement of teaching-learning process. Among which are rubrics, portfolios and performance- based assessment tools like individual or group projects, student logs and journals.
Rubrics – an assessment tool for communicating expectations of quality. It supports student self-reflection and self-assessment as well as communication between the assessor and the assessee.
Components of Rubrics:
1. Performance Element: the major, critical attributes which focus upon best practice
2. Scale: the possible points to be assigned (high to low)
3. Criteria: the conditions of the performance that must be met for it to be considered successful
4. Standard: a description of how well the criteria must be met
5. Descriptors: statements that describe each level of performance
6. Indicators: specific, concrete examples or tell-tale signs of what to look for at each level of the performance
Types of Rubrics:
1. Analytic Rubrics
2. Holistic Rubrics
3. General Rubrics
4. Task Specific Rubrics
Steps in Preparing and Using Rubrics
Step 1: Select a process or product to be taught.
Step 2: State performance criteria for the process or product.
Step 3: Decide on the number of scoring levels for the rubric, usually three to five.
Step 4: State description of the performance criteria at the highest level of pupil's performance.
Step 5: State descriptions of the performance criteria at the remaining level.
Step 6: Compare each pupil's performance to each scoring level.
Step 7: Select the level closest to the pupil's actual performance.
Step 8: Grade the pupil.
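Steps 6 and 7 above can be sketched as picking the scoring level closest to an observed rating. This is a simplified, hypothetical one-criterion rubric; the descriptors and levels are invented.

```python
# A minimal sketch of comparing a pupil's performance to the
# scoring levels of a one-criterion rubric (steps 6-7) and
# selecting the closest level.
rubric = {
    3: "Ideas are clear, organized, and fully supported",
    2: "Ideas are mostly clear with some supporting detail",
    1: "Ideas are unclear and lack support",
}

def select_level(observed_score, rubric):
    # pick the rubric level whose score is closest to the observed rating
    return min(rubric, key=lambda level: abs(level - observed_score))

level = select_level(2.6, rubric)
print(level, "-", rubric[level])  # 3 - the level nearest the performance
```

In practice the comparison is a judgment against the descriptors, not an arithmetic one; the code only mirrors the "select the closest level" logic.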
Portfolio- a purposeful collection of student work that exhibits the student's efforts, progress and achievements in one or more areas of the curriculum.
Types of Portfolios:
1. Documentation Portfolio
2. Process Portfolio
3. Showcase Portfolio
Portfolio assessment allows teachers to witness students' achievements in ways that standardized or state testing often cannot, such as the development of skills and strategies, and the cognitive process.
Performance- Based Assessment – the direct, systematic observation of an actual student performance and the rating of that performance according to previously established performance criteria. In this type of assessment, students are asked to perform a performance task or to create a product.
Ways to Record the Results of PBAs:
1. Checklist approach
2. Narrative/ Anecdotal approach
3. Rating Scale approach
4. Memory approach
Forms of Performance- Based Assessment:
1. using observation in the assessment process
2. individual or group projects
3. portfolios
4. performance
5. student logs
6. journal
GRADING AND REPORTING PRACTICES : a summary by Sheevie Marielle
Grading, or the marking of learners' performance, is done for quantifiable reasons and purposes, guided by criteria and standards for giving grades. Awarding grades has a number of advantages over awarding numerical marks: it considerably reduces inter- and intra-examiner variability in marking.
It also compensates for the imperfection of the tools used for assessment. Statistical research on assessment techniques indicates that there is a possibility of variation in the scores awarded to individuals; putting students of similar potential into the same ability bands (grades) automatically takes care of these aberrations in assessment techniques. Lastly, it reduces undesired and unsound comparisons over small differences in marks. Grade inflation, by contrast, is the arbitrary assignment of higher grades for work that would have received lower grades in the past; the higher grades do not reflect a genuine improvement in student achievement. Only with systematic research can it be determined whether rising grades are a result of grade inflation or of higher achievement.
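Placing numerical marks into ability bands can be sketched as follows; the band boundaries are invented, since real grading scales vary by school.

```python
# A hedged sketch of mapping numerical marks to ability bands
# (letter grades). The cut-offs are hypothetical examples.
def to_grade(mark):
    bands = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    for cutoff, grade in bands:
        if mark >= cutoff:
            return grade
    return "F"

marks = [93, 88, 71, 59]
print([to_grade(m) for m in marks])  # ['A', 'B', 'C', 'F']
```

Note how 88 and 80 would land in the same band: small differences in marks no longer produce different grades, which is the point made above.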
Thus, grades and marks are indicators of how much learning a learner has attained within classroom instruction.