Question 2
Like performance assessment, classroom assessment provides evidence of student learning. It is a formative form of assessment geared towards helping teachers gauge students' progression and performance in the classroom. It is important because it enables teachers to effectively tailor lessons for optimal learning opportunities for students. The principles of classroom assessment discussed in this chapter are validity and reliability. Both principles are integral: a test can be reliable without being valid, but it cannot be valid unless it is also reliable. Together, they ensure trustworthy and dependable assessment in the classroom.
Question 2(a)
Classroom assessment principle of Validity.
“Validity is the degree to which scores on an appropriately administered test support inferences about variation in the construct that the instrument was developed to measure” (Cizek, 2020). Validity in classroom assessment guarantees that assessment tasks and related criteria effectively evaluate students' attainment of the intended learning outcomes at the appropriate level. To be valid, all aspects of assessment must be related to the concept being assessed. An example that illustrates the difference between reliability and validity is: “if your scale is off by 5lbs, it reads your weight every day with an excess of 5lbs. The scale is reliable because it consistently reports the same weight every day, but it is not valid because it adds 5lbs to your weight. It is not a valid measure of your weight” (Phelan & Wren, 2019).
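To make the scale example concrete, here is a minimal sketch (hypothetical numbers, not drawn from Phelan and Wren) showing how readings can be perfectly consistent yet systematically wrong:

```python
# A scale with a constant +5 lb offset: reliable (consistent) but not valid (biased).
true_weight = 150.0
readings = [true_weight + 5.0 for _ in range(7)]  # one reading per day for a week

spread = max(readings) - min(readings)              # day-to-day consistency (reliability)
bias = sum(readings) / len(readings) - true_weight  # systematic error (threatens validity)

print(f"daily readings: {readings}")
print(f"spread across days: {spread} lbs (perfectly consistent, so reliable)")
print(f"average error: {bias} lbs (always 5 lbs too high, so not valid)")
```

The same pattern holds for a test: consistent scores alone do not guarantee that the test measures the intended construct.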
There are four types of validity to consider when performing classroom assessment: face, criterion-related, formative, and sampling validity. Face validity means that, at a glance, the content of the test looks like it is assessing what it is supposed to assess. Criterion-related validity assesses student performance by linking test results to a defined criterion. Formative validity is how well an assessment provides information that can be used to improve future lessons. Sampling validity involves covering a broad range of areas within the concept being assessed; it entails sampling from all relevant domains.
Classroom assessment principle of Reliability.
Reliability relates to consistency of measurement: “Reliability refers to how well a score represents an individual’s ability, and within education, ensures that assessments accurately measure student knowledge. Reliable scores help students grasp their level of development, and help instructors improve their teaching effectiveness” (“Developing Reliable Student Assessments | Poorvu Center for Teaching and Learning,” n.d.). In the article “Exploring reliability in academic assessment,” Colin Phelan and Julie Wren define the different types of reliability as follows (a short computational sketch after the list illustrates how these estimates can be obtained):
- Test-retest reliability is a measure of
reliability obtained by administering the same test twice over some time
to a group of individuals. The scores from Time 1 and Time 2 can
then be correlated to evaluate the test for stability over time.
- Parallel forms reliability is a measure of
reliability obtained by administering different versions of an assessment
tool (both versions must contain items that probe the same construct,
skill, knowledge base, etc.) to the same group of individuals. The
scores from the two versions can then be correlated to evaluate the
consistency of results across alternate versions.
- Inter-rater reliability is a measure of
reliability used to assess the degree to which different judges or raters
agree in their assessment decisions. Inter-rater reliability is
useful because human observers will not necessarily interpret answers the
same way; raters may disagree as to how well certain responses or material
demonstrate knowledge of the construct or skill being assessed.
- Internal consistency
reliability is
a measure of reliability used to evaluate the degree to which different
test items that probe the same construct produce similar results.
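As a rough illustration of how these reliability estimates are commonly computed (this sketch is not from Phelan and Wren's article, and all score data are invented), the Python example below calculates a test-retest correlation, a simple percent agreement between two raters, and Cronbach's alpha as one index of internal consistency:

```python
import numpy as np

# Test-retest (or parallel forms): correlate the same students' scores from two administrations.
time1 = np.array([72, 85, 90, 64, 78, 88])   # hypothetical scores at Time 1
time2 = np.array([75, 83, 92, 60, 80, 86])   # hypothetical scores at Time 2
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# Inter-rater reliability: simple percent agreement between two raters' pass/fail judgements.
rater_a = np.array([1, 1, 0, 1, 0, 1])
rater_b = np.array([1, 1, 0, 0, 0, 1])
percent_agreement = np.mean(rater_a == rater_b)

# Internal consistency: Cronbach's alpha over an items-by-students score matrix.
def cronbach_alpha(item_scores):
    """item_scores: rows are items, columns are students."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[0]                          # number of items
    item_vars = item_scores.var(axis=1, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=0).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

items = np.array([
    [3, 4, 5, 2, 4, 5],   # item 1 scores for six students
    [2, 4, 5, 1, 3, 5],   # item 2
    [3, 5, 4, 2, 4, 4],   # item 3
])

print(f"test-retest correlation: {test_retest_r:.2f}")
print(f"inter-rater agreement:   {percent_agreement:.0%}")
print(f"Cronbach's alpha:        {cronbach_alpha(items):.2f}")
```

Higher values of each coefficient indicate more consistent results; in practice a teacher would substitute real score data for the invented numbers above.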