Submission Date

4-29-2019

Document Type

Paper - Restricted to Campus Access

Department

Neuroscience

Second Department

Psychology

Adviser

Jennifer Stevenson

Second Adviser

Joel Bish

Committee Member

Jennifer Stevenson

Committee Member

Joel Bish

Committee Member

Robin Clouser

Department Chair

Ellen Dawley

Department Chair

Brent Mattingly

Project Description

Evaluating the acquisition of student knowledge is a pivotal part of the educational process, as it is essential to determine what knowledge students have actually acquired and what needs further elaboration. Traditionally, student learning has been evaluated through explicit means (e.g., quizzes and tests); however, these assessment methods may have underlying disadvantages, one of which is that it is often very challenging to measure higher-level critical thinking that requires integration of material. Recent learning research has begun to investigate the efficacy of more implicit forms of evaluating learning acquisition. One implicit method of evaluation is the Structural Assessment of Knowledge (SAK) approach. There is a paucity of extant research directly comparing implicit and explicit evaluations, which was the purpose of the current study. More specifically, the current study directly compared the SAK approach to traditional explicit quizzes in three neuroscience domains: structure-function relationships, statistics, and neuroscience techniques. This comparison was explored in undergraduate students across year (sophomore versus senior) and major (neuroscience versus psychology). Finally, this study extends previous research that found differences in SAK performance based on the type of expert comparison group. Both the implicit and explicit assessments yielded similar patterns of learning across domains. Additionally, seniors generally outperformed sophomores, and neuroscience majors outperformed psychology majors in the neuroscience-tailored domains (i.e., structure-function relationships and neuroscience techniques). Moreover, results on the SAK replicated previous findings regarding differences among expert comparison groups. These results have implications for using the SAK as a supplementary tool for evaluating learning.
