Date of Award

12-9-2004

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Middle-Secondary Education and Instructional Technology

First Advisor

Stephen W. Harmon, Ed.D. - Chair

Second Advisor

Mary B. Shoffner, Ph.D.

Third Advisor

T. Chris Oshima, Ph.D.

Fourth Advisor

William Evans, Ph.D.

Abstract

This work outlines the theoretical underpinnings, method, results, and implications of constructing a discussion list analysis tool that categorizes online, educational discussion list messages into levels of cognitive effort.

Purpose: The purpose of such a tool is to provide evaluative feedback to instructors who facilitate online learning, to researchers studying computer-supported collaborative learning, and to administrators interested in correlating objective measures of students' cognitive effort with other measures of student success. This work connects computer-supported collaborative learning, content analysis, and artificial intelligence.

Method: Broadly, the method is a content analysis in which the coded data are modeled using artificial neural network (ANN) software. A group of human coders categorized online discussion list messages, and inter-rater reliability was calculated among them. That reliability figure serves as a benchmark for how well the ANN categorizes the same messages the human coders categorized. The reliability between the ANN model and the group of human coders is then compared with the reliability among the human coders to determine how well the ANN performs relative to humans.

Findings: Two experiments were conducted in which ANN models were constructed to model the decisions of human coders, and they revealed that the ANN, under noisy, real-life circumstances, codes messages with near-human accuracy. In experiment one, the reliability between the ANN model and the group of human coders, measured with Cohen's kappa, was 0.519, while the human reliability values ranged from 0.494 to 0.742 (M=0.6). The human content analysis was then refined with the goal of improving reliability among coders. After these improvements, the humans coded messages with kappa agreement ranging from 0.816 to 0.879 (M=0.848), and the kappa agreement between the ANN model and the group of human coders was 0.70.
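
As a minimal sketch of the reliability statistic referenced above, the following Python snippet (not the dissertation's actual tooling; the labels and the use of scikit-learn are assumptions for illustration) shows how agreement between ANN-assigned and human-assigned cognitive-effort categories could be quantified with Cohen's kappa.

    # Illustrative sketch only: hypothetical labels, not data from the study.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical cognitive-effort categories (1 = lowest, 4 = highest)
    # assigned to ten discussion list messages.
    human_codes = [1, 2, 2, 3, 4, 1, 3, 2, 4, 3]
    ann_codes   = [1, 2, 3, 3, 4, 1, 3, 2, 4, 2]

    # Cohen's kappa corrects raw percent agreement for chance agreement;
    # the same statistic applies to human-human and ANN-human comparisons.
    kappa = cohen_kappa_score(human_codes, ann_codes)
    print(f"Cohen's kappa: {kappa:.3f}")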

DOI

https://doi.org/10.57709/1059079

Included in

Education Commons
