Author ORCID Identifier

0009-0002-5605-8271

Date of Award

5-6-2024

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Applied Linguistics and English as a Second Language

First Advisor

Sara Cushing

Second Advisor

Eric Friginal

Third Advisor

Diane Belcher

Fourth Advisor

Hongli Li

Abstract

Professional background may play a part in rater subjectivity, yet there has been limited exploration of how non-teacher (lay) professionals evaluate texts written by English as a second language (ESL) learners. In the present study, I investigated lay professionals' impressions of L2 learner writing on a workplace task and a classroom writing task.

I measured differences between raters with English teaching backgrounds and raters with lay backgrounds in their evaluations of L2 English writers' functional adequacy and in the writing features that influenced their ratings. I analyzed data from 100 teacher raters and 104 lay raters who gave functional adequacy (FA) ratings on 20 texts randomly selected from a corpus of 400 texts. The corpus, compiled from an online English course, consisted of learner responses to two tasks: meeting summaries and personal essays. The evaluation instrument asked participants to rate their certainty of the texts' meaning, the adequacy and relevance of the information, the comprehensibility of the texts, and overall communicative effectiveness. I also used multi-dimensional analysis (MDA) to describe the functional dimensions (FDs) present in each response type and used regression analyses to determine whether there were significant relationships between FD indices and FA ratings. The MDA consisted of measuring text features and then grouping those features into FDs. The meeting summary corpus had two dimensions: involved-informational focus and recommendation-factual recount. The personal essay (extreme sports) corpus had three dimensions: involved-informational focus, hypothetical-factual, and person-topic focus. Finally, qualitative data were collected in the form of rater comments and analyzed for instances of raters focusing on rhetorical and linguistic features, as well as for their metacognitive strategies.
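For readers unfamiliar with this kind of regression step, the sketch below illustrates the general idea of relating dimension scores to FA ratings. It is not the study's actual analysis script; the variable names and data are hypothetical placeholders.

```python
# Illustrative sketch only: regressing functional adequacy (FA) ratings on
# functional dimension (FD) scores. Variable names and data are hypothetical,
# not the study's dataset or procedure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Placeholder data: 20 texts, two FD scores per text (e.g., the two
# meeting-summary dimensions), plus a mean FA rating per text.
fd_scores = rng.normal(size=(20, 2))      # columns: Dim1, Dim2
fa_ratings = rng.uniform(1, 6, size=20)   # mean FA rating per text

X = sm.add_constant(fd_scores)            # add intercept term
model = sm.OLS(fa_ratings, X).fit()       # ordinary least squares fit
print(model.summary())                    # coefficients, p-values, R^2
```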

The findings show that, overall, the two rater groups behaved similarly; however, there was some evidence that professional background influenced scoring decisions. For example, giving corrective feedback was salient in the teacher data but not in the lay data, whereas lay raters applied domain-specific knowledge to interpret the information. Regarding differences in rating behavior between prompts, the workplace prompt was rated more severely than the personal essay, and raters communicated genre expectations specific to each prompt.

DOI

https://doi.org/10.57709/36994759

Available for download on Friday, May 01, 2026
