Author ORCID Identifier

https://orcid.org/0000-0001-9490-2897

Date of Award

Summer 8-11-2020

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Computer Information Systems

First Advisor

Mark Keil

Second Advisor

Likoebe Maruping

Third Advisor

J.J. Po-An Hsieh

Fourth Advisor

Aaron M. Baird

Fifth Advisor

Lingyao (Ivy) Yuan

Abstract

Investment in AI agents has steadily increased over the past few years, yet adoption of these agents has been uneven. Industry reports show that the majority of people do not trust AI agents with important tasks. While existing IS theories explain users’ trust in IT artifacts, several recent studies have raised doubts about the applicability of these theories in the context of AI agents. At first glance, an AI agent might seem like any other technological artifact. A more in-depth assessment, however, exposes fundamental characteristics that set AI agents apart from previous IT artifacts. The aim of this dissertation, therefore, is to identify the AI-specific characteristics and behaviors that foster or hinder trust and distrust, thereby shaping users’ behavior in human-AI interaction. Using a custom-developed conversational AI agent, this dissertation extends the human-AI literature by introducing and empirically testing six new constructs: AI indeterminacy, task fulfillment indeterminacy, verbal indeterminacy, AI inheritability, AI trainability, and AI freewill.

DOI

https://doi.org/10.57709/17866661
