
Ph.D. Proposal: Theodore Jensen

March 8 @ 10:00 am - 11:00 am EST

Doctoral Dissertation Proposal

Title: The Effect of Humanness Design on User Perceptions of Automation Trustworthiness and Trust Calibration

Ph.D. Candidate: Theodore Jensen

Major Advisor: Dr. Mohammad Khan

Associate Advisors: Dr. Ross Buck, Dr. Kristine Nowak, Dr. Dong-Guk Shin

Review Committee Members: Dr. Caiwen Ding and Dr. Mukul Bansal

Date/Time: Monday, March 8th, 2021, 10:00 am


Meeting link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m1d91a3cfc07391aa231cf34fc89d8288

Password: PjPnQnmi764

Join by phone: +1-415-655-0002

Access code: 120 478 8265



Automated systems are being employed in an increasing variety of tasks in our daily lives. Trust is a critical factor in human-automation interaction: users look to various cues that inform their willingness to rely on automation to accomplish their goals. Appropriate trust ensures that automation is relied upon only to the extent that it provides benefit, and not relied upon when it may actually hurt performance. Thus, the outcomes of these interactions ultimately depend on the extent to which system design allows for appropriate trust calibration with respect to a system's reliability. Drawing from the Computers as Social Actors (CASA) paradigm, this dissertation examines the effects of humanness design, or the inclusion of humanlike features in a technological interface, on user trust calibration.

We first investigate whether the human-human trustworthiness characteristics of ability, integrity, and benevolence apply to automated trustees. Subsequently, we study how various humanness design features influence these perceptions of trustworthiness as well as reliance on automated systems (i.e., behavioral trust). We clarify the target of user trust by examining whether perceptions of an automated system are the same as perceptions of its developers. Then, we delve into users' perceptions of anthropomorphism (i.e., humanness). We seek to establish whether behavioral cues contribute to perceived anthropomorphism, which can inform how expectations of system behavior may be influenced by subtle design characteristics.

Based on our preliminary findings, we propose a final study investigating the effects of messages deliberately designed for trust calibration on trust appropriateness. We seek to provide recommendations regarding the degree of humanness that should be implemented in order to facilitate users’ accurate perceptions of system reliability and performance. This dissertation will inform increasingly prominent research into trust in automation and AI-based systems by establishing the importance of appropriate trust, while also contributing to system design that supports appropriate trust calibration.
