
Ph.D. Defense: Theodore Jensen

October 15, 2021 @ 9:00 am - 10:30 am EDT

Title: The Effect of Humanness Design on User Perceptions of Automation Trustworthiness and Trust Calibration

Ph.D. Candidate: Theodore Jensen

Major Advisor: Dr. Mohammad Khan

Associate Advisors: Dr. Ross Buck, Dr. Kristine Nowak, Dr. Dong-Guk Shin

Date/Time: Friday, October 15th, 2021, 9:00 am


Meeting link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m25b716570d4cb163e291b56edc383b36

Meeting number: 2624 647 9243

Password: fEQ5QfX52vw

Join by phone: +1-415-655-0002

Access code: 2624 647 9243


Trust is a critical factor in human-automation interaction, as users look to various cues that inform their willingness to rely on automation to accomplish specific goals. Appropriate trust entails that automation is relied upon to the extent that it provides benefit and not relied upon when it may hurt performance. Thus, the outcomes of human-automation interactions ultimately depend on the extent to which interface design allows for the user’s appropriate trust calibration with respect to system reliability. Drawing from the domains of human factors, human-computer interaction, psychology, sociology, and communication, this dissertation examines the effects of humanness design, or the inclusion of features meant to connect or communicate with humans in an automated system’s interface, on user trust calibration.

The first study in this dissertation investigates whether human-human perceived trustworthiness characteristics of ability, integrity, and benevolence apply to automated trustees. Subsequently, a series of experiments observes how humanness design features influence these perceptions of trustworthiness as well as reliance on automated systems (i.e., behavioral trust). The second study clarifies the extent to which users consider system developers when interacting with an automated system. The third and fourth studies delve into users’ perceptions of anthropomorphism (i.e., humanness) and how behavioral and visual cues contribute to perceived anthropomorphism, perceived trustworthiness, and reliance on automation. The final study investigates the efficacy of trust dampening messages, which attempt to lower trust in anticipation of poor system performance, and the role of agent anthropomorphism in the delivery of such trust-calibrating messages.

The key findings are summarized toward recommendations for designers and researchers regarding trust calibration and the facilitation of users’ accurate perceptions of system reliability and performance. This dissertation will inform increasingly prominent research into trust in automation and AI-based systems by establishing the importance of appropriate trust, the role of humanness design in trust calibration, and ultimately by contributing to human-centered design that supports users and fosters positive performance outcomes via appropriate trust.

