Can humans and robots have real intimacy?

Can robots build intimate connections through self-disclosure? A study suggests that people engage in intimate conversations with robots, but this may not lead to favourable attitudes. Gender and intimacy level affect the robot's likability.

In interpersonal communication, sharing deep and personal information is crucial for building social relationships. As conversational robots with human features become more prevalent, a pressing question arises: Can we establish intimate social connections with robots through mutual self-disclosure? A recent study conducted by a group of communication scholars at Shanghai Jiao Tong University suggests that although people can engage in intimate conversations with robots, it may not necessarily lead to favourable attitudes or impressions of them.

The Disclosure-Liking Effect in Developing Relationships

Consider the following situation: Mark and Rachel have just met at a restaurant for the first time. During their conversation, Mark shares a distressing experience from his childhood, and Rachel offers words of comfort and support. Alternatively, Mark could have talked at length about his computer science major while Rachel eats absentmindedly. Which scenario is more likely to lead to a closer relationship? Evidently, the vulnerable and emotionally expressive Mark is more appealing and has a greater chance of forming a romantic bond with Rachel.

According to the Social Penetration Theory, sharing intimate information with someone via self-disclosure can lead to liking them (intimacy → liking effect). Similarly, receiving reciprocal self-disclosure from someone can also result in liking that person (reciprocity → liking effect). This phenomenon is known as the “disclosure-liking effect.”

Is it then possible to utilize the disclosure-liking effects in interpersonal communication to cultivate closer social relationships during human−robot interactions?

Figure 1. Disclosure-liking effect in human−robot interaction

The Disclosure-Liking Effect Does Not Apply to Human−Robot Interactions, but Why?

Scholars in the field of human−computer interaction have established a theoretical framework known as the Computers Are Social Actors (CASA) paradigm, which suggests that individuals interact with computers in the same way as they would with other humans. This implies that people tend to apply social norms, expectations, and behaviours to computers, treating them as if they were human beings.

However, the results of our experiment indicated that even though participants engaged in intimate conversation with the robot, this behaviour did not lead to increased liking, trust, or perceived social attractiveness of the robot compared to participants who had non-intimate conversations with the robot.

After examining the quantitative (i.e., self-reported questionnaire) and qualitative (i.e., dialogue text) data, we inferred that a lack of conversational authenticity could be a significant factor hindering the transfer of the disclosure-liking effect from the interpersonal context to human-robot interaction.

In human-human interactions, the effect of self-disclosure on liking depends on how the disclosure is attributed: interpersonally, dispositionally, or situationally. However, applying these attributions to robots is challenging, since robots are not perceived to have attitudes or stable personality traits. Because people generally believe that robots lack consciousness or a mind, some participants may have attributed the robot's self-disclosure to the experimental setting rather than to a natural conversation. For instance, after the robot disclosed its favourite canteen on campus, one participant responded as follows:

As a robot, you saying these things only make it more obvious that you are a robot. If you want to make yourself more acceptable to humans or act less robotic, please stop repeating what you like about yourself. It’s too ridiculous for a machine.

Figure 2. Experimental setup for the human-robot interaction

What Else Do We Find?

Despite not detecting a disclosure-liking effect, our study yielded several other noteworthy findings. For instance, people still adhere to social norms validated in interpersonal communication when interacting with robots. Reciprocity can positively influence liking of the robot in high-intimacy conversations (e.g., “Who is your best friend? Can you tell me the story of how you became acquainted? I want to know what it is in your heart that keeps your friendship going.”). However, reciprocal self-disclosure from the robot can harm its likability when the conversation’s topics are non-intimate (e.g., “What time do you usually get up and go to bed?”); participants may perceive reciprocal responses from the robot in the small-talk condition as redundant and inappropriate. On the other hand, when participants engage in intimate self-disclosure with the robot, they welcome reciprocal intimate self-disclosure.

Moreover, we found that female participants had more positive attitudes toward the robot than male participants, which may be explained by gender differences in attitudes toward robots. Previous research has shown that men with negative attitudes toward robots tend to avoid interacting with them, which is not true for women. Additionally, it has been found that women who disclose personal information are perceived as more likeable than women who do not, whereas no significant difference in likability has been observed between male disclosers and non-disclosers.

What Would A Lovable Robot Look Like?

The study’s results can assist companies and product developers in creating more user-centric service robots by offering the following insights. Firstly, including self-disclosure capability in a service robot may not enhance the user experience. In fact, it could have adverse effects, especially in routine conversations that lack intimacy.

Moreover, when designing social robots, it is worth noting that female users have a greater preference than male users for a conversational robot with a high level of social skills, or a “schmoozer.” As a result, incorporating an adjustable level of socialness in a conversational robot can better accommodate the social preferences of both male and female users.

Finally, with the advancements in machine learning and large language models, media technologies have become capable of communicating with humans more naturally. As a result, we have officially entered an era of human−machine communication. Academia and industry must continue focusing on understanding how this type of communication impacts our relationships with other humans and ourselves.


Journal reference

Mou, Y., Zhang, L., Wu, Y., Pan, S., & Ye, X. (2023). Does self-disclosing to a robot induce liking for the robot? Testing the disclosure and liking hypotheses in human–robot interaction. International Journal of Human–Computer Interaction, 1-12.

Dr. Yi Mou is an associate professor in the School of Media and Communication at Shanghai Jiao Tong University. Her research interests include new media studies and human–machine communication.

Yuheng Wu is a PhD student in the School of Media and Communication at Shanghai Jiao Tong University and the Department of Media and Communication at City University of Hong Kong. His research focuses on socio-psychological issues related to human-AI interaction and human-machine communication.

Xiaoyu Ye is an MA student in the School of Media and Communication at Shanghai Jiao Tong University. Her research interests include media psychology and human-AI interaction.