Voicing one's true opinions is a struggle that many of us face, especially in a group setting. What intrigued my team of four was whether a robot could be designed to
encourage people to speak their minds. To answer this question, we designed, prototyped and tested a robot, named Rob, that would discourage group
conformity through dissent and expertise. While this project was a collaborative effort, I took the lead in designing Rob's look and the motions that conveyed trust to
participants. Hands down, one of the coolest projects I have ever worked on.
Research - Literature review
While the problem statement was provided to us, none of us knew much about the subject at hand. So, we conducted a literature review of published papers on related
topics in the fields of HCI, HRI and psychology.
The papers we found shaped our perception of the robot's look, role and behavior. For instance, we discovered that people find anthropomorphic robots trustworthy. So,
we ensured that the robot would react and move like a human. The most impactful paper we found was Asch's classic study of group conformity, which used confederates
(people hired by researchers to pose as participants) alongside genuine participants. After narrowing down our findings, we identified two main dimensions to consider
for the robot's role in the study.
∙ Level of dissent: Conforming or dissenting
∙ Perceived expertise: Peer or expert
Questions and Ideation
After tirelessly conducting the literature review, we had a solid grasp of the topic and the questions to be answered. Based on the literature review, we posed two
questions - one design and one research.
∙Design Question: How might we design a robot to reduce the likelihood of conforming to a group?
∙Research Question: How might a robot’s level of dissent and expertise influence group conformity?
Our next step was to suggest an idea for a robot that would address these issues in a group setting. The actions of the robot would ensure that a participant hesitating to
speak their mind could be persuaded to voice their opinion. The questions we posed yielded six broad themes that formed a foundation for the ideas we came up with.
∙Task: What should our participants do to generate conformity?
∙Honesty: How do we push people to say what they truly believe?
∙Logistics: How to count votes/determine majority opinion?
∙Environment: How to create a collaborative/safe environment?
∙Robot: How do we get our robot to carry out a function?
∙Other: Anything that doesn't quite fit in its own theme.
After finalizing these themes, the team gathered for 'The Mothership' - ideation. Each of us came up with 50 crazy ideas for our robot and wrote them down on post-it
notes. Some ideas were original, but a majority were inspired by those of other teammates. After grouping the post-it notes under their corresponding themes,
we revised the themes one more time. For instance, 'Robot' broke down into 'Dissenting' and 'Conforming'. The largest cluster was 'Collaboration', a new cluster that came
up during the revision and whose focus was to ensure that constructive debate could take place easily.
Based on all the clusters and ideas, we started narrowing down the features and actions of our robot. During the discussion, we realized the importance
of settling on a procedure for testing. We felt that a task similar to Asch's would be the easiest to implement. Being the Harry Potter nerd that I am, I suggested the
idea of combining Asch's study with jelly beans, which was readily accepted by the group.
After the ideation process, we started focusing on the design of the Asch study setting and the robot. Each of us came up with ten sketches and paper prototypes of robots
and ten ideas for Asch-related studies. For the Asch studies, we took into account the findings from the literature survey, namely level of dissent and perceived expertise.
For the robot to be perceived as an expert, it was enough to simply tell the user that the robot ran complex algorithms. However, for the user to treat the robot like a
peer, the robot needed to look trustworthy. Hence, the robot's designs were anthropomorphic. Apart from its look, the robot also needed to move like a human, which
would further increase trust. As a result, mannerisms representing waving, listening, processing, paying attention and providing opinions were incorporated into the designs.
Since I had the most design experience in the group, my contributions in this portion of the project were central. My insights on the designs for the robot and study led
to us finalizing the following:
∙Study: The robot would be part of a group with one participant and two confederates that had to guess the number of jelly beans in a jar. The robot would be introduced
either as an expert or as a peer. First, the participant and confederates would note down their guesses. Then, each group member, including the robot, would provide their
guess (the participant going last). The robot would either conform with the group or dissent. The study would test the level of conformity that the participant exhibited
depending on the robot's level of conformity, the feasibility of the guesses given and the robot's status as a peer or an expert.
∙Robot: The robot would be primarily circular in form to promote anthropomorphism, with a hand on one side and a cavity in the head to hold an iPhone. The robot would
display jelly bean guesses and greetings on the phone screen. It would also perform various trust-building motions, such as waving its hand while displaying 'HELLO' when
being introduced, turning its body toward the person who was guessing, and nodding or shaking its head to agree or disagree with the person's
guess. Further, the robot's face would be displayed on the screen: its eyes would blink, and its mouth would turn up if it agreed with a guess and frown if it disagreed.
Prototype and MTurk Study
After the designs were finalized, the engineers (Kevin and Priscilla) implemented the prototype. Servomotors placed within the body and head afforded all of the robot's
movements. Arduino programming allowed the research team to control Rob's movements during study sessions via a Wizard of Oz protocol: the robot would appear
autonomous but was actually operated by one of the teammates.
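As a concrete illustration, a Wizard of Oz loop like this one can be sketched as a small operator console that maps key presses to motion commands sent to the Arduino over serial. The key bindings, command names and serial protocol below are hypothetical assumptions for illustration, not the team's actual implementation:

```python
# Hypothetical Wizard-of-Oz console: the hidden operator presses a key,
# and the matching motion command is encoded for the Arduino's serial port.
COMMANDS = {
    "w": "WAVE",   # wave hand during introductions
    "n": "NOD",    # nod toward the current speaker (agree)
    "s": "SHAKE",  # shake head at the current speaker (disagree)
    "t": "TURN",   # turn body toward the current speaker
}

def encode(key):
    """Translate an operator key press into a newline-terminated
    serial command, or None for unmapped keys."""
    cmd = COMMANDS.get(key)
    return (cmd + "\n").encode("ascii") if cmd else None
```

In a real setup the encoded bytes would be written to the Arduino with a serial library (e.g. pyserial's `Serial.write`), and the Arduino sketch would parse each command and drive the corresponding servomotor.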
After the prototype was implemented, we conducted an MTurk study that asked survey takers questions related to the robot's look and feel. The survey was conducted
primarily to fine-tune the robot's interactions and the impressions it made on people. We incorporated multiple videos of users interacting with the robot to give
respondents a clearer picture. We named the robot 'Rob' while preparing the study to make it easier for respondents to refer to it.
Our experimental paradigm drew inspiration from Asch’s early work on conformity. So, we enlisted the help of two confederates to create a scenario of group conformity
with Rob. First, an experimenter would ask each person to introduce themselves. Rob would also introduce itself by raising its hand and displaying a message that said:
“Hey! I’m Rob!”. Then, the experimenter would give a brief overview of the study task, noting that each participant, including Rob, would have to guess the number of
purple jelly beans in a jar filled with purple and yellow jelly beans.
Participants in the expert condition would be told that Rob was an expert at counting because of a special algorithm. Those in the non-expert condition were not given any
information about Rob’s counting capabilities. Then, the experimenter would instruct each participant to write down their responses. Unbeknownst to the participant, they
were the only person in the room who would be making genuine guesses. The two confederates in the room were instructed to guess similarly across all eight trials.
Furthermore, based on the dissent condition, Rob would either agree with the confederates’ guesses or disagree once confederates began to give unreasonable guesses.
When Rob agreed with a response, he would nod his head in the direction of the person who was guessing. On the other hand, when Rob disagreed with a response, he
would shake his head in the direction of the person who was guessing.
To ensure that the participant was always exposed to the guesses of the confederates, the experimenter asked the confederates, Rob, and then the participant to say their
guesses out loud. Once the eight trials were over, the experimenter would hand over a survey for the participants to fill out. This survey asked participants about their
opinions of the task, Rob, the group, as well as some basic demographic questions. Confederates were instructed to pretend to fill out the survey as the participant
actually completed all of the measures. Once the participant completed the survey questions, the experimenter made sure to inform him or her about the true intentions
of the study.
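To make the 2×2 design concrete, the session logic can be sketched in a few lines. The condition names, speaking order and guessing rule below are illustrative assumptions drawn from the description above, not the team's actual protocol code:

```python
# Sketch of the 2x2 between-subjects design and the per-trial speaking order.
# All names and the guessing rule are illustrative, not from the study itself.
from itertools import product

EXPERTISE = ["expert", "peer"]          # how Rob is introduced
DISSENT = ["conforming", "dissenting"]  # whether Rob follows the confederates

def conditions():
    """Enumerate the four experimental conditions."""
    return list(product(EXPERTISE, DISSENT))

def trial_order():
    """Speaking order in each trial: confederates, then Rob, participant last."""
    return ["confederate_1", "confederate_2", "rob", "participant"]

def rob_guess(confederate_guess, dissent, reasonable_guess):
    """Conforming Rob echoes the confederates; dissenting Rob pushes back
    with a reasonable guess once theirs become unreasonable."""
    return confederate_guess if dissent == "conforming" else reasonable_guess
```

Enumerating the conditions this way makes it easy to see that expertise and dissent vary independently, so each participant sees exactly one of four combinations.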
We recruited 6 undergraduate students from a private American university to participate in our pilot study. However, due to technical difficulties, data from one of the six
participants was removed, leaving us with a total sample of 5 participants. Our preliminary results from the experimental pilot study suggest that Rob may, indeed, reduce
group conformity. Since Rob had the additional task of being treated as a peer, participants' perceptions of Rob indicate that people may have felt some affect toward
him, especially in the different disagreement conditions.
In particular, we find support for our first hypothesis that a robot dissenting from the group majority will most likely encourage a participant to also dissent. Unfortunately,
we were unable to assess the extent to which Rob’s level of expertise influences group conformity because of our small sample size and technical difficulties.
Hence, our findings have important and interesting implications for group work. Developing robots like Rob that are built to challenge group members' opinions could
help counter group conformity, which has been shown to have negative consequences for group tasks such as creative brainstorming.