Answers from the Crowd: How Credible are Strangers in Social Q&A?

Grace YoungJoo Jeon and Soo Young Rieh
School of Information, University of Michigan

Abstract

Individuals may encounter distinct kinds of challenges in assessing credibility in a social Q&A setting where they interact with strangers. Because people increasingly use social Q&A services to obtain personalized answers from a large pool of unknown people, it is necessary to better understand how they make credibility judgments when seeking information through such services. In this paper, we report preliminary findings from a quasi-field study in which participants were asked to use Yahoo! Answers for one week and were interviewed afterwards. We find that participants' assessment of the credibility of strangers who answered their questions occurred in three different dimensions: attitude, trustworthiness, and expertise. Furthermore, different elements were noticed and interpreted in each dimension of the credibility assessment. Our work provides insights into source credibility assessment in social Q&A settings and implications for the design of social technologies that better support people's online credibility assessment.

Keywords: social Q&A, credibility, information seeking, social media, crowdsourcing

Citation: Jeon, G. Y., & Rieh, S. Y. (2014). Answers from the Crowd: How Credible are Strangers in Social Q&A? In iConference 2014 Proceedings (pp. 663-668). doi:10.9776/14309

Copyright: Copyright is held by the authors.

Contact: yjeon@umich.edu, rieh@umich.edu

1 Introduction

Today, online social tools and services enable people to easily reach the crowd to seek information in the context of their daily lives. An example of such services is the social question-answering (Q&A) service. Social Q&A services such as Yahoo! Answers allow people to meet their information needs by asking questions and receiving answers from other users on a broad range of topics. People are increasingly using social Q&A services because these services enable them to obtain personalized answers to their questions quickly from a large number of people (Harper, Raban, Rafaeli, & Konstan, 2008; Shah, Oh, & Oh, 2008).

Credibility research has found that many people find it difficult to judge the value and credibility of information on the Web based on author, content, and source, owing to a lack of quality control mechanisms and the limited number of available cues (Metzger, 2007; Rieh, 2002). In social Q&A settings, where people interact both with people they do not know and with content created by those people, individuals may encounter different challenges in judging the credibility of information. For example, when evaluating information on social Q&A sites, do people distinguish between the sources of information (i.e., answerers) and the content of answers? Do people become more dependent on new types of social cues in the process of finding credible answers? Prior work has addressed issues surrounding credibility assessment in social Q&A settings, such as the criteria used to evaluate answers and the effect of particular cues on trust in the answerer (Golbeck & Fleischmann, 2010; Kim, 2010; Kim & Oh, 2009). However, we still know relatively little about how people make credibility judgments in this new online environment. The rapid growth of social tools and services that enable interactions with the crowd has magnified the importance of understanding how people assess the credibility of strangers.
We focus on examining how individuals judge the credibility of unknown people who answer their questions in a social Q&A setting. To address this question, we conducted a quasi-field study on a social Q&A service, Yahoo! Answers. The preliminary findings indicate that participants' assessment of the credibility of strangers who answered their questions occurred in three different dimensions: attitude, trustworthiness, and expertise. Moreover, different elements were noticed and interpreted in each dimension of the credibility assessment.

2 Related Work

As a principal component of information quality, credibility is the believability of some information and its source. It is a multi-dimensional construct with two main components: expertise and trustworthiness (Metzger, 2007; Rieh, 2010). Credibility is not a property of information or a source; rather, it is a judgment and perception made by an individual (Metzger, 2007; Rieh, 2010). The prominence-interpretation theory proposed by Fogg (2003) suggests that online credibility assessment entails two phases: noticing an element (prominence) and making a judgment about the noticed element (interpretation). Hilligoss and Rieh (2008) developed a theoretical framework of credibility assessment that includes three distinct levels of credibility judgments: construct, heuristics, and interaction. The construct level relates to how users conceptualize credibility. The heuristics level entails credibility assessment based on general rules of thumb. The interaction level involves effortful assessment of specific source or content cues.
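Fogg's theory is often summarized in a compact multiplicative form: the credibility impact of an element is the product of how likely it is to be noticed and how it is judged once noticed. As a minimal sketch, assuming the overall judgment aggregates over the set of elements a user happens to notice (the summation is our illustrative reading of the theory's iterative process, not notation from Fogg's paper):

\[
\text{Credibility Impact} \;=\; \sum_{e \,\in\, E_{\text{noticed}}} \text{Prominence}(e) \times \text{Interpretation}(e)
\]

On this reading, an element with zero prominence contributes nothing to the judgment, however favorably it might have been interpreted had it been seen.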
In Web environments, it is difficult to identify or authenticate a source of information (Metzger, Flanagin, & Medders, 2010). Source attribution research has emphasized that the source of Web-based information is what or who one believes it to be (Sundar & Nass, 2001); individuals thus tend to distinguish between different levels of sources, and the salience of source attributes at the time of evaluation may affect people's credibility assessment (Flanagin & Metzger, 2007). In this vein, a number of studies have examined the effect of source attribution on credibility assessment in the context of online reviews. People appear to be influenced by information describing reviewers' identity or expertise, available either in a profile or in the content of a review, when assessing the helpfulness of online reviews and the credibility of online reviewers (Forman, Ghose, & Wiesenfeld, 2008; Willemsen, Neijens, & Bronner, 2012).

With regard to credibility judgments in social Q&A settings, studies have reported that people pick up affective cues, such as attitude or tone, that are embedded in questions and answers (Kim, 2010; Kim & Oh, 2009). Furthermore, any cues may be helpful for developing trust in online settings where there is no strong community or where users often lack long-term engagement, as is the case with social Q&A sites (Golbeck & Fleischmann, 2010).

3 Methods

A quasi-field study was conducted in order to obtain data drawn from participants' experiences in the context of their daily lives. Yahoo! Answers (http://answers.yahoo.com/) was selected for this study because it is the largest and most popular social Q&A service. We instructed participants to use Yahoo! Answers for a period of one week and interviewed them at the end of that week. Twenty-one undergraduate students (ages 19 to 24) from a research university in the Midwestern United States participated in this study; one participant was excluded from the data because he or she only answered questions in Yahoo! Answers without posting any. Of the remaining 20 participants, eight (40%) were male and 12 (60%) were female, and the majority (60%) had little or no experience with Yahoo! Answers.

Data were collected through a background questionnaire, interviews, and a post-interview questionnaire. Semi-structured in-person interviews served as the primary source of data, gathering information about participants' overall experience using Yahoo! Answers for this study and about their question asking and answer evaluation process in each episode. The content of the questions participants submitted and the answers they received was also collected.

All interviews were transcribed and coded. The initial set of codes was developed from the interview protocols, and additional codes were then added to the codebook through iterative analyses of the interview transcripts. In the present paper, we report preliminary findings based on the analysis of the interview data, focusing on participants' credibility assessment of the strangers who answered their questions in Yahoo! Answers.
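To make the iterative coding step concrete, the following is a minimal Python sketch under assumptions of our own: the code labels, the CODEBOOK grouping, and the tally_codes helper are hypothetical illustrations of applying a codebook to coded transcript segments and surfacing labels that would prompt a codebook revision; they are not the study's actual codes or tooling.

from collections import Counter

# Hypothetical codebook: top-level dimensions mapped to child codes.
# These code names are illustrative, not the study's actual codebook.
CODEBOOK = {
    "attitude": {"profile_picture", "points_levels", "top_contributor", "act_of_answering"},
    "trustworthiness": {"punctuation", "wording", "links", "way_of_answering"},
    "expertise": {"answer_content", "answer_specificity", "answer_congruence", "profile_history"},
}

def tally_codes(coded_segments):
    """Count how often each dimension's codes were applied across
    interview segments. Each segment is a list of code labels assigned
    by a coder; unknown labels are flagged as candidates for new codes."""
    counts = Counter()
    uncoded = Counter()
    known = {c for codes in CODEBOOK.values() for c in codes}
    for segment in coded_segments:
        for code in segment:
            if code in known:
                dimension = next(d for d, cs in CODEBOOK.items() if code in cs)
                counts[dimension] += 1
            else:
                uncoded[code] += 1  # surfaced in the next codebook iteration
    return counts, uncoded

# Example: three coded segments from one (hypothetical) transcript.
counts, new_candidates = tally_codes([
    ["profile_picture", "links"],
    ["answer_congruence"],
    ["sense_of_humor"],  # not yet in the codebook
])
print(counts)          # Counter({'attitude': 1, 'trustworthiness': 1, 'expertise': 1})
print(new_candidates)  # Counter({'sense_of_humor': 1})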
4 Findings

While a small number of participants reported that they were not very concerned about the credibility of those who answered their questions, participants in general assessed the credibility of the answerer by utilizing the limited cues available in the social Q&A setting. Specifically, perceived credibility of the answerer was constructed through assessment occurring in three different dimensions: attitude, trustworthiness, and expertise. In addition, the assessment in each dimension was based on people's interpretations of certain elements they noticed, as Fogg (2003) suggested.

4.1 Three Dimensions of Credibility Assessment

4.1.1 Attitude-Dimension of Assessment

In the attitude dimension, people assessed the answerer's involvement and effort. In particular, they judged how invested the answerer was in Yahoo! Answers, how actively he or she participated, and whether he or she worked hard on the answer. The elements people noticed in this dimension were cues that required relatively little effort to process: a profile picture, Yahoo! Answers points or levels, a top contributor badge, the act of answering itself, and the act of doing research.

Although only seven of the twenty participants utilized system-generated cues such as the profile and the top contributor badge, those who did found them useful for gauging the answerer's level of involvement. With regard to the profile, S01 indicated that uploading a profile picture meant that the person was "a little bit more invested in actually participating in the site." Similarly, S04 stated that having a profile picture showed the answerer's investment of time in Yahoo! Answers. Some participants used the points or levels shown in the profile to judge the answerer's involvement; S04, for example, said that those with higher levels were the people who spend more time on Yahoo! Answers. Participants also perceived those with top contributor badges as users who were making large contributions to the site. However, some participants who did not use these cues voiced suspicions about the top contributor badge, stating that having it did not necessarily mean the answerer provided high-quality answers.

Participants appeared to appreciate the fact that those who answered their questions took the time to do so. Both S06 and S11 mentioned that the act of answering itself indicated that the person knew something and made an effort, because that person spent time writing the answer. In a similar vein, S08 described the significance of effort in assessing the credibility of the answerer, stating that, given the content of the received answer, the answerer seemed to have done his research.

4.1.2 Trustworthiness-Dimension of Assessment

With respect to trustworthiness, people judged the answerer's intention or decency. These judgments were based on elements such as punctuation, wording, format, links, and the way of answering. Compared to the elements perceived in the attitude dimension, these elements required more effort, because participants needed to read the content of the answer in order to notice and interpret the cues.

Participants believed that the way the answerer typed and punctuated, along with the answerer's word choice and formatting style, determined the answerer's legitimacy. For example, S04 stated that "if people write out the punctuation, that means that they want to avoid spam." Furthermore, participants considered those who included sources, such as links to websites, more trustworthy, as these answerers provided objective evidence supporting their answers. Interestingly, one participant (S21) mentioned that she perceived the answerers as unbiased, and thus trustworthy, because they were strangers who knew nothing about her. In addition, for some participants, the way of answering mattered in assessing the answerer's credibility. S08 stated that he could believe the answerers because "they're not trolling here." Similarly, S10 reported that an answerer who was making a joke or "acting like it's a message board" lost his or her credibility. In contrast, some participants held a fundamental belief that people were well-intentioned, given that they did not think people would "take time out to answer someone else's question to lie" (S06).

4.1.3 Expertise-Dimension of Assessment

When assessing expertise, participants evaluated the perceived knowledge or experience of the answerer. They noticed and utilized a wide range of cues to decide whether the answerer had the necessary expertise, knowledge, or experience to answer their questions. These elements required the most effort of the three dimensions, as people needed to read and process the content of the answer or take the extra step of clicking through to more detailed profile information.

The content of an answer itself played an important role in helping participants assess an answerer's qualifications. S06 stated that self-proclaimed expertise in the answer made him think that "he knows what he's talking about." Providing a specific answer that exactly met the needs of the asker also seemed to indicate the answerer's experience, as S08 reported. Another content-related cue was congruence between the answerer's response and those of other users: S07 said she believed the answerer because "there was already like two [other] people that said the same thing." Along with content-related cues, system-generated cues based on social feedback were also used. For example, some participants went to the answerer's profile and looked at the other questions that person had answered. According to S05, who posted a track-related question, the fact that the answerer had answered other questions about track indicated that "it's not a random person answering," enhancing the answerer's credibility. Similarly, S03 said it seemed obvious that the answerer had "some sort of experience or some sort of knowledge in finance," as this person had answered many finance-related questions.
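Taken together, the three dimensions describe a mapping from observable cues to the judgments they feed, ordered by the effort the cues demand. The following Python sketch is one hypothetical way a designer might encode that mapping when deciding which cues to surface in an answer interface; the Cue record, the Effort levels, and the specific entries are our illustration of the findings above, not an artifact produced by the study.

from dataclasses import dataclass
from enum import Enum

class Dimension(Enum):
    ATTITUDE = "attitude"                # involvement and effort
    TRUSTWORTHINESS = "trustworthiness"  # intention and decency
    EXPERTISE = "expertise"              # knowledge and experience

class Effort(Enum):
    LOW = 1     # visible at a glance (profile elements, badges)
    MEDIUM = 2  # requires reading the answer text
    HIGH = 3    # requires processing content or extra clicks

@dataclass(frozen=True)
class Cue:
    name: str
    dimension: Dimension
    effort: Effort

# Cues reported by participants, grouped by the dimension they informed.
CUES = [
    Cue("profile picture", Dimension.ATTITUDE, Effort.LOW),
    Cue("points or levels", Dimension.ATTITUDE, Effort.LOW),
    Cue("top contributor badge", Dimension.ATTITUDE, Effort.LOW),
    Cue("act of answering itself", Dimension.ATTITUDE, Effort.LOW),
    Cue("punctuation and wording", Dimension.TRUSTWORTHINESS, Effort.MEDIUM),
    Cue("links to sources", Dimension.TRUSTWORTHINESS, Effort.MEDIUM),
    Cue("way of answering", Dimension.TRUSTWORTHINESS, Effort.MEDIUM),
    Cue("answer specificity", Dimension.EXPERTISE, Effort.HIGH),
    Cue("congruence with other answers", Dimension.EXPERTISE, Effort.HIGH),
    Cue("answerer's answer history", Dimension.EXPERTISE, Effort.HIGH),
]

def cues_for(dimension: Dimension) -> list[Cue]:
    """Return the cues participants used for one dimension, cheapest first."""
    return sorted((c for c in CUES if c.dimension is dimension),
                  key=lambda c: c.effort.value)

# Example: list the expertise cues a design might choose to make salient.
for cue in cues_for(Dimension.EXPERTISE):
    print(cue.name)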
5 Discussion and Conclusion

We have presented preliminary findings from a quasi-field study conducted on Yahoo! Answers. The findings demonstrate that people employ the limited cues available in Yahoo! Answers to assess the credibility of strangers who answer their questions, and that this assessment takes place in three different dimensions, with different elements being noticed and interpreted in each.

Research on question asking via social network sites such as Facebook and Twitter has shown that people prefer to obtain answers from those in their social networks rather than from unknown people, because they tend to trust the opinions of people they know (Morris, Teevan, & Panovich, 2010). In Yahoo! Answers, people interact with strangers with whom they have no prior relationship; the asker is therefore responsible for assessing the credibility of the answerer. Our preliminary findings provide insights into the kinds of cues people use to perceive the credibility of the source of information, namely the strangers who answer their questions, in a social Q&A setting. Moreover, by identifying multiple dimensions of source credibility assessment in this setting and the elements that people notice in each, this study helps to inform social technology designers about what elements need to be made salient to better support people's credibility assessment.

Future work is needed to develop a more nuanced understanding of people's credibility assessment of the crowd in the social Q&A context. It would be interesting to examine how the three dimensions of assessment interact to affect the perceived credibility of the information obtained. In addition, we could consider the degree of effort an individual expends to make credibility judgments in each dimension; this might allow us to develop a new credibility assessment framework for social media environments, building on Hilligoss and Rieh's (2008) framework. In spite of several limitations of this study, including the homogeneity of the participants, the artificial number of questions participants were required to post, and the selection of a single social Q&A site, we believe our work contributes to a better understanding of credibility assessment on the social Web by identifying specific elements people may notice and interpret in order to make credibility judgments about strangers in the social Q&A context.

6 References

Flanagin, A. J., & Metzger, M. J. (2007). The role of site features, user attributes, and information verification behaviors on the perceived credibility of web-based information. New Media & Society, 9(2), 319-342.

Fogg, B. J. (2003). Prominence-interpretation theory: Explaining how people assess credibility online. In Extended Abstracts of the SIGCHI Conference on Human Factors in Computing Systems (pp. 722-723). ACM.

Forman, C., Ghose, A., & Wiesenfeld, B. (2008). Examining the relationship between reviews and sales: The role of reviewer identity disclosure in electronic markets. Information Systems Research, 19(3), 291-313.

Golbeck, J., & Fleischmann, K. R. (2010). Trust in social Q&A: The impact of text and photo cues of expertise. Proceedings of the American Society for Information Science and Technology, 47.

Harper, F. M., Raban, D., Rafaeli, S., & Konstan, J. A. (2008). Predictors of answer quality in online Q&A sites. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 865-874). ACM.

Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing & Management, 44(4), 1467-1484.

Kim, S. (2010). Questioners' credibility judgments of answers in a social question and answer site. Information Research, 15(2), paper 432.

Kim, S., & Oh, S. (2009). Users' relevance criteria for evaluating answers in a social Q&A site. Journal of the American Society for Information Science and Technology, 60(4), 716-727.

Metzger, M. J. (2007). Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078-2091.

Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413-439.

Morris, M. R., Teevan, J., & Panovich, K. (2010). What do people ask their social networks, and why? A survey study of status message Q&A behavior. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1739-1748). ACM.

Rieh, S. Y. (2002). Judgment of information quality and cognitive authority in the Web. Journal of the American Society for Information Science and Technology, 53(2), 145-161.

Rieh, S. Y. (2010). Credibility and cognitive authority of information. In M. Bates & M. N. Maack (Eds.), Encyclopedia of Library and Information Sciences (3rd ed., pp. 1337-1344). New York: Taylor and Francis Group, LLC.

Shah, C., Oh, J. S., & Oh, S. (2008). Exploring characteristics and effects of user participation in online social Q&A sites. First Monday, 13(9).

Sundar, S. S., & Nass, C. (2001). Conceptualizing sources in online news. Journal of Communication, 51(1), 52-72.

Willemsen, L. M., Neijens, P. C., & Bronner, F. (2012). The ironic effect of source identification on the perceived credibility of online product reviewers. Journal of Computer-Mediated Communication, 18(1), 16-31.