Explicit & Implicit Interaction Design for Multi-Focus Visualizations


Simon Butscher
HCI Group, University of Konstanz
Universitätsstraße 10, 78457 Konstanz, Germany
Simon.Butscher@uni-konstanz.de

Abstract
Many tasks that have to be performed to analyze data in large visual information spaces require the user to have several foci. This is, for example, the case when comparing or organizing digital artefacts. In my research, I explore alternative interaction concepts for multi-focus visualizations in the context of single- and multi-user scenarios. Alongside explicit interaction for navigating within multi-focus visualizations, I investigate implicit interaction that makes the visualization react to the social and spatial context. To evaluate different designs, measures like task completion time, spatial memory, and subjective preferences are examined.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author. Copyright is held by the owner/author(s). ITS '14, Nov 16-19 2014, Dresden, Germany. ACM 978-1-4503-2587-5/14/11. http://dx.doi.org/10.1145/2669485.2669487

Author Keywords
Multi-focus+context; implicit interaction; explicit interaction; touch interaction; proxemic interactions

ACM Classification Keywords
H.5.2. Information interfaces and presentation (e.g., HCI): User Interfaces - Interaction styles

Research Background
I received a degree (German Diplom) in Business Information Systems from the University of Cooperative Education in Ravensburg, Germany. For my thesis I designed and implemented a solution for effective data analysis within a business intelligence application for companies in the healthcare sector.
In addition, I hold a master's degree in Information Engineering from the University of Konstanz, Germany. For my master's thesis I designed, implemented, and evaluated a focus+context visualization and remote interaction concept for the analysis of the traffic situation on a road network [13]. With this project, I gained valuable expertise and insights into the combination of focus+context visualizations with suitable input modalities for their control. Since the project's completion, I have been interested in the interplay between visualization techniques and input modalities, and especially in the combination of multi-focus

visualizations with alternative input modalities to facilitate data analysis tasks in visual information spaces.

Figure 1. The social and spatial context within a future control room. Relationships of people to each other and to the displays can be used for an implicit interaction with visualizations.

Since the beginning of my Ph.D. I have been affiliated with different projects, ranging from designing future control rooms 1 and blending the physical and digital space in public 2 or academic 3 libraries to analyzing mobile health care data. One of the main tasks to be supported within all of these projects is data analysis. For my future research I will focus on two application scenarios: the analysis of the traffic situation in traffic control rooms and the analysis of health data collected with mobile devices. My supervisor is Harald Reiterer from the University of Konstanz.

1 http://hci.uni-konstanz.de/holisticworkspace
2 http://hci.uni-konstanz.de/libros
3 http://hci.uni-konstanz.de/blendedlibrary

Motivation
Many tasks that have to be performed in multiscale visual information spaces require several foci. This is also true for data analysis tasks in which digital artifacts have to be compared or organized. Whereas virtually all interfaces allow the user to change between foci over time (e.g., by navigating a map with pan/zoom gestures), fewer interfaces allow the simultaneous presentation of multiple foci. Examples of such multi-focus visualization techniques include multi-window systems, split-screen interfaces, and a variety of research prototypes (e.g., [5,6,11]). Multi-focus visualization techniques are a well-studied field in HCI (for an overview of research about multi-focus visualizations see [4]). Yet, research on how alternative input modalities can facilitate the interaction with these visualizations is limited.
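As a minimal illustration of what "simultaneous presentation of multiple foci" means at the data level, the following sketch models a viewport in which several circular focus regions magnify parts of one shared information space. All names and the circular-region model are my own illustrative assumptions, not taken from any of the cited systems.

```python
from dataclasses import dataclass

@dataclass
class Focus:
    """One focus region over a 2-D information space (illustrative model)."""
    cx: float      # centre of the region, in space coordinates
    cy: float
    radius: float  # spatial extent of the region
    zoom: float    # magnification applied inside the region

def magnification_at(foci, x, y):
    """Return the zoom factor at point (x, y): the zoom of the first
    focus region containing the point, or 1.0 (context scale) otherwise."""
    for f in foci:
        if (x - f.cx) ** 2 + (y - f.cy) ** 2 <= f.radius ** 2:
            return f.zoom
    return 1.0  # outside every focus: shown at context scale

# Two simultaneous foci, as in a compare task:
foci = [Focus(0, 0, 10, 4.0), Focus(100, 0, 10, 2.0)]
```

With this model, a single-focus interface corresponds to a list with one `Focus`, while a compare task keeps two or more entries alive at once.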
Novel input modalities offer great potential for designing interaction concepts that suit multi-focus visualizations better than state-of-the-art approaches. For example, in contrast to many other input devices, multi-touch surfaces seem to naturally enable multi-focus interaction, e.g., by using one hand per focus point. Furthermore, with respect to multi-user scenarios, many alternative input modalities (e.g., touch or gestural input) facilitate a democratized form of collaboration, which mouse input, by contrast, does not. However, explicit interaction to navigate within a multi-focus visualization covers only one part of the possible design space. The interpretation of the social and spatial context (see Figure 1) can be another part and is especially promising for implicit interaction that automatically adapts the appearance of the visualization (e.g., placing information on a display depending on the position of a user in front of it). Consequently, I intend to support the different activities of data analysts, such as navigating in large information spaces or comparing digital artifacts, by using multi-focus visualization techniques that exploit alternative input modalities and combine explicit and implicit interaction styles.

Background and Related Work
Although multi-focus visualization techniques are a well-studied research topic, an unanswered question is how these visualization techniques can benefit from alternative input modalities. Most research has concentrated on the presentation, not on the interaction (e.g., [5]). Only some research has emphasized the interaction with the visualization and

makes use of input modalities like touch to facilitate the definition of areas of interest within a visual information space (e.g., [8,11]). However, this work does not exploit the additional possibilities of alternative input modalities in all their facets. Most research in the field of multi-focus visualizations is limited to explicit interaction and ignores the possibilities of adapting the visualizations to the implicit input of the user. Implicit interaction can be realized by analyzing and interpreting the spatial and social context.

Figure 2. The five dimensions of proxemics [7].

Some research areas other than multi-focus visualizations have implemented systems that provide a mixture of explicit and implicit interaction. An interactive public display installation, for example, interprets the spatial context (the position of the user in front of the display) and thus shows customized information to the user [14]. Another system, designed for multi-display environments, uses the users' head positions as implicit input to correct the perspective distortion of the elements visualized on the screens [12]. Proxemic Interactions [7] provide a general perspective on the spatial and social context in terms of relationships between persons and objects (e.g., displays), but also between persons themselves. To describe these relationships, five dimensions of proximity are proposed: distance, orientation, movement, identity, and location (see Figure 2). These proxemic dimensions offer a design space for adaptive visualizations that react to the spatial and social environment.

Research Approach
I make use of a taxonomy to describe the design space for investigating data analysis tasks. This taxonomy is built on the task to perform and the number of parallel users:

Tasks: In my research I focus on data analysis in visual information spaces in the sense of exploring the space, searching for objects, and comparing and organizing objects.
Depending on the task, different requirements are relevant, such as orientation within the information space, spatial memory to remember object locations, or multiple foci to compare digital artefacts or move them to other locations in the visual space.

Number of parallel users: Data analysis can be an individual but also a collaborative activity, e.g., within a control room. For collaborative scenarios I have to deal with additional requirements such as a democratized form of collaboration or enhanced situational awareness (awareness of ongoing activities, and awareness of who is in control of shared artefacts [9]).

To investigate how multi-focus techniques can facilitate data analysis tasks, different combinations of the two dimensions of this taxonomy have to be explored. Depending on the task and the number of parallel users, either single-focus or multi-focus solutions may be appropriate. Whereas the visualization has to support multiple foci for multiple users regardless of the task at hand, in single-user scenarios only the compare task requires multiple foci. The combination of the task and the number of parallel users also influences the choice of the input modality. Proxemic dimensions can help to capture the spatial and social context and enable an automatic reaction (e.g., if two persons move closer to each other, the focus

of both users could be merged in order to facilitate discussions about digital artefacts). Thus, in multi-user ecosystems proxemic dimensions play a key role in analyzing the social context and can be used to take social conventions into account. Whereas I have already investigated some combinations of the taxonomy, other combinations are still to be explored.

Figure 3. Supporting single users comparing human neural stem cells: (left) SpaceFold - fold the visual space like a sheet of paper to bring two areas of interest closer to each other; (right) PhysicLenses - create multiple magnification lenses to see detailed views of areas of interest.

Completed Work
For the dimension of single-user interaction to perform compare and organize (drag & drop) tasks, we conducted a controlled experiment to compare different solutions [1]. We introduced two novel navigation techniques that combine multiple foci and bimanual touch, and thus enable the isochronic definition of areas of interest, leading to simultaneous multi-focus navigation. SpaceFold folds the visual space in the third dimension, allowing users to bring objects closer to each other (see Figure 3, left). Our technique enables a direct, bimanual manipulation of a folded space and is highly flexible. PhysicLenses uses multiple magnification lenses to compare objects (see Figure 3, right). Using a physics model, PhysicLenses introduces a general solution for the arrangement of multiple lenses within the viewport. We conducted a controlled experiment with 24 participants to compare the techniques with split screen. The results show that SpaceFold significantly outperformed all other techniques, whereas PhysicLenses was just as fast as split screen.

Figure 4. Supporting multiple users analyzing the traffic situation: A multi-focus visualization on a large wall-sized display can be controlled remotely by the operators through self-centering devices.
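The general idea behind a physics-based lens arrangement can be conveyed with a much simplified sketch: overlapping circular lenses repel each other until they separate, while being kept inside the viewport. This is not the actual PhysicLenses model; the relaxation scheme, iteration count, and all parameters below are illustrative assumptions.

```python
import math

def relax(lenses, width, height, iterations=50):
    """Resolve overlaps between circular lenses by simple pairwise repulsion.
    lenses: list of [x, y, r] entries, mutated in place (illustrative only)."""
    for _ in range(iterations):
        for i in range(len(lenses)):
            for j in range(i + 1, len(lenses)):
                a, b = lenses[i], lenses[j]
                dx, dy = b[0] - a[0], b[1] - a[1]
                d = math.hypot(dx, dy) or 1e-6     # avoid division by zero
                overlap = a[2] + b[2] - d
                if overlap > 0:                    # push both apart along the centre line
                    push = overlap / 2
                    a[0] -= dx / d * push; a[1] -= dy / d * push
                    b[0] += dx / d * push; b[1] += dy / d * push
        for l in lenses:                           # keep each lens fully inside the viewport
            l[0] = min(max(l[0], l[2]), width - l[2])
            l[1] = min(max(l[1], l[2]), height - l[2])
    return lenses

# Two lenses created close together drift apart until they no longer overlap:
lenses = [[100.0, 100.0, 50.0], [120.0, 100.0, 50.0]]
relax(lenses, 400, 300)
```

A real implementation would additionally animate the relaxation and couple lens positions to the users' touch points; the sketch only shows why a physics model yields a general arrangement strategy for any number of lenses.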
For the multi-user scenario within a traffic control room, we investigated solutions for collaboratively analyzing the traffic situation [2]. First, to gain an understanding of the relevant tasks and the social environment that shape the work of operators, a context-of-use analysis in two

freeway traffic monitoring control rooms was conducted. Based on the analysis we created a multi-focus visualization for large wall-sized displays, which makes it possible to examine local details without losing contextual information (see Figure 4). We combined a visualization based on the space-folding technique by Elmqvist et al. [5] with our content-aware navigation technique [13]. To evaluate the applicability of the concept, a study with eleven participants from the domain of traffic control rooms was conducted. The results show that the multi-focus visualization facilitated the awareness of ongoing activities. It enabled an implicit communication, which helps the operators to coordinate themselves.

Future Work
My future research will focus on data analysis in multi-user environments. Proxemic Interactions are used to analyze and interpret the spatial and social context. I am especially interested in considering not only the distance and orientation of people to a display but also to each other. Furthermore, the movement of people can give insight into the current situation. Possible interpretations of these metrics could be that if people move closer to the display they want to see the information space in greater detail, or that if people move closer to each other they want to discuss a shared artifact.

Figure 5. Supporting multiple users in analyzing a visual information space: a combination of explicit interaction for the selection of areas of interest and implicit interaction for the tailored positioning of the information. The position of the user in front of the display is mapped to the position where detailed information is shown. The aim is to enable seamless switching between tightly-coupled collaboration and loosely-coupled parallel work.

Figure 6. Multi-display environment equipped with an OptiTrack tracking system to capture the spatial and social context.
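The two interpretations of movement described above (approaching the display suggests a wish for more detail; approaching another person suggests a wish to discuss a shared artifact) can be sketched as simple threshold rules over tracked floor positions. The thresholds, coordinate convention, and action names below are hypothetical; a deployed system would calibrate them against the actual room and tracking setup.

```python
import math

DISPLAY_NEAR = 1.5   # metres from the display: closer => show more detail (assumed threshold)
PAIR_NEAR = 1.0      # metres between two users: closer => merge their foci (assumed threshold)

def interpret(users):
    """Derive implicit-interaction actions from tracked positions.
    users: dict of name -> (x, y) floor position in metres,
    with the display assumed to lie along the line y = 0."""
    actions = []
    for name, (x, y) in users.items():
        if abs(y) < DISPLAY_NEAR:                    # user has approached the display
            actions.append(("show_detail", name))
    names = sorted(users)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):           # check every pair of users
            ax, ay = users[names[i]]
            bx, by = users[names[j]]
            if math.hypot(ax - bx, ay - by) < PAIR_NEAR:
                actions.append(("merge_foci", names[i], names[j]))
    return actions

# Two operators standing close to the display and to each other:
actions = interpret({"A": (0.0, 1.0), "B": (0.5, 1.2)})
```

In practice, such rules would consume the distance and orientation measures delivered by a tracking framework such as the Proximity Toolkit [10] rather than raw positions, and would need hysteresis so that small movements do not cause the visualization to flicker between states.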
As users switch between individual and collaborative tasks, switching between several independent foci on the information space and shared foci is an issue that needs to be investigated. Proxemic measures can either be used as explicit input (e.g., defining a focus region of a multi-focus visualization according to a user's location and orientation in physical space [3]) or as an implicit input modality to adapt visualizations. Figure 5 shows a concept in which we combine explicit and implicit input for the manipulation of a multi-focus visualization. The explicit input modality, in the form of a smartphone used as a touchpad, is used to define the position of a focus region. The location of the user, as an implicit input modality, is used to place the enlarged view of the selected focus region directly in front of the user.

In order to implement and evaluate prototypes that show the feasibility of multi-focus visualizations using proxemic dimensions, I make use of a laboratory equipped with an OptiTrack 4 tracking system (see Figure 6). The tracking system consists of 24 infrared cameras which are connected to the Proximity Toolkit [10]. This toolkit offers great support in capturing the proxemic dimensions.

Statement of Thesis and Goals
In my dissertation I address the research question of how to use combinations of explicit and implicit interaction to control multi-focus visualizations for large visual information spaces. My contributions are:

1.) Explore alternative input modalities and interaction concepts for navigating multi-focus visualizations of visual information spaces to support data analysis tasks like searching, comparing, or organizing digital artefacts.

2.) Design and implement prototypes that employ proxemic dimensions as an implicit input modality for adapting multi-focus visualizations depending on the spatial and social context.

4 https://www.naturalpoint.com/optitrack/

3.) Evaluate the developed concepts and prototypes within controlled experiments. Depending on the task, metrics like task completion time, mental workload, spatial memory, or subjective preferences are investigated.

Expected Contributions
Through my research, I aim to develop interaction concepts that support data analysis tasks through multi-focus visualization techniques and combine them with alternative input modalities to support explicit and implicit interaction. To show the feasibility of the concepts for real-world problems I will apply them to single-user and multi-user scenarios from two different domains.

References
[1] Butscher, S., Hornbæk, K., and Reiterer, H. SpaceFold and PhysicLenses: Simultaneous Multifocus Navigation on Touch Surfaces. In Proc. AVI 2014, ACM Press (2014), 209-216.
[2] Butscher, S., Müller, J., Schwarz, T., and Reiterer, H. Blended Interaction as an Approach for Holistic Control Room Design. CHI 2013 Workshop on Blended Interaction (2013).
[3] Butscher, S., Müller, J., Weiler, A., Rädle, R., Reiterer, H., and Scholl, M. Multi-user Twitter Analysis for Crisis Room Environments. Collaborative Human-Computer Interaction with Big Wall Displays (BigWallHCI 2013), 3rd JRC ECML Crisis Management Technology Workshop, Publications Office of the European Union (2013), 28-34.
[4] Cockburn, A., Karlson, A., and Bederson, B.B. A Review of Overview+Detail, Zooming, and Focus+Context Interfaces. ACM Computing Surveys 41, 1 (2008).
[5] Elmqvist, N., Henry, N., Riche, Y., and Fekete, J.-D. Mélange: Space Folding for Multi-Focus Interaction. In Proc. CHI 2008, ACM Press (2008), 1333-1342.
[6] Forlines, C. and Shen, C. DTLens: Multi-user Tabletop Spatial Data Exploration. In Proc. UIST 2005, ACM Press (2005), 119-122.
[7] Greenberg, S., Marquardt, N., Ballendat, T., Diaz-Marino, R., and Wang, M. Proxemic Interactions: The New Ubicomp? interactions 18, 1 (2011), 42-50.
[8] Käser, D., Agrawala, M., and Pauly, M. FingerGlass: Efficient Multiscale Interaction on Multitouch Screens. In Proc. CHI 2011, ACM Press (2011), 1601-1610.
[9] Kulyk, O., van der Veer, G., and van Dijk, B. Situational Awareness Support to Enhance Teamwork in Collaborative Environments. In Proc. ECCE 2008, ACM Press (2008), 18-22.
[10] Marquardt, N., Diaz-Marino, R., Boring, S., and Greenberg, S. The Proximity Toolkit: Prototyping Proxemic Interactions in Ubiquitous Computing Ecologies. In Proc. UIST 2011, ACM Press (2011), 315-325.
[11] Mikulecky, K., Hancock, M., Brosz, J., and Carpendale, S. Exploring Physical Information Cloth on a Multitouch Table. In Proc. ITS 2011, ACM Press (2011), 140-149.
[12] Nacenta, M.A., Sakurai, S., Yamaguchi, T., et al. E-conic: A Perspective-Aware Interface for Multi-Display Environments. In Proc. UIST 2007, ACM Press (2007), 279-288.
[13] Schwarz, T., Butscher, S., Müller, J., and Reiterer, H. Content-Aware Navigation for Large Displays in Context of Traffic Control Rooms. In Proc. AVI 2012, ACM Press (2012), 249-252.
[14] Vogel, D. and Balakrishnan, R. Interactive Public Ambient Displays: Transitioning from Implicit to Explicit, Public to Personal, Interaction with Multiple Users. In Proc. UIST 2004, ACM Press (2004), 137-146.