DIVA Group

Videos

LTCI - Télécom Paris - IP Paris

2018
video   MobiLimb: Augmenting Mobile Devices with a Robotic Limb
M. Teyssier, G. Bailly, C. Pelachaud, E. Lecolinet. In UIST'18: Proceedings of the ACM Symposium on User Interface Software and Technology, ACM (2018). 53-63. doi pdf bibcite
@inproceedings{MT:UIST-18,
 author = M. {Teyssier} and G. {Bailly} and C. {Pelachaud} and E. {Lecolinet},
 booktitle = UIST'18: Proceedings of the ACM Symposium on User Interface Software and Technology,
 month = oct,
 pages = 53--63,
 publisher = ACM,
 title = MobiLimb: Augmenting Mobile Devices with a Robotic Limb,
 year = 2018,
 image = mobilimb-UIST18.png,
 link = https://uist.acm.org/uist2018/,
 project = https://www.marcteyssier.com/projects/mobilimb/,
 video = https://www.youtube.com/watch?v=wi3INyIDNdk,
}
keywords
Mobile device, Actuated devices, Robotics, Mobile Augmentation
abstract
In this paper, we explore the interaction space of MobiLimb, a small 5-DOF serial robotic manipulator attached to a mobile device. It (1) overcomes some limitations of mobile devices (static, passive, motionless); (2) preserves their form factor and I/O capabilities; (3) can be easily attached to or removed from the device; (4) offers additional I/O capabilities such as physical deformation, and (5) can support various modular elements such as sensors, lights or shells. We illustrate its potential through three classes of applications: As a tool, MobiLimb offers tangible affordances and an expressive controller that can be manipulated to control virtual and physical objects. As a partner, it reacts expressively to users' actions to foster curiosity and engagement or assist users. As a medium, it provides rich haptic feedback such as strokes, pats and other tactile stimuli on the hand or the wrist to convey emotions during mediated multimodal communications.
link project
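The listing above does not spell out MobiLimb's control protocol; as a rough illustration of what driving such a 5-DOF serial manipulator involves, the Python sketch below streams joint angles to a hypothetical microcontroller over a serial link. The port name, packet format and stroke_gesture pattern are assumptions, not the authors' firmware.

# Hypothetical sketch: animating a 5-DOF serial manipulator such as MobiLimb.
# Assumes a microcontroller that accepts newline-terminated lists of five
# joint angles in degrees; this is NOT the authors' actual protocol.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"          # assumed device name
link = serial.Serial(PORT, 115200, timeout=0.1)

def set_joints(angles):
    """Send one pose: five joint angles (degrees) for the serial arm."""
    assert len(angles) == 5
    link.write((",".join(f"{a:.1f}" for a in angles) + "\n").encode())

def stroke_gesture(repetitions=3):
    """Sweep the distal joints back and forth, e.g. to stroke the user's wrist."""
    for _ in range(repetitions):
        set_joints([0, 20, 40, 30, 0])
        time.sleep(0.4)
        set_joints([0, 20, 40, -30, 0])
        time.sleep(0.4)

stroke_gesture()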
video   Impact of Semantic Aids on Command Memorization for On-Body Interaction and Directional Gestures
B. Fruchard, E. Lecolinet, O. Chapuis. In AVI'18: International Conference on Advanced Visual Interfaces, Article No. 14 (9 pages), ACM (2018). doi hal pdf bibcite
@inproceedings{BL:AVI-18,
 address = Grosseto, Italy,
 author = B. {Fruchard} and E. {Lecolinet} and O. {Chapuis},
 booktitle = AVI'18: International Conference on Advanced Visual Interfaces,
 month = jun,
 number = Article No. 14 (9 pages),
 publisher = ACM,
 title = Impact of Semantic Aids on Command Memorization for On-Body Interaction and Directional Gestures,
 year = 2018,
 hal = hal-01764757,
 image = bodyloci-AVI18.png,
 link = https://sites.google.com/dis.uniroma1.it/avi2018/,
 video = https://vimeo.com/251815791,
}
keywords
Semantic aids; Memorization; Command selection; On-body interaction; Marking menus; Virtual reality
abstract
Previous studies have shown that spatial memory and semantic aids can help users learn and remember gestural commands. Using the body as a support to combine both dimensions has therefore been proposed, but no formal evaluations have yet been reported. In this paper, we compare, with or without semantic aids, a new on-body interaction technique (BodyLoci) to mid-air Marking menus in a virtual reality context. We consider three levels of semantic aids: no aid, story-making, and story-making with background images. Our results show an important improvement when story-making is used, especially for Marking menus (28.5% better retention). Both techniques performed similarly without semantic aids, but Marking menus outperformed BodyLoci when semantic aids were used (17.3% better retention). While our study does not show a benefit in using body support, it suggests that inducing users to leverage simple learning techniques, such as story-making, can substantially improve recall, and thus make it easier to master gestural techniques. We also analyze the strategies used by the participants for creating mnemonics to provide guidelines for future work.
link
2017
video   MarkPad: Augmenting Touchpads for Command Selection
B. Fruchard, E. Lecolinet, O. Chapuis. In CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 5630-5642. doi hal pdf bibcite
@inproceedings{MP:CHI-17,
 address = Denver, Colorado, USA,
 author = B. {Fruchard} and E. {Lecolinet} and O. {Chapuis},
 booktitle = CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems,
 month = may,
 pages = 5630--5642,
 publisher = ACM,
 title = MarkPad: Augmenting Touchpads for Command Selection,
 year = 2017,
 hal = hal-01437093/en,
 image = MP-CHI-17.png,
 link = https://chi2017.acm.org/,
 project = http://brunofruchard.com/markpad.html,
 video = https://www.youtube.com/watch?v=rUGGTrYPuSM,
 software = http://brunofruchard.com/markpad.html,
}
keywords
Gestural interaction; bezel gestures; tactile feedback; spatial memory; touchpad; user-defined gestures; Marking menus
abstract
We present MarkPad, a novel interaction technique taking advantage of the touchpad. MarkPad allows creating a large number of size-dependent gestural shortcuts that can be spatially organized as desired by the user. It relies on the idea of using visual or tactile marks on the touchpad or a combination of them. Gestures start from a mark on the border and end on another mark anywhere. MarkPad does not conflict with standard interactions and provides a novice mode that acts as a rehearsal of the expert mode. A first study showed that an accuracy of 95% could be achieved for a dense configuration of tactile and/or visual marks allowing many gestures. Performance was 5% lower in a second study where the marks were only on the borders. A last study showed that borders are rarely used, even when the users are unaware of the technique. Finally, we present a working prototype and briefly report on how it was used by two users for a few months.
link project software
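As a rough illustration of the gesture model described above (shortcuts defined by a start mark on the touchpad border and an end mark anywhere), here is a minimal Python sketch; the grid layout, border margin and command table are illustrative assumptions, not MarkPad's actual implementation.

# Minimal sketch of MarkPad-style gesture lookup: a shortcut is defined by the
# border zone where the gesture starts and the zone where it ends.
# The 4x3 grid, border definition and command table are illustrative assumptions.
COLS, ROWS = 4, 3

def zone(x, y):
    """Map normalized touchpad coordinates (0..1) to a grid cell."""
    return (min(int(x * COLS), COLS - 1), min(int(y * ROWS), ROWS - 1))

def on_border(x, y, margin=0.08):
    return x < margin or x > 1 - margin or y < margin or y > 1 - margin

commands = {            # (start cell, end cell) -> command (user-defined in MarkPad)
    ((0, 0), (2, 1)): "open mail",
    ((3, 2), (1, 1)): "mute audio",
}

def recognize(start, end):
    """Return the command for a gesture, or None if it is a normal pointer move."""
    if not on_border(*start):          # gestures must begin on a border mark
        return None
    return commands.get((zone(*start), zone(*end)))

print(recognize((0.02, 0.1), (0.6, 0.5)))   # -> "open mail"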
video   CoReach: Cooperative Gestures for Data Manipulation on Wall-sized Displays
C. Liu, O. Chapuis, M. Beaudouin-Lafon, E. Lecolinet. In CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 6730-6741. doi hal pdf bibcite
@inproceedings{LCBL:CHI-17,
 author = C. {Liu} and O. {Chapuis} and M. {Beaudouin-Lafon} and E. {Lecolinet},
 booktitle = CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems,
 month = may,
 pages = 6730--6741,
 publisher = ACM,
 title = CoReach: Cooperative Gestures for Data Manipulation on Wall-sized Displays,
 year = 2017,
 hal = hal-01437091/en,
 image = LCBL-CHI-17.jpg,
 link = https://chi2017.acm.org/,
 video = https://www.lri.fr/~chapuis/publications/CHI17-coreach.mp4,
}
keywords
Shared interaction, wall display, co-located collaboration
abstract
Multi-touch wall-sized displays afford collaborative exploration of large datasets and re-organization of digital content. However, standard touch interactions, such as dragging to move content, do not scale well to large surfaces and were not designed to support collaboration, such as passing an object around. This paper introduces CoReach, a set of collaborative gestures that combine input from multiple users in order to manipulate content, facilitate data exchange and support communication. We conducted an observational study to inform the design of CoReach, and a controlled study showing that it reduced physical fatigue and facilitated collaboration when compared with traditional multi-touch gestures. A final study assessed the value of also allowing input through a handheld tablet to manipulate content from a distance.
link
video   VersaPen: An Adaptable, Modular and Multimodal I/O Pen
M. Teyssier, G. Bailly, E. Lecolinet. In CHI'17 Extended Abstracts: ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 2155-2163. doi hal pdf bibcite
@inproceedings{Teyssier:VersaPen-2017,
 address = Denver, USA,
 author = M. {Teyssier} and G. {Bailly} and E. {Lecolinet},
 booktitle = CHI'17 Extended Abstracts: ACM SIGCHI Conference on Human Factors in Computing Systems,
 month = may,
 pages = 2155--2163,
 publisher = ACM,
 title = VersaPen: An Adaptable, Modular and Multimodal I/O Pen,
 year = 2017,
 hal = hal-01521565,
 image = VersaPen-wp-CHI17.png,
 link = https://chi2017.acm.org/,
 video = https://www.youtube.com/watch?v=WhhZc67geAQ,
}
keywords
Pen input; Multimodal interaction; Modular input
abstract
While software often allows user customization, most physical devices remain mainly static. We introduce VersaPen, an adaptable, multimodal, hot-pluggable pen for expanding input capabilities. Users can create their own pens by stacking different input/output modules that define both the look and feel of the customized device. VersaPen offers multiple advantages. Allowing in-place interaction, it reduces hand movements and avoids cluttering the interface with menus and palettes. It also enriches interaction by providing multimodal capabilities, as well as a means to encapsulate virtual data into physical modules which can be shared by users to foster collaboration. We present various applications to demonstrate how VersaPen enables new interaction techniques.
link
video   VersaPen: Exploring Multimodal Interactions with a Programmable Modular Pen
M. Teyssier, G. Bailly, E. Lecolinet. In CHI'17 Extended Abstracts (demonstration): ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 377-380. doi hal pdf bibcite
@inproceedings{teyssier:hal-01521566,
 address = Denver, USA,
 author = M. {Teyssier} and G. {Bailly} and E. {Lecolinet},
 booktitle = CHI'17 Extended Abstracts (demonstration): ACM SIGCHI Conference on Human Factors in Computing Systems,
 month = may,
 pages = 377--380,
 publisher = ACM,
 title = VersaPen: Exploring Multimodal Interactions with a Programmable Modular Pen,
 year = 2017,
 hal = hal-01521566,
 image = VersaPen-demo-CHI17.png,
 link = https://chi2017.acm.org/,
 video = https://www.youtube.com/watch?v=LYIjfUDTdbU,
}
keywords
Pen input; Multimodal interaction
abstract
We introduce and demonstrate VersaPen, a modular pen for expanding input capabilities. Users can create their own digital pens by stacking different input/output modules that define both the look and feel of the customized device. VersaPen investigates the benefits of adaptable devices and enriches interaction by providing multimodal capabilities; by allowing in-place interaction, it reduces hand movements and avoids cluttering the interface with menus and palettes. The device integrates seamlessly with other existing software thanks to a visual programming interface that allows end users to connect input and output sources. We present various applications to demonstrate the power of VersaPen and how it enables new interaction techniques.
link
2001
  video   Zoomable and 3D Representations for Digital Libraries
P. Plenacoste, E. Lecolinet, S. Pook, C. Dumas, J.-D. Fekete. In Conférence franco-britannique IHM-HCI (Interaction Homme-Machine / Human Computer Interaction), (2001). pdf bibcite
@inproceedings{reference124,
 address = Lille,
 author = P. {Plenacoste} and E. {Lecolinet} and S. {Pook} and C. {Dumas} and J.-D. {Fekete},
 booktitle = Conf\'erence franco-britannique IHM-HCI (Interaction Homme-Machine / Human Computer Interaction),
 month = sep,
 title = Zoomable and 3D Representations for Digital Libraries,
 year = 2001,
 video = http://www.telecom-paristech.fr/~elc/videos/biblinum.mov,
}

2000
video   Context and Interaction in Zoomable User Interfaces
S. Pook, E. Lecolinet, G. Vaysseix, E. Barillot. In International Conference on Advanced Visual Interfaces (AVI'2000), ACM Press (2000). 227-231. bibcite
@inproceedings{reference194,
 address = Palermo (Italy),
 author = S. {Pook} and E. {Lecolinet} and G. {Vaysseix} and E. {Barillot},
 booktitle = International Conference on Advanced Visual Interfaces (AVI'2000),
 month = may,
 pages = 227--231,
 publisher = ACM Press,
 title = Context and Interaction in Zoomable User Interfaces,
 year = 2000,
 image = Zomit-AVI00.jpg,
 video = http://www.telecom-paristech.fr/~elc/videos/zomit.mov,
}

video   Control Menus: Execution and Control in a Single Interactor
S. Pook, E. Lecolinet, G. Vaysseix, E. Barillot. In ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'2000), (2000). 263-264. bibcite
@inproceedings{reference196,
 address = The Hague (The Netherlands),
 author = S. {Pook} and E. {Lecolinet} and G. {Vaysseix} and E. {Barillot},
 booktitle = ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'2000),
 month = apr,
 organization = ACM Press,
 pages = 263--264,
 title = Control Menus: Execution and Control in a Single Interactor,
 year = 2000,
 image = Zomit-CHI00.jpg,
 video = http://www.telecom-paristech.fr/~elc/videos/zomit.mov,
 rank = 2,
}

video
hal bibcite
@inproceedings{cohe:infovis15,
 hal = hal-01442946,
 image = cohe-infovis15.png,
 video = https://vimeo.com/152548517,
 rank = 0,
}
abstract
System schematics, such as those used for electrical or hydraulic systems, can be large and complex. Fisheye techniques can help navigate such large documents by maintaining the context around a focus region, but the distortion introduced by traditional fisheye techniques can impair the readability of the diagram. We present SchemeLens, a vector-based, topology-aware fisheye technique which aims to maintain the readability of the diagram. Vector-based scaling reduces distortion to components, but distorts layout. We present several strategies to reduce this distortion by using the structure of the topology, including orthogonality and alignment, and a model of user intention to foster smooth and predictable navigation. We evaluate this approach through two user studies: Results show that (1) SchemeLens is 16-27% faster than both round and rectangular flat-top fisheye lenses at finding and identifying a target along one or several paths in a network diagram; (2) augmenting SchemeLens with a model of user intentions aids in learning the network topology.
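For readers unfamiliar with fisheye transforms, the sketch below shows a generic focus+context distortion of component positions in the spirit of classic fisheye lenses: components near the focus spread apart and are drawn larger, with the effect falling off toward the lens boundary. SchemeLens's actual vector-based, topology-aware algorithm adds alignment and orthogonality constraints that this toy function does not capture.

# Generic focus+context sketch (not SchemeLens itself): positions near the
# focus are pushed apart and their components drawn larger, with the
# magnification falling off smoothly with distance to the focus.
import math

def fisheye(pos, focus, magnification=3.0, radius=200.0):
    """Displace a component position away from the focus with a smooth falloff."""
    dx, dy = pos[0] - focus[0], pos[1] - focus[1]
    d = math.hypot(dx, dy)
    if d == 0:
        return pos, magnification
    # Sarkar & Brown style distortion of the distance to the focus.
    scale = (magnification + 1) / (magnification * d / radius + 1)
    new_d = d * scale
    return (focus[0] + dx / d * new_d, focus[1] + dy / d * new_d), scale

new_pos, comp_scale = fisheye((120, 40), focus=(100, 50))
print(new_pos, comp_scale)   # component moved outward and drawn comp_scale times larger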

video
hal bibcite
@inproceedings{ICHK:CHI-17,
 hal = hal-01444365,
 image = iconhk.gif,
 link = https://chi2017.acm.org/,
 video = https://www.youtube.com/watch?v=ZzLASYkt6qc,
}
abstract
We propose a novel perspective on the design of toolbar buttons that aims to increase keyboard shortcut accessibility. IconHK implements this perspective by blending visual cues that convey keyboard shortcut information into toolbar buttons without denaturing the pictorial representation of their command. We introduce three design strategies to embed the hotkey, a visual encoding to convey the modifiers, and a magnification factor that determines the blending ratio between the pictogram of the button and the visual representation of the keyboard shortcut. Two studies examine the benefits of IconHK for end-users and provide insights from professional designers on the practicality of our approach for creating iconsets. Building on these insights, we develop a tool to assist designers in applying the IconHK design principle.
link
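The blending factor mentioned in the abstract can be pictured with a minimal sketch like the following, which mixes a command's pictogram with a hotkey glyph using a plain alpha blend; the file names are placeholders, and IconHK's actual design strategies go well beyond this.

# Sketch of the blending idea behind IconHK: a toolbar icon is rendered as a
# mix of the command's pictogram and a visual of its keyboard shortcut,
# controlled by a single blending factor.
from PIL import Image

def blend_icon(pictogram_path, hotkey_path, factor):
    """factor = 0 -> pure pictogram, factor = 1 -> pure hotkey visual."""
    pictogram = Image.open(pictogram_path).convert("RGBA")
    hotkey = Image.open(hotkey_path).convert("RGBA").resize(pictogram.size)
    return Image.blend(pictogram, hotkey, alpha=factor)

icon = blend_icon("save_pictogram.png", "ctrl_s_glyph.png", factor=0.4)  # placeholder files
icon.save("save_iconhk.png")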
video
CHI 2017 Best Paper Award. url bibcite
@inproceedings{BN:CHI-17,
 url = https://youtu.be/ThDzB5ewbZM,
 award = CHI 2017 Best Paper Award,
 image = BN-CHI-17.jpg,
 link = https://chi2017.acm.org/,
 video = https://youtu.be/N2P-LFh1oLk,
 video-preview = https://youtu.be/ThDzB5ewbZM,
}
abstract
This paper introduces BIGnav, a new multiscale navigation technique based on Bayesian Experimental Design where the criterion is to maximize the information-theoretic concept of mutual information, also known as information gain. Rather than simply executing user navigation commands, BIGnav interprets user input to update its knowledge about the user's intended target. Then it navigates to a new view that maximizes the information gain provided by the user's expected subsequent input. We conducted a controlled experiment demonstrating that BIGnav is significantly faster than conventional pan and zoom and requires fewer commands for distant targets, especially in non-uniform information spaces. We also applied BIGnav to a realistic application and showed that users can navigate to highly probable points of interest on a map with only a few steps. We then discuss the tradeoffs of BIGnav including efficiency vs. increased cognitive load and its application to other interaction tasks.
link
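The information-gain criterion can be illustrated with a toy model: maintain a belief over candidate targets, define a noisy model of the user's next command given a candidate view, and pick the view maximizing the mutual information between command and target. The 1-D world, two-command input model and error rate below are illustrative assumptions, not the paper's actual formulation.

# Toy sketch of the BIGnav idea: keep a probability distribution over possible
# targets, and pick the next view that maximizes the mutual information between
# the user's (noisy) command and the target.
import math

targets = list(range(10))                    # candidate target positions
belief = [1 / len(targets)] * len(targets)   # current probability of each target
EPS = 0.1                                    # chance the user gives the "wrong" command

def likelihood(command, target, view_center):
    """P(command | target, view): user says 'right' if the target lies right of the view."""
    correct = "right" if target > view_center else "left"
    return 1 - EPS if command == correct else EPS

def info_gain(view_center):
    """Mutual information I(command; target) for a candidate view, in bits."""
    gain = 0.0
    for command in ("left", "right"):
        p_cmd = sum(b * likelihood(command, t, view_center) for b, t in zip(belief, targets))
        for b, t in zip(belief, targets):
            p_joint = b * likelihood(command, t, view_center)
            if p_joint > 0:
                gain += p_joint * math.log2(p_joint / (p_cmd * b))
    return gain

best_view = max(targets, key=info_gain)      # navigate to the most informative view
print(best_view, info_gain(best_view))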
video
hal bibcite
@inproceedings{LIU:CHI16,
 hal = hal-01275535,
 image = LIU-CHI16.jpg,
 video = https://www.youtube.com/watch?v=X7IA9XFGL38,
}
abstract
Wall-sized displays support small groups of users working together on large amounts of data. Observational studies of such settings have shown that users adopt a range of collaboration styles, from loosely to closely coupled. Shared interaction techniques, in which multiple users perform a command collaboratively, have also been introduced to support co-located collaborative work. In this paper, we operationalize five collaborative situations with increasing levels of coupling, and test the effects of providing shared interaction support for a data manipulation task in each situation. The results show the benefits of shared interaction for close collaboration: it encourages collaborative manipulation, it is more efficient and preferred by users, and it reduces physical navigation and fatigue. We also identify the time costs caused by disruption and communication in loose collaboration and analyze the trade-offs between parallelization and close collaboration. These findings inform the design of shared interaction techniques to support collaboration on wall-sized displays.

video
hal pdf bibcite
@inproceedings{GB:CHI16-living,
 pdf = http://www.gillesbailly.fr/publis/BAILLY_LivingDesktop.pdf,
 hal = hal-01403719,
 image = GB-CHI16-living.jpg,
 video = https://www.youtube.com/watch?v=hC3hDlUIZ6s,
}
abstract
We investigate the potential benefits of actuated devices for the desktop workstation, which remains the most used environment for daily office work. A formative study reveals that the desktop workstation is not a fixed environment because users manually change the position and the orientation of their devices. Based on these findings, we present the LivingDesktop, an augmented desktop workstation with devices (mouse, keyboard, monitor) capable of moving autonomously. We describe interaction techniques and applications illustrating how actuated desktop workstations can improve ergonomics, foster collaboration, leverage context and reinforce physicality. Finally, the findings of a scenario evaluation are (1) the perceived usefulness of the ergonomics and collaboration applications; (2) how the LivingDesktop inspired our participants to elaborate novel accessibility and social applications; and (3) that location and user practices should be considered when designing actuated desktop devices.

video
CHI 2016 Honorable Mention Award. hal pdf bibcite
@inproceedings{RM:CHI16,
 pdf = https://hal.inria.fr/hal-01315130/document,
 award = CHI 2016 Honorable Mention Award,
 hal = hal-01315130,
 image = RM-CHI16.jpg,
 video = https://www.youtube.com/watch?v=axe79jX_ZPw,
}
abstract
TOUCHTOKENS make it possible to easily build interfaces that combine tangible and gestural input using passive tokens and a regular multi-touch surface. The tokens constrain users' grasp, and thus, the relative spatial configuration of fingers on the surface, theoretically making it possible to design algorithms that can recognize the resulting touch patterns. We performed a formative user study to collect and analyze touch patterns with tokens of varying shape and size. The analysis of this pattern collection showed that individual users have a consistent grasp for each token, but that this grasp is user-dependent and that different grasp strategies can lead to confounding patterns. We thus designed a second set of tokens featuring notches that constrain users' grasp. Our recognition algorithm can classify the resulting patterns with a high level of accuracy (>95%) without any training, enabling application designers to associate rich touch input vocabularies with physical objects.
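As a simplified illustration of the kind of recognition the abstract describes, the sketch below characterizes a touch pattern by its sorted pairwise finger distances and matches it against stored templates; the published recognizer and its notched-token templates are more elaborate.

# Illustrative sketch of recognizing a token from the touch points it forces:
# describe a touch pattern by its sorted pairwise finger distances (invariant to
# translation and rotation) and pick the closest stored template.
import math
from itertools import combinations

def descriptor(points):
    """Sorted pairwise distances between the (x, y) touch points."""
    return sorted(math.dist(p, q) for p, q in combinations(points, 2))

templates = {   # token name -> touch pattern measured when the token is placed
    "small_triangle": descriptor([(0, 0), (30, 0), (15, 26)]),
    "large_square":   descriptor([(0, 0), (60, 0), (60, 60), (0, 60)]),
}

def recognize(points):
    d = descriptor(points)
    candidates = [name for name, t in templates.items() if len(t) == len(d)]
    return min(candidates,
               key=lambda name: sum(abs(a - b) for a, b in zip(templates[name], d)),
               default=None)

print(recognize([(100, 100), (131, 102), (114, 127)]))   # -> "small_triangle"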

video
CHI 2015 Best Paper Award. doi bibcite
@inproceedings{GB:iskin-2015,
 doi = http://dl.acm.org/citation.cfm?id=2702391,
 award = CHI 2015 Best Paper Award,
 image = GB-iskin-2015.png,
 project = http://embodied.mpi-inf.mpg.de/research/iskin/,
 video = https://youtu.be/9cvZnhvzrBI,
 bdsk-url-1 = http://dl.acm.org/citation.cfm?id=2702391,
}
abstract
We propose iSkin, a novel class of skin-worn sensors for touch input on the body. iSkin is a very thin sensor overlay, made of biocompatible materials, and is flexible and stretchable. It can be produced in different shapes and sizes to suit various locations of the body such as the finger, forearm, or ear. Integrating capacitive and resistive touch sensing, the sensor is capable of detecting touch input with two levels of pressure, even when stretched by 30% or when bent with a radius of 0.5 cm. Furthermore, iSkin supports single or multiple touch areas of custom shape and arrangement, as well as more complex widgets, such as sliders and click wheels. Recognizing the social importance of skin, we show visual design patterns to customize functional touch sensors and allow for a visually aesthetic appearance. Taken together, these contributions enable new types of on-body devices. This includes finger-worn devices, extensions to conventional wearable devices, and touch input stickers, all fostering direct, quick, and discreet input for mobile computing.
project
  video
hal bibcite
@inproceedings{GOSH-ELC:CHI-EA15,
 hal = hal-01147717,
 video = http://sarthakg.in/portfolio/content-page-supervision.html,
}
keywords
Smart Home; Pico-projector; Spatial memory; Visualization; SuperVision
abstract
In this paper, we propose SuperVision, a new interaction technique for distant control of objects in a smart home. This technique aims at enabling users to point towards an object, visualize its current state and select a desired functionality as well. To achieve this: 1) we present a new remote control that contains a pico-projector and a slider; 2) we introduce a visualization technique that allows users to locate and control objects kept in adjacent rooms, by using their spatial memories. We further present a few example applications that convey the possibilities of this technique.

video
CHI 2014 Best Paper Award. hal bibcite
@inproceedings{LIU-ELC:CHI14,
 award = CHI 2014 Best Paper Award,
 hal = hal-00957269,
 image = EffectDisplaySize-CHI14.jpg,
 video = http://www.youtube.com/watch?feature=player_embedded&v=SBXwW5lz-4o,
 bdsk-url-1 = http://dl.acm.org/citation.cfm?doid=2556288.2557020,
}
abstract
The advent of ultra-high resolution wall-size displays and their use for complex tasks require a more systematic analysis and deeper understanding of their advantages and drawbacks compared with desktop monitors. While previous work has mostly addressed search, visualization and sense-making tasks, we have designed an abstract classification task that involves explicit data manipulation. Based on our observations of real uses of a wall display, this task represents a large category of applications. We report on a controlled experiment that uses this task to compare physical navigation in front of a wall-size display with virtual navigation using pan-and-zoom on the desktop. Our main finding is a robust interaction effect between display type and task difficulty: while the desktop can be faster than the wall for simple tasks, the wall gains a sizable advantage as the task becomes more difficult. A follow-up study shows that other desktop techniques (overview+detail, lens) do not perform better than pan-and-zoom and are therefore slower than the wall for difficult tasks.

video
CHI 2014 Honorable Mention Award. hal bibcite
@inproceedings{WAGNER-ELC:CHI14,
 award = CHI 2014 Honorable Mention Award,
 hal = hal-01447407,
 image = MultiFingerChords-CHI14.jpg,
 video = https://www.youtube.com/watch?feature=player_embedded&v=W6aC9cqgrH0,
 bdsk-url-1 = http://dl.acm.org/citation.cfm?doid=2556288.2556958,
}
abstract
Despite the demonstrated benefits of multi-finger input, today's gesture vocabularies offer a limited number of postures and gestures. Previous research designed several posture sets, but does not address the limited human capacity of retaining them. We present a multi-finger chord vocabulary, introduce a novel hand-centric approach to detect the identity of fingers on off-the-shelf hand-held tablets, and report on the detection accuracy. A between-subjects experiment comparing a 'random' to a 'categorized' chord-command mapping found that users retained categorized mappings more accurately over one week than random ones. In response to the logical posture-language structure, people adapted to logical memorization strategies, such as 'exclusion', 'order', and 'category', to minimize the amount of information to retain. We conclude that structured chord-command mappings support learning, short-, and long-term retention of chord-command mappings.

video
hal bibcite
@inproceedings{SP:2013,
 hal = hal-01115851,
 image = watchit.png,
 video = http://www.youtube.com/watch?feature=player_embedded&v=fDxmYJgD6Qw,
 bdsk-url-1 = http://dl.acm.org/citation.cfm?id=2466192&dl,
}
abstract
We present WatchIt, a prototype device that extends interaction beyond the watch surface to the wristband, and two interaction techniques for command selection and execution. Because the small screen of wristwatch computers suffers from visual occlusion and the fat finger problem, we investigate the use of the wristband as an available interaction resource. Not only does WatchIt use a cheap, energy efficient and invisible technology, but also it involves simple, basic gestures that allow good performance after little training, as suggested by the results of a pilot study. We propose a novel gesture technique and an adaptation of an existing menu technique suitable for wristband interaction. In a user study, we investigate their usage in eyes-free contexts, finding that they perform well. Finally, we present a technique where the bracelet is used in addition to the screen to provide precise continuous control on lists. We also report on a preliminary survey of traditional and digital jewelry that points to the high frequency of watches and bracelets in both genders and gives a sense of the tasks people would like to perform on such devices.

video
hal bibcite
@inproceedings{QR:CHI-2013,
 hal = hal-01164207,
 image = AugmentedLetters-CHI13.png,
 video = http://www.dailymotion.com/video/xxobz5_augmented-letters-mnemonic-gesture-based-shortcuts_tech,
 bdsk-url-1 = http://dl.acm.org/citation.cfm?id=2481321,
}
abstract
We propose Augmented Letters, a new technique aimed at augmenting gesture-based techniques such as Marking Menus [9] by giving them natural, mnemonic associations. Augmented Letters gestures consist of the initial letter of command names, sketched by hand in the Unistroke style, and affixed with a straight tail. We designed a tentative touch device interaction technique that supports fast interactions with large sets of commands, is easily discoverable, improves users' recall at no speed cost, and supports fluid transition from novice to expert mode. An experiment suggests that Augmented Letters outperform Marking Menus in terms of user recall.

video
hal bibcite
@inproceedings{SERRANO:CHI-2013,
 hal = hal-01115852,
 image = BezelTap-CHI13.png,
 video = http://www.telecom-paristech.fr/~via/media/videos/bezel-tap-chi13.m4v,
 bdsk-url-1 = http://dl.acm.org/citation.cfm?id=2481421,
}
abstract
We present Bezel-Tap Gestures, a novel family of interaction techniques for immediate interaction on handheld tablets regardless of whether the device is alive or in sleep mode. The technique rests on the close succession of two input events: first a bezel tap, whose detection by accelerometers will awake an idle tablet almost instantly, then a screen contact. Field studies confirmed that the probability of this input sequence occurring by chance is very low, excluding the accidental activation concern. One experiment examined the optimal size of the vocabulary of commands for all four regions of the bezel (top, bottom, left, right). Another experiment evaluated two variants of the technique which both allow two-level selection in a hierarchy of commands, the initial bezel tap being followed by either two screen taps or a screen slide. The data suggests that Bezel-Tap Gestures may serve to design large vocabularies of micro-interactions with a sleeping tablet.
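A minimal sketch of the activation logic described above, assuming an accelerometer spike handler and a screen-contact handler; the thresholds, pairing window and command table are invented for illustration.

# Sketch of the Bezel-Tap activation logic: a sharp accelerometer spike on the
# bezel, followed within a short window by a screen contact, triggers a command.
SPIKE_THRESHOLD = 2.5      # g, magnitude jump treated as a bezel tap
PAIRING_WINDOW = 0.5       # seconds between bezel tap and screen contact

commands = {("top", "left_half"): "previous track",
            ("top", "right_half"): "next track"}

class BezelTapDetector:
    def __init__(self):
        self.last_tap = None   # (timestamp, bezel_region) of the latest spike

    def on_accelerometer(self, timestamp, magnitude, bezel_region):
        if magnitude > SPIKE_THRESHOLD:
            self.last_tap = (timestamp, bezel_region)

    def on_screen_contact(self, timestamp, screen_region):
        if self.last_tap and timestamp - self.last_tap[0] <= PAIRING_WINDOW:
            return commands.get((self.last_tap[1], screen_region))
        return None            # an isolated contact is ignored (device may stay asleep)

d = BezelTapDetector()
d.on_accelerometer(10.00, 3.1, "top")
print(d.on_screen_contact(10.20, "right_half"))   # -> "next track"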

video
hal bibcite
@inproceedings{ELC:AVI-12,
 hal = hal-00757125,
 image = SNotebook-AVI12.jpg,
 video = http://www.telecom-paristech.fr/~via/media/videos/s-notebook.m4v,
}
abstract
This paper presents S-Notebook, a tool that makes it possible to "extend" mobile devices with augmented paper. Paper is used to overcome the physical limitations of mobile devices by offering additional space to annotate digital files and to easily create relationships between them. S-Notebook allows users to link paper annotations or drawings to anchors in digital files without having to learn pre-defined pen gestures. The system stores metadata such as the spatial or temporal location of anchors in the document, as well as the zoom level of the view. Tapping on notes with the digital pen brings up the corresponding documents as they were displayed when the notes were taken. A given piece of augmented paper can contain notes associated with several documents, possibly at several locations. The annotation space can thus serve as a simple way to relate various pieces of one or several digital documents to each other. When the user shares his notes, the piece of paper becomes a tangible token that virtually contains digital information.

video
bibcite
@inproceedings{baglioni11-flickandbrake,
 image = FlickAndBrake-CHI-EA11.png,
 video = http://www.telecom-paristech.fr/~via/media/videos/flick-brake.m4v,
}

video
hal bibcite
@inproceedings{GB:ICMI-11,
 hal = hal-00705413,
 image = GestureAwareRemotes-ICMI11.png,
 video = https://www.youtube.com/watch?v=PfYwcCZapm4,
}
abstract
Interaction with TV sets, set-top boxes or media centers strongly differs from interaction with personal computers: not only does a typical remote control suffer strong form factor limitations but the user may well be slouching in a sofa. In the face of more and more data, features, and services made available on interactive televisions, we propose to exploit the new capabilities provided by gesture-aware remote controls. We report the data of three user studies that suggest some guidelines for the design of a gestural vocabulary and we propose five novel interaction techniques. Study 1 reports that users spontaneously perform pitch and yaw gestures as the first modality when interacting with a remote control. Study 2 indicates that users can accurately select up to 5 items with eyes-free roll gestures. Capitalizing on our findings, we designed five interaction techniques that use either device motion, or button-based interaction, or both. They all favor the transition from novice to expert usage for selecting favorites. Study 3 experimentally compares these techniques. It reveals that motion of the device in 3D space, associated with finger presses at the surface of the device, is achievable, fast and accurate. Finally, we discuss the integration of these techniques into a coherent multimedia system menu.

video
hal bibcite
@inproceedings{MB:ICMI-11,
 hal = hal-00705420,
 image = JerkTilts-ICMI11.png,
 video = http://www.telecom-paristech.fr/~via/media/videos/jerktilts.m4v,
}
keywords
Interaction techniques, handheld devices, input, accelerometers, gestures, Marking menu, self-delimited
abstract
This paper introduces JerkTilts, quick back-and-forth gestures that combine device pitch and roll. JerkTilts may serve as gestural self-delimited shortcuts for activating commands. Because they only depend on device acceleration and rely on a parallel and independent input channel, these gestures do not interfere with finger activity on the touch screen. Our experimental data suggest that recognition rates in an eight-choice selection task are as high with JerkTilts as with thumb slides on the touch screen. We also report data confirming that JerkTilts can be combined successfully with simple touch-screen operation. Data from a field study suggest that inadvertent JerkTilts are unlikely to occur in real-life contexts. We describe three illustrative implementations of JerkTilts, which show how the technique helps to simplify frequently used commands.
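A rough sketch of how such back-and-forth tilt gestures could be detected from accelerometer samples: estimate pitch and roll from gravity, flag an excursion past a threshold, and require the device to return to rest within a short window. Thresholds and timings are assumptions, not the authors' recognizer.

# Rough sketch of detecting a JerkTilt: the device is quickly tilted past a
# pitch/roll threshold and comes back to rest within a short delay, which
# selects one of eight directions.
import math

TILT_THRESHOLD = 25.0      # degrees of pitch/roll counted as an excursion
RETURN_WINDOW = 0.6        # seconds allowed for the back-and-forth motion

def pitch_roll(ax, ay, az):
    """Static pitch and roll (degrees) estimated from gravity on the accelerometer."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def classify(pitch, roll):
    """Map an excursion to one of 8 directions (4 axes + 4 diagonals), or None."""
    p = "+P" if pitch > TILT_THRESHOLD else "-P" if pitch < -TILT_THRESHOLD else ""
    r = "+R" if roll > TILT_THRESHOLD else "-R" if roll < -TILT_THRESHOLD else ""
    return (p + r) or None

def detect(samples):
    """samples: list of (timestamp, ax, ay, az); returns the JerkTilt direction or None."""
    start = None
    for t, ax, ay, az in samples:
        direction = classify(*pitch_roll(ax, ay, az))
        if direction and start is None:
            start = (t, direction)
        elif start and not direction:                 # device is back near rest
            return start[1] if t - start[0] <= RETURN_WINDOW else None
    return None

print(detect([(0.0, 0, 0, 9.8), (0.1, 0, 6, 7.8), (0.3, 0, 0, 9.8)]))   # -> "+R"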

video
bibcite
@inproceedings{SM:CHI-10,
 image = CycloStar-CHI10.png,
 video = https://www.youtube.com/watch?v=tcYX56TegbE#t=23,
}

video
bibcite
@inproceedings{GB:CHI-10,
 image = FingerCount-CHI10.png,
 video = http://www.youtube.com/watch?feature=player_embedded&v=P69spTHzYUM,
}

video
bibcite
@inproceedings{GB:AVI-10,
 image = WaveletMenu-AVI10.png,
 video = http://www.telecom-paristech.fr/~via/media/videos/wavelet-menu.m4v,
}

video
url bibcite
@inproceedings{RA:CHI-09,
 url = http://dl.acm.org/citation.cfm?doid=1518701.1518843,
 image = MicroRolls-CHI09.png,
 video = http://www.youtube.com/watch?feature=player_embedded&v=bfH0-OqgbLw,
 bdsk-url-1 = http://dl.acm.org/citation.cfm?doid=1518701.1518843,
}

video
830-834. url bibcite
@inproceedings{RA:INTERACT-09,
 pages = 830-834,
 url = http://link.springer.com/chapter/10.1007%2F978-3-642-03655-2_90,
 image = TimeTilt-INTERACT09.png,
 video = http://www.youtube.com/watch?feature=player_embedded&v=7JBSpojUBm8,
 bdsk-url-1 = http://link.springer.com/chapter/10.1007%2F978-3-642-03655-2_90,
}

video
url bibcite
@inproceedings{BG:INTERACT-09,
 url = http://link.springer.com/chapter/10.1007%2F978-3-642-03655-2_69,
 image = LeafMenus-INTERACT09.jpg,
 video = http://www.youtube.com/watch?feature=player_embedded&v=bsswQ06pZrU,
 bdsk-url-1 = http://link.springer.com/chapter/10.1007%2F978-3-642-03655-2_69,
}