:: AI-augmented mobile photographic memento that makes inter-generational interaction, especially with elderly family members, richer, healthier, and mutually inspiring
Aging often comes with declining social interaction, a well-known risk factor for the life satisfaction of the senior population. Such decline appears even within the family, a permanent social circle, as adult children eventually become independent. We present MomentMeld, an AI-powered, cloud-backed mobile application that blends into everyday routines and naturally encourages rich and frequent inter-generational interactions in a family, especially between the senior generation and their adult children. First, we design a photographic interaction aid called the mutually stimulatory memento, a cross-generational juxtaposition of semantically related photos that naturally arouses context-specific inter-generational empathy and reminiscence. Second, we build a comprehensive ensemble of AI models consisting of various deep neural networks, together with a runtime system that automates the creation of mutually stimulatory mementos on top of the user's usual photo-taking routine. We deploy MomentMeld in the wild with six families for an eight-week period, and discuss the key findings and further implications.
- [ACM CHI 2021] MomentMeld: AI-augmented Mobile Photographic Memento towards Mutually Stimulatory Inter-generational Interaction.
- "Juxtaposing Contextually Similar Cross-generation Images" U.S. Patent No. 10678845. (Issue Date: June 9, 2020)
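The pairing step behind the mutually stimulatory memento can be pictured as a nearest-neighbor match over photo embeddings. The sketch below is a minimal illustration under assumed inputs; the toy `cosine` metric and the `pair_mementos` helper are hypothetical stand-ins, not the paper's ensemble of deep neural networks:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pair_mementos(parent_photos, child_photos):
    """For each parent photo embedding, pick the semantically closest
    child photo embedding (greedy nearest-neighbor pairing)."""
    return {pid: max(child_photos, key=lambda cid: cosine(pvec, child_photos[cid]))
            for pid, pvec in parent_photos.items()}

# Toy 2-D "embeddings" standing in for real photo features:
parents = {"p1": [1.0, 0.0], "p2": [0.0, 1.0]}
children = {"c1": [0.9, 0.1], "c2": [0.1, 0.9]}
print(pair_mementos(parents, children))  # → {'p1': 'c1', 'p2': 'c2'}
```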
:: AI-powered robotic telepresence that enables work-separated family members to live in two places at the same time
People in work-separated families rely heavily on state-of-the-art remote face-to-face communication services. Despite their ease of use and ubiquitous availability, such remote communication still falls far short of the experience of living together. We envision that enabling a remote person to be spatially superposed in one's living space would be a breakthrough that catalyzes pseudo living-together interactivity. We propose HomeMeld, a zero-hassle self-mobile robotic system that serves as a co-present avatar to create a persistent illusion of living together for those who are involuntarily living apart. The key challenges are 1) continuous spatial mapping between two heterogeneous floor plans and 2) navigating the robotic avatar to reflect the other person's presence in real time under the robot's limited maneuverability. We devise the notion of functionally equivalent location and orientation to translate a person's presence from one heterogeneous floor plan to another, and develop predictive path warping to seamlessly synchronize the presence of the other. We conducted extensive experiments and deployment studies with real participants.
- [ACM MobiSys 2018] My Being to Your Place, Your Being to My Place: Co-present Robotic Avatars Create Illusion of Living Together. (Full paper; a related demo winning the Best Demo Award at MobiSys 2019)
- "Autonomous Robotic Avatars" U.S. Patent No. 10589425. (Issue Date: Mar. 17, 2020)
- "Autonomous Robotic Avatars" U.S. Patent Pending. Application No. 15/827686. (Application Date: Nov. 30, 2017)
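The notion of a functionally equivalent location can be illustrated with a minimal sketch: map a position inside one labeled region of floor plan A (say, the kitchen) to the same relative position inside the corresponding region of floor plan B. The axis-aligned-box regions and the `functionally_equivalent` helper are simplifying assumptions for illustration, not the deployed system's mapping:

```python
def functionally_equivalent(point, region_a, region_b):
    """Map a point inside a functional region of floor plan A to the same
    relative position inside the corresponding region of floor plan B.
    Regions are axis-aligned boxes given as (x, y, width, height)."""
    ax, ay, aw, ah = region_a
    bx, by, bw, bh = region_b
    u = (point[0] - ax) / aw   # normalized offset within region A
    v = (point[1] - ay) / ah
    return (bx + u * bw, by + v * bh)

# Center of a 4m x 2m kitchen in plan A maps to the center of the
# (differently sized, differently placed) kitchen in plan B:
print(functionally_equivalent((2.0, 1.0), (0, 0, 4, 2), (10, 10, 2, 2)))
```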
Card-stunt as a Service
:: A new mobile-crowd technology for massively assembled citizens to create a powerful message
Imagine a densely packed crowd that gathers to convey a common message, such as people at a candlelight vigil or a protest. We envision an innovation through mobile computing technologies that empowers such a crowd by enabling them simply to hold their phones up and create a massive collective visualization. We propose Card-stunt as a Service (CaaS), a service that enables a densely packed crowd to instantly visualize symbols using their mobile devices and a server-side service. The key challenge in realizing an instant collective visualization is achieving instant, infrastructure-free, decimeter-level localization of individuals in a massively packed crowd while maintaining low latency. CaaS addresses this challenge with mobile visible-light angle-of-arrival (AoA) sensing and scalable constrained optimization. It reconstructs the relative locations of all individuals and dispatches individualized timed pixels to each one so that everyone can play their part in the overall visualization. We evaluate CaaS with extensive experiments under diverse real-world settings as well as synthetic workloads scaling up to tens of thousands of people. We also deploy CaaS to 49 participants, who successfully perform a collective visualization cheering MobiSys.
- [ACM MobiSys 2017] Card-stunt as a Service: Empowering a Massively Packed Crowd for Instant Collective Expressiveness. (Full paper; a related video winning the Best Video Award)
- "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10229512. (Issue Date: Mar. 12, 2019)
- "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10242463. (Issue Date: Mar. 26, 2019)
- "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10362460. (Issue Date: July 23, 2019)
- "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10559094. (Issue Date: Feb. 11, 2020)
- "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10567931. (Issue Date: Feb. 18, 2020)
- "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10623918. (Issue Date: Apr. 14, 2020)
- "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10674328. (Issue Date: June 2, 2020)
- "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10776958. (Issue Date: Sep. 15, 2020)
- [NewScientist] App lets stadium crowds display giant messages with their phones.
- [IBM Research Blog] New app lets crowds use their phones to display a single message.
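The final dispatch step, handing each participant the pixel they should display, can be sketched as snapping reconstructed positions onto a target bitmap. The grid-snapping scheme and the `assign_pixels` helper below are hypothetical simplifications; the actual pipeline first reconstructs the positions via AoA sensing and constrained optimization:

```python
def assign_pixels(positions, bitmap):
    """Snap each participant's reconstructed (x, y) position to a cell of
    the target bitmap and assign them that cell's color to display."""
    rows, cols = len(bitmap), len(bitmap[0])
    xs = [p[0] for p in positions.values()]
    ys = [p[1] for p in positions.values()]
    span_x = max(xs) - min(xs) + 1e-9   # avoid division by zero
    span_y = max(ys) - min(ys) + 1e-9
    out = {}
    for pid, (x, y) in positions.items():
        c = min(int((x - min(xs)) / span_x * cols), cols - 1)
        r = min(int((y - min(ys)) / span_y * rows), rows - 1)
        out[pid] = bitmap[r][c]
    return out

# Four people at the corners of a square render a 2x2 bitmap:
positions = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (0.0, 1.0), "d": (1.0, 1.0)}
bitmap = [["R", "G"], ["B", "W"]]
print(assign_pixels(positions, bitmap))  # → {'a': 'R', 'b': 'G', 'c': 'B', 'd': 'W'}
```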
:: Summoning Out-of-reach Touch Interface Beneath Your Thumbtip
As personal interactive devices become more ingrained in our daily lives, it becomes more important to understand how seamless interaction with those devices can be fostered. A typical mechanism for interfacing with a personal device is the touch screen, on which users use a fingertip or stylus to scroll, type, select, or otherwise control the device. Touch-based techniques, however, can become restrictive or inconvenient in a variety of scenarios. For example, personal devices such as phones and tablets continue to grow in size, making one-handed interaction difficult: one cannot easily hold the phone and reach across the screen with the thumb at the same time.
We present Telekinetic Thumb, a new technique for interacting with personal devices in which the screen and touch interactions adapt to the user's grip or current touch constraints. Our key philosophy in implementing Telekinetic Thumb was to maximize generalizability and friendliness to application developers. Recognizing finger gestures and altering the foreground user interface could be implemented in multiple ways, but the value would diminish if incorporating the feature required significant changes to an existing application's code base. In this light, we devised an implementation strategy that asks application developers to change only a single line of code per activity of their applications: as simple as subclassing one of our Telekinetic Thumb-enabled Activity classes.
- [ACM MobiSys 2019] Demo: Telekinetic Thumb Summons Out-of-reach Touch Interface Beneath Your Thumbtip.
- "Displaying virtual target window on mobile device based on directional gesture" US Patent No. 10042550. (Issue Date: Aug. 7, 2018)
- "Displaying virtual target window on mobile device based on user intent" U.S. Patent No. 10091344. (Issue Date: Oct. 2, 2018)
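The one-line integration strategy, subclassing a feature-enabled Activity, can be sketched language-agnostically. The real implementation targets Android's Activity classes; every name below (`Activity`, `TelekineticActivity`, `PhotoViewer`) is an illustrative assumption, not the actual API:

```python
class Activity:
    """Stand-in for a platform UI screen class (illustrative)."""
    def render(self, ui):
        return ui

class TelekineticActivity(Activity):
    """Hypothetical feature-enabled base class: repositions the rendered
    UI near the user's thumb instead of its default origin."""
    def __init__(self, thumb_anchor=(0, 0)):
        self.thumb_anchor = thumb_anchor

    def render(self, ui):
        content = super().render(ui)
        return {"content": content, "origin": self.thumb_anchor}

# The developer's one-line change: subclass the feature-enabled base
# instead of the plain Activity.
class PhotoViewer(TelekineticActivity):
    pass

viewer = PhotoViewer(thumb_anchor=(40, 900))
print(viewer.render("gallery"))  # → {'content': 'gallery', 'origin': (40, 900)}
```

The design choice this illustrates is that the feature lives entirely in the base class, so existing screens opt in without touching their own rendering logic.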
:: We Put Together the 25th Hour for You. Create a Book for Your Baby
We introduce Zaturi, a system enabling parents to create an audio book for their babies by utilizing micro spare time at work. We define micro spare time at work as tiny fragments of time with low cognitive loads that frequently occur at work, such as waiting for an elevator. We show that putting together micro spare time at work helps a working parent (1) build a tangible symbol conveying his/her thoughts to the beloved baby and (2) develop his/her own feelings of parental achievement without compromising regular working hours. Zaturi lets the parent immediately be aware of micro spare time and provides a crafted interface to seamlessly record the book piece by piece, so that the baby can enjoy listening to the book recorded in the parent’s own voice. Through an extensive design process, we characterize the notion of micro spare time and build a working prototype of Zaturi. We also report parents’ perceptions and family reactions after a two-week deployment.
*Zaturi: a Korean word for remnants or offcuts that are of little use on their own.
:: A Single-Smartphone Approach to Enable Surface Interactivity on Arbitrary Glossy Surfaces
In pursuit of intuitive computing interfaces throughout our everyday space, various state-of-the-art technologies have been proposed for near-surface localization of a user's finger input, such as hover or touch. However, these works require specialized hardware that is not commonly available, limiting their adoption. We present SymmetriSense, a technology that enables near-surface three-dimensional fingertip localization above arbitrary glossy surfaces using a single commodity camera device such as a smartphone. SymmetriSense addresses the challenges of localizing with a single regular camera through a novel technique that exploits the principle of reflection symmetry and the fingertip's natural reflection cast upon surfaces such as mirrors, granite countertops, and televisions. SymmetriSense achieves typical accuracy at the sub-centimeter level in localization tests with dozens of volunteers and remains accurate under various environmental conditions. We hope SymmetriSense provides a technical foundation on which various everyday near-surface interactions can be designed.
- [ACM CHI 2016] SymmetriSense: Enabling Near-Surface Interactivity on Glossy Surfaces using a Single Commodity Smartphone.
- "Pre-touch Localization on a Reflective Surface" U.S. Patent No. 9823782. (Issue Date: Nov. 21, 2017)
- "Tracking of Objects using Pre-touch Localization on a Reflective Surface" U.S. Patent No. 9733764. (Issue Date: Aug. 15, 2017)
- "Dynamic Image Compensation for Pre-touch Localization on a Reflective Surface" U.S. Patent No. 10606468. (Issue Date: Mar. 31, 2020)
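The core geometric idea can be sketched directly: under reflection symmetry, the glossy surface lies midway between the fingertip and its mirror image in the camera frame, so the hover height is half their separation. The pixel-to-centimeter calibration and the touch threshold below are assumed values for illustration, not the paper's calibration procedure:

```python
def fingertip_height_cm(tip_y, reflection_y, pixels_per_cm):
    """By reflection symmetry, the surface plane sits midway between the
    fingertip and its reflection, so height is half the pixel separation,
    scaled by an assumed camera calibration (pixels per centimeter)."""
    return abs(reflection_y - tip_y) / 2.0 / pixels_per_cm

def is_touch(tip_y, reflection_y, pixels_per_cm, threshold_cm=0.3):
    """Declare a touch when the fingertip hovers within a small margin."""
    return fingertip_height_cm(tip_y, reflection_y, pixels_per_cm) < threshold_cm

# 40 px between fingertip and reflection at 20 px/cm → 1 cm hover:
print(fingertip_height_cm(100, 140, 20))  # → 1.0
print(is_touch(100, 102, 20))             # → True
```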
:: Promoting Interpersonal Touch for Vibrant Workplace using Electrodermal Sensor Watches
Interpersonal touch is our most primitive social language, strongly governing our emotional well-being. Despite the positive implications of touch in many facets of our daily social interactions, we find widespread caution and taboo limiting touch-based interactions in workplace relationships, which constitute a significant part of our daily social life. In this paper, we explore new opportunities for ubicomp technology to promote casual and cheerful interpersonal touch, such as high-fives, toward facilitating a vibrant workplace culture. Specifically, we propose High5, a mobile service with a smartwatch-style system to promote high-fives in everyday workplace interactions. We first present initial user motivations from semi-structured interviews regarding the potentially controversial idea of High5. We then present our smartwatch-style prototype that detects high-fives by sensing electric skin potential levels, along with its key technical observations and performance evaluation.
- [ACM UbiComp 2014] High5: Promoting Interpersonal Hand-to-Hand Touch for Vibrant Workplace with Electrodermal Sensor Watches
- "Systems and Methods for Sensing Interpersonal Touch Using Electrical Properties of Skin" U.S. Patent No. 9703395. (Issue Date: Jul. 11, 2017; Application No. 14/666339; Application Date: Mar. 24. 2015)
- "Systems and Methods for Sensing Interpersonal Touch Using Electrical Properties of Skin" Korea Patent No. 10-1551572. (Issue Date: Sep. 2, 2015; Application Date: Apr. 30, 2014)
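One simple way to picture the detection is as near-simultaneous spikes in the two wearers' skin-potential streams. The threshold scheme and the `detect_high_five` helper are hypothetical stand-ins for the prototype's actual signal processing:

```python
def detect_high_five(sig_a, sig_b, spike_thresh=0.5, max_offset=3):
    """Flag a high-five when both wearers' skin-potential streams
    (arbitrary units, same sample rate) show an abrupt rise within a few
    samples of each other. Thresholds are illustrative assumptions."""
    spikes_a = [i for i in range(1, len(sig_a))
                if sig_a[i] - sig_a[i - 1] > spike_thresh]
    spikes_b = [i for i in range(1, len(sig_b))
                if sig_b[i] - sig_b[i - 1] > spike_thresh]
    return any(abs(i - j) <= max_offset for i in spikes_a for j in spikes_b)

a = [0.0, 0.0, 1.0, 1.0, 0.0]   # spike at sample 2
b = [0.0, 0.0, 0.0, 1.0, 1.0]   # spike at sample 3 → coincident
print(detect_high_five(a, b))   # → True
```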
:: Mobile Intervention Service for Everyday Family Care for Children with Language Delay
Language delay is a developmental problem of children who do not acquire language as expected for their chronological age. Without timely intervention, language delay can act as a lifelong risk factor. Speech-language pathologists highlight that effective parent participation in everyday parent-child conversation is important in treating children's language delay. To play this role effectively, however, parents need to alter their own long-established conversation habits, which requires an extended period of conscious effort and alertness. In this paper, we present new opportunities for mobile and social computing to reinforce everyday parent-child conversation with therapeutic implications for children with language delay. Specifically, we propose TalkBetter, a mobile in-situ intervention service that helps parents in daily parent-child conversation through real-time meta-linguistic analysis of ongoing conversations. Through extensive field studies with speech-language pathologists and parents, we report the multilateral motivations for and implications of TalkBetter. We present our development of the TalkBetter prototype and report its performance evaluation.
- [ACM CSCW 2014] TalkBetter: Family-driven Mobile Intervention Care for Children with Language Delay (Best Paper Award)
- "Language Delay Treatment System and Control Method for the Same" U.S. Patent No. 9875668. (Issue Date: Jan. 23, 2018; Application No. 14/047177; Application Date: Oct. 7, 2013)
- "Language Delay Treatment System and Control Method for the Same" Korea Patent No. 10-1478459. (Issue Date: Dec. 24, 2014; Application Date: Sep. 5, 2013)
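A flavor of the meta-linguistic rules such a service might monitor can be sketched over diarized conversation turns. The two rules and their thresholds below are illustrative assumptions, not TalkBetter's clinically derived guidelines:

```python
def check_turns(turns, max_parent_utterance=10.0, min_pause=1.0):
    """turns: list of (speaker, start_sec, end_sec), in order.
    Flags two hypothetical rule violations: a parent utterance that runs
    too long, and a parent turn that begins before the child has had a
    pause to respond."""
    alerts = []
    for k, (speaker, start, end) in enumerate(turns):
        if speaker != "parent":
            continue
        if end - start > max_parent_utterance:
            alerts.append((k, "too_long"))
        if k > 0:
            prev_speaker, _, prev_end = turns[k - 1]
            if prev_speaker == "child" and start - prev_end < min_pause:
                alerts.append((k, "no_wait"))
    return alerts

turns = [("child", 0.0, 2.0), ("parent", 2.3, 15.0)]
print(check_turns(turns))  # → [(1, 'too_long'), (1, 'no_wait')]
```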
:: Multi-swimmer Exergame that Transforms Swimming into a Collaborative Monster Hunting
The unique aquatic nature of swimming makes it very difficult to use social or technical strategies to mitigate the tedium of monotonous exercise. We propose MobyDick, a smartphone-based multi-player exergame designed to be used while swimming, in which a team of swimmers collaborates to hunt down a virtual monster. In this paper, we present a novel, holistic game design that takes into account both human factors and technical challenges. First, we perform a comparative analysis of a variety of wireless networking technologies in the aquatic environment and identify their technical constraints. Second, we develop a single-phone inertial and barometric stroke recognition system to enable precise, real-time game input. Third, we carefully devise a multi-player interaction mode viable underwater, where human communication is severely limited. Finally, we prototype MobyDick on waterproof off-the-shelf Android phones and deploy it in real swimming pools (n = 8). Our qualitative analysis of user interview data reveals unique aspects of multi-player swimming games.
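The stroke-recognition input layer can be pictured as peak detection over an inertial magnitude stream. The threshold, refractory gap, and the `count_strokes` helper below are illustrative assumptions, not the deployed recognizer:

```python
def count_strokes(accel_mag, threshold=1.5, min_gap=10):
    """Count stroke peaks in an accelerometer-magnitude stream
    (hypothetical units). A stroke is a local peak above threshold,
    with a refractory gap (in samples) to avoid double counting."""
    strokes = 0
    last = -min_gap
    for i in range(1, len(accel_mag) - 1):
        if (accel_mag[i] > threshold
                and accel_mag[i] >= accel_mag[i - 1]
                and accel_mag[i] >= accel_mag[i + 1]
                and i - last >= min_gap):
            strokes += 1
            last = i
    return strokes

sig = [1.0] * 50
sig[5] = sig[20] = sig[35] = 2.0   # three synthetic stroke peaks
print(count_strokes(sig))          # → 3
```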
- [ACM TOSN] Designing Interactive Multi-swimmer Exergames: A Case Study
- [ACM SenSys 2014] MobyDick: An Interactive Multi-swimmer Exergame
- [ACM UbiComp 2013 Video & Poster] Dungeons and Swimmers: Designing an Interactive Exergame for Swimming
- "Method and System for Real-time Detection of Personalization Swimming Type" Korea Patent No. 10-1579380. (Issue Date: Dec. 15, 2015; Application Date: July 1, 2014)
:: Social Exergame that Transforms Heterogeneous Workouts into a Collaborative Virtual Game
Emerging pervasive games will be immersed in real-life situations and leverage new types of contextual interactions therein. For instance, a player's punching gesture, running activity, and elevated heart rate can all serve as game inputs. Although contextual interaction is the core building block of pervasive games, individual game developers can hardly utilize a rich set of interactions within a single game. Most challenging, it is significantly difficult for developers to anticipate the dynamic availability of input devices in real life and adapt to the situation without system-level support. It is also challenging to coordinate resource use with other game logic or applications. To address these challenges, we propose Player Space Director (PSD), a novel mobile platform for pervasive games. PSD lets game developers incorporate diverse contextual interactions into their games without dealing with the complications of players' real-life situations, e.g., the heterogeneity, dynamics, and resource scarcity of input devices. We implemented the PSD prototype on mobile devices and a diverse set of sensors and actuators. On top of PSD, we developed three exploratory applications (ULifeAvatar, Swan Boat, and U-Theater) and showed the effectiveness of PSD through extensive deployments of those games.
- [ACM CHI 2014] Human Factors of Speed-based Exergame Controllers (Honorable Mention Award)
- [ACM MobiSys 2012] ExerLink: enabling pervasive social exergames with heterogeneous exercise devices (Best Demo Award)
- [ACM MobiGames 2012] Toward a mobile platform for pervasive games
- [IEEE SECON 2012] ExerLink – Enabling Social Exergames with Heterogeneous Exercise Devices (Honorable Mention for Demo)
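PSD's core abstraction, decoupling abstract game inputs from whichever physical devices happen to be available, can be sketched as a capability registry. The class and method names below are a hypothetical illustration, not PSD's actual interface:

```python
class PlayerSpaceDirector:
    """Sketch of a platform that maps abstract input capabilities
    (e.g., 'heart_rate', 'punch') to whichever device currently
    provides them, so games survive devices coming and going."""
    def __init__(self):
        self.devices = {}   # capability -> callable reading that device

    def attach(self, capability, reader):
        self.devices[capability] = reader

    def detach(self, capability):
        self.devices.pop(capability, None)

    def read(self, capability, default=None):
        """Games read capabilities; a missing device yields the default
        instead of crashing the game logic."""
        reader = self.devices.get(capability)
        return reader() if reader else default

psd = PlayerSpaceDirector()
psd.attach("heart_rate", lambda: 120)   # a heart-rate monitor joins
print(psd.read("heart_rate"))           # → 120
print(psd.read("punch", default=0))     # no glove attached → 0
```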
:: Wearable Gesture Recognition for Energy-efficient Uninterrupted On-the-go Interactions
Gesture is a promising mobile user-interface modality that enables eyes-free interaction without stopping or impeding movement. In this paper, we present the design, implementation, and evaluation of E-Gesture, an energy-efficient gesture recognition system using a hand-worn sensor device and a smartphone. E-Gesture employs a novel gesture recognition architecture carefully crafted by studying the sporadic occurrence patterns of gestures in continuous sensor data streams and analyzing the energy consumption characteristics of both sensors and smartphones. We developed a closed-loop collaborative segmentation architecture that can (1) be implemented on resource-scarce sensor devices, (2) adaptively turn off power-hungry motion sensors without compromising recognition accuracy, and (3) reduce false segmentations caused by dynamic changes in body movement. We also developed a mobile gesture classification architecture for smartphones that enables HMM-based classification models to better fit multiple mobility situations.
- [ACM SenSys 2011] E-Gesture: a collaborative architecture for energy-efficient gesture recognition with hand-worn sensor and mobile devices
- [ACM MobiSys 2011 Demo] Demo: e-gesture - a collaborative architecture for energy-efficient gesture recognition with hand-worn sensor and mobile devices
- [IEEE SECON 2012] E-Gesture – A Collaborative Architecture for Energy-efficient Gesture Recognition with Hand-worn Sensor and Mobile Devices (Best Demo Award)
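The duty-cycling idea, running expensive processing only while a cheap motion cue indicates activity, can be sketched as a simple gated segmenter. The thresholds and the `gated_segments` helper are illustrative assumptions, not E-Gesture's trained segmentation pipeline:

```python
def gated_segments(gyro, wake_thresh=0.8, sleep_after=5):
    """Emit candidate gesture segments [start, end) from a gyroscope
    magnitude stream: open a segment when motion exceeds wake_thresh,
    close it after sleep_after consecutive quiet samples. In a real
    system, power-hungry sensors and classifiers would run only inside
    these segments."""
    segments = []
    start = None
    idle = 0
    for i, g in enumerate(gyro):
        if abs(g) > wake_thresh:
            if start is None:
                start = i       # motion began: wake up
            idle = 0
        elif start is not None:
            idle += 1
            if idle >= sleep_after:
                segments.append((start, i - idle + 1))  # trim quiet tail
                start = None
                idle = 0
    if start is not None:
        segments.append((start, len(gyro) - idle))
    return segments

# Motion at samples 1-3, then quiet → one segment covering the burst:
print(gated_segments([0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]))  # → [(1, 4)]
```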