Open Sesame? Open Salami! (OSOS)
:: Pervasive Sensing & LLM for Personalized Vocabulary Assessment-Intervention for Children


Children acquire language by interacting with their surroundings. Because each child is exposed to a different language environment, the words they encounter and need in daily life vary. While standard tools assess and intervene on predefined vocabulary sets, speech-language pathologists and parents struggle with the absence of systematic tools for child-specific custom vocabulary, i.e., words outside the standard sets that are personally more important. We propose "Open Sesame? Open Salami! (OSOS)", a personalized vocabulary assessment and intervention system built on pervasive language profiling and targeted storybook generation, developed in collaboration with speech-language pathologists. Melded into a child's daily life and powered by large language models (LLMs), OSOS profiles the child's language environment, extracts priority words from it, and generates bespoke storybooks that naturally incorporate those words. We evaluated OSOS through 4-week-long deployments with 9 families. We report their experiences with OSOS, and its implications for supporting personalization beyond standards.
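
As a rough illustration of the pipeline described above (profile the child's language environment, surface out-of-standard priority words, then prompt an LLM with them), here is a minimal Kotlin sketch. The names, the frequency-based prioritization, and the prompt wording are illustrative assumptions, not the actual OSOS implementation.

    // Hypothetical sketch: pick frequent words from the child's environment that
    // fall outside a standard vocabulary set, then build a storybook prompt.
    fun extractPriorityWords(
        transcripts: List<String>,       // speech transcribed around the child
        standardVocabulary: Set<String>, // e.g., a standard assessment word list
        topK: Int = 5
    ): List<String> =
        transcripts
            .flatMap { it.lowercase().split(Regex("\\W+")) }
            .filter { it.isNotBlank() && it !in standardVocabulary }
            .groupingBy { it }
            .eachCount()
            .entries
            .sortedByDescending { it.value }
            .take(topK)
            .map { it.key }

    fun storybookPrompt(words: List<String>, childName: String): String = """
        Write a short storybook for a young child named $childName.
        Naturally weave in each of these words at least twice: ${words.joinToString(", ")}.
        Keep the sentences simple and repetitive, as in early-reader books.
    """.trimIndent()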

Related publication:
  • (to appear) [ACM CHI 2024] Open Sesame? Open Salami! Personalizing Vocabulary Assessment-Intervention for Children via Pervasive Profiling and Bespoke Storybook Generation

ProxiFit
:: Magnetic field sensing for single-device, non-contact, holistic weight exercise monitoring


Although many works bring exercise monitoring to smartphones and smartwatches, the inertial sensors used in such systems require the device to be in motion to detect exercises. We introduce ProxiFit, a highly practical on-device exercise monitoring system capable of classifying and counting exercises even if the device stays still. Utilizing novel proximity sensing of the natural magnetism in exercise equipment, ProxiFit brings (1) a new category of exercises not involving device motion, such as lower-body machine exercises, and (2) a new off-body exercise monitoring mode where a smartphone can be conveniently viewed in front of the user during workouts. ProxiFit addresses the common issues of faint magnetic sensing by choosing appropriate preprocessing, negating adversarial motion artifacts, and designing a lightweight yet noise-tolerant classifier. Application-specific challenges, such as the wide variety of equipment and the impracticality of obtaining large datasets, are overcome by devising a unique, deliberately challenging training policy. We evaluate ProxiFit on up to 10 weight machines (5 lower- and 5 upper-body) and 4 free-weight exercises, in both wearable and signage modes, with 19 users, at 3 gyms, over 14 months, and verify its robustness against user and weather variations, spatial and rotational deviations in device placement, and interference from neighboring machines.
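
A minimal Kotlin sketch of one plausible sensing front end, to make the idea concrete: compute the orientation-insensitive field magnitude, remove the slowly varying background (Earth's field plus drift) with a moving average, and count repetitions as threshold crossings of the residual. The window size and thresholds are assumptions for illustration; ProxiFit's actual preprocessing and classifier are more sophisticated.

    // Illustrative only: feed magnetometer samples (e.g., from Android's
    // SensorEventListener) into a naive background-subtracting rep counter.
    import kotlin.math.sqrt

    class RepCounter(private val windowSize: Int = 50, private val threshold: Double = 2.0) {
        private val history = ArrayDeque<Double>()
        private var above = false
        var reps = 0
            private set

        fun onMagnetometerSample(x: Float, y: Float, z: Float) {
            val magnitude = sqrt((x * x + y * y + z * z).toDouble())
            history.addLast(magnitude)
            if (history.size > windowSize) history.removeFirst()
            val background = history.average()     // slowly varying component
            val residual = magnitude - background  // equipment-induced change
            if (!above && residual > threshold) { above = true; reps++ }
            if (above && residual < threshold / 2) above = false  // hysteresis
        }
    }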


AI-to-Human Actuation
:: A novel Human+AI symbiotic model in AI-rich pervasive sensory environment


Imagine a near-future smart home. Home-embedded visual AI sensors continuously monitor the resident, inferring her activities and internal states to enable higher-level services. As the embedded sensors passively monitor a freely moving person, good inferences happen only by chance: the confidence of an inference depends heavily on how congruent the person's momentary conditions are with the conditions favored by the AI models, e.g., front-facing or unobstructed. We envision new strategies of AI-to-Human Actuation (AHA) that empower sensory AIs with proactive actuation, so that they induce the person's conditions to become more favorable to the AIs. In this light, we explore the initial feasibility and efficacy of AHA in the context of home-embedded visual AIs. We build a taxonomy of actuations that could be issued to home residents to benefit visual AIs. We deploy AHA in an actual home rich in sensors and interactive devices. With 20 participants, we comprehensively study their experiences with proactive actuation blended into their usual home routines. We also demonstrate the substantially improved inferences of the actuation-empowered AIs over a passive sensing baseline. This paper sets forth an initial step toward interweaving human-targeted AIs with proactive actuation to yield more chances for high-confidence inferences, improving robustness against unfavorable conditions without making the models themselves more sophisticated.
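
The core sense-then-actuate loop is easy to picture in code. Below is a purely illustrative Kotlin sketch assuming hypothetical diagnostic signals and a two-entry actuation taxonomy; the deployed AHA system's taxonomy and policies are far richer.

    // When confidence is low and the cause looks recoverable, pick an actuation
    // that nudges the resident's conditions toward what the visual model favors.
    enum class Actuation { TURN_ON_LAMP, PLAY_SOUND_NEAR_CAMERA }

    data class Inference(
        val label: String,
        val confidence: Double,
        val lowLight: Boolean,    // assumed diagnostic signal
        val faceVisible: Boolean  // assumed diagnostic signal
    )

    fun chooseActuation(inf: Inference, threshold: Double = 0.8): Actuation? = when {
        inf.confidence >= threshold -> null                   // already confident
        inf.lowLight -> Actuation.TURN_ON_LAMP                // improve illumination
        !inf.faceVisible -> Actuation.PLAY_SOUND_NEAR_CAMERA  // induce a head turn
        else -> null                                          // nothing suitable
    }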


SleepGuru
:: Personalized sleep planning system grounded in a human sleep physiology model, featuring real-time re-optimization for real-life actionability and negotiability


Widely accepted sleep guidelines advise regular bedtimes and sleep hygiene, and an individual's adherence is often viewed as a matter of self-regulation and anti-procrastination. We pose a question from a different perspective: what if one's social or professional duties mandate an irregular daily life, incompatible with the premise of the standard guidelines? We propose SleepGuru, an individually actionable sleep planning system featuring real-life compatibility and extended forecasts. Adopting theories of sleep physiology, SleepGuru builds a personalized predictor of the progression of the user's sleep pressure over upcoming schedules and past activities, sourced from her online calendar and wearable fitness tracker. SleepGuru then provides individually actionable multi-day sleep schedules that respect the user's inevitable real-life irregularities while regulating her week-long sleep pressure. We elaborate on the underlying physiological principles and mathematical models, followed by a 3-stage study and deployment. We develop a mobile user interface providing individual predictions and adjustability, backed by cloud-side optimization. We deploy SleepGuru in the wild to 20 users for 8 weeks, finding positive effects on sleep quality, compliance rate, sleep efficiency, alertness, and long-term followability.
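
For intuition about the kind of physiology model such a predictor can build on, here is a minimal Kotlin sketch of homeostatic sleep pressure in the spirit of the classic two-process model: pressure rises exponentially toward a ceiling while awake and decays exponentially during sleep. The time constants are textbook-style illustrative values, not SleepGuru's fitted parameters.

    // Homeostatic sleep pressure (Process S) as a piecewise exponential.
    import kotlin.math.exp

    fun sleepPressureAfter(
        hours: Double,           // time elapsed within this wake or sleep episode
        s0: Double,              // pressure at the start of the episode, in [0, 1]
        awake: Boolean,
        tauWake: Double = 18.2,  // assumed rise time constant (hours)
        tauSleep: Double = 4.2   // assumed decay time constant (hours)
    ): Double =
        if (awake) 1.0 - (1.0 - s0) * exp(-hours / tauWake)  // saturating rise
        else s0 * exp(-hours / tauSleep)                     // exponential decay

Chaining this function over a calendar of wake and sleep intervals yields a week-long pressure forecast that a planner can optimize against.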

Related patents:
  • "Personalized sleep planning system considering individual dynamic constraints and sleep schedule creating method using same" U.S. Patent Application 17/886446. (Application Date: Aug. 11, 2022) 


MomentMeld
:: AI-augmented mobile photographic memento to make our inter-generational interactions, especially with the elderly, richer, healthier, and mutually inspiring


Aging often comes with declining social interaction, a known adversarial factor in the life satisfaction of the senior population. Such decline appears even within the family, a permanent social circle, as adult children eventually become independent. We present MomentMeld, an AI-powered, cloud-backed mobile application that blends into everyday routines and naturally encourages rich and frequent inter-generational interactions in a family, especially between the senior generation and their adult children. First, we design a photographic interaction aid called the mutually stimulatory memento: a cross-generational juxtaposition of semantically related photos that naturally arouses context-specific inter-generational empathy and reminiscence. Second, we build a comprehensive ensemble of AI models consisting of various deep neural networks, and a runtime system that automates the creation of mutually stimulatory mementos on top of the user's usual photo-taking routines. We deploy MomentMeld in the wild with six families for an eight-week period, and discuss the key findings and further implications.
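
One way to picture the juxtaposition step: embed each photo with an image network, then pair a newly taken photo with the semantically closest photo from the other generation. The Kotlin sketch below assumes embeddings are already extracted; it illustrates the pairing idea only, not MomentMeld's actual ensemble models.

    // Pair a new photo with the most semantically similar photo from the other
    // generation's collection, using cosine similarity of embeddings.
    fun cosine(a: FloatArray, b: FloatArray): Double {
        var dot = 0.0; var na = 0.0; var nb = 0.0
        for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
        return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-9)
    }

    fun bestCounterpart(newPhoto: FloatArray, otherGeneration: List<FloatArray>): Int? =
        otherGeneration.indices.maxByOrNull { cosine(newPhoto, otherGeneration[it]) }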

Related patents:
  • "Juxtaposing Contextually Similar Cross-generation Images" U.S. Patent No. 10678845. (Issue Date: June 9, 2020) 


HomeMeld
:: AI-powered robotic telepresence that gives family members a second body, letting them live in two places at the same time


People in work-separated families rely heavily on state-of-the-art face-to-face communication services. Yet despite the ease of use and ubiquitous availability of such services, remote face-to-face communication remains far from the experience of actually living together. We envision that enabling a remote person to be spatially superposed into one's living space would be a breakthrough catalyzing pseudo living-together interactivity. We propose HomeMeld, a zero-hassle self-mobile robotic system serving as a co-present avatar to create a persistent illusion of living together for those who are involuntarily living apart. The key challenges are (1) continuous spatial mapping between two heterogeneous floor plans and (2) navigating the robotic avatar to reflect the other's presence in real time under the limited maneuverability of the robot. We devise a notion of functionally equivalent location and orientation to translate a person's presence in one floor plan into its counterpart in the other. We also develop predictive path warping to seamlessly synchronize the presence of the other. We conducted extensive experiments and deployment studies with real participants.
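
A minimal Kotlin sketch of the functionally-equivalent-location idea: annotate each home with its functional landmarks, find the landmark nearest the person in home A, and send the robot to the corresponding landmark in home B. Landmark names and coordinates here are illustrative assumptions.

    data class Pose(val x: Double, val y: Double, val headingDeg: Double)

    // Functional landmarks annotated per home (illustrative coordinates).
    val homeA = mapOf("sofa" to Pose(1.0, 2.0, 90.0), "dining_table" to Pose(4.0, 1.0, 180.0))
    val homeB = mapOf("sofa" to Pose(6.5, 0.5, 270.0), "dining_table" to Pose(2.0, 3.0, 0.0))

    fun nearestLandmark(p: Pose, home: Map<String, Pose>): String =
        home.minByOrNull { (_, l) -> (l.x - p.x) * (l.x - p.x) + (l.y - p.y) * (l.y - p.y) }!!.key

    // Target pose for the robot avatar in home B mirroring a person in home A.
    fun equivalentPose(personInA: Pose): Pose = homeB.getValue(nearestLandmark(personInA, homeA))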

Related patents:
  • "Autonomous Robotic Avatars" U.S. Patent No. 10589425. (Issue Date: Mar. 17, 2020) 
  • "Autonomous Robotic Avatars" U.S. Patent Pending. Application No. 15/827686. (Application Date: Nov. 30, 2017)


Card-stunt as a Service
:: A new mobile-crowd technology for massively assembled citizens to create a powerful message


Imagine a densely packed crowd gathered to convey a common message, such as people at a candlelight vigil or a protest. We envision mobile computing technology that empowers such a crowd to simply hold their phones up and create a massive collective visualization on top of them. We propose Card-stunt as a Service (CaaS), a service enabling a densely packed crowd to instantly visualize symbols using their mobile devices and a server-side service. The key challenge in realizing instant collective visualization is achieving instant, infrastructure-free, decimeter-level localization of individuals in a massively packed crowd while maintaining low latency. CaaS addresses this challenge with mobile visible-light angle-of-arrival (AoA) sensing and scalable constrained optimization: it reconstructs the relative locations of all individuals and dispatches individualized timed pixels to each one, so that everyone can play their part in the overall visualization. We evaluate CaaS through extensive experiments under diverse real-world settings as well as synthetic workloads scaling up to tens of thousands of people. We also deploy CaaS with 49 participants, who successfully perform a collective visualization cheering for MobiSys.
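
To give a flavor of the optimization, the Kotlin toy below refines 2D positions by gradient descent on squared bearing residuals, given pairwise AoA measurements theta[i][j] from device i to device j. Real CaaS solves a much larger constrained problem with careful scalability engineering; this sketch is an illustration only, and recovers positions only up to translation and scale.

    import kotlin.math.atan2

    // theta[i][j] = measured bearing from device i to device j (radians);
    // Double.NaN where no measurement exists.
    fun localize(theta: Array<DoubleArray>, n: Int, iters: Int = 2000, lr: Double = 0.01): Array<DoubleArray> {
        val pos = Array(n) { doubleArrayOf(Math.random(), Math.random()) }
        repeat(iters) {
            for (i in 0 until n) for (j in 0 until n) {
                if (i == j || theta[i][j].isNaN()) continue
                val dx = pos[j][0] - pos[i][0]
                val dy = pos[j][1] - pos[i][1]
                var r = atan2(dy, dx) - theta[i][j]   // bearing residual
                while (r > Math.PI) r -= 2 * Math.PI  // wrap into (-pi, pi]
                while (r <= -Math.PI) r += 2 * Math.PI
                val d2 = dx * dx + dy * dy + 1e-9
                pos[j][0] -= lr * r * (-dy / d2)      // d(atan2)/d(xj) = -dy/d2
                pos[j][1] -= lr * r * (dx / d2)       // d(atan2)/d(yj) =  dx/d2
                pos[i][0] += lr * r * (-dy / d2)
                pos[i][1] += lr * r * (dx / d2)
            }
        }
        return pos
    }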

Related patents:
  • "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10229512. (Issue Date: Mar. 12, 2019) 
  • "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10242463. (Issue Date: Mar. 26, 2019)
  • "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10362460. (Issue Date: July 23, 2019)
  • "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10559094. (Issue Date: Feb. 11, 2020)
  • "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10567931. (Issue Date: Feb. 18, 2020)
  • "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10623918. (Issue Date: Apr. 14, 2020)
  • "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10674328. (Issue Date: June 2, 2020)
  • "Providing Visualization Data to a Co-located Plurality of Mobile Devices" U.S. Patent No. 10776958. (Issue Date: Sep. 15, 2020)


Telekinetic Thumb
:: Summoning an Out-of-reach Touch Interface Beneath Your Thumbtip


As personal interactive devices become more ingrained in our daily lives, it becomes more important to understand how seamless interaction with those devices can be fostered. The typical mechanism for interfacing with a personal device is the touch screen, on which users use a fingertip or stylus to scroll, type, select, or otherwise control the device. Touch-based techniques, however, can become restrictive or inconvenient in a variety of scenarios. For example, phones and tablets keep increasing in size, making one-handed interaction difficult: one cannot easily hold the phone and reach across the screen with the thumb at the same time.
We present Telekinetic Thumb, a new technique for interacting with personal devices in which the screen and its touch interactions adapt to the user's grip or current touch constraints. Our key philosophy in implementing Telekinetic Thumb was to maximize generalizability and friendliness to application developers. Recognizing finger gestures and altering the foreground user interface could be implemented in many different ways, but such support would be of diminishing value if incorporating Telekinetic Thumb required significant changes to an existing application's code base. In this light, we devised an implementation strategy that asks application developers to change only a single line of code per activity of their application -- as simple as subclassing one of our Telekinetic Thumb-enabled Activity classes.
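
Concretely, the advertised one-line change would look like the Kotlin sketch below. TelekineticThumbActivity is our placeholder name for a member of the Telekinetic Thumb-enabled Activity class family (so this will not compile without that library); everything else is a stock Android activity.

    import android.os.Bundle

    // Before: class MainActivity : AppCompatActivity()
    class MainActivity : TelekineticThumbActivity() {  // the single changed line
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_main)     // existing UI code unchanged
        }
    }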

Related patents: 
  • "Displaying virtual target window on mobile device based on directional gesture" US Patent No. 10042550. (Issue Date: Aug. 7, 2018)
  • "Displaying virtual target window on mobile device based on user intent" U.S. Patent No. 10091344. (Issue Date: Oct. 2, 2018)



Zaturi*
:: We Put Together the 25th Hour for You. Create a Book for Your Baby


We introduce Zaturi, a system enabling parents to create an audio book for their babies by utilizing micro spare time at work. We define micro spare time at work as tiny fragments of time with low cognitive load that frequently occur at work, such as waiting for an elevator. We show that putting together micro spare time at work helps a working parent (1) build a tangible symbol conveying his/her thoughts to the beloved baby and (2) develop his/her own feelings of parental achievement, without compromising regular working hours. Zaturi makes the parent immediately aware of micro spare time and provides a crafted interface to seamlessly record the book piece by piece, so that the baby can enjoy listening to the book in the parent's own voice. Through an extensive design process, we characterize the notion of micro spare time and build a working prototype of Zaturi. We also report parents' perceptions and family reactions after a two-week deployment.

*Zaturi: a Korean word meaning remnants or offcuts that are unlikely to be useful on their own.


SymmetriSense
:: A Single-Smartphone Approach to Enable Surface Interactivity on Arbitrary Glossy Surfaces


Toward intuitive computing interfaces throughout our everyday space, various state-of-the-art technologies have been proposed for near-surface localization of a user's finger input, such as hover or touch. However, these works require specialized hardware that is not commonly available, limiting their adoption. We present SymmetriSense, a technology enabling near-surface 3-dimensional fingertip localization above arbitrary glossy surfaces using a single commodity camera device such as a smartphone. SymmetriSense addresses the challenges of localizing with a single regular camera through a novel technique utilizing the principle of reflection symmetry and the fingertip's natural reflection cast upon surfaces such as mirrors, granite countertops, or televisions. SymmetriSense achieves typical accuracies at sub-centimeter levels in our localization tests with dozens of volunteers, and remains accurate under various environmental conditions. We hope SymmetriSense provides a technical foundation on which various everyday near-surface interactivity can be designed.
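
The geometric cue is simple to sketch: the fingertip and its reflection are mirror images about the surface plane, so their apparent separation shrinks to zero exactly at touch. The Kotlin toy below assumes the fingertip and its reflection are already detected in the image and uses a flat pixel-to-millimeter scale; actual SymmetriSense recovers full 3D position with proper camera geometry.

    data class Point2(val x: Double, val y: Double)

    // Height above the surface is roughly half the fingertip-to-reflection
    // distance, since the surface plane is their perpendicular bisector.
    fun hoverHeightMm(fingertip: Point2, reflection: Point2, mmPerPixel: Double): Double {
        val dx = fingertip.x - reflection.x
        val dy = fingertip.y - reflection.y
        return 0.5 * Math.sqrt(dx * dx + dy * dy) * mmPerPixel
    }

    fun isTouch(fingertip: Point2, reflection: Point2, mmPerPixel: Double): Boolean =
        hoverHeightMm(fingertip, reflection, mmPerPixel) < 2.0  // assumed threshold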

Related patents:
  • "Pre-touch Localization on a Reflective Surface" U.S. Patent No. 9823782. (Issue Date: Nov. 21, 2017)
  • "Tracking of Objects using Pre-touch Localization on a Reflective Surface" U.S. Patent No. 9733764. (Issue Date: Aug. 15, 2017) 
  • "Dynamic Image Compensation for Pre-touch Localization on a Reflective Surface" U.S. Patent No. 10606468. (Issue Date Mar. 31, 2020)


High5
:: Promoting Interpersonal Touch for Vibrant Workplace using Electrodermal Sensor Watches


Interpersonal touch is our most primitive social language, strongly governing our emotional well-being. Despite the positive implications of touch in many facets of our daily social interactions, widespread caution and taboo limit touch-based interactions in workplace relationships, which constitute a significant part of our daily social life. In this paper, we explore new opportunities for ubicomp technology to promote a new meme of casual and cheerful interpersonal touch, such as high-fives, toward facilitating a vibrant workplace culture. Specifically, we propose High5, a mobile service with a smartwatch-style system to promote high-fives in everyday workplace interactions. We first present initial user motivations from semi-structured interviews regarding the potentially controversial idea of High5. We then present our smartwatch-style prototype, which detects high-fives by sensing electric skin potential levels, along with its key technical observations and performance evaluation.
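
As a rough illustration of the sensing idea, the Kotlin sketch below flags a high-five as an abrupt transient in the skin-potential signal at hand contact. The differencing scheme and threshold are assumptions; High5's actual detector is more robust than this.

    // Illustrative only: detect an abrupt jump between consecutive samples of
    // the skin electric potential level.
    fun looksLikeHighFive(samples: DoubleArray, jumpThreshold: Double = 0.5): Boolean {
        for (i in 1 until samples.size) {
            if (Math.abs(samples[i] - samples[i - 1]) > jumpThreshold) return true
        }
        return false
    }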

Related patents:
  • "Systems and Methods for Sensing Interpersonal Touch Using Electrical Properties of Skin" U.S. Patent No. 9703395. (Issue Date: Jul. 11, 2017; Application No. 14/666339; Application Date: Mar. 24. 2015)
  • "Systems and Methods for Sensing Interpersonal Touch Using Electrical Properties of Skin" Korea Patent No. 10-1551572. (Issue Date: Sep. 2, 2015; Application Date: Apr. 30, 2014)

TalkBetter
:: Mobile Intervention Service for Everyday Family Care for Children with Language Delay

Language delay is a developmental problem of children who do not acquire language as expected for their chronological age. Without timely intervention, language delay can become a lifelong risk factor. Speech-language pathologists highlight that effective parent participation in everyday parent-child conversation is important in treating children's language delay. To play this role effectively, however, parents need to alter their own long-established conversation habits, which requires an extended period of conscious effort and vigilance. In this paper, we present new opportunities for mobile and social computing to reinforce everyday parent-child conversation, with therapeutic implications for children with language delay. Specifically, we propose TalkBetter, a mobile in-situ intervention service that helps parents in daily parent-child conversation through real-time meta-linguistic analysis of ongoing conversations. Through extensive field studies with speech-language pathologists and parents, we report the multilateral motivations and implications of TalkBetter. We present our development of the TalkBetter prototype and report its performance evaluation.
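
For a flavor of what real-time meta-linguistic analysis can mean in practice, here is a Kotlin sketch of one plausible rule: alert the parent when they start speaking too soon after (or over) the child's utterance. Speaker-labeled utterance timings are assumed to come from an upstream audio pipeline, and the pause threshold is illustrative; TalkBetter's actual rule set is clinically informed.

    data class Utterance(val speaker: String, val startMs: Long, val endMs: Long)

    // Fire an in-situ alert if the parent leaves the child less than a minimum
    // pause to respond.
    fun shouldAlert(prev: Utterance, next: Utterance, minPauseMs: Long = 1000): Boolean =
        prev.speaker == "child" && next.speaker == "parent" &&
            next.startMs - prev.endMs < minPauseMs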

Related patents:
  • "Language Delay Treatment System and Control Method for the Same" U.S. Patent No. 9875668. (Issue Date: Jan. 23, 2018; Application No. 14/047177; Application Date: Oct. 7, 2013)
  • "Language Delay Treatment System and Control Method for the Same" Korea Patent No. 10-1478459. (Issue Date: Dec. 24, 2014; Application Date: Sep. 5, 2013)


MobyDick
:: Multi-swimmer Exergame that Transforms Swimming into a Collaborative Monster Hunt

The unique aquatic nature of swimming makes it very difficult to use social or technical strategies to mitigate the tedium of monotonous exercise. We propose MobyDick, a smartphone-based multi-player exergame designed to be used while swimming, in which a team of swimmers collaborates to hunt down a virtual monster. We present a novel, holistic game design that takes into account both human factors and technical challenges. First, we perform a comparative analysis of a variety of wireless networking technologies in the aquatic environment and identify their technical constraints. Second, we develop a single-phone inertial and barometric stroke recognition system to enable precise, real-time game inputs. Third, we carefully devise a multi-player interaction mode viable underwater, where human communication abilities are highly limited. Finally, we prototype MobyDick on waterproof off-the-shelf Android phones and deploy it in real swimming pool environments (n = 8). Our qualitative analysis of user interview data reveals unique aspects of multi-player swimming games.
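
One concrete reason the barometer helps: hydrostatic pressure rises by roughly 1 hPa per centimeter of water depth, so submersion is easy to separate from rest at the wall. A minimal Kotlin sketch, with an assumed threshold:

    // Illustrative only: a clear pressure rise over the at-surface baseline
    // indicates the device is submerged (swimmer active).
    fun isSubmerged(pressureHpa: Double, surfacePressureHpa: Double): Boolean =
        pressureHpa - surfacePressureHpa > 5.0  // ~5 cm of water (about 1 hPa/cm)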

Related patents:
  • "Method and System for Real-time Detection of Personalization Swimming Type" Korea Patent No. 10-1579380. (Issue Date: Dec. 15, 2015; Application Date: July 1, 2014)


ExerLink
:: Social Exergame that Transforms Heterogeneous Workouts into a Collaborative Virtual Game

Emerging pervasive games will be immersed in real-life situations and leverage new types of contextual interactions therein. For instance, a player's punching gesture, running activity, or fast heart rate can serve as game inputs. Although contextual interactions are the core building blocks of pervasive games, individual game developers can hardly utilize a rich set of interactions within a single game. Most challenging, it is very difficult for developers to anticipate the dynamic availability of input devices in real life and to adapt to the situation without system-level support; it is likewise challenging to coordinate resource use with other game logic or applications. To address these challenges, we propose Player Space Director (PSD), a novel mobile platform for pervasive games. PSD lets game developers incorporate diverse contextual interactions in their games without worrying about the complications of players' real-life situations, e.g., the heterogeneity, dynamics, or resource scarcity of input devices. We implemented the PSD prototype on mobile devices and a diverse set of sensors and actuators. On top of PSD, we developed three exploratory applications, ULifeAvatar, Swan Boat, and U-Theater, and showed the effectiveness of PSD through extensive deployments of these games.
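
The platform idea can be pictured as a small binding layer: a game declares abstract input needs, and the platform binds whatever concrete devices are currently available, rebinding as they come and go. The Kotlin sketch below is our illustration of that pattern, not the actual PSD API.

    // A game asks for abstract inputs ("punch", "heart_rate"); the binder maps
    // them onto currently available devices and handles devices dropping out.
    interface InputSource {
        val provides: Set<String>                 // e.g., setOf("punch")
        fun subscribe(onEvent: (String) -> Unit)  // delivers input events
    }

    class InputBinder {
        private val devices = mutableListOf<InputSource>()
        fun deviceArrived(d: InputSource) { devices.add(d) }
        fun deviceLost(d: InputSource) { devices.remove(d) }
        fun bind(need: String): InputSource? = devices.firstOrNull { need in it.provides }
    }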


E-Gesture
:: Wearable Gesture Recognition for Energy-efficient Uninterrupted On-the-go Interactions

Gesture is a promising mobile user interface modality that enables eyes-free interaction without stopping or impeding movement. In this paper, we present the design, implementation, and evaluation of E-Gesture, an energy-efficient gesture recognition system using a hand-worn sensor device and a smartphone. E-Gesture employs a novel gesture recognition architecture carefully crafted by studying the sporadic occurrence patterns of gestures in continuous sensor data streams and by analyzing the energy consumption characteristics of both the sensors and the smartphone. We developed a closed-loop collaborative segmentation architecture that can (1) be implemented on resource-scarce sensor devices, (2) adaptively turn off power-hungry motion sensors without compromising recognition accuracy, and (3) reduce false segmentations caused by dynamic changes in body movement. We also developed a mobile gesture classification architecture for smartphones that enables HMM-based classification models to better fit multiple mobility situations.
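
A minimal Kotlin sketch of the duty-cycling intuition: keep the cheap accelerometer always on, and power up the gyroscope only when accelerometer activity suggests a candidate gesture segment. The windowing and threshold are assumptions, not E-Gesture's tuned policy.

    // Gate the power-hungry gyroscope on accelerometer activity.
    fun accelVariance(window: DoubleArray): Double {
        val mean = window.average()
        return window.sumOf { (it - mean) * (it - mean) } / window.size
    }

    fun gyroShouldBeOn(accelWindow: DoubleArray, threshold: Double = 0.2): Boolean =
        accelVariance(accelWindow) > threshold  // motion burst: begin segmentation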
