Remi
Summary
Although the smart kitchen has been heralded as the future of domestic living since The Jetsons, that promise has yet to be realized beyond showrooms and research labs. “Smart” appliances can be awkward to control, especially while cooking: they are typically operated through multiple mobile apps or a smart speaker, and these interfaces require the user to shift their hands or attention away from their task in order to provide input. The appliances also typically can’t detect the user’s context: what they are making, what they are doing, where they are in the home.
Our project introduces the concept of “detached monitoring” in a context-adaptive cooking system. The system has two parts: the Rat, a device mounted above the stove, and the Hat, an augmented reality (AR) headset worn by the user. The Rat uses a thermal camera and an RGB camera to observe the user’s actions and the food being cooked. This information, combined with information from the Hat, is used to determine the user’s context. Instructions and status information are then embedded in the user’s environment via the Hat.
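To make the data flow concrete, here is a minimal sketch (in Python) of how observations from the two devices might be fused into a single user context. Every name here (RatObservation, HatObservation, UserContext, fuse_context) is an illustrative assumption, not the system’s actual API.

```python
# Hypothetical sketch: fusing Rat (above-stove) and Hat (headset)
# observations into one user context. Names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RatObservation:
    pan_temp_c: float               # from the thermal camera
    detected_action: Optional[str]  # e.g. "flip", "stir" (from the RGB camera)

@dataclass
class HatObservation:
    room: str                   # coarse location of the user
    gaze_target: Optional[str]  # what the user is currently looking at

@dataclass
class UserContext:
    at_stove: bool
    current_action: Optional[str]
    pan_temp_c: float

def fuse_context(rat: RatObservation, hat: HatObservation) -> UserContext:
    """Combine above-stove sensing with headset sensing into one context."""
    return UserContext(
        at_stove=(hat.room == "kitchen"),
        current_action=rat.detected_action,
        pan_temp_c=rat.pan_temp_c,
    )
```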
Unlike most AR applications, our system requires little to no explicit input (e.g. gestures or voice commands). Instead, it primarily uses “contextual input”: the actions the user is already taking. When a user flips a pancake or turns on the stove, the system adapts accordingly. The design was inspired by turn-by-turn directions in mobile GPS apps, where the driver never needs to input a “next” command.
The system was piloted with 7 participants in a kitchen setting. The results indicated that participants found a task easier the more detached monitoring it incorporated, and, overall, found detached monitoring intuitive.
Design Pillars
All of the images/videos were produced using the software that runs on the device (they are not renders).
Contextual Input
In our system, AR is enriched with the context of the user’s environment: what we call “contextual input.” The user can work alongside the system, or step away for short periods with the peace of mind that their task is being supervised. Contextual input tells the AR system both when an action should be taken and when an action has been completed.
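As a rough illustration of contextual input, the sketch below models a recipe as a sequence of steps, each completed by an observed action rather than an explicit command, in the spirit of GPS turn-by-turn navigation. The event names and the StepTracker class are hypothetical.

```python
# Hypothetical sketch: advancing recipe steps from observed actions,
# so the user never has to say or tap "next".
RECIPE = [
    {"instruction": "Turn the stove to medium-high", "done_when": "stove_on"},
    {"instruction": "Pour the batter",               "done_when": "pour"},
    {"instruction": "Flip the pancake",              "done_when": "flip"},
]

class StepTracker:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    def on_event(self, event: str) -> str:
        """Advance when the observed action completes the current step."""
        if self.index < len(self.steps) and event == self.steps[self.index]["done_when"]:
            self.index += 1
        if self.index == len(self.steps):
            return "Done!"
        return self.steps[self.index]["instruction"]

tracker = StepTracker(RECIPE)
print(tracker.on_event("stove_on"))  # -> "Pour the batter"
```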
Contextual Anchoring
Our system’s UI is distributed throughout the user’s environment, anchored to “contexts.” This anchoring puts information where it belongs and keeps it out of the user’s way when it isn’t needed.
If a context is not available (e.g. the user is in a different room), we recreate the context virtually and anchor the UI to that recreation (a concept we call the “mini-mirror”).
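One way to picture the anchoring logic, including the mini-mirror fallback, is as a small resolution step: anchor to the real context when it is available, and to a virtual recreation of it otherwise. This sketch is an assumption about the mechanism, not the actual implementation.

```python
# Hypothetical sketch: resolve where a UI element should be anchored.
def resolve_anchor(element_context: str, visible_contexts: set[str]):
    if element_context in visible_contexts:
        return ("world", element_context)    # anchor to the real object
    return ("mini_mirror", element_context)  # anchor to a recreation of it

# In the kitchen, a timer anchors to the actual pot...
print(resolve_anchor("pot", {"pot", "stove_dial"}))  # ('world', 'pot')
# ...from another room, it anchors to a mini-mirror recreation.
print(resolve_anchor("pot", set()))                  # ('mini_mirror', 'pot')
```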
Contextual Mapping
The user interface should conform to the context it’s anchored to. This is particularly useful for instructions. Rather than instructing the user to set their stove to “Medium-High” (which is quite ambiguous), we show the user exactly where to set the dial.
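A sketch of what that mapping might look like: an abstract setting is translated into a physical dial angle so the AR marker can be drawn exactly where the dial should point. The setting-to-angle table here is an assumption; a real system would calibrate it per stove.

```python
# Hypothetical sketch: map a named stove setting to a marker position
# on the physical dial. Angles are assumed, not measured.
import math

DIAL_ANGLES_DEG = {
    "Off": 0, "Low": 60, "Medium": 120, "Medium-High": 150, "High": 180,
}

def dial_marker_position(setting: str, dial_radius_m: float):
    """Return the (x, y) offset from the dial's center for the AR marker."""
    theta = math.radians(DIAL_ANGLES_DEG[setting])
    return (dial_radius_m * math.cos(theta), dial_radius_m * math.sin(theta))

print(dial_marker_position("Medium-High", dial_radius_m=0.03))
```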
If a timer is set for a pot, the timer is wrapped around the rim of that pot. This has the effect of augmenting objects/contexts themselves (making the pot itself smart).
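The pot-rim timer can be sketched the same way: the remaining time maps to an arc around the rim whose sweep shrinks as the timer runs down. Sampling the arc as 2D points is an assumption about the renderer.

```python
# Hypothetical sketch: a timer drawn as an arc around a pot's rim.
import math

def rim_timer_points(remaining_s: float, total_s: float,
                     rim_radius_m: float, n: int = 64):
    """Sample points along the rim covering the remaining fraction of time."""
    sweep = 2 * math.pi * (remaining_s / total_s)
    return [(rim_radius_m * math.cos(t), rim_radius_m * math.sin(t))
            for t in (i * sweep / (n - 1) for i in range(n))]

# A timer at 50% draws a half-circle around the rim.
arc = rim_timer_points(remaining_s=150, total_s=300, rim_radius_m=0.10)
```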
Adaptive Minimalism
The user needs to be able to see as much of the real world as possible. This is especially true in a safety-critical context like cooking. Our system therefore tries to stay out of the user’s way as much as possible.
In this example, the UI is in a “minimized” state until the user looks at it (using Magic Leap’s eye-tracking).
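The gaze-driven behavior can be sketched as a small dwell-based state: an element expands only after the user’s gaze rests on it briefly, and minimizes as soon as the gaze leaves. The dwell threshold and the source of the gaze signal are assumptions; on the headset the signal would come from the platform’s eye-tracking, which is not modeled here.

```python
# Hypothetical sketch: expand a UI element after a short gaze dwell,
# minimize it as soon as the user looks away.
class MinimizableElement:
    DWELL_S = 0.3  # assumed dwell time before expanding

    def __init__(self):
        self.expanded = False
        self._gaze_time = 0.0

    def update(self, gazed_at: bool, dt: float) -> None:
        if gazed_at:
            self._gaze_time += dt
            if self._gaze_time >= self.DWELL_S:
                self.expanded = True
        else:
            self._gaze_time = 0.0
            self.expanded = False
```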