Loci of interface

Exploration of boundary interface

"Loci blurs the boundary between the physical and digital worlds"
Loci of Interface creates new possibilities for extending interaction with computer interfaces to physical objects. It leverages augmented reality and the spatial dimensions of objects: any object can be attached, through a channel of spatial anchors, to a digital interface. With Loci, even non-digital physical objects can be connected to your digital world and act as an adaptive personal interface collection.

The Loci anchor is the core of this interactive interface, linking tangible objects to computer user interfaces through spatial intelligence. An anchor is placed on a selected surface of an object recognized by computer vision, while on the computer side it defines the interactive area of the graphical user interface that corresponds to it. Loci anchors are created, placed, and navigated in the environment with a mouse, which embodies the traditional pointer of the computer peripheral.
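To make this concrete, here is a minimal sketch of what a Loci anchor record might contain: a point on a recognized object's surface paired with the GUI region it controls. All names and fields here are my own assumptions for illustration, not an actual Loci data model.

```python
from dataclasses import dataclass

@dataclass
class LociAnchor:
    object_label: str     # label of the recognized physical object, e.g. "mug"
    surface_point: tuple  # (x, y, z) position of the anchor on the object's surface
    ui_region: tuple      # (left, top, width, height) of the linked GUI area, in pixels

def hit_test(anchor: LociAnchor, cursor: tuple) -> bool:
    """Return True if a 2D cursor position falls inside the anchor's GUI region."""
    left, top, width, height = anchor.ui_region
    x, y = cursor
    return left <= x < left + width and top <= y < top + height

anchor = LociAnchor("mug", (0.12, 0.05, 0.30), (800, 600, 120, 40))
print(hit_test(anchor, (850, 620)))  # True: the cursor falls inside the linked region
```

The hit test stands in for the mouse-driven navigation described above: the same pointer logic that works on the screen can decide whether an interaction belongs to an anchored object.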

"Computer interface and environment merge into one"
- How do we enable users to move from a traditional computer interface to the entire environment as an interface?
A second layer of information is attached to environmental objects and spaces and displayed to users through goggles and smartphone screens; this is currently the primary use of the augmented reality interface. Do we expect everyone to own a VR headset or to wear augmented reality filters? Can we interact with digital information more naturally? I am very interested in exploring the transitional stage in the immediate future.
Experiment & Development
In the initial series of experiments, I intended to explore intangible information in space. What happens when people discover that an event has been triggered at a specific location? Can these events reflect some form of knowledge? And once we recognize that this intangible information is stored at a spatial position, and understand the meaning carried by the interaction, how will we retrieve it again?
Early EXP 1 (Left) — "Navigation of the hidden data"
How can we understand the meaning that information represents through a hidden signal?
Early EXP 2 (Right) — "Simplify multimodal interface interaction"
What if we apply the user interface conventions we are used to in this new spatial interface?
Early EXP 3 (Left) — "From navigation to sensing of invisible existence"
What if we could recognize the functions of data through direct feedback in 3D space?
Early EXP 4 (Right) — "Giving Digital Content a Body"
What if the location of a physical body is where the data is stored on the computer?
"Physical address of digital information"
For me, the work desk is a fascinating context for this spatial interface. We are used to thinking in terms of space, like sticky notes and images on a wall, and spatial proximity (grouping) is the most common way of sorting and navigating folders and files on the computer desktop.
We often use the placement of objects to speed up our workflow, to program ourselves. That is why our desks are able to reveal so much information. Yet the vast majority of our desks have a core theme, which can be laptops, notebooks, drawings, etc. This suggests rich possibilities for collaboration between this spatial interface and the working space.
Will this interactive interface be a new mechanism for working together on virtual and physical tasks? 
Prototype 1.0-1
The first version of the prototype creates a direct physical address for the digital interface.
Prototype 1.0-2
This demonstration showcases the possibility of seamless interaction for embedding digital content into our environment.
"The form of the spatial objects as interface" 
- What if we could archive the objects around us as part of computer interaction?
Computer vision has always been one of the key technologies of spatial intelligence. It has the advantage of detecting changes in the environment and in users in real time, and the camera is technically a sensor that can acquire and analyze further information. Object recognition is well developed nowadays, and open-source machine learning frameworks such as Google's TensorFlow, along with Apple's ARKit and Google's ARCore for Android, are readily available. This technology makes it possible to persistently embed spatial anchor points onto objects in our environment, which in turn trigger the interactions behind them.
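The recognition-to-interaction loop described above could be sketched as follows. The detection format and the registry are assumptions on my part; a real build would feed in results from a model such as one built with TensorFlow or from ARKit/ARCore object detection, but the dispatch logic is framework-agnostic.

```python
# Registry mapping a recognized object label to the interaction it triggers.
anchor_registry = {}

def register_anchor(label, callback):
    """Associate a physical object label with a digital-interface action."""
    anchor_registry[label] = callback

def on_detections(detections):
    """detections: list of (label, confidence) pairs from any recognition model.
    Fires the registered action for each confidently recognized, anchored object."""
    triggered = []
    for label, confidence in detections:
        if confidence >= 0.8 and label in anchor_registry:
            triggered.append(anchor_registry[label]())
    return triggered

register_anchor("notebook", lambda: "open document library")
print(on_detections([("notebook", 0.93), ("pen", 0.55)]))
# ['open document library']  -- the pen is below the confidence threshold
```

The confidence threshold (0.8 here) is arbitrary; in practice it would be tuned against the recognition model's behavior.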
"Using the object's physical representation to allow users to customize its definition in digital world."
Placing a Loci anchor is like implanting a nerve of the digital interface onto the surface of a physical object, so that contact with the physical object produces a reaction in the digital interface.
Current spatial computing interactions, even in AR and VR, still rely on flat user interface design, mainly so as not to confuse the user's sense of virtual versus real objects. But what if an object that is already cognitively present in physical space is also a UI? The advantage of translating a real object into a user interface is that users can define the functions attached to it according to the existing object. The physical representation provided by the object itself can also unify virtual and real mental models, allowing users to build another layer of digital awareness of physical objects.
Skeuomorphism to flat UI, and what comes next?
Prototype 2.0-1
This refined prototype places an anchor point on the surface of an object (a mug) and links it to a particular control (the zoom buttons) of a map.
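The binding in this prototype could be sketched as a simple mapping from anchor points on the mug to the map's zoom controls. The class and anchor names below are invented for illustration; only the idea of routing a physical touch to a specific UI control comes from the prototype.

```python
class MapView:
    """Stand-in for the map interface controlled by the mug's anchors."""
    def __init__(self):
        self.zoom = 10

    def zoom_in(self):
        self.zoom += 1

    def zoom_out(self):
        self.zoom -= 1

map_view = MapView()

# Two hypothetical anchor points on the mug's surface, each bound to one zoom button.
mug_bindings = {
    "mug_rim": map_view.zoom_in,
    "mug_base": map_view.zoom_out,
}

def touch(anchor_id):
    """Route a physical touch on an anchored region to its bound control."""
    if anchor_id in mug_bindings:
        mug_bindings[anchor_id]()

touch("mug_rim")
touch("mug_rim")
touch("mug_base")
print(map_view.zoom)  # 11: zoomed in twice, out once, from a base level of 10
```

Because the binding is just a dictionary, the same mug could be re-bound to an entirely different interface, which is the adaptive quality the project argues for.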

Prototype 2.0-2
This prototype demonstrates a customized, adaptive user interface that lets people create their own digital document library interface on a notebook.
Loci works as a software plugin
(Left image) A Loci anchor attached to a linearly moving element of the digital interface.
(Right image) Loci anchors embedded in folders of the computer OS.
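The folder-anchoring shown in the right image could be sketched as a lookup from anchor ids to folder paths. The anchor names and paths below are purely illustrative examples, not paths the project actually uses.

```python
import os

# Hypothetical mapping from a Loci anchor on a physical object to an OS folder.
folder_anchors = {
    "notebook_cover": os.path.join("~", "Documents", "Sketches"),
    "desk_lamp": os.path.join("~", "Downloads"),
}

def resolve(anchor_id):
    """Return the full folder path a Loci anchor points to, or None if unbound."""
    path = folder_anchors.get(anchor_id)
    return os.path.expanduser(path) if path else None

print(resolve("notebook_cover"))
```

In a plugin build, resolving the anchor would hand the path to the OS file manager, so touching the object opens its folder.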
This project started from the idea that the value of digital content lies in having no physical form: it can be transmitted, copied, and exist anywhere in our environment. Yet it always needs a material representation to convey its informational value. By giving digital content the limitation of a spatial dimension and storing it within a specific scope, can we collect it into our pockets like picking up stones on the roadside?
Assuming that we can physically interact with digital content existing in space, what sort of feedback will people trigger? The approach of this experimental project is to generate hypotheses through a series of imaginations.
