Loci of Interface

An exploration of the interface boundary

"Loci blurs the boundary between physical and digital world"
Loci of Interface creates new possibilities for extending interaction with computer interfaces to physical objects. It leverages augmented reality and the spatial dimensions of objects, attaching spatial anchors to any object so that it corresponds to a digital interface. With Loci, even non-digital physical objects can be connected to your digital world and act as an adaptive, personal collection of interfaces.

The Loci anchor is the core of this new interactive interface: it links tangible objects to computer user interfaces through spatial intelligence. An anchor can be placed on a selected surface of an object recognized by computer vision, while on the computer side the anchor defines the interactive area of the graphical user interface it corresponds to. Loci anchors are created, placed, and navigated in the environment with a mouse, which embodies the traditional pointer of the computer peripheral.
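The two-sided anchor described above (a point on a recognized object, bound to an interactive area on screen) can be sketched as a small data structure. This is a minimal illustration only; the names `LociAnchor` and `ScreenRegion` are hypothetical and do not come from the project itself.

```python
from dataclasses import dataclass

@dataclass
class ScreenRegion:
    """Interactive area of the GUI that an anchor maps to (in pixels)."""
    x: int
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        """True when a pointer position falls inside the linked GUI area."""
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)

@dataclass
class LociAnchor:
    """A spatial anchor: a point on a recognized object's surface,
    linked to an interactive region of the graphical user interface."""
    object_label: str            # object recognized by computer vision, e.g. "mug"
    surface_uv: tuple            # anchor position on the object's surface (u, v)
    target: ScreenRegion         # GUI area this anchor controls

# Placing an anchor with the mouse pointer, as in the prototype:
anchor = LociAnchor("mug", (0.4, 0.7), ScreenRegion(20, 40, 64, 64))
print(anchor.target.contains(50, 60))  # a click inside the linked area -> True
```

The point of the sketch is the pairing: one record holds both the physical locus (object and surface position) and the digital locus (screen region), so moving either side only updates one structure.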



"Computer interface and environment merge into one"
- How do we enable users to move from a traditional computer interface to the entire environmental interface?
A second layer of information is given to environmental objects and spaces and displayed to users through goggles and smartphone screens; this is currently the primary use of augmented reality interfaces. But do we expect everyone to own a VR headset or wear augmented reality filters? Could there be a more natural way to interact with digital information? I am very interested in exploring the transitional stage of the immediate future.
   
Prototype 1.0-1
The first version of the prototype creates a direct physical address for the digital interface.
Prototype 1.0-2
This demonstration showcases the possibility of seamlessly embedding digital content into our environment.

"The form of the spatial objects as interface" 
- What if we could archive the objects around us as computer interactions?
Computer vision has always been one of the key technologies of spatial intelligence. It can detect changes in the environment and in users in real time, and the camera is essentially a sensor that can acquire and analyze rich information. Object recognition is well developed nowadays: open-source machine learning platforms such as Google TensorFlow, and frameworks such as ARKit on Apple devices or ARCore on Android, are readily available. This is what I have been looking for: the ability to permanently embed a spatial anchor point on our environmental objects, which in turn triggers the interaction behind it.
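The loop from recognition to triggered interaction can be sketched roughly as follows. Everything here is illustrative: `detect_objects` is a stub standing in for any off-the-shelf recognizer (TensorFlow, ARKit/ARCore), and the registry and action names are hypothetical, not the project's actual code.

```python
# Minimal sketch of the recognition-to-interaction loop.
# A real recognizer (TensorFlow, ARKit/ARCore) would replace detect_objects.

# Registry: object label -> the digital interaction its anchor triggers.
anchor_registry = {}

def register_anchor(label, action):
    """Permanently associate a spatial anchor on an object with an action."""
    anchor_registry[label] = action

def detect_objects(frame):
    """Stub for a computer-vision recognizer; returns labels found in the frame."""
    return ["mug", "notebook"]

def process_frame(frame):
    """Trigger the interaction behind each recognized, anchored object."""
    triggered = []
    for label in detect_objects(frame):
        if label in anchor_registry:
            triggered.append(anchor_registry[label]())
    return triggered

# Prototype 2.0-style bindings: the mug zooms the map, the notebook opens a library.
register_anchor("mug", lambda: "map.zoom_in")
register_anchor("notebook", lambda: "library.open")
print(process_frame(frame=None))  # -> ['map.zoom_in', 'library.open']
```

Because the registry is keyed by recognized label rather than by screen position, the same object keeps its anchor wherever it moves in the environment, which is what makes the anchor feel "permanently embedded" in the object.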
  
Prototype 2.0-1
This refined prototype controls an anchor point on the surface of an object (a mug), which is linked to a particular control (the zoom buttons) of the map.

Prototype 2.0-2
This prototype demonstrates a customized, adaptive user interface that lets people create a digital document library interface on their notebook.
Loci works as plug-in software
(Left image) A Loci anchor attached to a linearly moving element of the digital interface.
(Right image) Loci anchors embedded in the folders of the computer OS.
Early experiment journey

EarlyEXP1 (Left) — "Navigation of the hidden data."
How can we understand the meaning of information represented through hidden signals?
EarlyEXP2 (Right) — "Simplifying multimodal interface interaction"
What if we applied familiar user interface conventions to this new spatial interface?
EarlyEXP3 (Left) — "From navigation to sensing an invisible existence"
What if we could recognize the functions of data through direct feedback in 3D space?
EarlyEXP4 (Right) — "Giving Digital Content a Body"
What if the location of a physical body were where the data is stored on the computer?

This project started from the idea that the value of digital content lies in its lack of physical form: it can be transmitted, copied, and exist anywhere in our environment. Yet it always needs a material representation to convey its informational value. If we gave digital content the limitation of spatial dimensions and stored it within a specific scope, could we collect it in our pockets like picking up stones from the roadside? Or would it be like smell, delivering different information to people depending on the substance and its context?
Suppose we could physically interact with digital content existing in space: what sort of feedback would it trigger in people? The approach of this experimental project is to generate hypotheses through a series of imagined scenarios.

