Mastering the art of spatial gestures


#Kuji-kiri

Kuji-kiri, or "nine symbolic cuts", is a practice built around a sequence of nine hand gestures used in Japanese esoteric religion and martial arts.

The gestures derive from the kuji-in, a set of hand seals with roots in Taoism, and are popularly associated with ninjas.

#Kuji-vision at a Glance

Kuji-vision is a precision tool designed to bridge the gap between the physical and digital realms through the nuanced recognition of hand gestures. By tapping into the hand-tracking capabilities of visionOS, Kuji-vision facilitates the creation of more intuitive and interactive applications, enhancing the way users engage with technology.


#Technical Overview

Kuji-vision utilizes Apple's HandTrackingProvider to collect data on hand joints, transforming this data into a structured format we refer to as KujiPoses. These poses are catalogued into what we call a Scroll Library: a comprehensive collection of gestures that serves as the foundation for on-device machine learning model training.
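To make the pipeline concrete, here is a minimal sketch of how hand-joint data can be captured with visionOS's ARKit hand tracking. The `KujiPose` struct and the capture loop are illustrative assumptions, not Kuji-vision's actual internal types:

```swift
import ARKit

// Hypothetical pose snapshot; Kuji-vision's real KujiPose layout is not shown here.
struct KujiPose {
    var chirality: HandAnchor.Chirality
    var joints: [HandSkeleton.JointName: simd_float4x4]
}

let session = ARKitSession()
let handTracking = HandTrackingProvider()

func captureKujiPoses() async throws {
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }
        // Record each tracked joint's transform relative to the hand anchor.
        var joints: [HandSkeleton.JointName: simd_float4x4] = [:]
        for joint in skeleton.allJoints where joint.isTracked {
            joints[joint.name] = joint.anchorFromJointTransform
        }
        let pose = KujiPose(chirality: anchor.chirality, joints: joints)
        // ... append `pose` to the Scroll Library for later training
        _ = pose
    }
}
```

Capturing joint transforms relative to the hand anchor (rather than in world space) keeps poses comparable regardless of where the hand is in the room.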


The core of Kuji-vision lies in its ability not only to detect and catalog hand gestures but also to use those gestures to train a classifier directly on the device. This results in a Core ML model that is both lightweight and powerful, capable of running efficiently on-device for real-time gesture recognition.
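On-device training of a Core ML classifier is typically done with `MLUpdateTask` against a model marked as updatable. The sketch below assumes such a model bundled with the app; the model and file names are hypothetical, not part of Kuji-vision's published API:

```swift
import CoreML

// Sketch of on-device personalization via MLUpdateTask, assuming an
// updatable classifier ("KujiClassifier.mlmodelc") compiled into the bundle.
func retrainClassifier(with samples: [MLFeatureProvider]) throws {
    guard let modelURL = Bundle.main.url(forResource: "KujiClassifier",
                                         withExtension: "mlmodelc") else { return }
    let trainingData = MLArrayBatchProvider(array: samples)
    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: trainingData,
                                configuration: nil) { context in
        // Persist the updated model so later predictions use the new weights.
        let updatedURL = FileManager.default
            .urls(for: .applicationSupportDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("KujiClassifier-updated.mlmodelc")
        try? context.model.write(to: updatedURL)
    }
    task.resume()
}
```

Because training happens entirely on-device, the user's hand-pose data never has to leave the headset.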

#For Developers


  • Integration and Usage: Kuji-vision has been crafted with an eye for simplicity and ease of use. Its API and documentation, which will be available shortly, are designed to facilitate seamless integration into existing projects, allowing developers to quickly adopt and implement gesture-based interactions within their applications.

  • Machine Learning Capabilities: At its heart, Kuji-vision is about leveraging machine learning to enhance gesture recognition. By providing tools for both training and predicting hand poses, it opens up new avenues for creating dynamic and responsive user interfaces.

  • Expanding Gesture Libraries: The flexibility of Kuji-vision allows for the expansion and customization of gesture libraries. This means developers can not only use the predefined set of gestures but also create and integrate their own, tailoring the interaction experience to the specific needs of their applications.

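Once a classifier is trained, querying it at runtime follows the standard Core ML prediction flow. A minimal sketch, assuming a classifier whose input is a flattened vector of joint positions; the feature names ("pose", "label") and vector layout are illustrative assumptions:

```swift
import CoreML

// Hypothetical: classify a flattened joint-position vector into a gesture label.
// Names are illustrative, not Kuji-vision's actual API surface.
func classifyPose(_ jointPositions: [Float], model: MLModel) throws -> String {
    let input = try MLMultiArray(shape: [NSNumber(value: jointPositions.count)],
                                 dataType: .float32)
    for (i, value) in jointPositions.enumerated() {
        input[i] = NSNumber(value: value)
    }
    let features = try MLDictionaryFeatureProvider(
        dictionary: ["pose": MLFeatureValue(multiArray: input)])
    let output = try model.prediction(from: features)
    // Classifier models expose the predicted label as a string feature.
    return output.featureValue(for: "label")?.stringValue ?? "unknown"
}
```

Running this per frame against the hand-tracking stream is what turns the static Scroll Library into real-time gesture recognition.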


#Practical Applications

Kuji-vision's potential applications range from educational tools like sign language apps to sophisticated UI/UX designs for spatial computing. Its ability to process and interpret hand gestures in real-time makes it an invaluable resource for developers looking to push the boundaries of user interaction in their software.


#Conclusion

Kuji-vision is a step forward in the realm of gesture-based interaction, leveraging the capabilities of visionOS and the principles of machine learning to give developers a powerful tool for creating more natural and intuitive user interfaces.


Advancing AR/VR and Spatial Computing? Let's Collaborate on XR Challenges.