Semester Projects from Medialogy, Aalborg University
Methods & practices for gathering robust VR gesture data
Creating a robust dataset of DSL signs, A to Å and 0 to 10, sampled from sign language experts.
Using hand-tracking on a Quest 2, we created a dataset of 40 signs from Danish Sign Language (DSL). A three-stage data-gathering pipeline was employed:
1. A set of demonstration gestures was created by recording an expert DSL interpreter performing all 40 signs.
2. The set was validated by two DSL linguistics experts from “Afdeling for Dansk Tegnsprog - Dansk Sprognævn”.
3. The validated set was used to gather data from 40 participants with little prior knowledge of DSL.
Subsequently, a time-series model will be trained to demonstrate the dataset's use, after which the dataset will be made publicly available, along with guides on recreating the experiment, processing the data, and extracting features.
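To give a sense of the data processing involved, here is a minimal sketch of the kind of per-frame feature extraction such a pipeline might run on Quest 2 hand-tracking joints. The joint count, wrist index, and feature choices are illustrative assumptions, not the project's exact pipeline.

```python
# Illustrative sketch: turning raw hand-tracking joints into
# translation- and scale-invariant features for a time-series model.
# The joint count and wrist index are assumptions, not the real layout.
import numpy as np

NUM_JOINTS = 24  # hypothetical joints-per-hand count

def frame_features(joints: np.ndarray) -> np.ndarray:
    """joints: (NUM_JOINTS, 3) world-space positions, wrist at index 0."""
    rel = joints - joints[0]              # translate: wrist becomes the origin
    scale = np.linalg.norm(rel, axis=1).max()
    rel = rel / (scale + 1e-8)            # normalize away hand size/distance
    return rel.flatten()                  # (NUM_JOINTS * 3,) feature vector

def sequence_features(frames: list[np.ndarray]) -> np.ndarray:
    """Stack per-frame features into one (T, D) time-series sample."""
    return np.stack([frame_features(f) for f in frames])
```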
My contributions: Programming, VR hand-tracking, feature extraction, and time-series gesture detection modeling.
Multi-modal VR Animation Authoring
Comparing controller- and gesture-based interfaces for animation authoring in VR
Using hand-tracking data gathered from users on a Quest 2, we created a new approach to classifying gestures at runtime in VR scenarios, using a time-series weighted neural network. We compared the usability of this new input modality with that of existing VR controllers in the context of animation authoring. To test it, we created an in-situ animation authoring system and used it to build a sequential task scenario, instructing 30 participants to create a solar system in VR using either input modality.
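The network itself isn't reproduced here, but the runtime side can be sketched as a sliding window over the incoming feature stream, with a confidence threshold so noisy frames don't fire gestures. The window length, threshold, and `model.predict` interface are illustrative assumptions, not the project's implementation.

```python
# Sketch of runtime gesture classification over a sliding window of frames.
# `model.predict` stands in for the trained time-series network; the window
# length and confidence threshold are illustrative assumptions.
from collections import deque
import numpy as np

WINDOW = 30       # roughly one second of hand-tracking feature frames
THRESHOLD = 0.8   # only act on confident predictions

class RuntimeGestureClassifier:
    def __init__(self, model):
        self.model = model
        self.buffer = deque(maxlen=WINDOW)

    def on_frame(self, features: np.ndarray):
        """Call once per tracking frame; returns a gesture id or None."""
        self.buffer.append(features)
        if len(self.buffer) < WINDOW:
            return None                    # not enough history yet
        window = np.stack(self.buffer)     # (WINDOW, D)
        probs = self.model.predict(window[None])[0]
        best = int(np.argmax(probs))
        return best if probs[best] >= THRESHOLD else None
```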
As a challenge, I created an entire XR interaction framework from scratch, inspired by XR Interaction Toolkit’s Interactor/Interactable paradigm, built on a modular event system I made using Scriptable Objects.
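The Scriptable Object event channels themselves are Unity assets, but the underlying pattern is language-agnostic: interactors raise events on a shared channel, and interactables subscribe without either side referencing the other. A minimal Python sketch of that idea (all names hypothetical):

```python
# Language-agnostic sketch of the event-channel pattern behind the framework:
# interactors raise events on a shared channel object; interactables
# subscribe to it, so neither side references the other directly.
from typing import Callable

class EventChannel:
    def __init__(self):
        self._listeners: list[Callable] = []

    def subscribe(self, listener: Callable) -> None:
        self._listeners.append(listener)

    def raise_event(self, *args) -> None:
        for listener in list(self._listeners):  # copy: safe if a listener unsubscribes
            listener(*args)

# Usage: an interactable reacts to "select" events from any interactor.
on_select = EventChannel()
on_select.subscribe(lambda obj: print(f"selected {obj}"))
on_select.raise_event("planet_earth")
```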
My contributions: Programming, spatial interface design, and interaction design based on state-of-the-art animation authoring.
PeriVAS - Peripheral Visual Attention System
Exploring Context-independent Attention Guidance in Virtual Reality
Using the HTC Vive with the Pupil Labs eye-tracking add-on, running a Unity solution, we created two kinds of visual stimuli, a color cue and a motion cue, to compare their attention-guidance performance across the different visual regions of a user's field of view (FoV). Furthermore, we investigated the utility of peripheral cues for guiding attention while maintaining visual diegesis in contexts such as VR movies, games, and educational material.
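A core computation in such a system is deciding which visual region a cue currently occupies relative to the tracked gaze. A minimal sketch follows; the region boundaries in degrees are illustrative assumptions, not the study's exact definitions.

```python
# Sketch: classify a cue's eccentricity relative to the tracked gaze ray.
# The degree boundaries below are illustrative, not the study's definitions.
import numpy as np

def eccentricity_deg(gaze_dir: np.ndarray, cue_dir: np.ndarray) -> float:
    """Angle in degrees between the gaze direction and the direction to the cue."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    c = cue_dir / np.linalg.norm(cue_dir)
    return float(np.degrees(np.arccos(np.clip(np.dot(g, c), -1.0, 1.0))))

def visual_region(ecc: float) -> str:
    if ecc < 5.0:
        return "central"
    if ecc < 30.0:
        return "near periphery"
    return "far periphery"
```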
My contributions: Programming, UX for test conductors & test flow for participants.
"Where’s my inventory?"
A comparative study on spatial reference for inventories in a VR environment
In Virtual Reality, there are numerous ways to position an inventory in relation to the user. Our project compared the performance of two specific types of inventories: one attached to the user, and one attached to an object placed in the interactive scenario. User performance was compared using data about participants' interactions with their assigned inventory throughout a task-based test setup in which they had to complete two dozen tasks using 11 distinct interactable items.
As a side project, I also created seamless portals using render textures.
My contributions: Programming, VR UX design, 3D modeling, and lighting.
Sound-source triangulation
Portable Sound Source Localization for Navigational Assistance
The motivation for this project comes from using sound as a navigational tool, for example when visibility is low or users have hearing problems. We polled three microphones at such small time intervals that we could derive an accurate Time Difference of Arrival (TDoA), i.e. the time elapsed between a sound reaching microphone 1 and reaching microphones 2 and 3, from which planar directionality can be inferred.
This project resulted in a portable sound-source localization device only 12 cm wide and 1 cm tall, able to detect the direction of incoming sounds with sub-degree accuracy. The solution was made possible using a Teensy 4.1, with three microphones mounted equidistantly from one another on a 3D-printed disc. Limitations were, however, discovered at extreme angles out of the disc plane.
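As a rough illustration of the underlying math: for a far-field source, each microphone pair constrains the bearing through the dot product of the mic-separation vector and the unknown direction, so two independent pairs let the planar direction be solved directly. The mic layout, sound speed, and sign conventions below are illustrative assumptions, not the device's exact firmware.

```python
# Far-field sketch: recover a planar bearing from two TDoA measurements.
# Mic layout, sound speed, and sign conventions are illustrative assumptions.
import numpy as np

C = 343.0  # speed of sound in air, m/s
R = 0.06   # assumed mic distance from disc center (12 cm wide disc)

# Three mics spaced 120 degrees apart near the disc edge.
angles = np.radians([90.0, 210.0, 330.0])
mics = R * np.column_stack([np.cos(angles), np.sin(angles)])  # (3, 2)

def bearing_deg(tau_10: float, tau_20: float) -> float:
    """tau_i0: arrival time at mic i minus arrival time at mic 0, in seconds.
    Solves (r_i - r_0) . u = -c * tau_i0 for the unit direction u toward
    the source; valid when the source is far relative to the array size."""
    A = np.array([mics[1] - mics[0], mics[2] - mics[0]])
    b = -C * np.array([tau_10, tau_20])
    u = np.linalg.solve(A, b)
    u /= np.linalg.norm(u)  # project the solution back onto a unit bearing
    return float(np.degrees(np.arctan2(u[1], u[0])))
```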
My contributions: Programming, Hardware implementation, 3D modeling & printing
Hand-tracked TV Remote
Everyone knows how frustrating it is to be unable to find the TV remote; well, now there's a solution for that! We created a time-series-sensitive gesture detection algorithm using a live camera feed, enabling multiple layers of control schemes for remotely controlling TVs while minimizing the possibility of false-positive detections from visual background noise. The solution was created in Python using the OpenCV framework.
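A hedged sketch of the false-positive suppression idea: require the same gesture to persist across several consecutive frames before a command fires, so momentary background motion cannot trigger the TV. The frame threshold and the `detect_gesture` placeholder are illustrative assumptions, not the project's actual detector.

```python
# Sketch: temporal debouncing so a gesture must persist across N consecutive
# frames before a TV command fires. `detect_gesture` is a placeholder for
# the per-frame classifier; the threshold is an illustrative assumption.
import cv2

CONFIRM_FRAMES = 8  # frames a gesture must persist before it counts

def detect_gesture(frame):
    """Placeholder per-frame detector; returns a gesture label or None."""
    raise NotImplementedError

def run(camera_index: int = 0):
    cap = cv2.VideoCapture(camera_index)
    candidate, streak = None, 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        label = detect_gesture(frame)
        streak = streak + 1 if (label is not None and label == candidate) else 0
        candidate = label
        if streak >= CONFIRM_FRAMES:
            print(f"command: {candidate}")  # e.g. volume up, channel down
            candidate, streak = None, 0
    cap.release()
```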
My contributions: Programming, computer vision.