Output list
Conference proceeding
Unsupervised Symbolization with Adaptive Features for LoRa-Based Localization and Tracking
Date presented 18/12/2024
2024 International Conference on Sustainable Technology and Engineering (i-COSTE)
International Conference on Sustainable Technology and Engineering (i-COSTE), 18/12/2024–20/12/2024, Perth, WA
While LoRa overcomes the high power consumption and deployment costs of GPS and mobile networks, it faces challenges in localization accuracy. This paper presents a method for LoRa-based localization and tracking that analyzes received signal features through unsupervised symbolization, combining partitioning, D-Markov machines, and the Chinese restaurant process. In particular, a novel adaptive feature extraction technique is proposed for the partitioning stage to overcome the problems of over-tracking and under-tracking. Mean spectral kurtosis analysis is performed across several partitioning techniques to assess their symbolization effectiveness and select the most appropriate one, improving the localization and tracking accuracy of target objects through robustness to noise and multipath effects. The proposed method learns and estimates the distance range simultaneously, eliminating the need for a separate offline training phase and the storage of reference coordinates. Experimental results using LoRa demonstrate the proposed method's efficacy in real-time localization and tracking, and its superiority over a state-of-the-art method.
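For readers unfamiliar with the symbolization pipeline named in the abstract, the sketch below illustrates the general pattern of maximum-entropy partitioning followed by a D-Markov machine on a synthetic RSSI-like trace. This is a minimal illustration of the underlying techniques, not the paper's implementation; the function names, the alphabet size, the depth D, and the toy signal are all assumptions.

```python
import numpy as np

def max_entropy_partition(signal, n_symbols):
    """Equal-frequency (maximum-entropy) partitioning: bin edges are
    placed at quantiles so each symbol occurs with roughly equal
    probability. Returns symbols in 0..n_symbols-1."""
    edges = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(signal, edges)

def d_markov_transition_matrix(symbols, n_symbols, D=1):
    """Estimate the state-transition matrix of a D-Markov machine,
    where each state is a block of D consecutive symbols."""
    # Encode each length-D window of symbols as a single state index.
    states = np.zeros(len(symbols) - D + 1, dtype=int)
    for i in range(D):
        states = states * n_symbols + symbols[i:len(symbols) - D + 1 + i]
    counts = np.zeros((n_symbols ** D, n_symbols))
    # Count transitions from each state to the next observed symbol.
    for s, nxt in zip(states[:-1], symbols[D:]):
        counts[s, nxt] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize rows to probabilities; rows with no observations stay zero.
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Toy usage on a synthetic RSSI-like trace (illustrative only).
rng = np.random.default_rng(0)
rssi = -70 + 5 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 1, 500)
symbols = max_entropy_partition(rssi, n_symbols=4)
P = d_markov_transition_matrix(symbols, n_symbols=4, D=2)
print(P.shape)  # (16, 4): 4^2 states, 4 possible next symbols
```

The resulting transition matrix summarizes the signal's symbolic dynamics; the paper's adaptive feature extraction and Chinese-restaurant-process steps would replace the fixed partitioning and fixed alphabet size used in this sketch.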
Conference proceeding
Published 2024
Proceedings of the 2024 25th International Conference on Digital Image Computing: Techniques and Applications (DICTA 2024), pp. 9–16
25th International Conference on Digital Image Computing: Techniques and Applications (DICTA 2024), 27/11/2024–29/11/2024, Perth, WA
Learning to generate the motions of thin structures, such as plant leaves, in dynamic view synthesis is challenging because thin structures usually undergo small but fast, non-rigid motions as they interact with air and wind. Given a set of RGB images or videos of a scene containing moving thin structures, existing methods that map the scene to a canonical space for rendering novel views fail because the object movements are too subtle relative to the background. Disentangling objects with thin parts from the background scene is also difficult when those parts move rapidly. To address these issues, we propose a Neural Radiance Field (NeRF)-based framework that accurately reconstructs thin structures such as leaves and captures their subtle, fast motions. The framework learns the geometry of a scene by mapping the dynamic images to a canonical space in which the scene remains static. We further propose a ray masking network that decomposes the canonical scene into foreground and background, enabling the network to focus on foreground movements. We conducted experiments on a dataset of thin structures such as leaves and petals, comprising image sequences collected by us and one public image sequence. Experiments show superior results compared to existing methods. Video outputs are available at https://dythinobjects.com/.
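As a rough illustration of the deform-to-canonical plus ray-masking pattern described in the abstract, the sketch below shows a toy deformation field that warps points into a shared canonical space and a per-ray foreground mask used to blend separately rendered colors. This is a generic PyTorch sketch under assumed inputs and layer sizes, not the paper's architecture; DeformationField, RayMask, and the blending step are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Maps a sample point x at time t to a position in a shared
    canonical space, where the scene is modeled as static."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, t):
        # x: (N, 3) sample points, t: (N, 1) timestamps
        return x + self.net(torch.cat([x, t], dim=-1))

class RayMask(nn.Module):
    """Predicts a per-ray foreground probability used to blend
    separately rendered foreground and background colors."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, ray_o, ray_d):
        # ray_o, ray_d: (N, 3) ray origins and directions
        return self.net(torch.cat([ray_o, ray_d], dim=-1))

# Blend foreground/background renderings with the predicted mask.
N = 8
mask = RayMask()(torch.rand(N, 3), torch.rand(N, 3))  # (N, 1) in [0, 1]
c_fg, c_bg = torch.rand(N, 3), torch.rand(N, 3)       # per-ray colors
c = mask * c_fg + (1.0 - mask) * c_bg                 # composited color
```

Confining the mask to a per-ray scalar keeps the foreground/background split cheap at render time; a full system would train both modules jointly with the radiance field via a photometric loss.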