fix(ieee): Restore author.ts #17688
Conversation
Successfully generated as follows: http://localhost:1200/ieee/author/37264968900/newest/20 - Success ✔️

<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Simon Julier on IEEE Xplore</title>
<link>https://ieeexplore.ieee.org/author/37264968900</link>
<atom:link href="http://localhost:1200/ieee/author/37264968900/newest/20" rel="self" type="application/rss+xml"></atom:link>
<description>Simon J. Julier (M’93) is currently a Senior Lecturer with the Vision, Imaging and Virtual Environments Group, Department of Computer Science, University College London (UCL), London, U.K. Before joining UCL, he worked for nine years with the 3D Mixed and Virtual Environments Laboratory, Naval Research Laboratory, Washington, DC, USA. He has worked on a number of projects, including the development of systems for sports training, coordinated search, and rescue with swarms of UAVs, remote collaboration systems, enhanced security management systems for refugee camps, and sea border surveillance in the presence of small targets. His research interests include distributed data fusion, multitarget tracking, nonlinear estimation, object recognition, and simultaneous localization and mapping. - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>[email protected] (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Sat, 23 Nov 2024 09:35:49 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Kalman Filter Auto-Tuning With Consistent and Robust Bayesian Optimization</title>
<description><p><span><big>Kalman Filter Auto-Tuning With Consistent and Robust Bayesian Optimization</big></span><br></p><p><span><small><i>Zhaozhong Chen; Harel Biggie; Nisar Ahmed; Simon Julier; Christoffer Heckman</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TAES.2024.3350587">https://doi.org/10.1109/TAES.2024.3350587</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>The nonlinear and stochastic relationship between noise covariance parameter values and state estimator performance makes optimal filter tuning a very challenging problem. Popular optimization-based tuning approaches can easily get trapped in local minima, leading to poor noise parameter identification and suboptimal state estimation. Recently, black box techniques based on Bayesian optimization with Gaussian processes (GPBO) have been shown to overcome many of these issues, using normalized estimation error squared and normalized innovation error statistics to derive cost functions for Kalman filter auto-tuning. While reliable noise parameter estimates are obtained in many cases, GPBO solutions obtained with these conventional cost functions do not always converge to optimal filter noise parameters and lack robustness to parameter ambiguities in time-discretized system models. This article addresses these issues by making two main contributions. First, new cost functions are developed to determine if an estimator has been tuned correctly. It is shown that traditional chi-square tests are inadequate for correct auto-tuning because they do not accurately model the distribution of innovations when the estimator is incorrectly tuned. Second, the new metrics (formulated over multiple time discretization intervals) is combined with a student-t processes Bayesian optimization to achieve robust estimator performance for time discretized state space models. The robustness, accuracy, and reliability of our approach are illustrated on classical state estimation problems.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10382621/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10382621/</guid>
</item>
<item>
<title>MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures</title>
<description><p><span><big>MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Fangzhan Shi; Simon Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TAES.2023.3256973">https://doi.org/10.1109/TAES.2023.3256973</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Motion tracking systems based on optical sensors typically suffer from poor lighting, occlusion, limited coverage, and may raise privacy concerns. Recently, radio-frequency (RF) based approaches using WiFi have emerged which offer low-cost ubiquitous sensing whilst preserving privacy. However, output range-Doppler or time-frequency spectrograms cannot represent human motion intuitively and usually requires further processing. In this study, we propose MDPose, a novel framework for human skeletal motion reconstruction based on WiFi micro-Doppler. MDPose provides an effective solution to represent human activity by reconstructing skeleton models with 17 key points, which can assist with the interpretation of conventional RF sensing outputs in a more understandable way. Specifically, MDPose is implemented over three sequential stages to address various challenges: First, a denoising algorithm is employed to remove any unwanted noise that may affect feature extraction and enhance weak Doppler measurements. Second, a convolutional neural network (CNN)-recurrent neural network (RNN) architecture is applied to learn temporal-spatial dependency from clean micro-Doppler and restore velocity information to key points under the supervision of the motion capture (Mocap) system. Finally, a pose optimisation mechanism based on learning optimisation vectors is employed to estimate the initial skeletal state and to eliminate additional errors. We have conducted comprehensive evaluations in a variety of environments using numerous subjects with a single receiver radar system to demonstrate the performance of MDPose, and report 29.4mm mean absolute error over key points positions on several common daily activities, which has performance comparable to that of state-of-the-art RF-based pose estimation systems.11For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10068751/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10068751/</guid>
</item>
<item>
<title>Reviving the Euston Arch: A Mixed Reality Approach to Cultural Heritage Tours</title>
<description><p><span><big>Reviving the Euston Arch: A Mixed Reality Approach to Cultural Heritage Tours</big></span><br></p><p><span><small><i>Ziwen Lu; Jingyi Zhang; Kalila Shapiro; Nels Numan; Simon Julier; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00181">https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00181</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Augmented Reality (AR) and Virtual Reality (VR) users have distinct capabilities and experiences during Extended Reality (XR) collaborations: while AR users benefit from real-time contextual information due to physical presence, VR users enjoy the flexibility to transition between locations rapidly, unconstrained by physical space.Our research aims to utilize these spatial differences to facilitate engaging, shared XR experiences. Using Google Geospatial Creator, we enable large-scale outdoor authoring and precise localization to create a unified environment. We integrated Ubiq to allow simultaneous voice communication, avatar-based interaction and shared object manipulation across platforms.We apply AR and VR technologies in cultural heritage exploration. We selected the Euston Arch as our case study due to its dramatic architectural transformations over time. We enriched the co-exploration experience by integrating historical photos, a 3D model of the Euston Arch, and immersive audio narratives into the shared AR/VR environment.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10322275/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10322275/</guid>
</item>
<item>
<title>Revisiting Distribution-Based Registration Methods</title>
<description><p><span><big>Revisiting Distribution-Based Registration Methods</big></span><br></p><p><span><small><i>Himanshu Gupta; Henrik Andreasson; Martin Magnusson; Simon Julier; Achim J. Lilientha</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ECMR59166.2023.10256416">https://doi.org/10.1109/ECMR59166.2023.10256416</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Normal Distribution Transformation (NDT) registration is a fast, learning-free point cloud registration algorithm that works well in diverse environments. It uses the compact NDT representation to represent point clouds or maps as a spatial probability function that models the occupancy likelihood in an environment. However, because of the grid discretization in NDT maps, the global minima of the registration cost function do not always correlate to ground truth, particularly for rotational alignment. In this study, we examined the NDT registration cost function in-depth. We evaluated three modifications (Student-t likelihood function, inflated covariance/heavily broadened likelihood curve, and overlapping grid cells) that aim to reduce the negative impact of discretization in classical NDT registration. The first NDT modification improves likelihood estimates for matching the distributions of small population sizes; the second modification reduces discretization artifacts by broadening the likelihood tails through covariance inflation; and the third modification achieves continuity by creating the NDT representations with overlapping grid cells (without increasing the total number of cells). We used the Pomerleau Dataset evaluation protocol for our experiments and found significant improvements compared to the classic NDT D2D registration approach (27.7% success rate) using the registration cost functions “heavily broadened likelihood NDT” (HBL- NDT) (34.7% success rate) and “over-lapping grid cells NDT” (OGC-NDT) (33.5% success rate). However, we could not observe a consistent improvement using the Student-t likelihood-based registration cost function (22.2% success rate) over the NDT P2D registration cost function (23.7% success rate). A comparative analysis with other state-of-art registration algorithms is also presented in this work. We found that HBL-NDT worked best for easy initial pose difficulties scenarios making it suitable for consecutive point cloud registration in SLAM application.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10256416/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10256416/</guid>
</item>
<item>
<title>Towards Outdoor Collaborative Mixed Reality: Lessons Learnt from a Prototype System</title>
<description><p><span><big>Towards Outdoor Collaborative Mixed Reality: Lessons Learnt from a Prototype System</big></span><br></p><p><span><small><i>Nels Numan; Ziwen Lu; Benjamin Congdon; Daniele Giunchi; Alexandros Rotsidis; Andreas Lernis; Kyriakos Larmos; Tereza Kourra; Panayiotis Charalambous; Yiorgos Chrysanthou; Simon Julier; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/VRW58643.2023.00029">https://doi.org/10.1109/VRW58643.2023.00029</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Most research on collaborative mixed reality (CMR) has focused on indoor spaces. In this paper, we present our ongoing work aimed at investigating the potential of CMR in outdoor spaces. These spaces present unique challenges due to their larger and more com-plex nature, particularly in terms of reconstruction, tracking, and interaction. Our prototype system utilises a photorealistic model to facilitate collaboration between remote virtual reality (VR) users and a local augmented reality (AR) user. We discuss our design considerations, lessons learnt, and areas for future work.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10108714/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10108714/</guid>
</item>
<item>
<title>Autonomous Mobile 3D Printing of Large-Scale Trajectories</title>
<description><p><span><big>Autonomous Mobile 3D Printing of Large-Scale Trajectories</big></span><br></p><p><span><small><i>Julius Sustarevas; Dimitrios Kanoulas; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS47612.2022.9982274">https://doi.org/10.1109/IROS47612.2022.9982274</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Mobile 3D Printing (M3DP), using printing-in-motion, is a powerful paradigm for automated construction. A mobile robot, equipped with its own power, materials and an arm-mounted extruder, simultaneously navigates and creates its environment. Such systems can be highly scalable, parallelizable and flexible. However, planning and controlling the motion of the arm and base at the same time is challenging and most deployments either avoid robot-base motion entirely or use human prescribed robot-base paths. In a previous paper, we developed a high-level planning algorithm to automate M3DP given a print task. The generated robot-base paths avoid collisions and maintain task reachability. In this paper, we extend this work to robot control. We develop and compare three different ways to integrate the long-duration planned path with a short horizon Model Predictive Controller. Experiments are carried out via a new M3DP system - Armstone. We evaluate and demonstrate our algorithm in a 250 m long multi-layer print which is about 5 times longer than any previous physical printing-in-motion system.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9982274/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9982274/</guid>
</item>
<item>
<title>Attention-Based Applications in Extended Reality to Support Autistic Users: A Systematic Review</title>
<description><p><span><big>Attention-Based Applications in Extended Reality to Support Autistic Users: A Systematic Review</big></span><br></p><p><span><small><i>Katherine Wang; Simon J. Julier; Youngjun Cho</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ACCESS.2022.3147726">https://doi.org/10.1109/ACCESS.2022.3147726</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>With the rising prevalence of autism diagnoses, it is essential for research to understand how to leverage technology to support the diverse nature of autistic traits. While traditional interventions focused on technology for medical cure and rehabilitation, recent research aims to understand how technology can accommodate each unique situation in an efficient and engaging way. Extended reality (XR) technology has been shown to be effective in improving attention in autistic users given that it is more engaging and motivating than other traditional mediums. Here, we conducted a systematic review of 59 research articles that explored the role of attention in XR interventions for autistic users. We systematically analyzed demographics, study design and findings, including autism screening and attention measurement methods. Furthermore, given methodological inconsistencies in the literature, we systematically synthesize methods and protocols including screening tools, physiological and behavioral cues of autism and XR tasks. While there is substantial evidence for the effectiveness of using XR in attention-based interventions for autism to support autistic traits, we have identified three principal research gaps that provide promising research directions to examine how autistic populations interact with XR. First, our findings highlight the disproportionate geographic locations of autism studies and underrepresentation of autistic adults, evidence of gender disparity, and presence of individuals diagnosed with co-occurring conditions across studies. Second, many studies used an assortment of standardized and novel tasks and self-report assessments with limited tested reliability. Lastly, the research lacks evidence of performance maintenance and transferability. Based on these challenges, this paper discusses inclusive future research directions considering greater diversification of participant recruitment, robust objective evaluations using physiological measurements (e.g., eye-tracking), and follow-up maintenance sessions that promote transferrable skills. Pursuing these opportunities would lead to more effective therapy solutions, improved accessible interfaces, and engaging interactions.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9697342/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9697342/</guid>
</item>
<item>
<title>FMNet: Latent Feature-Wise Mapping Network for Cleaning Up Noisy Micro-Doppler Spectrogram</title>
<description><p><span><big>FMNet: Latent Feature-Wise Mapping Network for Cleaning Up Noisy Micro-Doppler Spectrogram</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Fangzhan Shi; Simon J. Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TGRS.2021.3121211">https://doi.org/10.1109/TGRS.2021.3121211</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Micro-Doppler signatures contain considerable information about target dynamics. However, the radar sensing systems are easily affected by noisy surroundings, resulting in uninterpretable motion patterns on the micro-Doppler spectrogram (
$\mu $
-DS). Meanwhile, radar returns often suffer from multipath, clutter, and interference. These issues lead to difficulty in, for example, motion feature extraction and activity classification using micro-Doppler signatures. In this article, we propose a latent feature-wise mapping strategy, called feature mapping network (FMNet), to transform measured spectrograms so that they more closely resemble the output from a simulation under the same conditions. Based on measured spectrogram and the matched simulated data, our framework contains three parts: an encoder which is used to extract latent representations/features, a decoder outputs reconstructed spectrogram according to the latent features, and a discriminator minimizes the distance of latent features of measured and simulated data. We demonstrate the FMNet with six activities data and two experimental scenarios, and final results show strong enhanced patterns and can keep actual motion information to the greatest extent. On the other hand, we also propose a novel idea which trains a classifier with only simulated data and predicts new measured samples after cleaning them up with the FMNet. From final classification results, we can see significant improvements.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9583945/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9583945/</guid>
</item>
<item>
<title>Consensus Based Networking of Distributed Virtual Environments</title>
<description><p><span><big>Consensus Based Networking of Distributed Virtual Environments</big></span><br></p><p><span><small><i>Sebastian Friston; Elias Griffith; David Swapp; Simon Julier; Caleb Irondi; Fred Jjunju; Ryan Ward; Alan Marshall; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TVCG.2021.3052580">https://doi.org/10.1109/TVCG.2021.3052580</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Distributed virtual environments (DVEs) are challenging to create as the goals of consistency and responsiveness become contradictory under increasing latency. DVEs have been considered as both distributed transactional databases and force-reflection systems. Both are good approaches, but they do have drawbacks. Transactional systems do not support Level 3 (L3) collaboration: manipulating the same degree-of-freedom at the same time. Force-reflection requires a client-server architecture and stabilisation techniques. With Consensus Based Networking (CBN), we suggest DVEs be considered as a distributed data-fusion problem. Many simulations run in parallel and exchange their states, with remote states integrated with continous authority. Over time the exchanges average out local differences, performing a distribued-average of a consistent, shared state. CBN aims to build simulations that are highly responsive, but consistent enough for use cases such as the piano-movers problem. CBN’s support for heterogeneous nodes can transparently couple different input methods, avoid the requirement of determinism, and provide more options for personal control over the shared experience. Our work is early, however we demonstrate many successes, including L3 collaboration in room-scale VR, 1000’s of interacting objects, complex configurations such as stacking, and transparent coupling of haptic devices. These have been shown before, but each with a different technique; CBN supports them all within a single, unified system.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9328611/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9328611/</guid>
</item>
<item>
<title>Semantically Informed Next Best View Planning for Autonomous Aerial 3D Reconstruction</title>
<description><p><span><big>Semantically Informed Next Best View Planning for Autonomous Aerial 3D Reconstruction</big></span><br></p><p><span><small><i>Sebastian A. Kay; Simon Julier; Vijay M. Pawar</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS51168.2021.9636352">https://doi.org/10.1109/IROS51168.2021.9636352</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>To capture the geometry of an object by an autonomous system, next best view (NBV) planning can be used to determine the path a robot will take. However, current NBV planning algorithms do not distinguish between objects that need to be mapped and everything else in the environment; leading to inefficient search strategies. In this paper we present a novel approach for NBV planning that accounts for the importance of objects in the environment to inform navigation. Using weighted entropy to encode object utilities computed via semantic segmentation, we evaluate our approach over a set of virtual Gazebo environments comparable to construction scales. Our results show that using semantic information reduces the time required to capture a target object by at least 40 percent.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9636352/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9636352/</guid>
</item>
<item>
<title>Task-Consistent Path Planning for Mobile 3D Printing</title>
<description><p><span><big>Task-Consistent Path Planning for Mobile 3D Printing</big></span><br></p><p><span><small><i>Julius Sustarevas; Dimitrios Kanoulas; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS51168.2021.9635916">https://doi.org/10.1109/IROS51168.2021.9635916</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we explore the problem of task-consistent path planning for printing-in-motion via Mobile Manipulators (MM). MM offer a potentially unlimited planar workspace and flexibility for print operations. However, most existing methods have only mobility to relocate an arm which then prints while stationary. In this paper we present a new fully autonomous path planning approach for mobile material deposition. We use a modified version of Rapidly-exploring Random Tree Star (RRT*) algorithm, which is informed by a constrained Inverse Reachability Map (IRM) to ensure task consistency. Collision avoidance and end-effector reachability are respected in our approach. Our method also detects when a print path cannot be completed in a single execution. In this case it will decompose the path into several segments and reposition the base accordingly.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9635916/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9635916/</guid>
</item>
<item>
<title>Time Dependence in Kalman Filter Tuning</title>
<description><p><span><big>Time Dependence in Kalman Filter Tuning</big></span><br></p><p><span><small><i>Zhaozhong Chen; Christoffer Heckman; Simon Julier; Nisar Ahmed</i></small></span><br><span><small><i><a href="https://doi.org/10.23919/FUSION49465.2021.9626864">https://doi.org/10.23919/FUSION49465.2021.9626864</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we propose an approach to address the problems with ambiguity in tuning the process and observation noises for a discrete-time linear Kalman filter. Conventional approaches to tuning (e.g. using normalized estimation error squared and covariance minimization) compute empirical measures of filter performance. The parameters are selected, either manually or by some kind of optimization algorithm, to maximize these measures of performance. However, there are two challenges with this approach. First, in theory, many of these measures do not guarantee a unique solution due to observability issues. Second, in practice, empirically computed statistical quantities can be very noisy due to a finite number of samples. We propose a method to overcome these limitations. Our method has two main parts to it. The first is to ensure that the tuning problem has a single unique solution. We achieve this by simultaneously tuning the filter over multiple different prediction intervals. Although this yields a unique solution, practical issues (such as sampling noise) mean that it cannot be directly applied. Therefore, we use Bayesian Optimization. This technique handles noisy data and the local minima that it introduces. We demonstrate our results in a reference example and demonstrate that we are able to obtain good results. We share the source code for the benefit of the community
1
.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9626864/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9626864/</guid>
</item>
<item>
<title>Augmenting Experimental Data with Simulations to Improve Activity Classification in Healthcare Monitoring</title>
<description><p><span><big>Augmenting Experimental Data with Simulations to Improve Activity Classification in Healthcare Monitoring</big></span><br></p><p><span><small><i>Chong Tang; Shelly Vishwakarma; Wenda Li; Raviraj Adve; Simon Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/RadarConf2147009.2021.9455314">https://doi.org/10.1109/RadarConf2147009.2021.9455314</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Human micro-Doppler signatures in most passive WiFi radar (PWR) scenarios are captured through real-world measurements using various hardware platforms. However, gathering large volumes of high quality and diverse real radar datasets has always been an expensive and laborious task. This work presents an open-source motion capture data-driven simulation tool SimHumalator that is able to generate human micro-Doppler radar data in PWR scenarios. We qualitatively compare the micro-Doppler signatures generated through SimHumalator with the measured real signatures. Here, we present the use of SimHumalator to simulate a set of human actions. We demonstrate that augmenting a measurement database with simulated data, using SimHumalator, results in an 8% improvement in classification accuracy. Our results suggest that simulation data can be used to augment experimental datasets of limited volume to address the cold-start problem typically encountered in radar research.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9455314/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9455314/</guid>
</item>
<item>
<title>Misclassification Risk and Uncertainty Quantification in Deep Classifiers</title>
<description><p><span><big>Misclassification Risk and Uncertainty Quantification in Deep Classifiers</big></span><br></p><p><span><small><i>Murat Sensoy; Maryam Saleki; Simon Julier; Reyhan Aydogan; John Reid</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/WACV48630.2021.00253">https://doi.org/10.1109/WACV48630.2021.00253</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we propose risk-calibrated evidential deep classifiers to reduce the costs associated with classification errors. We use two main approaches. The first is to develop methods to quantify the uncertainty of a classifier’s predictions and reduce the likelihood of acting on erroneous predictions. The second is a novel way to train the classifier such that erroneous classifications are biased towards less risky categories. We combine these two approaches in a principled way. While doing this, we extend evidential deep learning with pignistic probabilities, which are used to quantify uncertainty of classification predictions and model rational decision making under uncertainty.We evaluate the performance of our approach on several image classification tasks. We demonstrate that our approach allows to (i) incorporate misclassification cost while training deep classifiers, (ii) accurately quantify the uncertainty of classification predictions, and (iii) simultaneously learn how to make classification decisions to minimize expected cost of classification errors.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9423198/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9423198/</guid>
</item>
<item>
<title>Exploiting Semantic and Public Prior Information in MonoSLAM</title>
<description><p><span><big>Exploiting Semantic and Public Prior Information in MonoSLAM</big></span><br></p><p><span><small><i>Chenxi Ye; Yiduo Wang; Ziwen Lu; Igor Gilitschenski; Martin Parsley; Simon J. Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS45743.2020.9340845">https://doi.org/10.1109/IROS45743.2020.9340845</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we propose a method to use semantic information to improve the use of map priors in a sparse, feature-based MonoSLAM system. To incorporate the priors, the features in the prior and SLAM maps must be associated with one another. Most existing systems build a map using SLAM and then align it with the prior map. However, this approach assumes that the local map is accurate, and the majority of the features within it can be constrained by the prior. We use the intuition that many prior maps are created to provide semantic information. Therefore, valid associations only exist if the features in the SLAM map arise from the same kind of semantic object as the prior map. Using this intuition, we extend ORB-SLAM2 using an open source pre-trained semantic segmentation network (DeepLabV3+) to incorporate prior information from Open Street Map building footprint data. We show that the amount of drift, before loop closing, is significantly smaller than that for original ORB-SLAM2. Furthermore, we show that when ORB-SLAM2 is used as a prior-aided visual odometry system, the tracking accuracy is equal to or better than the full ORB-SLAM2 system without the need for global mapping or loop closure.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9340845/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9340845/</guid>
</item>
<item>
<title>Occupancy Detection and People Counting Using WiFi Passive Radar</title>
<description><p><span><big>Occupancy Detection and People Counting Using WiFi Passive Radar</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Kevin Chetty; Simon Julier; Karl Woodbridge</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/RadarConf2043947.2020.9266493">https://doi.org/10.1109/RadarConf2043947.2020.9266493</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Occupancy detection and people counting technologies have important uses in many scenarios ranging from management of human resources, optimising energy use in intelligent buildings and improving public services in future smart cities. Wi-Fi based sensing approaches for these applications have attracted significant attention in recent years because of their ubiquitous nature, and ability to preserve the privacy of individuals being counted. In this paper, we present a Passive Wi-Fi Radar (PWR) technique for occupancy detection and people counting. Unlike systems which exploit the Wi-Fi Received Signal Strength (RSS) and Channel State Information (CSI), PWR systems can directly be applied in any environment covered by an existing WiFi local area network without special modifications to the Wi-Fi access point. Specifically, we apply Cross Ambiguity Function (CAF) processing to generate Range-Doppler maps, then we use Time-Frequency transforms to generate Doppler spectrograms, and finally employ a CLEAN algorithm to remove the direct signal interference. A Convolutional Neural Network (CNN) and sliding-window based feature selection scheme is then used for classification. Experimental results collected from a typical office environment are used to validate the proposed PWR system for accurately determining room occupancy, and correctly predict the number of people when using four test subjects in experimental measurements.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9266493/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9266493/</guid>
</item>
<item>
<title>Directing versus Attracting Attention: Exploring the Effectiveness of Central and Peripheral Cues in Panoramic Videos</title>
<description><p><span><big>Directing versus Attracting Attention: Exploring the Effectiveness of Central and Peripheral Cues in Panoramic Videos</big></span><br></p><p><span><small><i>Anastasia Schmitz; Andrew MacQuarrie; Simon Julier; Nicola Binetti; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/VR46266.2020.00024">https://doi.org/10.1109/VR46266.2020.00024</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Filmmakers of panoramic videos frequently struggle to guide attention to Regions of Interest (ROIs) due to consumers’ freedom to explore. Some researchers hypothesize that peripheral cues attract reflexive/involuntary attention whereas cues within central vision engage and direct voluntary attention. This mixed-methods study evaluated the effectiveness of using central arrows and peripheral flickers to guide and focus attention in panoramic videos. Twenty-five adults wore a head-mounted display with an eye tracker and were guided to 14 ROIs in two panoramic videos. No significant differences emerged in regard to the number of followed cues, the time taken to reach and observe ROIs, ROI-related memory and user engagement. However, participants’ gaze travelled a significantly greater distance toward ROIs within the first 500 ms after flicker-onsets compared to arrow-onsets. Nevertheless, most users preferred the arrow and perceived it as significantly more rewarding than the flicker. The findings imply that traditional attention paradigms are not entirely applicable to panoramic videos, as peripheral cues appear to engage both involuntary and voluntary attention. Theoretical and practical implications as well as limitations are discussed.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9089479/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9089479/</guid>
</item>
<item>
<title>Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging</title>
<description><p><span><big>Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging</big></span><br></p><p><span><small><i>Youngjun Cho; Nadia Bianchi-Berthouze; Manuel Oliveira; Catherine Holloway; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ACII.2019.8925453">https://doi.org/10.1109/ACII.2019.8925453</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Automatically monitoring and quantifying stress-induced thermal dynamic information in real-world settings is an extremely important but challenging problem. In this paper, we explore whether we can use mobile thermal imaging to measure the rich physiological cues of mental stress that can be deduced from a person's nose temperature. To answer this question we build i) a framework for monitoring nasal thermal variable patterns continuously and ii) a novel set of thermal variability metrics to capture a richness of the dynamic information. We evaluated our approach in a series of studies including laboratory-based psychosocial stress-induction tasks and real-world factory settings. We demonstrate our approach has the potential for assessing stress responses beyond controlled laboratory settings.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/8925453/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/8925453/</guid>
</item>
<item>
<title>Passive Activity Classification Using Just WiFi Probe Response Signals</title>
<description><p><span><big>Passive Activity Classification Using Just WiFi Probe Response Signals</big></span><br></p><p><span><small><i>Fangzhan Shi; Kevin Chetty; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/RADAR.2019.8835660">https://doi.org/10.1109/RADAR.2019.8835660</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Passive WiFi radar shows significant promise for a wide range of applications in both security and healthcare owing to its detection, tracking and recognition capabilities. However, studies examining micro-Doppler classification using passive WiFi radar have relied on manually stimulating WiFi access points to increase the bandwidths and duty-cycles of transmissions; either through file-downloads to generate high data-rate signals, or increasing the repetition frequency of the WiFi beacon signal from its default setting. In real-world scenarios, both these approaches would require user access to the WiFi network or WiFi access point through password authentication, and therefore involve a level of cooperation which cannot always be relied upon e.g. in law-enforcement applications. In this research, we investigate WiFi activity classification using just WiFi probe response signals which can be generated using a low-cost off-the-shelf secondary device (Raspberry Pi) eliminating the requirement to actually connect to the WiFi network. This removes the need to have continuous data traffic in the network or to modify the firmware configuration to manipulate the beacon signal interval, making the technology deployable in all situations. An activity recognition model based on a convolutional neural network resulted in an overall classification accuracy of 75% when trained from scratch using 300 measured WiFi probe-response samples across 6 classes. This value is then increased to 82%, with significantly less training when adopting a transfer learning approach: initial training using WiFi data traffic signals, followed by fine-tuning using probe response signals.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/8835660/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/8835660/</guid>
</item>
<item>
<title>NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning</title>
<description><p><span><big>NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning</big></span><br></p><p><span><small><i>Moustafa Alzantot; Amy Widdicombe; Simon Julier; Mani Srivastava</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/SMARTCOMP.2019.00033">https://doi.org/10.1109/SMARTCOMP.2019.00033</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Deep Neural Networks (DNNs) deliver state-of-the-art performance in many image recognition and understanding applications. However, despite their outstanding performance, these models are black-boxes and it is hard to understand how they make their decisions. Over the past few years, researchers have studied the problem of providing explanations of why DNNs predicted their results. However, existing techniques are either obtrusive, requiring changes in model training, or suffer from low output quality. In this paper, we present a novel method, NeuroMask, for generating an interpretable explanation of classification model results. When applied to image classification models, NeuroMask identifies the image parts that are most important to classifier results by applying a mask that hides/reveals different parts of the image, before feeding it back into the model. The mask values are tuned by minimizing a properly designed cost function that preserves the classification result and encourages producing an interpretable mask. Experiments using state-of-art Convolutional Neural Networks for image recognition on different datasets (CIFAR-10 and ImageNet) show that NeuroMask successfully localizes the parts of the input image which are most relevant to the DNN decision. By showing a visual quality comparison between NeuroMask explanations and those of other methods, we find NeuroMask to be both accurate and interpretable.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/8784063/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/8784063/</guid>
</item>
</channel>
</rss>
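
For reviewers who want to see the general shape of the logic behind a feed like the one above, here is a minimal sketch: query an IEEE Xplore search endpoint for an author's publications, then map each record to an RSS item whose document URL serves as both the link and the non-permalink GUID. This is an illustration only; the endpoint, request payload, and field names (`articleTitle`, `articleNumber`, `abstract`) are assumptions, not the actual code restored in author.ts.

```typescript
// Minimal sketch of an IEEE author feed builder (Node 18+, global fetch).
// NOTE: the endpoint, request payload, and response fields below are
// illustrative assumptions; see the restored author.ts in this PR for
// the real implementation.

interface IeeeRecord {
    articleTitle: string; // assumed field: publication title
    articleNumber: string; // assumed field: IEEE document number
    abstract?: string; // assumed field: abstract text, if present
}

async function buildAuthorFeed(authorId: string, sortType = 'newest', count = 20) {
    // Hypothetical search endpoint; the real route may call a different API.
    const res = await fetch('https://ieeexplore.ieee.org/rest/search', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            rowsPerPage: count,
            sortType,
            searchWithin: [`"Author Ids":${authorId}`],
        }),
    });
    const data = (await res.json()) as { records: IeeeRecord[] };

    // Mirror the structure of the feed above: the document URL doubles as
    // both <link> and the non-permalink <guid>.
    return {
        title: `Author ${authorId} on IEEE Xplore`,
        link: `https://ieeexplore.ieee.org/author/${authorId}`,
        item: data.records.map((r) => ({
            title: r.articleTitle,
            description: r.abstract ?? '',
            link: `https://ieeexplore.ieee.org/document/${r.articleNumber}/`,
            guid: `https://ieeexplore.ieee.org/document/${r.articleNumber}/`,
        })),
    };
}
```

Once the route is registered, the output can be checked the same way as above, e.g. `curl "http://localhost:1200/ieee/author/37264968900/newest/20"`.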
Successfully generated as following: http://localhost:1200/ieee/author/37264968900/newest/20 - Success ✔️<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Simon Julier on IEEE Xplore</title>
<link>https://ieeexplore.ieee.org/author/37264968900</link>
<atom:link href="http://localhost:1200/ieee/author/37264968900/newest/20" rel="self" type="application/rss+xml"></atom:link>
<description>Simon J. Julier (M’93) is currently a Senior Lecturer with the Vision, Imaging and Virtual Environments Group, Department of Computer Science, University College London (UCL), London, U.K. Before joining UCL, he worked for nine years with the 3D Mixed and Virtual Environments Laboratory, Naval Research Laboratory, Washington, DC, USA. He has worked on a number of projects, including the development of systems for sports training, coordinated search, and rescue with swarms of UAVs, remote collaboration systems, enhanced security management systems for refugee camps, and sea border surveillance in the presence of small targets. His research interests include distributed data fusion, multitarget tracking, nonlinear estimation, object recognition, and simultaneous localization and mapping. - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>[email protected] (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Sat, 23 Nov 2024 09:41:12 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Kalman Filter Auto-Tuning With Consistent and Robust Bayesian Optimization</title>
<description><p><span><big>Kalman Filter Auto-Tuning With Consistent and Robust Bayesian Optimization</big></span><br></p><p><span><small><i>Zhaozhong Chen; Harel Biggie; Nisar Ahmed; Simon Julier; Christoffer Heckman</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TAES.2024.3350587">https://doi.org/10.1109/TAES.2024.3350587</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>The nonlinear and stochastic relationship between noise covariance parameter values and state estimator performance makes optimal filter tuning a very challenging problem. Popular optimization-based tuning approaches can easily get trapped in local minima, leading to poor noise parameter identification and suboptimal state estimation. Recently, black box techniques based on Bayesian optimization with Gaussian processes (GPBO) have been shown to overcome many of these issues, using normalized estimation error squared and normalized innovation error statistics to derive cost functions for Kalman filter auto-tuning. While reliable noise parameter estimates are obtained in many cases, GPBO solutions obtained with these conventional cost functions do not always converge to optimal filter noise parameters and lack robustness to parameter ambiguities in time-discretized system models. This article addresses these issues by making two main contributions. First, new cost functions are developed to determine if an estimator has been tuned correctly. It is shown that traditional chi-square tests are inadequate for correct auto-tuning because they do not accurately model the distribution of innovations when the estimator is incorrectly tuned. Second, the new metrics (formulated over multiple time discretization intervals) is combined with a student-t processes Bayesian optimization to achieve robust estimator performance for time discretized state space models. The robustness, accuracy, and reliability of our approach are illustrated on classical state estimation problems.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10382621/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10382621/</guid>
</item>
<item>
<title>MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures</title>
<description><p><span><big>MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Fangzhan Shi; Simon Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TAES.2023.3256973">https://doi.org/10.1109/TAES.2023.3256973</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Motion tracking systems based on optical sensors typically suffer from poor lighting, occlusion, limited coverage, and may raise privacy concerns. Recently, radio-frequency (RF) based approaches using WiFi have emerged which offer low-cost ubiquitous sensing whilst preserving privacy. However, output range-Doppler or time-frequency spectrograms cannot represent human motion intuitively and usually requires further processing. In this study, we propose MDPose, a novel framework for human skeletal motion reconstruction based on WiFi micro-Doppler. MDPose provides an effective solution to represent human activity by reconstructing skeleton models with 17 key points, which can assist with the interpretation of conventional RF sensing outputs in a more understandable way. Specifically, MDPose is implemented over three sequential stages to address various challenges: First, a denoising algorithm is employed to remove any unwanted noise that may affect feature extraction and enhance weak Doppler measurements. Second, a convolutional neural network (CNN)-recurrent neural network (RNN) architecture is applied to learn temporal-spatial dependency from clean micro-Doppler and restore velocity information to key points under the supervision of the motion capture (Mocap) system. Finally, a pose optimisation mechanism based on learning optimisation vectors is employed to estimate the initial skeletal state and to eliminate additional errors. We have conducted comprehensive evaluations in a variety of environments using numerous subjects with a single receiver radar system to demonstrate the performance of MDPose, and report 29.4mm mean absolute error over key points positions on several common daily activities, which has performance comparable to that of state-of-the-art RF-based pose estimation systems.11For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10068751/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10068751/</guid>
</item>
<item>
<title>Reviving the Euston Arch: A Mixed Reality Approach to Cultural Heritage Tours</title>
<description><p><span><big>Reviving the Euston Arch: A Mixed Reality Approach to Cultural Heritage Tours</big></span><br></p><p><span><small><i>Ziwen Lu; Jingyi Zhang; Kalila Shapiro; Nels Numan; Simon Julier; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00181">https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00181</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Augmented Reality (AR) and Virtual Reality (VR) users have distinct capabilities and experiences during Extended Reality (XR) collaborations: while AR users benefit from real-time contextual information due to physical presence, VR users enjoy the flexibility to transition between locations rapidly, unconstrained by physical space.Our research aims to utilize these spatial differences to facilitate engaging, shared XR experiences. Using Google Geospatial Creator, we enable large-scale outdoor authoring and precise localization to create a unified environment. We integrated Ubiq to allow simultaneous voice communication, avatar-based interaction and shared object manipulation across platforms.We apply AR and VR technologies in cultural heritage exploration. We selected the Euston Arch as our case study due to its dramatic architectural transformations over time. We enriched the co-exploration experience by integrating historical photos, a 3D model of the Euston Arch, and immersive audio narratives into the shared AR/VR environment.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10322275/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10322275/</guid>
</item>
<item>
<title>Revisiting Distribution-Based Registration Methods</title>
<description><p><span><big>Revisiting Distribution-Based Registration Methods</big></span><br></p><p><span><small><i>Himanshu Gupta; Henrik Andreasson; Martin Magnusson; Simon Julier; Achim J. Lilientha</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ECMR59166.2023.10256416">https://doi.org/10.1109/ECMR59166.2023.10256416</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Normal Distribution Transformation (NDT) registration is a fast, learning-free point cloud registration algorithm that works well in diverse environments. It uses the compact NDT representation to represent point clouds or maps as a spatial probability function that models the occupancy likelihood in an environment. However, because of the grid discretization in NDT maps, the global minima of the registration cost function do not always correlate to ground truth, particularly for rotational alignment. In this study, we examined the NDT registration cost function in-depth. We evaluated three modifications (Student-t likelihood function, inflated covariance/heavily broadened likelihood curve, and overlapping grid cells) that aim to reduce the negative impact of discretization in classical NDT registration. The first NDT modification improves likelihood estimates for matching the distributions of small population sizes; the second modification reduces discretization artifacts by broadening the likelihood tails through covariance inflation; and the third modification achieves continuity by creating the NDT representations with overlapping grid cells (without increasing the total number of cells). We used the Pomerleau Dataset evaluation protocol for our experiments and found significant improvements compared to the classic NDT D2D registration approach (27.7% success rate) using the registration cost functions “heavily broadened likelihood NDT” (HBL- NDT) (34.7% success rate) and “over-lapping grid cells NDT” (OGC-NDT) (33.5% success rate). However, we could not observe a consistent improvement using the Student-t likelihood-based registration cost function (22.2% success rate) over the NDT P2D registration cost function (23.7% success rate). A comparative analysis with other state-of-art registration algorithms is also presented in this work. We found that HBL-NDT worked best for easy initial pose difficulties scenarios making it suitable for consecutive point cloud registration in SLAM application.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10256416/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10256416/</guid>
</item>
<item>
<title>Towards Outdoor Collaborative Mixed Reality: Lessons Learnt from a Prototype System</title>
<description><p><span><big>Towards Outdoor Collaborative Mixed Reality: Lessons Learnt from a Prototype System</big></span><br></p><p><span><small><i>Nels Numan; Ziwen Lu; Benjamin Congdon; Daniele Giunchi; Alexandros Rotsidis; Andreas Lernis; Kyriakos Larmos; Tereza Kourra; Panayiotis Charalambous; Yiorgos Chrysanthou; Simon Julier; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/VRW58643.2023.00029">https://doi.org/10.1109/VRW58643.2023.00029</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Most research on collaborative mixed reality (CMR) has focused on indoor spaces. In this paper, we present our ongoing work aimed at investigating the potential of CMR in outdoor spaces. These spaces present unique challenges due to their larger and more com-plex nature, particularly in terms of reconstruction, tracking, and interaction. Our prototype system utilises a photorealistic model to facilitate collaboration between remote virtual reality (VR) users and a local augmented reality (AR) user. We discuss our design considerations, lessons learnt, and areas for future work.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10108714/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10108714/</guid>
</item>
<item>
<title>Autonomous Mobile 3D Printing of Large-Scale Trajectories</title>
<description><p><span><big>Autonomous Mobile 3D Printing of Large-Scale Trajectories</big></span><br></p><p><span><small><i>Julius Sustarevas; Dimitrios Kanoulas; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS47612.2022.9982274">https://doi.org/10.1109/IROS47612.2022.9982274</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Mobile 3D Printing (M3DP), using printing-in-motion, is a powerful paradigm for automated construction. A mobile robot, equipped with its own power, materials and an arm-mounted extruder, simultaneously navigates and creates its environment. Such systems can be highly scalable, parallelizable and flexible. However, planning and controlling the motion of the arm and base at the same time is challenging and most deployments either avoid robot-base motion entirely or use human prescribed robot-base paths. In a previous paper, we developed a high-level planning algorithm to automate M3DP given a print task. The generated robot-base paths avoid collisions and maintain task reachability. In this paper, we extend this work to robot control. We develop and compare three different ways to integrate the long-duration planned path with a short horizon Model Predictive Controller. Experiments are carried out via a new M3DP system - Armstone. We evaluate and demonstrate our algorithm in a 250 m long multi-layer print which is about 5 times longer than any previous physical printing-in-motion system.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9982274/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9982274/</guid>
</item>
<item>
<title>Attention-Based Applications in Extended Reality to Support Autistic Users: A Systematic Review</title>
<description><p><span><big>Attention-Based Applications in Extended Reality to Support Autistic Users: A Systematic Review</big></span><br></p><p><span><small><i>Katherine Wang; Simon J. Julier; Youngjun Cho</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ACCESS.2022.3147726">https://doi.org/10.1109/ACCESS.2022.3147726</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>With the rising prevalence of autism diagnoses, it is essential for research to understand how to leverage technology to support the diverse nature of autistic traits. While traditional interventions focused on technology for medical cure and rehabilitation, recent research aims to understand how technology can accommodate each unique situation in an efficient and engaging way. Extended reality (XR) technology has been shown to be effective in improving attention in autistic users given that it is more engaging and motivating than other traditional mediums. Here, we conducted a systematic review of 59 research articles that explored the role of attention in XR interventions for autistic users. We systematically analyzed demographics, study design and findings, including autism screening and attention measurement methods. Furthermore, given methodological inconsistencies in the literature, we systematically synthesize methods and protocols including screening tools, physiological and behavioral cues of autism and XR tasks. While there is substantial evidence for the effectiveness of using XR in attention-based interventions for autism to support autistic traits, we have identified three principal research gaps that provide promising research directions to examine how autistic populations interact with XR. First, our findings highlight the disproportionate geographic locations of autism studies and underrepresentation of autistic adults, evidence of gender disparity, and presence of individuals diagnosed with co-occurring conditions across studies. Second, many studies used an assortment of standardized and novel tasks and self-report assessments with limited tested reliability. Lastly, the research lacks evidence of performance maintenance and transferability. Based on these challenges, this paper discusses inclusive future research directions considering greater diversification of participant recruitment, robust objective evaluations using physiological measurements (e.g., eye-tracking), and follow-up maintenance sessions that promote transferrable skills. Pursuing these opportunities would lead to more effective therapy solutions, improved accessible interfaces, and engaging interactions.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9697342/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9697342/</guid>
</item>
<item>
<title>FMNet: Latent Feature-Wise Mapping Network for Cleaning Up Noisy Micro-Doppler Spectrogram</title>
<description><p><span><big>FMNet: Latent Feature-Wise Mapping Network for Cleaning Up Noisy Micro-Doppler Spectrogram</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Fangzhan Shi; Simon J. Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TGRS.2021.3121211">https://doi.org/10.1109/TGRS.2021.3121211</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Micro-Doppler signatures contain considerable information about target dynamics. However, radar sensing systems are easily affected by noisy surroundings, resulting in uninterpretable motion patterns on the micro-Doppler spectrogram ($\mu$-DS). Meanwhile, radar returns often suffer from multipath, clutter, and interference. These issues lead to difficulty in, for example, motion feature extraction and activity classification using micro-Doppler signatures. In this article, we propose a latent feature-wise mapping strategy, called feature mapping network (FMNet), to transform measured spectrograms so that they more closely resemble the output from a simulation under the same conditions. Based on the measured spectrograms and the matched simulated data, our framework contains three parts: an encoder which extracts latent representations/features, a decoder which outputs a reconstructed spectrogram according to the latent features, and a discriminator which minimizes the distance between latent features of measured and simulated data. We demonstrate FMNet with data from six activities and two experimental scenarios; the final results show strongly enhanced patterns while keeping actual motion information to the greatest extent. We also propose a novel idea which trains a classifier with only simulated data and predicts new measured samples after cleaning them up with FMNet. The final classification results show significant improvements.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9583945/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9583945/</guid>
</item>
<item>
<title>Consensus Based Networking of Distributed Virtual Environments</title>
<description><p><span><big>Consensus Based Networking of Distributed Virtual Environments</big></span><br></p><p><span><small><i>Sebastian Friston; Elias Griffith; David Swapp; Simon Julier; Caleb Irondi; Fred Jjunju; Ryan Ward; Alan Marshall; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TVCG.2021.3052580">https://doi.org/10.1109/TVCG.2021.3052580</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Distributed virtual environments (DVEs) are challenging to create as the goals of consistency and responsiveness become contradictory under increasing latency. DVEs have been considered as both distributed transactional databases and force-reflection systems. Both are good approaches, but they do have drawbacks. Transactional systems do not support Level 3 (L3) collaboration: manipulating the same degree-of-freedom at the same time. Force-reflection requires a client-server architecture and stabilisation techniques. With Consensus Based Networking (CBN), we suggest DVEs be considered as a distributed data-fusion problem. Many simulations run in parallel and exchange their states, with remote states integrated with continuous authority. Over time the exchanges average out local differences, performing a distributed-average of a consistent, shared state. CBN aims to build simulations that are highly responsive, but consistent enough for use cases such as the piano-movers problem. CBN’s support for heterogeneous nodes can transparently couple different input methods, avoid the requirement of determinism, and provide more options for personal control over the shared experience. Our work is early; however, we demonstrate many successes, including L3 collaboration in room-scale VR, 1000s of interacting objects, complex configurations such as stacking, and transparent coupling of haptic devices. These have been shown before, but each with a different technique; CBN supports them all within a single, unified system.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9328611/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9328611/</guid>
</item>
<item>
<title>Semantically Informed Next Best View Planning for Autonomous Aerial 3D Reconstruction</title>
<description><p><span><big>Semantically Informed Next Best View Planning for Autonomous Aerial 3D Reconstruction</big></span><br></p><p><span><small><i>Sebastian A. Kay; Simon Julier; Vijay M. Pawar</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS51168.2021.9636352">https://doi.org/10.1109/IROS51168.2021.9636352</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>To capture the geometry of an object by an autonomous system, next best view (NBV) planning can be used to determine the path a robot will take. However, current NBV planning algorithms do not distinguish between objects that need to be mapped and everything else in the environment, leading to inefficient search strategies. In this paper we present a novel approach for NBV planning that accounts for the importance of objects in the environment to inform navigation. Using weighted entropy to encode object utilities computed via semantic segmentation, we evaluate our approach over a set of virtual Gazebo environments comparable to construction scales. Our results show that using semantic information reduces the time required to capture a target object by at least 40 percent.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9636352/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9636352/</guid>
</item>
<item>
<title>Task-Consistent Path Planning for Mobile 3D Printing</title>
<description><p><span><big>Task-Consistent Path Planning for Mobile 3D Printing</big></span><br></p><p><span><small><i>Julius Sustarevas; Dimitrios Kanoulas; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS51168.2021.9635916">https://doi.org/10.1109/IROS51168.2021.9635916</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we explore the problem of task-consistent path planning for printing-in-motion via Mobile Manipulators (MM). MM offer a potentially unlimited planar workspace and flexibility for print operations. However, most existing methods use mobility only to relocate an arm, which then prints while stationary. In this paper we present a new fully autonomous path planning approach for mobile material deposition. We use a modified version of the Rapidly-exploring Random Tree Star (RRT*) algorithm, which is informed by a constrained Inverse Reachability Map (IRM) to ensure task consistency. Collision avoidance and end-effector reachability are respected in our approach. Our method also detects when a print path cannot be completed in a single execution. In this case it will decompose the path into several segments and reposition the base accordingly.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9635916/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9635916/</guid>
</item>
<item>
<title>Time Dependence in Kalman Filter Tuning</title>
<description><p><span><big>Time Dependence in Kalman Filter Tuning</big></span><br></p><p><span><small><i>Zhaozhong Chen; Christoffer Heckman; Simon Julier; Nisar Ahmed</i></small></span><br><span><small><i><a href="https://doi.org/10.23919/FUSION49465.2021.9626864">https://doi.org/10.23919/FUSION49465.2021.9626864</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we propose an approach to address the problems with ambiguity in tuning the process and observation noises for a discrete-time linear Kalman filter. Conventional approaches to tuning (e.g. using normalized estimation error squared and covariance minimization) compute empirical measures of filter performance. The parameters are selected, either manually or by some kind of optimization algorithm, to maximize these measures of performance. However, there are two challenges with this approach. First, in theory, many of these measures do not guarantee a unique solution due to observability issues. Second, in practice, empirically computed statistical quantities can be very noisy due to a finite number of samples. We propose a method to overcome these limitations. Our method has two main parts to it. The first is to ensure that the tuning problem has a single unique solution. We achieve this by simultaneously tuning the filter over multiple different prediction intervals. Although this yields a unique solution, practical issues (such as sampling noise) mean that it cannot be directly applied. Therefore, we use Bayesian Optimization. This technique handles noisy data and the local minima that it introduces. We demonstrate our approach on a reference example and show that we are able to obtain good results. We share the source code for the benefit of the community.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9626864/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9626864/</guid>
</item>
<item>
<title>Augmenting Experimental Data with Simulations to Improve Activity Classification in Healthcare Monitoring</title>
<description><p><span><big>Augmenting Experimental Data with Simulations to Improve Activity Classification in Healthcare Monitoring</big></span><br></p><p><span><small><i>Chong Tang; Shelly Vishwakarma; Wenda Li; Raviraj Adve; Simon Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/RadarConf2147009.2021.9455314">https://doi.org/10.1109/RadarConf2147009.2021.9455314</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Human micro-Doppler signatures in most passive WiFi radar (PWR) scenarios are captured through real-world measurements using various hardware platforms. However, gathering large volumes of high quality and diverse real radar datasets has always been an expensive and laborious task. This work presents an open-source motion capture data-driven simulation tool SimHumalator that is able to generate human micro-Doppler radar data in PWR scenarios. We qualitatively compare the micro-Doppler signatures generated through SimHumalator with the measured real signatures. Here, we present the use of SimHumalator to simulate a set of human actions. We demonstrate that augmenting a measurement database with simulated data, using SimHumalator, results in an 8% improvement in classification accuracy. Our results suggest that simulation data can be used to augment experimental datasets of limited volume to address the cold-start problem typically encountered in radar research.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9455314/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9455314/</guid>
</item>
<item>
<title>Misclassification Risk and Uncertainty Quantification in Deep Classifiers</title>
<description><p><span><big>Misclassification Risk and Uncertainty Quantification in Deep Classifiers</big></span><br></p><p><span><small><i>Murat Sensoy; Maryam Saleki; Simon Julier; Reyhan Aydogan; John Reid</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/WACV48630.2021.00253">https://doi.org/10.1109/WACV48630.2021.00253</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we propose risk-calibrated evidential deep classifiers to reduce the costs associated with classification errors. We use two main approaches. The first is to develop methods to quantify the uncertainty of a classifier’s predictions and reduce the likelihood of acting on erroneous predictions. The second is a novel way to train the classifier such that erroneous classifications are biased towards less risky categories. We combine these two approaches in a principled way. While doing this, we extend evidential deep learning with pignistic probabilities, which are used to quantify uncertainty of classification predictions and model rational decision making under uncertainty. We evaluate the performance of our approach on several image classification tasks. We demonstrate that our approach allows us to (i) incorporate misclassification cost while training deep classifiers, (ii) accurately quantify the uncertainty of classification predictions, and (iii) simultaneously learn how to make classification decisions to minimize expected cost of classification errors.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9423198/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9423198/</guid>
</item>
<item>
<title>Exploiting Semantic and Public Prior Information in MonoSLAM</title>
<description><p><span><big>Exploiting Semantic and Public Prior Information in MonoSLAM</big></span><br></p><p><span><small><i>Chenxi Ye; Yiduo Wang; Ziwen Lu; Igor Gilitschenski; Martin Parsley; Simon J. Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS45743.2020.9340845">https://doi.org/10.1109/IROS45743.2020.9340845</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we propose a method to use semantic information to improve the use of map priors in a sparse, feature-based MonoSLAM system. To incorporate the priors, the features in the prior and SLAM maps must be associated with one another. Most existing systems build a map using SLAM and then align it with the prior map. However, this approach assumes that the local map is accurate, and the majority of the features within it can be constrained by the prior. We use the intuition that many prior maps are created to provide semantic information. Therefore, valid associations only exist if the features in the SLAM map arise from the same kind of semantic object as the prior map. Using this intuition, we extend ORB-SLAM2 using an open source pre-trained semantic segmentation network (DeepLabV3+) to incorporate prior information from Open Street Map building footprint data. We show that the amount of drift, before loop closing, is significantly smaller than that for the original ORB-SLAM2. Furthermore, we show that when ORB-SLAM2 is used as a prior-aided visual odometry system, the tracking accuracy is equal to or better than the full ORB-SLAM2 system without the need for global mapping or loop closure.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9340845/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9340845/</guid>
</item>
<item>
<title>Occupancy Detection and People Counting Using WiFi Passive Radar</title>
<description><p><span><big>Occupancy Detection and People Counting Using WiFi Passive Radar</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Kevin Chetty; Simon Julier; Karl Woodbridge</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/RadarConf2043947.2020.9266493">https://doi.org/10.1109/RadarConf2043947.2020.9266493</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Occupancy detection and people counting technologies have important uses in many scenarios, ranging from management of human resources to optimising energy use in intelligent buildings and improving public services in future smart cities. Wi-Fi based sensing approaches for these applications have attracted significant attention in recent years because of their ubiquitous nature, and ability to preserve the privacy of individuals being counted. In this paper, we present a Passive Wi-Fi Radar (PWR) technique for occupancy detection and people counting. Unlike systems which exploit the Wi-Fi Received Signal Strength (RSS) and Channel State Information (CSI), PWR systems can directly be applied in any environment covered by an existing WiFi local area network without special modifications to the Wi-Fi access point. Specifically, we apply Cross Ambiguity Function (CAF) processing to generate Range-Doppler maps, then we use Time-Frequency transforms to generate Doppler spectrograms, and finally employ a CLEAN algorithm to remove the direct signal interference. A Convolutional Neural Network (CNN) and sliding-window based feature selection scheme is then used for classification. Experimental results collected from a typical office environment are used to validate the proposed PWR system for accurately determining room occupancy, and correctly predicting the number of people when using four test subjects in experimental measurements.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9266493/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9266493/</guid>
</item>
<item>
<title>Directing versus Attracting Attention: Exploring the Effectiveness of Central and Peripheral Cues in Panoramic Videos</title>
<description><p><span><big>Directing versus Attracting Attention: Exploring the Effectiveness of Central and Peripheral Cues in Panoramic Videos</big></span><br></p><p><span><small><i>Anastasia Schmitz; Andrew MacQuarrie; Simon Julier; Nicola Binetti; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/VR46266.2020.00024">https://doi.org/10.1109/VR46266.2020.00024</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Filmmakers of panoramic videos frequently struggle to guide attention to Regions of Interest (ROIs) due to consumers’ freedom to explore. Some researchers hypothesize that peripheral cues attract reflexive/involuntary attention whereas cues within central vision engage and direct voluntary attention. This mixed-methods study evaluated the effectiveness of using central arrows and peripheral flickers to guide and focus attention in panoramic videos. Twenty-five adults wore a head-mounted display with an eye tracker and were guided to 14 ROIs in two panoramic videos. No significant differences emerged in regard to the number of followed cues, the time taken to reach and observe ROIs, ROI-related memory and user engagement. However, participants’ gaze travelled a significantly greater distance toward ROIs within the first 500 ms after flicker-onsets compared to arrow-onsets. Nevertheless, most users preferred the arrow and perceived it as significantly more rewarding than the flicker. The findings imply that traditional attention paradigms are not entirely applicable to panoramic videos, as peripheral cues appear to engage both involuntary and voluntary attention. Theoretical and practical implications as well as limitations are discussed.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9089479/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9089479/</guid>
</item>
<item>
<title>Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging</title>
<description><p><span><big>Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging</big></span><br></p><p><span><small><i>Youngjun Cho; Nadia Bianchi-Berthouze; Manuel Oliveira; Catherine Holloway; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ACII.2019.8925453">https://doi.org/10.1109/ACII.2019.8925453</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Automatically monitoring and quantifying stress-induced thermal dynamic information in real-world settings is an extremely important but challenging problem. In this paper, we explore whether we can use mobile thermal imaging to measure the rich physiological cues of mental stress that can be deduced from a person's nose temperature. To answer this question we build i) a framework for monitoring nasal thermal variable patterns continuously and ii) a novel set of thermal variability metrics to capture the richness of the dynamic information. We evaluated our approach in a series of studies including laboratory-based psychosocial stress-induction tasks and real-world factory settings. We demonstrate our approach has the potential for assessing stress responses beyond controlled laboratory settings.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/8925453/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/8925453/</guid>
</item>
<item>
<title>Passive Activity Classification Using Just WiFi Probe Response Signals</title>
<description><p><span><big>Passive Activity Classification Using Just WiFi Probe Response Signals</big></span><br></p><p><span><small><i>Fangzhan Shi; Kevin Chetty; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/RADAR.2019.8835660">https://doi.org/10.1109/RADAR.2019.8835660</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Passive WiFi radar shows significant promise for a wide range of applications in both security and healthcare owing to its detection, tracking and recognition capabilities. However, studies examining micro-Doppler classification using passive WiFi radar have relied on manually stimulating WiFi access points to increase the bandwidths and duty-cycles of transmissions; either through file-downloads to generate high data-rate signals, or increasing the repetition frequency of the WiFi beacon signal from its default setting. In real-world scenarios, both these approaches would require user access to the WiFi network or WiFi access point through password authentication, and therefore involve a level of cooperation which cannot always be relied upon e.g. in law-enforcement applications. In this research, we investigate WiFi activity classification using just WiFi probe response signals which can be generated using a low-cost off-the-shelf secondary device (Raspberry Pi) eliminating the requirement to actually connect to the WiFi network. This removes the need to have continuous data traffic in the network or to modify the firmware configuration to manipulate the beacon signal interval, making the technology deployable in all situations. An activity recognition model based on a convolutional neural network resulted in an overall classification accuracy of 75% when trained from scratch using 300 measured WiFi probe-response samples across 6 classes. This value is then increased to 82%, with significantly less training when adopting a transfer learning approach: initial training using WiFi data traffic signals, followed by fine-tuning using probe response signals.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/8835660/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/8835660/</guid>
</item>
<item>
<title>NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning</title>
<description><p><span><big>NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning</big></span><br></p><p><span><small><i>Moustafa Alzantot; Amy Widdicombe; Simon Julier; Mani Srivastava</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/SMARTCOMP.2019.00033">https://doi.org/10.1109/SMARTCOMP.2019.00033</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Deep Neural Networks (DNNs) deliver state-of-the-art performance in many image recognition and understanding applications. However, despite their outstanding performance, these models are black-boxes and it is hard to understand how they make their decisions. Over the past few years, researchers have studied the problem of providing explanations of why DNNs predicted their results. However, existing techniques are either obtrusive, requiring changes in model training, or suffer from low output quality. In this paper, we present a novel method, NeuroMask, for generating an interpretable explanation of classification model results. When applied to image classification models, NeuroMask identifies the image parts that are most important to classifier results by applying a mask that hides/reveals different parts of the image, before feeding it back into the model. The mask values are tuned by minimizing a properly designed cost function that preserves the classification result and encourages producing an interpretable mask. Experiments using state-of-the-art Convolutional Neural Networks for image recognition on different datasets (CIFAR-10 and ImageNet) show that NeuroMask successfully localizes the parts of the input image which are most relevant to the DNN decision. By showing a visual quality comparison between NeuroMask explanations and those of other methods, we find NeuroMask to be both accurate and interpretable.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/8784063/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/8784063/</guid>
</item>
</channel>
</rss>
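Eyeballing long XML dumps like the one above is error-prone, so a mechanical check can help reviewers. The sketch below is a minimal, hypothetical checker, not part of this PR or of RSSHub itself: it fetches a locally generated route, parses the XML, and asserts that the channel and every item carry the fields shown above. It assumes Node 18+ for the global `fetch` and the `fast-xml-parser` package, and the `checkFeed` helper name is illustrative.

```typescript
// Hypothetical sanity check for a locally generated RSSHub feed.
// Assumes Node 18+ (global fetch) and `npm install fast-xml-parser`.
import { XMLParser } from 'fast-xml-parser';

const FEED_URL = 'http://localhost:1200/ieee/author/37264968900/newest/20';

async function checkFeed(url: string): Promise<void> {
    // Fetch the raw XML and parse it, keeping attributes such as isPermaLink.
    const xml = await (await fetch(url)).text();
    const parser = new XMLParser({ ignoreAttributes: false });
    const doc = parser.parse(xml);

    const channel = doc?.rss?.channel;
    if (!channel) {
        throw new Error('Response is not an RSS 2.0 document');
    }

    // fast-xml-parser yields a single object when the feed has exactly one <item>.
    const items = Array.isArray(channel.item) ? channel.item : [channel.item];
    for (const item of items) {
        if (!item?.title || !item?.link || !item?.guid) {
            throw new Error(`Incomplete item: ${JSON.stringify(item).slice(0, 120)}`);
        }
    }
    console.log(`OK: ${items.length} items, lastBuildDate=${channel.lastBuildDate}`);
}

checkFeed(FEED_URL).catch((error) => {
    console.error(error);
    process.exit(1);
});
```

Run against any locally generated route URL (for example with `npx tsx check-feed.ts`); a non-zero exit code flags a malformed channel or an incomplete item.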
Successfully generated as following: http://localhost:1200/ieee/author/37264968900/newest/20 - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Simon Julier on IEEE Xplore</title>
<link>https://ieeexplore.ieee.org/author/37264968900</link>
<atom:link href="http://localhost:1200/ieee/author/37264968900/newest/20" rel="self" type="application/rss+xml"></atom:link>
<description>Simon J. Julier (M’93) is currently a Senior Lecturer with the Vision, Imaging and Virtual Environments Group, Department of Computer Science, University College London (UCL), London, U.K. Before joining UCL, he worked for nine years with the 3D Mixed and Virtual Environments Laboratory, Naval Research Laboratory, Washington, DC, USA. He has worked on a number of projects, including the development of systems for sports training, coordinated search, and rescue with swarms of UAVs, remote collaboration systems, enhanced security management systems for refugee camps, and sea border surveillance in the presence of small targets. His research interests include distributed data fusion, multitarget tracking, nonlinear estimation, object recognition, and simultaneous localization and mapping. - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>[email protected] (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Sat, 23 Nov 2024 09:48:29 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Kalman Filter Auto-Tuning With Consistent and Robust Bayesian Optimization</title>
<description><p><span><big>Kalman Filter Auto-Tuning With Consistent and Robust Bayesian Optimization</big></span><br></p><p><span><small><i>Zhaozhong Chen; Harel Biggie; Nisar Ahmed; Simon Julier; Christoffer Heckman</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TAES.2024.3350587">https://doi.org/10.1109/TAES.2024.3350587</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>The nonlinear and stochastic relationship between noise covariance parameter values and state estimator performance makes optimal filter tuning a very challenging problem. Popular optimization-based tuning approaches can easily get trapped in local minima, leading to poor noise parameter identification and suboptimal state estimation. Recently, black box techniques based on Bayesian optimization with Gaussian processes (GPBO) have been shown to overcome many of these issues, using normalized estimation error squared and normalized innovation error statistics to derive cost functions for Kalman filter auto-tuning. While reliable noise parameter estimates are obtained in many cases, GPBO solutions obtained with these conventional cost functions do not always converge to optimal filter noise parameters and lack robustness to parameter ambiguities in time-discretized system models. This article addresses these issues by making two main contributions. First, new cost functions are developed to determine if an estimator has been tuned correctly. It is shown that traditional chi-square tests are inadequate for correct auto-tuning because they do not accurately model the distribution of innovations when the estimator is incorrectly tuned. Second, the new metrics (formulated over multiple time discretization intervals) are combined with Student-t process Bayesian optimization to achieve robust estimator performance for time discretized state space models. The robustness, accuracy, and reliability of our approach are illustrated on classical state estimation problems.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10382621/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10382621/</guid>
</item>
<item>
<title>MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures</title>
<description><p><span><big>MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Fangzhan Shi; Simon Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TAES.2023.3256973">https://doi.org/10.1109/TAES.2023.3256973</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Motion tracking systems based on optical sensors typically suffer from poor lighting, occlusion, limited coverage, and may raise privacy concerns. Recently, radio-frequency (RF) based approaches using WiFi have emerged which offer low-cost ubiquitous sensing whilst preserving privacy. However, output range-Doppler or time-frequency spectrograms cannot represent human motion intuitively and usually require further processing. In this study, we propose MDPose, a novel framework for human skeletal motion reconstruction based on WiFi micro-Doppler. MDPose provides an effective solution to represent human activity by reconstructing skeleton models with 17 key points, which can assist with the interpretation of conventional RF sensing outputs in a more understandable way. Specifically, MDPose is implemented over three sequential stages to address various challenges: First, a denoising algorithm is employed to remove any unwanted noise that may affect feature extraction and enhance weak Doppler measurements. Second, a convolutional neural network (CNN)-recurrent neural network (RNN) architecture is applied to learn temporal-spatial dependency from clean micro-Doppler and restore velocity information to key points under the supervision of the motion capture (Mocap) system. Finally, a pose optimisation mechanism based on learning optimisation vectors is employed to estimate the initial skeletal state and to eliminate additional errors. We have conducted comprehensive evaluations in a variety of environments using numerous subjects with a single receiver radar system to demonstrate the performance of MDPose, and report a 29.4 mm mean absolute error over key point positions on several common daily activities, which is comparable to state-of-the-art RF-based pose estimation systems. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10068751/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10068751/</guid>
</item>
<item>
<title>Reviving the Euston Arch: A Mixed Reality Approach to Cultural Heritage Tours</title>
<description><p><span><big>Reviving the Euston Arch: A Mixed Reality Approach to Cultural Heritage Tours</big></span><br></p><p><span><small><i>Ziwen Lu; Jingyi Zhang; Kalila Shapiro; Nels Numan; Simon Julier; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00181">https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00181</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Augmented Reality (AR) and Virtual Reality (VR) users have distinct capabilities and experiences during Extended Reality (XR) collaborations: while AR users benefit from real-time contextual information due to physical presence, VR users enjoy the flexibility to transition between locations rapidly, unconstrained by physical space. Our research aims to utilize these spatial differences to facilitate engaging, shared XR experiences. Using Google Geospatial Creator, we enable large-scale outdoor authoring and precise localization to create a unified environment. We integrated Ubiq to allow simultaneous voice communication, avatar-based interaction and shared object manipulation across platforms. We apply AR and VR technologies in cultural heritage exploration. We selected the Euston Arch as our case study due to its dramatic architectural transformations over time. We enriched the co-exploration experience by integrating historical photos, a 3D model of the Euston Arch, and immersive audio narratives into the shared AR/VR environment.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10322275/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10322275/</guid>
</item>
<item>
<title>Revisiting Distribution-Based Registration Methods</title>
<description><p><span><big>Revisiting Distribution-Based Registration Methods</big></span><br></p><p><span><small><i>Himanshu Gupta; Henrik Andreasson; Martin Magnusson; Simon Julier; Achim J. Lilienthal</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ECMR59166.2023.10256416">https://doi.org/10.1109/ECMR59166.2023.10256416</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Normal Distribution Transformation (NDT) registration is a fast, learning-free point cloud registration algorithm that works well in diverse environments. It uses the compact NDT representation to represent point clouds or maps as a spatial probability function that models the occupancy likelihood in an environment. However, because of the grid discretization in NDT maps, the global minima of the registration cost function do not always correlate to ground truth, particularly for rotational alignment. In this study, we examined the NDT registration cost function in-depth. We evaluated three modifications (Student-t likelihood function, inflated covariance/heavily broadened likelihood curve, and overlapping grid cells) that aim to reduce the negative impact of discretization in classical NDT registration. The first NDT modification improves likelihood estimates for matching the distributions of small population sizes; the second modification reduces discretization artifacts by broadening the likelihood tails through covariance inflation; and the third modification achieves continuity by creating the NDT representations with overlapping grid cells (without increasing the total number of cells). We used the Pomerleau Dataset evaluation protocol for our experiments and found significant improvements compared to the classic NDT D2D registration approach (27.7% success rate) using the registration cost functions “heavily broadened likelihood NDT” (HBL-NDT) (34.7% success rate) and “overlapping grid cells NDT” (OGC-NDT) (33.5% success rate). However, we could not observe a consistent improvement using the Student-t likelihood-based registration cost function (22.2% success rate) over the NDT P2D registration cost function (23.7% success rate). A comparative analysis with other state-of-the-art registration algorithms is also presented in this work. We found that HBL-NDT worked best in scenarios with easy initial pose difficulty, making it suitable for consecutive point cloud registration in SLAM applications.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10256416/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10256416/</guid>
</item>
<item>
<title>Towards Outdoor Collaborative Mixed Reality: Lessons Learnt from a Prototype System</title>
<description><p><span><big>Towards Outdoor Collaborative Mixed Reality: Lessons Learnt from a Prototype System</big></span><br></p><p><span><small><i>Nels Numan; Ziwen Lu; Benjamin Congdon; Daniele Giunchi; Alexandros Rotsidis; Andreas Lernis; Kyriakos Larmos; Tereza Kourra; Panayiotis Charalambous; Yiorgos Chrysanthou; Simon Julier; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/VRW58643.2023.00029">https://doi.org/10.1109/VRW58643.2023.00029</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Most research on collaborative mixed reality (CMR) has focused on indoor spaces. In this paper, we present our ongoing work aimed at investigating the potential of CMR in outdoor spaces. These spaces present unique challenges due to their larger and more complex nature, particularly in terms of reconstruction, tracking, and interaction. Our prototype system utilises a photorealistic model to facilitate collaboration between remote virtual reality (VR) users and a local augmented reality (AR) user. We discuss our design considerations, lessons learnt, and areas for future work.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10108714/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10108714/</guid>
</item>
<item>
<title>Autonomous Mobile 3D Printing of Large-Scale Trajectories</title>
<description><p><span><big>Autonomous Mobile 3D Printing of Large-Scale Trajectories</big></span><br></p><p><span><small><i>Julius Sustarevas; Dimitrios Kanoulas; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS47612.2022.9982274">https://doi.org/10.1109/IROS47612.2022.9982274</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Mobile 3D Printing (M3DP), using printing-in-motion, is a powerful paradigm for automated construction. A mobile robot, equipped with its own power, materials and an arm-mounted extruder, simultaneously navigates and creates its environment. Such systems can be highly scalable, parallelizable and flexible. However, planning and controlling the motion of the arm and base at the same time is challenging and most deployments either avoid robot-base motion entirely or use human-prescribed robot-base paths. In a previous paper, we developed a high-level planning algorithm to automate M3DP given a print task. The generated robot-base paths avoid collisions and maintain task reachability. In this paper, we extend this work to robot control. We develop and compare three different ways to integrate the long-duration planned path with a short horizon Model Predictive Controller. Experiments are carried out via a new M3DP system, Armstone. We evaluate and demonstrate our algorithm in a 250 m long multi-layer print which is about 5 times longer than any previous physical printing-in-motion system.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9982274/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9982274/</guid>
</item>
<item>
<title>Attention-Based Applications in Extended Reality to Support Autistic Users: A Systematic Review</title>
<description><p><span><big>Attention-Based Applications in Extended Reality to Support Autistic Users: A Systematic Review</big></span><br></p><p><span><small><i>Katherine Wang; Simon J. Julier; Youngjun Cho</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ACCESS.2022.3147726">https://doi.org/10.1109/ACCESS.2022.3147726</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>With the rising prevalence of autism diagnoses, it is essential for research to understand how to leverage technology to support the diverse nature of autistic traits. While traditional interventions focused on technology for medical cure and rehabilitation, recent research aims to understand how technology can accommodate each unique situation in an efficient and engaging way. Extended reality (XR) technology has been shown to be effective in improving attention in autistic users given that it is more engaging and motivating than other traditional mediums. Here, we conducted a systematic review of 59 research articles that explored the role of attention in XR interventions for autistic users. We systematically analyzed demographics, study design and findings, including autism screening and attention measurement methods. Furthermore, given methodological inconsistencies in the literature, we systematically synthesize methods and protocols including screening tools, physiological and behavioral cues of autism and XR tasks. While there is substantial evidence for the effectiveness of using XR in attention-based interventions for autism to support autistic traits, we have identified three principal research gaps that provide promising research directions to examine how autistic populations interact with XR. First, our findings highlight the disproportionate geographic locations of autism studies and underrepresentation of autistic adults, evidence of gender disparity, and presence of individuals diagnosed with co-occurring conditions across studies. Second, many studies used an assortment of standardized and novel tasks and self-report assessments with limited tested reliability. Lastly, the research lacks evidence of performance maintenance and transferability. Based on these challenges, this paper discusses inclusive future research directions considering greater diversification of participant recruitment, robust objective evaluations using physiological measurements (e.g., eye-tracking), and follow-up maintenance sessions that promote transferrable skills. Pursuing these opportunities would lead to more effective therapy solutions, improved accessible interfaces, and engaging interactions.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9697342/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9697342/</guid>
</item>
<item>
<title>FMNet: Latent Feature-Wise Mapping Network for Cleaning Up Noisy Micro-Doppler Spectrogram</title>
<description><p><span><big>FMNet: Latent Feature-Wise Mapping Network for Cleaning Up Noisy Micro-Doppler Spectrogram</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Fangzhan Shi; Simon J. Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TGRS.2021.3121211">https://doi.org/10.1109/TGRS.2021.3121211</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Micro-Doppler signatures contain considerable information about target dynamics. However, radar sensing systems are easily affected by noisy surroundings, resulting in uninterpretable motion patterns on the micro-Doppler spectrogram ($\mu$-DS). Meanwhile, radar returns often suffer from multipath, clutter, and interference. These issues lead to difficulty in, for example, motion feature extraction and activity classification using micro-Doppler signatures. In this article, we propose a latent feature-wise mapping strategy, called feature mapping network (FMNet), to transform measured spectrograms so that they more closely resemble the output from a simulation under the same conditions. Based on the measured spectrograms and the matched simulated data, our framework contains three parts: an encoder which extracts latent representations/features, a decoder which outputs a reconstructed spectrogram according to the latent features, and a discriminator which minimizes the distance between latent features of measured and simulated data. We demonstrate FMNet with data from six activities and two experimental scenarios; the final results show strongly enhanced patterns while keeping actual motion information to the greatest extent. We also propose a novel idea which trains a classifier with only simulated data and predicts new measured samples after cleaning them up with FMNet. The final classification results show significant improvements.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9583945/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9583945/</guid>
</item>
<item>
<title>Consensus Based Networking of Distributed Virtual Environments</title>
<description><p><span><big>Consensus Based Networking of Distributed Virtual Environments</big></span><br></p><p><span><small><i>Sebastian Friston; Elias Griffith; David Swapp; Simon Julier; Caleb Irondi; Fred Jjunju; Ryan Ward; Alan Marshall; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TVCG.2021.3052580">https://doi.org/10.1109/TVCG.2021.3052580</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Distributed virtual environments (DVEs) are challenging to create as the goals of consistency and responsiveness become contradictory under increasing latency. DVEs have been considered as both distributed transactional databases and force-reflection systems. Both are good approaches, but they do have drawbacks. Transactional systems do not support Level 3 (L3) collaboration: manipulating the same degree-of-freedom at the same time. Force-reflection requires a client-server architecture and stabilisation techniques. With Consensus Based Networking (CBN), we suggest DVEs be considered as a distributed data-fusion problem. Many simulations run in parallel and exchange their states, with remote states integrated with continuous authority. Over time the exchanges average out local differences, performing a distributed-average of a consistent, shared state. CBN aims to build simulations that are highly responsive, but consistent enough for use cases such as the piano-movers problem. CBN’s support for heterogeneous nodes can transparently couple different input methods, avoid the requirement of determinism, and provide more options for personal control over the shared experience. Our work is early; however, we demonstrate many successes, including L3 collaboration in room-scale VR, 1000s of interacting objects, complex configurations such as stacking, and transparent coupling of haptic devices. These have been shown before, but each with a different technique; CBN supports them all within a single, unified system.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9328611/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9328611/</guid>
</item>
<item>
<title>Semantically Informed Next Best View Planning for Autonomous Aerial 3D Reconstruction</title>
<description><p><span><big>Semantically Informed Next Best View Planning for Autonomous Aerial 3D Reconstruction</big></span><br></p><p><span><small><i>Sebastian A. Kay; Simon Julier; Vijay M. Pawar</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS51168.2021.9636352">https://doi.org/10.1109/IROS51168.2021.9636352</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>To capture the geometry of an object by an autonomous system, next best view (NBV) planning can be used to determine the path a robot will take. However, current NBV planning algorithms do not distinguish between objects that need to be mapped and everything else in the environment, leading to inefficient search strategies. In this paper we present a novel approach for NBV planning that accounts for the importance of objects in the environment to inform navigation. Using weighted entropy to encode object utilities computed via semantic segmentation, we evaluate our approach over a set of virtual Gazebo environments comparable to construction scales. Our results show that using semantic information reduces the time required to capture a target object by at least 40 percent.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9636352/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9636352/</guid>
</item>
<item>
<title>Task-Consistent Path Planning for Mobile 3D Printing</title>
<description><p><span><big>Task-Consistent Path Planning for Mobile 3D Printing</big></span><br></p><p><span><small><i>Julius Sustarevas; Dimitrios Kanoulas; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS51168.2021.9635916">https://doi.org/10.1109/IROS51168.2021.9635916</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we explore the problem of task-consistent path planning for printing-in-motion via Mobile Manipulators (MM). MM offer a potentially unlimited planar workspace and flexibility for print operations. However, most existing methods use mobility only to relocate an arm, which then prints while stationary. In this paper we present a new fully autonomous path planning approach for mobile material deposition. We use a modified version of the Rapidly-exploring Random Tree Star (RRT*) algorithm, which is informed by a constrained Inverse Reachability Map (IRM) to ensure task consistency. Collision avoidance and end-effector reachability are respected in our approach. Our method also detects when a print path cannot be completed in a single execution. In this case it will decompose the path into several segments and reposition the base accordingly.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9635916/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9635916/</guid>
</item>
<item>
<title>Time Dependence in Kalman Filter Tuning</title>
<description><p><span><big>Time Dependence in Kalman Filter Tuning</big></span><br></p><p><span><small><i>Zhaozhong Chen; Christoffer Heckman; Simon Julier; Nisar Ahmed</i></small></span><br><span><small><i><a href="https://doi.org/10.23919/FUSION49465.2021.9626864">https://doi.org/10.23919/FUSION49465.2021.9626864</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we propose an approach to address the problems with ambiguity in tuning the process and observation noises for a discrete-time linear Kalman filter. Conventional approaches to tuning (e.g. using normalized estimation error squared and covariance minimization) compute empirical measures of filter performance. The parameters are selected, either manually or by some kind of optimization algorithm, to maximize these measures of performance. However, there are two challenges with this approach. First, in theory, many of these measures do not guarantee a unique solution due to observability issues. Second, in practice, empirically computed statistical quantities can be very noisy due to a finite number of samples. We propose a method to overcome these limitations. Our method has two main parts to it. The first is to ensure that the tuning problem has a single unique solution. We achieve this by simultaneously tuning the filter over multiple different prediction intervals. Although this yields a unique solution, practical issues (such as sampling noise) mean that it cannot be directly applied. Therefore, we use Bayesian Optimization. This technique handles noisy data and the local minima that it introduces. We demonstrate our approach on a reference example and show that we are able to obtain good results. We share the source code for the benefit of the community.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9626864/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9626864/</guid>
</item>
<item>
<title>Augmenting Experimental Data with Simulations to Improve Activity Classification in Healthcare Monitoring</title>
<description><p><span><big>Augmenting Experimental Data with Simulations to Improve Activity Classification in Healthcare Monitoring</big></span><br></p><p><span><small><i>Chong Tang; Shelly Vishwakarma; Wenda Li; Raviraj Adve; Simon Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/RadarConf2147009.2021.9455314">https://doi.org/10.1109/RadarConf2147009.2021.9455314</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Human micro-Doppler signatures in most passive WiFi radar (PWR) scenarios are captured through real-world measurements using various hardware platforms. However, gathering large volumes of high quality and diverse real radar datasets has always been an expensive and laborious task. This work presents an open-source motion capture data-driven simulation tool SimHumalator that is able to generate human micro-Doppler radar data in PWR scenarios. We qualitatively compare the micro-Doppler signatures generated through SimHumalator with the measured real signatures. Here, we present the use of SimHumalator to simulate a set of human actions. We demonstrate that augmenting a measurement database with simulated data, using SimHumalator, results in an 8% improvement in classification accuracy. Our results suggest that simulation data can be used to augment experimental datasets of limited volume to address the cold-start problem typically encountered in radar research.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9455314/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9455314/</guid>
</item>
<item>
<title>Misclassification Risk and Uncertainty Quantification in Deep Classifiers</title>
<description><p><span><big>Misclassification Risk and Uncertainty Quantification in Deep Classifiers</big></span><br></p><p><span><small><i>Murat Sensoy; Maryam Saleki; Simon Julier; Reyhan Aydogan; John Reid</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/WACV48630.2021.00253">https://doi.org/10.1109/WACV48630.2021.00253</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we propose risk-calibrated evidential deep classifiers to reduce the costs associated with classification errors. We use two main approaches. The first is to develop methods to quantify the uncertainty of a classifier’s predictions and reduce the likelihood of acting on erroneous predictions. The second is a novel way to train the classifier such that erroneous classifications are biased towards less risky categories. We combine these two approaches in a principled way. While doing this, we extend evidential deep learning with pignistic probabilities, which are used to quantify uncertainty of classification predictions and model rational decision making under uncertainty. We evaluate the performance of our approach on several image classification tasks. We demonstrate that our approach allows us to (i) incorporate misclassification cost while training deep classifiers, (ii) accurately quantify the uncertainty of classification predictions, and (iii) simultaneously learn how to make classification decisions to minimize expected cost of classification errors.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9423198/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9423198/</guid>
</item>
<item>
<title>Exploiting Semantic and Public Prior Information in MonoSLAM</title>
<description><p><span><big>Exploiting Semantic and Public Prior Information in MonoSLAM</big></span><br></p><p><span><small><i>Chenxi Ye; Yiduo Wang; Ziwen Lu; Igor Gilitschenski; Martin Parsley; Simon J. Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS45743.2020.9340845">https://doi.org/10.1109/IROS45743.2020.9340845</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>In this paper, we propose a method to use semantic information to improve the use of map priors in a sparse, feature-based MonoSLAM system. To incorporate the priors, the features in the prior and SLAM maps must be associated with one another. Most existing systems build a map using SLAM and then align it with the prior map. However, this approach assumes that the local map is accurate, and the majority of the features within it can be constrained by the prior. We use the intuition that many prior maps are created to provide semantic information. Therefore, valid associations only exist if the features in the SLAM map arise from the same kind of semantic object as the prior map. Using this intuition, we extend ORB-SLAM2 using an open source pre-trained semantic segmentation network (DeepLabV3+) to incorporate prior information from Open Street Map building footprint data. We show that the amount of drift, before loop closing, is significantly smaller than that for original ORB-SLAM2. Furthermore, we show that when ORB-SLAM2 is used as a prior-aided visual odometry system, the tracking accuracy is equal to or better than the full ORB-SLAM2 system without the need for global mapping or loop closure.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9340845/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9340845/</guid>
</item>
<item>
<title>Occupancy Detection and People Counting Using WiFi Passive Radar</title>
<description><p><span><big>Occupancy Detection and People Counting Using WiFi Passive Radar</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Kevin Chetty; Simon Julier; Karl Woodbridge</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/RadarConf2043947.2020.9266493">https://doi.org/10.1109/RadarConf2043947.2020.9266493</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Occupancy detection and people counting technologies have important uses in many scenarios ranging from management of human resources, optimising energy use in intelligent buildings and improving public services in future smart cities. Wi-Fi based sensing approaches for these applications have attracted significant attention in recent years because of their ubiquitous nature, and ability to preserve the privacy of individuals being counted. In this paper, we present a Passive Wi-Fi Radar (PWR) technique for occupancy detection and people counting. Unlike systems which exploit the Wi-Fi Received Signal Strength (RSS) and Channel State Information (CSI), PWR systems can directly be applied in any environment covered by an existing WiFi local area network without special modifications to the Wi-Fi access point. Specifically, we apply Cross Ambiguity Function (CAF) processing to generate Range-Doppler maps, then we use Time-Frequency transforms to generate Doppler spectrograms, and finally employ a CLEAN algorithm to remove the direct signal interference. A Convolutional Neural Network (CNN) and sliding-window based feature selection scheme is then used for classification. Experimental results collected from a typical office environment are used to validate the proposed PWR system for accurately determining room occupancy, and correctly predict the number of people when using four test subjects in experimental measurements.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9266493/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9266493/</guid>
</item>
<item>
<title>Directing versus Attracting Attention: Exploring the Effectiveness of Central and Peripheral Cues in Panoramic Videos</title>
<description><p><span><big>Directing versus Attracting Attention: Exploring the Effectiveness of Central and Peripheral Cues in Panoramic Videos</big></span><br></p><p><span><small><i>Anastasia Schmitz; Andrew MacQuarrie; Simon Julier; Nicola Binetti; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/VR46266.2020.00024">https://doi.org/10.1109/VR46266.2020.00024</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Filmmakers of panoramic videos frequently struggle to guide attention to Regions of Interest (ROIs) due to consumers’ freedom to explore. Some researchers hypothesize that peripheral cues attract reflexive/involuntary attention whereas cues within central vision engage and direct voluntary attention. This mixed-methods study evaluated the effectiveness of using central arrows and peripheral flickers to guide and focus attention in panoramic videos. Twenty-five adults wore a head-mounted display with an eye tracker and were guided to 14 ROIs in two panoramic videos. No significant differences emerged in regard to the number of followed cues, the time taken to reach and observe ROIs, ROI-related memory and user engagement. However, participants’ gaze travelled a significantly greater distance toward ROIs within the first 500 ms after flicker-onsets compared to arrow-onsets. Nevertheless, most users preferred the arrow and perceived it as significantly more rewarding than the flicker. The findings imply that traditional attention paradigms are not entirely applicable to panoramic videos, as peripheral cues appear to engage both involuntary and voluntary attention. Theoretical and practical implications as well as limitations are discussed.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9089479/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9089479/</guid>
</item>
<item>
<title>Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging</title>
<description><p><span><big>Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging</big></span><br></p><p><span><small><i>Youngjun Cho; Nadia Bianchi-Berthouze; Manuel Oliveira; Catherine Holloway; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ACII.2019.8925453">https://doi.org/10.1109/ACII.2019.8925453</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Automatically monitoring and quantifying stress-induced thermal dynamic information in real-world settings is an extremely important but challenging problem. In this paper, we explore whether we can use mobile thermal imaging to measure the rich physiological cues of mental stress that can be deduced from a person's nose temperature. To answer this question we build i) a framework for monitoring nasal thermal variable patterns continuously and ii) a novel set of thermal variability metrics to capture a richness of the dynamic information. We evaluated our approach in a series of studies including laboratory-based psychosocial stress-induction tasks and real-world factory settings. We demonstrate our approach has the potential for assessing stress responses beyond controlled laboratory settings.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/8925453/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/8925453/</guid>
</item>
<item>
<title>Passive Activity Classification Using Just WiFi Probe Response Signals</title>
<description><p><span><big>Passive Activity Classification Using Just WiFi Probe Response Signals</big></span><br></p><p><span><small><i>Fangzhan Shi; Kevin Chetty; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/RADAR.2019.8835660">https://doi.org/10.1109/RADAR.2019.8835660</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Passive WiFi radar shows significant promise for a wide range of applications in both security and healthcare owing to its detection, tracking and recognition capabilities. However, studies examining micro-Doppler classification using passive WiFi radar have relied on manually stimulating WiFi access points to increase the bandwidths and duty-cycles of transmissions; either through file-downloads to generate high data-rate signals, or increasing the repetition frequency of the WiFi beacon signal from its default setting. In real-world scenarios, both these approaches would require user access to the WiFi network or WiFi access point through password authentication, and therefore involve a level of cooperation which cannot always be relied upon e.g. in law-enforcement applications. In this research, we investigate WiFi activity classification using just WiFi probe response signals which can be generated using a low-cost off-the-shelf secondary device (Raspberry Pi) eliminating the requirement to actually connect to the WiFi network. This removes the need to have continuous data traffic in the network or to modify the firmware configuration to manipulate the beacon signal interval, making the technology deployable in all situations. An activity recognition model based on a convolutional neural network resulted in an overall classification accuracy of 75% when trained from scratch using 300 measured WiFi probe-response samples across 6 classes. This value is then increased to 82%, with significantly less training when adopting a transfer learning approach: initial training using WiFi data traffic signals, followed by fine-tuning using probe response signals.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/8835660/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/8835660/</guid>
</item>
<item>
<title>NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning</title>
<description><p><span><big>NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning</big></span><br></p><p><span><small><i>Moustafa Alzantot; Amy Widdicombe; Simon Julier; Mani Srivastava</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/SMARTCOMP.2019.00033">https://doi.org/10.1109/SMARTCOMP.2019.00033</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Deep Neural Networks (DNNs) deliver state-of-the-art performance in many image recognition and understanding applications. However, despite their outstanding performance, these models are black-boxes and it is hard to understand how they make their decisions. Over the past few years, researchers have studied the problem of providing explanations of why DNNs predicted their results. However, existing techniques are either obtrusive, requiring changes in model training, or suffer from low output quality. In this paper, we present a novel method, NeuroMask, for generating an interpretable explanation of classification model results. When applied to image classification models, NeuroMask identifies the image parts that are most important to classifier results by applying a mask that hides/reveals different parts of the image, before feeding it back into the model. The mask values are tuned by minimizing a properly designed cost function that preserves the classification result and encourages producing an interpretable mask. Experiments using state-of-art Convolutional Neural Networks for image recognition on different datasets (CIFAR-10 and ImageNet) show that NeuroMask successfully localizes the parts of the input image which are most relevant to the DNN decision. By showing a visual quality comparison between NeuroMask explanations and those of other methods, we find NeuroMask to be both accurate and interpretable.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/8784063/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/8784063/</guid>
</item>
</channel>
</rss>
Copilot reviewed 3 out of 3 changed files in this pull request and generated no suggestions.
Comments skipped due to low confidence (2)
lib/routes/ieee/author.ts:55
- [nitpick] The variable name 'itemAuth' is ambiguous. It should be renamed to 'author'.
authors: 'authors' in item ? item.authors.map((itemAuth) => itemAuth.preferredName).join('; ') : 'Do not have author',
lib/routes/ieee/author.ts:55
- The error message 'Do not have author' is unclear. It should be changed to 'No author information available'.
authors: 'authors' in item ? item.authors.map((itemAuth) => itemAuth.preferredName).join('; ') : 'Do not have author',
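Taken together, the two suggestions would change that mapping along the lines of the sketch below. The IeeeAuthor/IeeeArticle interfaces and the formatAuthors helper are hypothetical names introduced here for illustration (the real shape lives in lib/routes/ieee/author.ts), and a truthiness check stands in for the 'authors' in item test so the standalone snippet type-checks under strict TypeScript:

// Sketch only: assumed shapes inferred from the snippet above, not the route's actual types.
interface IeeeAuthor { preferredName: string }
interface IeeeArticle { authors?: IeeeAuthor[] }

// Both Copilot suggestions applied: 'itemAuth' renamed to 'author', and the
// fallback message changed to 'No author information available'.
const formatAuthors = (item: IeeeArticle): string =>
    item.authors ? item.authors.map((author) => author.preferredName).join('; ') : 'No author information available';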
Successfully generated as following: http://localhost:1200/ieee/author/37264968900/newest/20 - Failed ❌
Successfully generated as following: http://localhost:1200/ieee/author/37264968900/newest - Success ✔️<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Simon Julier on IEEE Xplore</title>
<link>https://ieeexplore.ieee.org/author/37264968900</link>
<atom:link href="http://localhost:1200/ieee/author/37264968900/newest" rel="self" type="application/rss+xml"></atom:link>
<description>Simon J. Julier (M’93) is currently a Senior Lecturer with the Vision, Imaging and Virtual Environments Group, Department of Computer Science, University College London (UCL), London, U.K. Before joining UCL, he worked for nine years with the 3D Mixed and Virtual Environments Laboratory, Naval Research Laboratory, Washington, DC, USA. He has worked on a number of projects, including the development of systems for sports training, coordinated search, and rescue with swarms of UAVs, remote collaboration systems, enhanced security management systems for refugee camps, and sea border surveillance in the presence of small targets. His research interests include distributed data fusion, multitarget tracking, nonlinear estimation, object recognition, and simultaneous localization and mapping. - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>[email protected] (RSSHub)</webMaster>
<language>en</language>
<image>
<url>https://ieeexplore.ieee.org/mediastore/IEEE/content/freeimages/7/10496926/10382621/julie-3350587-small.gif</url>
<title>Simon Julier on IEEE Xplore</title>
<link>https://ieeexplore.ieee.org/author/37264968900</link>
</image>
<lastBuildDate>Sun, 24 Nov 2024 15:00:48 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Kalman Filter Auto-Tuning With Consistent and Robust Bayesian Optimization</title>
<description><p><span><big>Kalman Filter Auto-Tuning With Consistent and Robust Bayesian Optimization</big></span><br></p><p><span><small><i>Zhaozhong Chen; Harel Biggie; Nisar Ahmed; Simon Julier; Christoffer Heckman</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TAES.2024.3350587">https://doi.org/10.1109/TAES.2024.3350587</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>The nonlinear and stochastic relationship between noise covariance parameter values and state estimator performance makes optimal filter tuning a very challenging problem. Popular optimization-based tuning approaches can easily get trapped in local minima, leading to poor noise parameter identification and suboptimal state estimation. Recently, black box techniques based on Bayesian optimization with Gaussian processes (GPBO) have been shown to overcome many of these issues, using normalized estimation error squared and normalized innovation error statistics to derive cost functions for Kalman filter auto-tuning. While reliable noise parameter estimates are obtained in many cases, GPBO solutions obtained with these conventional cost functions do not always converge to optimal filter noise parameters and lack robustness to parameter ambiguities in time-discretized system models. This article addresses these issues by making two main contributions. First, new cost functions are developed to determine if an estimator has been tuned correctly. It is shown that traditional chi-square tests are inadequate for correct auto-tuning because they do not accurately model the distribution of innovations when the estimator is incorrectly tuned. Second, the new metrics (formulated over multiple time discretization intervals) is combined with a student-t processes Bayesian optimization to achieve robust estimator performance for time discretized state space models. The robustness, accuracy, and reliability of our approach are illustrated on classical state estimation problems.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10382621/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10382621/</guid>
<pubDate>Sun, 07 Jan 2024 16:00:00 GMT</pubDate>
</item>
<item>
<title>Reviving the Euston Arch: A Mixed Reality Approach to Cultural Heritage Tours</title>
<description><p><span><big>Reviving the Euston Arch: A Mixed Reality Approach to Cultural Heritage Tours</big></span><br></p><p><span><small><i>Ziwen Lu; Jingyi Zhang; Kalila Shapiro; Nels Numan; Simon Julier; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00181">https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00181</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Augmented Reality (AR) and Virtual Reality (VR) users have distinct capabilities and experiences during Extended Reality (XR) collaborations: while AR users benefit from real-time contextual information due to physical presence, VR users enjoy the flexibility to transition between locations rapidly, unconstrained by physical space. Our research aims to utilize these spatial differences to facilitate engaging, shared XR experiences. Using Google Geospatial Creator, we enable large-scale outdoor authoring and precise localization to create a unified environment. We integrated Ubiq to allow simultaneous voice communication, avatar-based interaction and shared object manipulation across platforms. We apply AR and VR technologies in cultural heritage exploration. We selected the Euston Arch as our case study due to its dramatic architectural transformations over time. We enriched the co-exploration experience by integrating historical photos, a 3D model of the Euston Arch, and immersive audio narratives into the shared AR/VR environment.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10322275/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10322275/</guid>
<pubDate>Sun, 03 Dec 2023 16:00:00 GMT</pubDate>
</item>
<item>
<title>Revisiting Distribution-Based Registration Methods</title>
<description><p><span><big>Revisiting Distribution-Based Registration Methods</big></span><br></p><p><span><small><i>Himanshu Gupta; Henrik Andreasson; Martin Magnusson; Simon Julier; Achim J. Lilienthal</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ECMR59166.2023.10256416">https://doi.org/10.1109/ECMR59166.2023.10256416</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Normal Distribution Transformation (NDT) registration is a fast, learning-free point cloud registration algorithm that works well in diverse environments. It uses the compact NDT representation to represent point clouds or maps as a spatial probability function that models the occupancy likelihood in an environment. However, because of the grid discretization in NDT maps, the global minima of the registration cost function do not always correlate to ground truth, particularly for rotational alignment. In this study, we examined the NDT registration cost function in-depth. We evaluated three modifications (Student-t likelihood function, inflated covariance/heavily broadened likelihood curve, and overlapping grid cells) that aim to reduce the negative impact of discretization in classical NDT registration. The first NDT modification improves likelihood estimates for matching the distributions of small population sizes; the second modification reduces discretization artifacts by broadening the likelihood tails through covariance inflation; and the third modification achieves continuity by creating the NDT representations with overlapping grid cells (without increasing the total number of cells). We used the Pomerleau Dataset evaluation protocol for our experiments and found significant improvements compared to the classic NDT D2D registration approach (27.7% success rate) using the registration cost functions “heavily broadened likelihood NDT” (HBL-NDT) (34.7% success rate) and “overlapping grid cells NDT” (OGC-NDT) (33.5% success rate). However, we could not observe a consistent improvement using the Student-t likelihood-based registration cost function (22.2% success rate) over the NDT P2D registration cost function (23.7% success rate). A comparative analysis with other state-of-the-art registration algorithms is also presented in this work. We found that HBL-NDT worked best for easy initial pose difficulties scenarios making it suitable for consecutive point cloud registration in SLAM application.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10256416/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10256416/</guid>
<pubDate>Tue, 26 Sep 2023 16:00:00 GMT</pubDate>
</item>
<item>
<title>Towards Outdoor Collaborative Mixed Reality: Lessons Learnt from a Prototype System</title>
<description><p><span><big>Towards Outdoor Collaborative Mixed Reality: Lessons Learnt from a Prototype System</big></span><br></p><p><span><small><i>Nels Numan; Ziwen Lu; Benjamin Congdon; Daniele Giunchi; Alexandros Rotsidis; Andreas Lernis; Kyriakos Larmos; Tereza Kourra; Panayiotis Charalambous; Yiorgos Chrysanthou; Simon Julier; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/VRW58643.2023.00029">https://doi.org/10.1109/VRW58643.2023.00029</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Most research on collaborative mixed reality (CMR) has focused on indoor spaces. In this paper, we present our ongoing work aimed at investigating the potential of CMR in outdoor spaces. These spaces present unique challenges due to their larger and more complex nature, particularly in terms of reconstruction, tracking, and interaction. Our prototype system utilises a photorealistic model to facilitate collaboration between remote virtual reality (VR) users and a local augmented reality (AR) user. We discuss our design considerations, lessons learnt, and areas for future work.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10108714/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10108714/</guid>
<pubDate>Sun, 30 Apr 2023 16:00:00 GMT</pubDate>
</item>
<item>
<title>MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures</title>
<description><p><span><big>MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Fangzhan Shi; Simon Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TAES.2023.3256973">https://doi.org/10.1109/TAES.2023.3256973</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Motion tracking systems based on optical sensors typically suffer from poor lighting, occlusion, limited coverage, and may raise privacy concerns. Recently, radio-frequency (RF) based approaches using WiFi have emerged which offer low-cost ubiquitous sensing whilst preserving privacy. However, output range-Doppler or time-frequency spectrograms cannot represent human motion intuitively and usually requires further processing. In this study, we propose MDPose, a novel framework for human skeletal motion reconstruction based on WiFi micro-Doppler. MDPose provides an effective solution to represent human activity by reconstructing skeleton models with 17 key points, which can assist with the interpretation of conventional RF sensing outputs in a more understandable way. Specifically, MDPose is implemented over three sequential stages to address various challenges: First, a denoising algorithm is employed to remove any unwanted noise that may affect feature extraction and enhance weak Doppler measurements. Second, a convolutional neural network (CNN)-recurrent neural network (RNN) architecture is applied to learn temporal-spatial dependency from clean micro-Doppler and restore velocity information to key points under the supervision of the motion capture (Mocap) system. Finally, a pose optimisation mechanism based on learning optimisation vectors is employed to estimate the initial skeletal state and to eliminate additional errors. We have conducted comprehensive evaluations in a variety of environments using numerous subjects with a single receiver radar system to demonstrate the performance of MDPose, and report 29.4 mm mean absolute error over key points positions on several common daily activities, which has performance comparable to that of state-of-the-art RF-based pose estimation systems. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10068751/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10068751/</guid>
<pubDate>Mon, 13 Mar 2023 16:00:00 GMT</pubDate>
</item>
<item>
<title>Autonomous Mobile 3D Printing of Large-Scale Trajectories</title>
<description><p><span><big>Autonomous Mobile 3D Printing of Large-Scale Trajectories</big></span><br></p><p><span><small><i>Julius Sustarevas; Dimitrios Kanoulas; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS47612.2022.9982274">https://doi.org/10.1109/IROS47612.2022.9982274</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Mobile 3D Printing (M3DP), using printing-in-motion, is a powerful paradigm for automated construction. A mobile robot, equipped with its own power, materials and an arm-mounted extruder, simultaneously navigates and creates its environment. Such systems can be highly scalable, parallelizable and flexible. However, planning and controlling the motion of the arm and base at the same time is challenging and most deployments either avoid robot-base motion entirely or use human prescribed robot-base paths. In a previous paper, we developed a high-level planning algorithm to automate M3DP given a print task. The generated robot-base paths avoid collisions and maintain task reachability. In this paper, we extend this work to robot control. We develop and compare three different ways to integrate the long-duration planned path with a short horizon Model Predictive Controller. Experiments are carried out via a new M3DP system - Armstone. We evaluate and demonstrate our algorithm in a 250 m long multi-layer print which is about 5 times longer than any previous physical printing-in-motion system.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9982274/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9982274/</guid>
<pubDate>Sun, 25 Dec 2022 16:00:00 GMT</pubDate>
</item>
<item>
<title>Attention-Based Applications in Extended Reality to Support Autistic Users: A Systematic Review</title>
<description><p><span><big>Attention-Based Applications in Extended Reality to Support Autistic Users: A Systematic Review</big></span><br></p><p><span><small><i>Katherine Wang; Simon J. Julier; Youngjun Cho</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ACCESS.2022.3147726">https://doi.org/10.1109/ACCESS.2022.3147726</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>With the rising prevalence of autism diagnoses, it is essential for research to understand how to leverage technology to support the diverse nature of autistic traits. While traditional interventions focused on technology for medical cure and rehabilitation, recent research aims to understand how technology can accommodate each unique situation in an efficient and engaging way. Extended reality (XR) technology has been shown to be effective in improving attention in autistic users given that it is more engaging and motivating than other traditional mediums. Here, we conducted a systematic review of 59 research articles that explored the role of attention in XR interventions for autistic users. We systematically analyzed demographics, study design and findings, including autism screening and attention measurement methods. Furthermore, given methodological inconsistencies in the literature, we systematically synthesize methods and protocols including screening tools, physiological and behavioral cues of autism and XR tasks. While there is substantial evidence for the effectiveness of using XR in attention-based interventions for autism to support autistic traits, we have identified three principal research gaps that provide promising research directions to examine how autistic populations interact with XR. First, our findings highlight the disproportionate geographic locations of autism studies and underrepresentation of autistic adults, evidence of gender disparity, and presence of individuals diagnosed with co-occurring conditions across studies. Second, many studies used an assortment of standardized and novel tasks and self-report assessments with limited tested reliability. Lastly, the research lacks evidence of performance maintenance and transferability. Based on these challenges, this paper discusses inclusive future research directions considering greater diversification of participant recruitment, robust objective evaluations using physiological measurements (e.g., eye-tracking), and follow-up maintenance sessions that promote transferrable skills. Pursuing these opportunities would lead to more effective therapy solutions, improved accessible interfaces, and engaging interactions.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9697342/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9697342/</guid>
<pubDate>Sun, 30 Jan 2022 16:00:00 GMT</pubDate>
</item>
<item>
<title>Semantically Informed Next Best View Planning for Autonomous Aerial 3D Reconstruction</title>
<description><p><span><big>Semantically Informed Next Best View Planning for Autonomous Aerial 3D Reconstruction</big></span><br></p><p><span><small><i>Sebastian A. Kay; Simon Julier; Vijay M. Pawar</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS51168.2021.9636352">https://doi.org/10.1109/IROS51168.2021.9636352</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>To capture the geometry of an object by an autonomous system, next best view (NBV) planning can be used to determine the path a robot will take. However, current NBV planning algorithms do not distinguish between objects that need to be mapped and everything else in the environment; leading to inefficient search strategies. In this paper we present a novel approach for NBV planning that accounts for the importance of objects in the environment to inform navigation. Using weighted entropy to encode object utilities computed via semantic segmentation, we evaluate our approach over a set of virtual Gazebo environments comparable to construction scales. Our results show that using semantic information reduces the time required to capture a target object by at least 40 percent.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9636352/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9636352/</guid>
<pubDate>Wed, 15 Dec 2021 16:00:00 GMT</pubDate>
</item>
<item>
<title>FMNet: Latent Feature-Wise Mapping Network for Cleaning Up Noisy Micro-Doppler Spectrogram</title>
<description><p><span><big>FMNet: Latent Feature-Wise Mapping Network for Cleaning Up Noisy Micro-Doppler Spectrogram</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Fangzhan Shi; Simon J. Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TGRS.2021.3121211">https://doi.org/10.1109/TGRS.2021.3121211</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Micro-Doppler signatures contain considerable information about target dynamics. However, the radar sensing systems are easily affected by noisy surroundings, resulting in uninterpretable motion patterns on the micro-Doppler spectrogram ($\mu$-DS). Meanwhile, radar returns often suffer from multipath, clutter, and interference. These issues lead to difficulty in, for example, motion feature extraction and activity classification using micro-Doppler signatures. In this article, we propose a latent feature-wise mapping strategy, called feature mapping network (FMNet), to transform measured spectrograms so that they more closely resemble the output from a simulation under the same conditions. Based on measured spectrogram and the matched simulated data, our framework contains three parts: an encoder which is used to extract latent representations/features, a decoder outputs reconstructed spectrogram according to the latent features, and a discriminator minimizes the distance of latent features of measured and simulated data. We demonstrate the FMNet with six activities data and two experimental scenarios, and final results show strong enhanced patterns and can keep actual motion information to the greatest extent. On the other hand, we also propose a novel idea which trains a classifier with only simulated data and predicts new measured samples after cleaning them up with the FMNet. From final classification results, we can see significant improvements.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9583945/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9583945/</guid>
<pubDate>Wed, 20 Oct 2021 16:00:00 GMT</pubDate>
</item>
<item>
<title>Consensus Based Networking of Distributed Virtual Environments</title>
<description><p><span><big>Consensus Based Networking of Distributed Virtual Environments</big></span><br></p><p><span><small><i>Sebastian Friston; Elias Griffith; David Swapp; Simon Julier; Caleb Irondi; Fred Jjunju; Ryan Ward; Alan Marshall; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TVCG.2021.3052580">https://doi.org/10.1109/TVCG.2021.3052580</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Distributed virtual environments (DVEs) are challenging to create as the goals of consistency and responsiveness become contradictory under increasing latency. DVEs have been considered as both distributed transactional databases and force-reflection systems. Both are good approaches, but they do have drawbacks. Transactional systems do not support Level 3 (L3) collaboration: manipulating the same degree-of-freedom at the same time. Force-reflection requires a client-server architecture and stabilisation techniques. With Consensus Based Networking (CBN), we suggest DVEs be considered as a distributed data-fusion problem. Many simulations run in parallel and exchange their states, with remote states integrated with continuous authority. Over time the exchanges average out local differences, performing a distributed-average of a consistent, shared state. CBN aims to build simulations that are highly responsive, but consistent enough for use cases such as the piano-movers problem. CBN’s support for heterogeneous nodes can transparently couple different input methods, avoid the requirement of determinism, and provide more options for personal control over the shared experience. Our work is early, however we demonstrate many successes, including L3 collaboration in room-scale VR, 1000’s of interacting objects, complex configurations such as stacking, and transparent coupling of haptic devices. These have been shown before, but each with a different technique; CBN supports them all within a single, unified system.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9328611/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9328611/</guid>
<pubDate>Mon, 18 Jan 2021 16:00:00 GMT</pubDate>
</item>
</channel>
</rss>
Co-authored-by: Tony <[email protected]>
Successfully generated as following: http://localhost:1200/ieee/author/37264968900/newest - Success ✔️<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Simon Julier on IEEE Xplore</title>
<link>https://ieeexplore.ieee.org/author/37264968900</link>
<atom:link href="http://localhost:1200/ieee/author/37264968900/newest" rel="self" type="application/rss+xml"></atom:link>
<description>Simon J. Julier (M’93) is currently a Senior Lecturer with the Vision, Imaging and Virtual Environments Group, Department of Computer Science, University College London (UCL), London, U.K. Before joining UCL, he worked for nine years with the 3D Mixed and Virtual Environments Laboratory, Naval Research Laboratory, Washington, DC, USA. He has worked on a number of projects, including the development of systems for sports training, coordinated search, and rescue with swarms of UAVs, remote collaboration systems, enhanced security management systems for refugee camps, and sea border surveillance in the presence of small targets. His research interests include distributed data fusion, multitarget tracking, nonlinear estimation, object recognition, and simultaneous localization and mapping. - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>[email protected] (RSSHub)</webMaster>
<language>en</language>
<image>
<url>https://ieeexplore.ieee.org/mediastore/IEEE/content/freeimages/7/10496926/10382621/julie-3350587-small.gif</url>
<title>Simon Julier on IEEE Xplore</title>
<link>https://ieeexplore.ieee.org/author/37264968900</link>
</image>
<lastBuildDate>Mon, 25 Nov 2024 02:35:09 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Kalman Filter Auto-Tuning With Consistent and Robust Bayesian Optimization</title>
<description><p><span><big>Kalman Filter Auto-Tuning With Consistent and Robust Bayesian Optimization</big></span><br></p><p><span><small><i>Zhaozhong Chen; Harel Biggie; Nisar Ahmed; Simon Julier; Christoffer Heckman</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TAES.2024.3350587">https://doi.org/10.1109/TAES.2024.3350587</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>The nonlinear and stochastic relationship between noise covariance parameter values and state estimator performance makes optimal filter tuning a very challenging problem. Popular optimization-based tuning approaches can easily get trapped in local minima, leading to poor noise parameter identification and suboptimal state estimation. Recently, black box techniques based on Bayesian optimization with Gaussian processes (GPBO) have been shown to overcome many of these issues, using normalized estimation error squared and normalized innovation error statistics to derive cost functions for Kalman filter auto-tuning. While reliable noise parameter estimates are obtained in many cases, GPBO solutions obtained with these conventional cost functions do not always converge to optimal filter noise parameters and lack robustness to parameter ambiguities in time-discretized system models. This article addresses these issues by making two main contributions. First, new cost functions are developed to determine if an estimator has been tuned correctly. It is shown that traditional chi-square tests are inadequate for correct auto-tuning because they do not accurately model the distribution of innovations when the estimator is incorrectly tuned. Second, the new metrics (formulated over multiple time discretization intervals) is combined with a student-t processes Bayesian optimization to achieve robust estimator performance for time discretized state space models. The robustness, accuracy, and reliability of our approach are illustrated on classical state estimation problems.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10382621/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10382621/</guid>
<pubDate>Sun, 07 Jan 2024 16:00:00 GMT</pubDate>
</item>
<item>
<title>Reviving the Euston Arch: A Mixed Reality Approach to Cultural Heritage Tours</title>
<description><p><span><big>Reviving the Euston Arch: A Mixed Reality Approach to Cultural Heritage Tours</big></span><br></p><p><span><small><i>Ziwen Lu; Jingyi Zhang; Kalila Shapiro; Nels Numan; Simon Julier; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00181">https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00181</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Augmented Reality (AR) and Virtual Reality (VR) users have distinct capabilities and experiences during Extended Reality (XR) collaborations: while AR users benefit from real-time contextual information due to physical presence, VR users enjoy the flexibility to transition between locations rapidly, unconstrained by physical space. Our research aims to utilize these spatial differences to facilitate engaging, shared XR experiences. Using Google Geospatial Creator, we enable large-scale outdoor authoring and precise localization to create a unified environment. We integrated Ubiq to allow simultaneous voice communication, avatar-based interaction and shared object manipulation across platforms. We apply AR and VR technologies in cultural heritage exploration. We selected the Euston Arch as our case study due to its dramatic architectural transformations over time. We enriched the co-exploration experience by integrating historical photos, a 3D model of the Euston Arch, and immersive audio narratives into the shared AR/VR environment.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10322275/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10322275/</guid>
<pubDate>Sun, 03 Dec 2023 16:00:00 GMT</pubDate>
</item>
<item>
<title>Revisiting Distribution-Based Registration Methods</title>
<description><p><span><big>Revisiting Distribution-Based Registration Methods</big></span><br></p><p><span><small><i>Himanshu Gupta; Henrik Andreasson; Martin Magnusson; Simon Julier; Achim J. Lilienthal</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ECMR59166.2023.10256416">https://doi.org/10.1109/ECMR59166.2023.10256416</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Normal Distribution Transformation (NDT) registration is a fast, learning-free point cloud registration algorithm that works well in diverse environments. It uses the compact NDT representation to represent point clouds or maps as a spatial probability function that models the occupancy likelihood in an environment. However, because of the grid discretization in NDT maps, the global minima of the registration cost function do not always correlate to ground truth, particularly for rotational alignment. In this study, we examined the NDT registration cost function in-depth. We evaluated three modifications (Student-t likelihood function, inflated covariance/heavily broadened likelihood curve, and overlapping grid cells) that aim to reduce the negative impact of discretization in classical NDT registration. The first NDT modification improves likelihood estimates for matching the distributions of small population sizes; the second modification reduces discretization artifacts by broadening the likelihood tails through covariance inflation; and the third modification achieves continuity by creating the NDT representations with overlapping grid cells (without increasing the total number of cells). We used the Pomerleau Dataset evaluation protocol for our experiments and found significant improvements compared to the classic NDT D2D registration approach (27.7% success rate) using the registration cost functions “heavily broadened likelihood NDT” (HBL-NDT) (34.7% success rate) and “overlapping grid cells NDT” (OGC-NDT) (33.5% success rate). However, we could not observe a consistent improvement using the Student-t likelihood-based registration cost function (22.2% success rate) over the NDT P2D registration cost function (23.7% success rate). A comparative analysis with other state-of-the-art registration algorithms is also presented in this work. We found that HBL-NDT worked best for easy initial pose difficulties scenarios making it suitable for consecutive point cloud registration in SLAM application.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10256416/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10256416/</guid>
<pubDate>Tue, 26 Sep 2023 16:00:00 GMT</pubDate>
</item>
<item>
<title>Towards Outdoor Collaborative Mixed Reality: Lessons Learnt from a Prototype System</title>
<description><p><span><big>Towards Outdoor Collaborative Mixed Reality: Lessons Learnt from a Prototype System</big></span><br></p><p><span><small><i>Nels Numan; Ziwen Lu; Benjamin Congdon; Daniele Giunchi; Alexandros Rotsidis; Andreas Lernis; Kyriakos Larmos; Tereza Kourra; Panayiotis Charalambous; Yiorgos Chrysanthou; Simon Julier; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/VRW58643.2023.00029">https://doi.org/10.1109/VRW58643.2023.00029</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Most research on collaborative mixed reality (CMR) has focused on indoor spaces. In this paper, we present our ongoing work aimed at investigating the potential of CMR in outdoor spaces. These spaces present unique challenges due to their larger and more complex nature, particularly in terms of reconstruction, tracking, and interaction. Our prototype system utilises a photorealistic model to facilitate collaboration between remote virtual reality (VR) users and a local augmented reality (AR) user. We discuss our design considerations, lessons learnt, and areas for future work.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10108714/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10108714/</guid>
<pubDate>Sun, 30 Apr 2023 16:00:00 GMT</pubDate>
</item>
<item>
<title>MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures</title>
<description><p><span><big>MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Fangzhan Shi; Simon Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TAES.2023.3256973">https://doi.org/10.1109/TAES.2023.3256973</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Motion tracking systems based on optical sensors typically suffer from poor lighting, occlusion, limited coverage, and may raise privacy concerns. Recently, radio-frequency (RF) based approaches using WiFi have emerged which offer low-cost ubiquitous sensing whilst preserving privacy. However, output range-Doppler or time-frequency spectrograms cannot represent human motion intuitively and usually requires further processing. In this study, we propose MDPose, a novel framework for human skeletal motion reconstruction based on WiFi micro-Doppler. MDPose provides an effective solution to represent human activity by reconstructing skeleton models with 17 key points, which can assist with the interpretation of conventional RF sensing outputs in a more understandable way. Specifically, MDPose is implemented over three sequential stages to address various challenges: First, a denoising algorithm is employed to remove any unwanted noise that may affect feature extraction and enhance weak Doppler measurements. Second, a convolutional neural network (CNN)-recurrent neural network (RNN) architecture is applied to learn temporal-spatial dependency from clean micro-Doppler and restore velocity information to key points under the supervision of the motion capture (Mocap) system. Finally, a pose optimisation mechanism based on learning optimisation vectors is employed to estimate the initial skeletal state and to eliminate additional errors. We have conducted comprehensive evaluations in a variety of environments using numerous subjects with a single receiver radar system to demonstrate the performance of MDPose, and report 29.4 mm mean absolute error over key points positions on several common daily activities, which has performance comparable to that of state-of-the-art RF-based pose estimation systems. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/10068751/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/10068751/</guid>
<pubDate>Mon, 13 Mar 2023 16:00:00 GMT</pubDate>
</item>
<item>
<title>Autonomous Mobile 3D Printing of Large-Scale Trajectories</title>
<description><p><span><big>Autonomous Mobile 3D Printing of Large-Scale Trajectories</big></span><br></p><p><span><small><i>Julius Sustarevas; Dimitrios Kanoulas; Simon Julier</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS47612.2022.9982274">https://doi.org/10.1109/IROS47612.2022.9982274</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Mobile 3D Printing (M3DP), using printing-in-motion, is a powerful paradigm for automated construction. A mobile robot, equipped with its own power, materials and an arm-mounted extruder, simultaneously navigates and creates its environment. Such systems can be highly scalable, parallelizable and flexible. However, planning and controlling the motion of the arm and base at the same time is challenging and most deployments either avoid robot-base motion entirely or use human prescribed robot-base paths. In a previous paper, we developed a high-level planning algorithm to automate M3DP given a print task. The generated robot-base paths avoid collisions and maintain task reachability. In this paper, we extend this work to robot control. We develop and compare three different ways to integrate the long-duration planned path with a short horizon Model Predictive Controller. Experiments are carried out via a new M3DP system - Armstone. We evaluate and demonstrate our algorithm in a 250 m long multi-layer print which is about 5 times longer than any previous physical printing-in-motion system.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9982274/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9982274/</guid>
<pubDate>Sun, 25 Dec 2022 16:00:00 GMT</pubDate>
</item>
<item>
<title>Attention-Based Applications in Extended Reality to Support Autistic Users: A Systematic Review</title>
<description><p><span><big>Attention-Based Applications in Extended Reality to Support Autistic Users: A Systematic Review</big></span><br></p><p><span><small><i>Katherine Wang; Simon J. Julier; Youngjun Cho</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/ACCESS.2022.3147726">https://doi.org/10.1109/ACCESS.2022.3147726</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>With the rising prevalence of autism diagnoses, it is essential for research to understand how to leverage technology to support the diverse nature of autistic traits. While traditional interventions focused on technology for medical cure and rehabilitation, recent research aims to understand how technology can accommodate each unique situation in an efficient and engaging way. Extended reality (XR) technology has been shown to be effective in improving attention in autistic users given that it is more engaging and motivating than other traditional mediums. Here, we conducted a systematic review of 59 research articles that explored the role of attention in XR interventions for autistic users. We systematically analyzed demographics, study design and findings, including autism screening and attention measurement methods. Furthermore, given methodological inconsistencies in the literature, we systematically synthesize methods and protocols including screening tools, physiological and behavioral cues of autism and XR tasks. While there is substantial evidence for the effectiveness of using XR in attention-based interventions for autism to support autistic traits, we have identified three principal research gaps that provide promising research directions to examine how autistic populations interact with XR. First, our findings highlight the disproportionate geographic locations of autism studies and underrepresentation of autistic adults, evidence of gender disparity, and presence of individuals diagnosed with co-occurring conditions across studies. Second, many studies used an assortment of standardized and novel tasks and self-report assessments with limited tested reliability. Lastly, the research lacks evidence of performance maintenance and transferability. Based on these challenges, this paper discusses inclusive future research directions considering greater diversification of participant recruitment, robust objective evaluations using physiological measurements (e.g., eye-tracking), and follow-up maintenance sessions that promote transferrable skills. Pursuing these opportunities would lead to more effective therapy solutions, improved accessible interfaces, and engaging interactions.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9697342/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9697342/</guid>
<pubDate>Sun, 30 Jan 2022 16:00:00 GMT</pubDate>
</item>
<item>
<title>Semantically Informed Next Best View Planning for Autonomous Aerial 3D Reconstruction</title>
<description><p><span><big>Semantically Informed Next Best View Planning for Autonomous Aerial 3D Reconstruction</big></span><br></p><p><span><small><i>Sebastian A. Kay; Simon Julier; Vijay M. Pawar</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/IROS51168.2021.9636352">https://doi.org/10.1109/IROS51168.2021.9636352</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>To capture the geometry of an object by an autonomous system, next best view (NBV) planning can be used to determine the path a robot will take. However, current NBV planning algorithms do not distinguish between objects that need to be mapped and everything else in the environment; leading to inefficient search strategies. In this paper we present a novel approach for NBV planning that accounts for the importance of objects in the environment to inform navigation. Using weighted entropy to encode object utilities computed via semantic segmentation, we evaluate our approach over a set of virtual Gazebo environments comparable to construction scales. Our results show that using semantic information reduces the time required to capture a target object by at least 40 percent.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9636352/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9636352/</guid>
<pubDate>Wed, 15 Dec 2021 16:00:00 GMT</pubDate>
</item>
<item>
<title>FMNet: Latent Feature-Wise Mapping Network for Cleaning Up Noisy Micro-Doppler Spectrogram</title>
<description><p><span><big>FMNet: Latent Feature-Wise Mapping Network for Cleaning Up Noisy Micro-Doppler Spectrogram</big></span><br></p><p><span><small><i>Chong Tang; Wenda Li; Shelly Vishwakarma; Fangzhan Shi; Simon J. Julier; Kevin Chetty</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TGRS.2021.3121211">https://doi.org/10.1109/TGRS.2021.3121211</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Micro-Doppler signatures contain considerable information about target dynamics. However, radar sensing systems are easily affected by noisy surroundings, resulting in uninterpretable motion patterns on the micro-Doppler spectrogram (μ-DS). Meanwhile, radar returns often suffer from multipath, clutter, and interference. These issues make, for example, motion feature extraction and activity classification using micro-Doppler signatures difficult. In this article, we propose a latent feature-wise mapping strategy, called the feature mapping network (FMNet), to transform measured spectrograms so that they more closely resemble the output from a simulation under the same conditions. Based on measured spectrograms and the matched simulated data, our framework contains three parts: an encoder that extracts latent representations/features, a decoder that reconstructs the spectrogram from those latent features, and a discriminator that minimizes the distance between the latent features of measured and simulated data. We demonstrate the FMNet on data from six activities in two experimental scenarios; the final results show strongly enhanced patterns while preserving the actual motion information to the greatest extent. We also propose a novel idea: training a classifier with only simulated data and predicting new measured samples after cleaning them up with the FMNet. The final classification results show significant improvements.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9583945/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9583945/</guid>
<pubDate>Wed, 20 Oct 2021 16:00:00 GMT</pubDate>
</item>
<item>
<title>Consensus Based Networking of Distributed Virtual Environments</title>
<description><p><span><big>Consensus Based Networking of Distributed Virtual Environments</big></span><br></p><p><span><small><i>Sebastian Friston; Elias Griffith; David Swapp; Simon Julier; Caleb Irondi; Fred Jjunju; Ryan Ward; Alan Marshall; Anthony Steed</i></small></span><br><span><small><i><a href="https://doi.org/10.1109/TVCG.2021.3052580">https://doi.org/10.1109/TVCG.2021.3052580</a></i></small></span><br><span><small><i>Volume </i></small></span><br></p><p><span>Distributed virtual environments (DVEs) are challenging to create as the goals of consistency and responsiveness become contradictory under increasing latency. DVEs have been considered as both distributed transactional databases and force-reflection systems. Both are good approaches, but they do have drawbacks. Transactional systems do not support Level 3 (L3) collaboration: manipulating the same degree-of-freedom at the same time. Force-reflection requires a client-server architecture and stabilisation techniques. With Consensus Based Networking (CBN), we suggest DVEs be considered as a distributed data-fusion problem. Many simulations run in parallel and exchange their states, with remote states integrated with continuous authority. Over time the exchanges average out local differences, performing a distributed average of a consistent, shared state. CBN aims to build simulations that are highly responsive, but consistent enough for use cases such as the piano-movers problem. CBN’s support for heterogeneous nodes can transparently couple different input methods, avoid the requirement of determinism, and provide more options for personal control over the shared experience. Our work is early; however, we demonstrate many successes, including L3 collaboration in room-scale VR, thousands of interacting objects, complex configurations such as stacking, and transparent coupling of haptic devices. These have been shown before, but each with a different technique; CBN supports them all within a single, unified system.</span><br></p></description>
<link>https://ieeexplore.ieee.org/document/9328611/</link>
<guid isPermaLink="false">https://ieeexplore.ieee.org/document/9328611/</guid>
<pubDate>Mon, 18 Jan 2021 16:00:00 GMT</pubDate>
</item>
</channel>
</rss>
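For reviewers skimming the output above, the restored route boils down to: query IEEE Xplore's author search API for the author id in the path (`/ieee/author/:id/:sort/:count`), then map each returned record onto the item fields serialized above (title, document link/guid, pubDate, authors, abstract). The sketch below illustrates only that shape; it is not the contents of `lib/routes/ieee/author.ts`. The `rest/search` endpoint, the payload fields, and the record properties (`articleTitle`, `articleNumber`, `authors`, `insertDate`, `abstract`) are assumptions inferred from the sample feed, and `ofetch` stands in for RSSHub's internal HTTP wrapper.

```typescript
// Minimal sketch of an IEEE author feed handler (assumptions noted above).
import { ofetch } from 'ofetch';

interface AuthorRecord {
    articleTitle: string;
    articleNumber: string; // document id, e.g. 9697342 (assumed field name)
    authors?: { preferredName: string }[];
    insertDate?: string;
    abstract?: string;
}

// Hypothetical fetch: POST the author id to IEEE Xplore's search endpoint.
async function fetchAuthorArticles(authorId: string, size = 20): Promise<AuthorRecord[]> {
    const data = await ofetch<{ records: AuthorRecord[] }>('https://ieeexplore.ieee.org/rest/search', {
        method: 'POST',
        headers: { Referer: `https://ieeexplore.ieee.org/author/${authorId}` },
        body: {
            searchWithin: [`"Author Ids":${authorId}`], // assumed payload shape
            sortType: 'newest',
            rowsPerPage: size,
        },
    });
    return data.records;
}

// Map records onto the item shape serialized in the RSS sample above.
async function buildFeedItems(authorId: string) {
    const records = await fetchAuthorArticles(authorId);
    return records.map((r) => ({
        title: r.articleTitle,
        link: `https://ieeexplore.ieee.org/document/${r.articleNumber}/`,
        guid: `https://ieeexplore.ieee.org/document/${r.articleNumber}/`,
        pubDate: r.insertDate ? new Date(r.insertDate).toUTCString() : undefined,
        author: r.authors?.map((a) => a.preferredName).join('; '),
        description: r.abstract ?? r.articleTitle,
    }));
}

// e.g. buildFeedItems('37264968900') would correspond to the feed shown above.
```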
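Tangentially, the "Consensus Based Networking" abstract above hinges on one mechanism: parallel simulations repeatedly exchange state and average out local differences. That is ordinary distributed averaging. Below is a minimal generic sketch of that idea, not the paper's actual protocol; the topology, step size, and scalar state are made up for illustration.

```typescript
// Generic distributed averaging: each node repeatedly moves its local
// state toward the mean of its neighbours' states.
function consensusStep(states: number[], neighbours: number[][], alpha = 0.5): number[] {
    return states.map((x, i) => {
        const peers = neighbours[i];
        if (peers.length === 0) return x;
        const mean = peers.reduce((sum, j) => sum + states[j], 0) / peers.length;
        return x + alpha * (mean - x); // blend local state toward neighbour mean
    });
}

// Three fully connected nodes starting from different local states:
let s = [0, 5, 10];
const topo = [[1, 2], [0, 2], [0, 1]];
for (let k = 0; k < 20; k++) s = consensusStep(s, topo);
console.log(s); // all three values converge to the shared average, 5
```

In this symmetric, fully connected example each step preserves the sum of the states, so every node converges to the shared average while remaining free to integrate its own local input between exchanges, which is the responsiveness-versus-consistency trade the abstract describes.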
* style: auto format * feat(route/furaffinity): Add routes for furaffinity as a substitute for deprecated routes (#17314) * feat(route): Init furaffinity namespace, add status route * feat(route): Add browse, home, search, user routes for FA * fix(route): fix wrong url in search route * feat(route): Add gallery, scraps, favorites as art route for furaffinity * style: fix example url of art route * feat(route): Add watcher,watching route * feat(route): Add shouts, journals, commissions route * fix(route): Allow empty gallery and search result * style: Follow eslint rules * style: Fix UNUSED_VAR_ASSIGN * Fixes issues based on review feedback * Remove deprecated furaffinity routes * Update lib/routes/furaffinity/commissions.ts * Update lib/routes/furaffinity/namespace.ts --------- * fix(twitter): await set cookie (#17545) * feat(route/pixiv): refactor novels api and add series support (#17532) * feat(route/pixiv): refactor novels api and add series support * chore: cleanup * Update lib/routes/pixiv/novel-api/series/nsfw.ts Co-authored-by: Tony <[email protected]> * Update lib/routes/pixiv/novel-api/series/sfw.ts Co-authored-by: Tony <[email protected]> * Update lib/routes/pixiv/novel-api/user-novels/nsfw.ts Co-authored-by: Tony <[email protected]> * Update lib/routes/pixiv/novel-api/user-novels/nsfw.ts Co-authored-by: Tony <[email protected]> * Update lib/routes/pixiv/novels.ts Co-authored-by: Tony <[email protected]> * Update lib/routes/pixiv/series.ts Co-authored-by: Tony <[email protected]> * refactor: rename pixiv/series to pixiv/novel/series --------- * feat(route): afr (#17547) * feat(route): afr * feat(route/afr): add image support to latest and navigation endpoints * chore(deps-dev): bump @typescript-eslint/parser from 8.13.0 to 8.14.0 (#17548) Bumps [@typescript-eslint/parser](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/parser) from 8.13.0 to 8.14.0. - [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases) - [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/parser/CHANGELOG.md) - [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.14.0/packages/parser) --- updated-dependencies: - dependency-name: "@typescript-eslint/parser" dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * fix: redirection in router handler * fix: redirect old bilibili ranking route (#17553) * chore(deps-dev): bump @typescript-eslint/eslint-plugin (#17549) Bumps [@typescript-eslint/eslint-plugin](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/eslint-plugin) from 8.13.0 to 8.14.0. - [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases) - [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/eslint-plugin/CHANGELOG.md) - [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.14.0/packages/eslint-plugin) --- updated-dependencies: - dependency-name: "@typescript-eslint/eslint-plugin" dependency-type: direct:development update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * fix(route/mittrchina): update api (#17552) * fix(route): hellogithub 月刊路由增加 pubDate (#17555) * feat: add route for HRBEU School of Naval Architecture(哈尔滨工程大学船舶工程学院) (#17513) * feat: add route for HRBEU School of Naval Architecture(哈尔滨工程大学船舶工程学院) * feat: add route for HRBEU School of Naval Architecture(哈尔滨工程大学船舶工程学院) by Chi-hong22 * remove the position of .toArray() * Modify bug to make local validation successful * use .toArray() instead. * 测试白天外网许可进入情况 * use .toArray() before .map() * update the code by the suggestions with Collaborator TonyRL * update the code by the suggestions with Collaborator TonyRL * update the code by the suggestions with Collaborator TonyRL * fix(route): 修复 部分情况下 url.expanded_url 可能为 undefined 的问题 (#17560) fix #17382 Co-authored-by: CaoMeiYouRen <[email protected]> * feat(route): vertikal (#17561) * feat(route): vertikal * fix(route/vertikal): standardize title string quotes in latest.ts * fix(route/natgeo): replace got with ofetch for content loading and im… (#17562) * fix(route/natgeo): replace got with ofetch for content loading and improve data extraction * fix(route/natgeo): include image source in content loading * feat(route): add Science Tokyo News 東京科学大学ニュース (#17550) * chore(deps): bump telegram from 2.26.2 to 2.26.6 (#17384) * chore(deps): bump telegram from 2.26.2 to 2.26.6 Bumps [telegram](https://github.com/gram-js/gramjs) from 2.26.2 to 2.26.6. - [Release notes](https://github.com/gram-js/gramjs/releases) - [Commits](https://github.com/gram-js/gramjs/commits) --- updated-dependencies: - dependency-name: telegram dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> * chore: fix pnpm install --------- Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * 基础功能实现 * Update lib/routes/isct/namespace.ts Co-authored-by: Tony <[email protected]> * Update lib/routes/isct/news.ts Co-authored-by: Tony <[email protected]> --------- Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * fix(route/oschina): improve cookie handling (#17564) * chore(deps): bump hono from 4.6.9 to 4.6.10 (#17568) Bumps [hono](https://github.com/honojs/hono) from 4.6.9 to 4.6.10. - [Release notes](https://github.com/honojs/hono/releases) - [Commits](https://github.com/honojs/hono/compare/v4.6.9...v4.6.10) --- updated-dependencies: - dependency-name: hono dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @hono/node-server from 1.13.6 to 1.13.7 (#17569) Bumps [@hono/node-server](https://github.com/honojs/node-server) from 1.13.6 to 1.13.7. - [Release notes](https://github.com/honojs/node-server/releases) - [Commits](https://github.com/honojs/node-server/compare/v1.13.6...v1.13.7) --- updated-dependencies: - dependency-name: "@hono/node-server" dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * feat(route/caixin): Add support for photos channel. 
Example: https://photos.caixin.com/2024-11-02/102252287.html (#17566) * feat(route/caixin): Add support for photos channel. Example: https://photos.caixin.com/2024-11-02/102252287.html * Update utils-fulltext.ts * . * fix: Use renote ID for cross-instance notes (#17572) * fix(route/inspirehep): fix getAuthorById custom accept header (#17574) * chore(deps): bump tldts from 6.1.60 to 6.1.61 (#17579) Bumps [tldts](https://github.com/remusao/tldts) from 6.1.60 to 6.1.61. - [Release notes](https://github.com/remusao/tldts/releases) - [Changelog](https://github.com/remusao/tldts/blob/master/CHANGELOG.md) - [Commits](https://github.com/remusao/tldts/compare/v6.1.60...v6.1.61) --- updated-dependencies: - dependency-name: tldts dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump undici from 6.20.1 to 6.21.0 (#17578) Bumps [undici](https://github.com/nodejs/undici) from 6.20.1 to 6.21.0. - [Release notes](https://github.com/nodejs/undici/releases) - [Commits](https://github.com/nodejs/undici/compare/v6.20.1...v6.21.0) --- updated-dependencies: - dependency-name: undici dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * feat: new router logrocket (#17533) * feat: new router logrocket * fix: router * fix:article edit * fix:remove unused file --------- Co-authored-by: 钱巍 <[email protected]> * fix: example edit or authentication source (#17583) * feat: new router logrocket * fix: router * fix:article edit * fix:remove unused file * fix: example edit or authentication source * fix:example --------- Co-authored-by: 钱巍 <[email protected]> * chore(deps-dev): bump eslint-plugin-n from 17.13.1 to 17.13.2 (#17588) Bumps [eslint-plugin-n](https://github.com/eslint-community/eslint-plugin-n) from 17.13.1 to 17.13.2. - [Release notes](https://github.com/eslint-community/eslint-plugin-n/releases) - [Changelog](https://github.com/eslint-community/eslint-plugin-n/blob/master/CHANGELOG.md) - [Commits](https://github.com/eslint-community/eslint-plugin-n/compare/v17.13.1...v17.13.2) --- updated-dependencies: - dependency-name: eslint-plugin-n dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump discord-api-types from 0.37.104 to 0.37.105 (#17589) Bumps [discord-api-types](https://github.com/discordjs/discord-api-types) from 0.37.104 to 0.37.105. - [Release notes](https://github.com/discordjs/discord-api-types/releases) - [Changelog](https://github.com/discordjs/discord-api-types/blob/main/CHANGELOG.md) - [Commits](https://github.com/discordjs/discord-api-types/compare/0.37.104...0.37.105) --- updated-dependencies: - dependency-name: discord-api-types dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump codecov/codecov-action from 4 to 5 (#17587) Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5. 
- [Release notes](https://github.com/codecov/codecov-action/releases) - [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md) - [Commits](https://github.com/codecov/codecov-action/compare/v4...v5) --- updated-dependencies: - dependency-name: codecov/codecov-action dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * feat(route): javtrailers (#17590) * feat(route): javtrailers * feat(casts): enhance description with castWiki data * feat(mastodon): add 'fosstodon.org' to allowed site list * feat(route): qstheory magazine (#17591) * chore: escape HTML entity in route test URL display (#17592) * fix(twitter)!: 修复Twitter 长文本显示不全 (#17596) * Try fix * apply fix for mobile api * feat(route): wallstreetcn (#17597) * feat(route): wallstreetcn * fix(route): correct country_id reference in link generation * fix(bilibili)!: update article api (#17586) * fix(bilibili): update article api * fix(bilibili): use default ua --------- * feat(route): add dw route (#17575) * feat(route): add dw route * fix * Apply suggestions from code review Co-authored-by: Tony <[email protected]> * Apply suggestions from code review * Apply suggestions with code review * add mp4 video src * fix: preload metadata -------- * style: auto format * chore(deps): bump @eslint/plugin-kit from 0.2.2 to 0.2.3 (#17599) Bumps [@eslint/plugin-kit](https://github.com/eslint/rewrite) from 0.2.2 to 0.2.3. - [Release notes](https://github.com/eslint/rewrite/releases) - [Changelog](https://github.com/eslint/rewrite/blob/main/release-please-config.json) - [Commits](https://github.com/eslint/rewrite/compare/plugin-kit-v0.2.2...plugin-kit-v0.2.3) --- updated-dependencies: - dependency-name: "@eslint/plugin-kit" dependency-type: indirect ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * feat(route): add scu jwc notice (#17014) * 添加SCU教务处通知公告路由 * 空提交 * feat(route): cache tzgg * Update lib/routes/scu/jwc/tzgg.ts * feat: add icon for /scu/jwc (#17603) * docs: update maintainer github id #4083 * chore(deps): bump @hono/zod-openapi from 0.17.0 to 0.17.1 (#17614) Bumps [@hono/zod-openapi](https://github.com/honojs/middleware) from 0.17.0 to 0.17.1. - [Release notes](https://github.com/honojs/middleware/releases) - [Commits](https://github.com/honojs/middleware/compare/@hono/[email protected]...@hono/[email protected]) --- updated-dependencies: - dependency-name: "@hono/zod-openapi" dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump proxy-chain from 2.5.4 to 2.5.5 (#17616) Bumps [proxy-chain](https://github.com/apify/proxy-chain) from 2.5.4 to 2.5.5. - [Release notes](https://github.com/apify/proxy-chain/releases) - [Changelog](https://github.com/apify/proxy-chain/blob/master/CHANGELOG.md) - [Commits](https://github.com/apify/proxy-chain/compare/v2.5.4...v2.5.5) --- updated-dependencies: - dependency-name: proxy-chain dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump @eslint/eslintrc from 3.1.0 to 3.2.0 (#17612) Bumps [@eslint/eslintrc](https://github.com/eslint/eslintrc) from 3.1.0 to 3.2.0. - [Release notes](https://github.com/eslint/eslintrc/releases) - [Changelog](https://github.com/eslint/eslintrc/blob/main/CHANGELOG.md) - [Commits](https://github.com/eslint/eslintrc/compare/v3.1.0...v3.2.0) --- updated-dependencies: - dependency-name: "@eslint/eslintrc" dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump @eslint/js from 9.14.0 to 9.15.0 (#17613) Bumps [@eslint/js](https://github.com/eslint/eslint/tree/HEAD/packages/js) from 9.14.0 to 9.15.0. - [Release notes](https://github.com/eslint/eslint/releases) - [Changelog](https://github.com/eslint/eslint/blob/main/CHANGELOG.md) - [Commits](https://github.com/eslint/eslint/commits/v9.15.0/packages/js) --- updated-dependencies: - dependency-name: "@eslint/js" dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump eslint from 9.14.0 to 9.15.0 (#17615) Bumps [eslint](https://github.com/eslint/eslint) from 9.14.0 to 9.15.0. - [Release notes](https://github.com/eslint/eslint/releases) - [Changelog](https://github.com/eslint/eslint/blob/main/CHANGELOG.md) - [Commits](https://github.com/eslint/eslint/compare/v9.14.0...v9.15.0) --- updated-dependencies: - dependency-name: eslint dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * revert: "chore(deps-dev): bump eslint from 9.14.0 to 9.15.0 (#17615)" This reverts commit f6a6627f8d8231455bf0cf43889e57a38e51be2a. * feat(route): patreon (#17621) * feat(route): patreon * fix: typo * fix: typo * fix(route): 78动漫 (#17598) * fix(route): 78动漫 * fix typo * feat: add new route about air-level 空气质量 (#17594) * 123 * 空气质量 * 重复代码,无用代码删除 * 格式化代码-注释重新生成 * resolve pr problem * 1. Using String() on a string is redundant. 2. example should start with / and the namespace 3.Do not start the description with line breaks. * fix: add category --------- Co-authored-by: DESKTOP-EMU7G44\randomtree <[email protected]> * chore(deps): bump @hono/zod-openapi from 0.17.1 to 0.18.0 (#17632) Bumps [@hono/zod-openapi](https://github.com/honojs/middleware) from 0.17.1 to 0.18.0. - [Release notes](https://github.com/honojs/middleware/releases) - [Commits](https://github.com/honojs/middleware/compare/@hono/[email protected]...@hono/[email protected]) --- updated-dependencies: - dependency-name: "@hono/zod-openapi" dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump husky from 9.1.6 to 9.1.7 (#17624) Bumps [husky](https://github.com/typicode/husky) from 9.1.6 to 9.1.7. 
- [Release notes](https://github.com/typicode/husky/releases) - [Commits](https://github.com/typicode/husky/compare/v9.1.6...v9.1.7) --- updated-dependencies: - dependency-name: husky dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump @typescript-eslint/parser from 8.14.0 to 8.15.0 (#17628) Bumps [@typescript-eslint/parser](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/parser) from 8.14.0 to 8.15.0. - [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases) - [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/parser/CHANGELOG.md) - [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.15.0/packages/parser) --- updated-dependencies: - dependency-name: "@typescript-eslint/parser" dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @opentelemetry/resources from 1.27.0 to 1.28.0 (#17627) Bumps [@opentelemetry/resources](https://github.com/open-telemetry/opentelemetry-js) from 1.27.0 to 1.28.0. - [Release notes](https://github.com/open-telemetry/opentelemetry-js/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-js/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-js/compare/v1.27.0...v1.28.0) --- updated-dependencies: - dependency-name: "@opentelemetry/resources" dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump @stylistic/eslint-plugin from 2.10.1 to 2.11.0 (#17629) Bumps [@stylistic/eslint-plugin](https://github.com/eslint-stylistic/eslint-stylistic/tree/HEAD/packages/eslint-plugin) from 2.10.1 to 2.11.0. - [Release notes](https://github.com/eslint-stylistic/eslint-stylistic/releases) - [Changelog](https://github.com/eslint-stylistic/eslint-stylistic/blob/main/CHANGELOG.md) - [Commits](https://github.com/eslint-stylistic/eslint-stylistic/commits/v2.11.0/packages/eslint-plugin) --- updated-dependencies: - dependency-name: "@stylistic/eslint-plugin" dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump @typescript-eslint/eslint-plugin (#17631) Bumps [@typescript-eslint/eslint-plugin](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/eslint-plugin) from 8.14.0 to 8.15.0. - [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases) - [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/eslint-plugin/CHANGELOG.md) - [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.15.0/packages/eslint-plugin) --- updated-dependencies: - dependency-name: "@typescript-eslint/eslint-plugin" dependency-type: direct:development update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @opentelemetry/sdk-trace-base from 1.27.0 to 1.28.0 (#17623) Bumps [@opentelemetry/sdk-trace-base](https://github.com/open-telemetry/opentelemetry-js) from 1.27.0 to 1.28.0. - [Release notes](https://github.com/open-telemetry/opentelemetry-js/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-js/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-js/compare/v1.27.0...v1.28.0) --- updated-dependencies: - dependency-name: "@opentelemetry/sdk-trace-base" dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @opentelemetry/exporter-trace-otlp-http (#17626) Bumps [@opentelemetry/exporter-trace-otlp-http](https://github.com/open-telemetry/opentelemetry-js) from 0.54.2 to 0.55.0. - [Release notes](https://github.com/open-telemetry/opentelemetry-js/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-js/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-js/compare/experimental/v0.54.2...experimental/v0.55.0) --- updated-dependencies: - dependency-name: "@opentelemetry/exporter-trace-otlp-http" dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @opentelemetry/exporter-prometheus (#17622) Bumps [@opentelemetry/exporter-prometheus](https://github.com/open-telemetry/opentelemetry-js) from 0.54.2 to 0.55.0. - [Release notes](https://github.com/open-telemetry/opentelemetry-js/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-js/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-js/compare/experimental/v0.54.2...experimental/v0.55.0) --- updated-dependencies: - dependency-name: "@opentelemetry/exporter-prometheus" dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @opentelemetry/sdk-metrics from 1.27.0 to 1.28.0 (#17625) Bumps [@opentelemetry/sdk-metrics](https://github.com/open-telemetry/opentelemetry-js) from 1.27.0 to 1.28.0. - [Release notes](https://github.com/open-telemetry/opentelemetry-js/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-js/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-js/compare/v1.27.0...v1.28.0) --- updated-dependencies: - dependency-name: "@opentelemetry/sdk-metrics" dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump eslint-plugin-unicorn from 56.0.0 to 56.0.1 (#17635) Bumps [eslint-plugin-unicorn](https://github.com/sindresorhus/eslint-plugin-unicorn) from 56.0.0 to 56.0.1. 
- [Release notes](https://github.com/sindresorhus/eslint-plugin-unicorn/releases) - [Commits](https://github.com/sindresorhus/eslint-plugin-unicorn/compare/v56.0.0...v56.0.1) --- updated-dependencies: - dependency-name: eslint-plugin-unicorn dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump eslint from 9.14.0 to 9.15.0 (#17630) Bumps [eslint](https://github.com/eslint/eslint) from 9.14.0 to 9.15.0. - [Release notes](https://github.com/eslint/eslint/releases) - [Changelog](https://github.com/eslint/eslint/blob/main/CHANGELOG.md) - [Commits](https://github.com/eslint/eslint/compare/v9.14.0...v9.15.0) --- updated-dependencies: - dependency-name: eslint dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * fix(youtube): handle empty channel (#17633) * fix(route/xiaohongshu): add current time as pubDate * feat(route/steam/search): add thumbnails to steam search items (#17638) Co-authored-by: dandersch <[email protected]> * style: auto format * fix: radar rules (#17639) - remove search parameters in source - maintain the same hostname in `source` of each radar rule * feat(route): add idolmaster news (#17619) * feat(route): add idolmaster * fix toUpperCase * fix indent * Update lib/routes/idolmaster/news.ts * fix var name --------- * chore(deps-dev): bump vite-tsconfig-paths from 5.1.2 to 5.1.3 (#17641) Bumps [vite-tsconfig-paths](https://github.com/aleclarson/vite-tsconfig-paths) from 5.1.2 to 5.1.3. - [Release notes](https://github.com/aleclarson/vite-tsconfig-paths/releases) - [Commits](https://github.com/aleclarson/vite-tsconfig-paths/compare/v5.1.2...v5.1.3) --- updated-dependencies: - dependency-name: vite-tsconfig-paths dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump telegram from 2.26.6 to 2.26.8 (#17642) Bumps [telegram](https://github.com/gram-js/gramjs) from 2.26.6 to 2.26.8. - [Release notes](https://github.com/gram-js/gramjs/releases) - [Commits](https://github.com/gram-js/gramjs/commits) --- updated-dependencies: - dependency-name: telegram dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump re2js from 0.4.2 to 0.4.3 (#17643) Bumps [re2js](https://github.com/le0pard/re2js) from 0.4.2 to 0.4.3. - [Release notes](https://github.com/le0pard/re2js/releases) - [Commits](https://github.com/le0pard/re2js/compare/0.4.2...0.4.3) --- updated-dependencies: - dependency-name: re2js dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump hono from 4.6.10 to 4.6.11 (#17646) Bumps [hono](https://github.com/honojs/hono) from 4.6.10 to 4.6.11. 
- [Release notes](https://github.com/honojs/hono/releases) - [Commits](https://github.com/honojs/hono/compare/v4.6.10...v4.6.11) --- updated-dependencies: - dependency-name: hono dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * feat(route): add 「ONE · 一个」http://wufazhuce.com (#17637) * Add my new route for http://wufazhuce.com * build it with got totally * use spread operator over Array#concat(...) * use namespace wufazhuce instead of one, and correcte some habits. * use .tab-content instead of #main-container * add category in item * chore(deps-dev): bump @types/node from 22.9.0 to 22.9.1 (#17644) Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 22.9.0 to 22.9.1. - [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases) - [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node) --- updated-dependencies: - dependency-name: "@types/node" dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump xxhash-wasm from 1.0.2 to 1.1.0 (#17647) Bumps [xxhash-wasm](https://github.com/jungomi/xxhash-wasm) from 1.0.2 to 1.1.0. - [Release notes](https://github.com/jungomi/xxhash-wasm/releases) - [Changelog](https://github.com/jungomi/xxhash-wasm/blob/main/CHANGELOG.md) - [Commits](https://github.com/jungomi/xxhash-wasm/compare/v1.0.2...v1.1.0) --- updated-dependencies: - dependency-name: xxhash-wasm dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @scalar/hono-api-reference from 0.5.159 to 0.5.160 (#17648) Bumps [@scalar/hono-api-reference](https://github.com/scalar/scalar/tree/HEAD/packages/hono-api-reference) from 0.5.159 to 0.5.160. - [Changelog](https://github.com/scalar/scalar/blob/main/packages/hono-api-reference/CHANGELOG.md) - [Commits](https://github.com/scalar/scalar/commits/HEAD/packages/hono-api-reference) --- updated-dependencies: - dependency-name: "@scalar/hono-api-reference" dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump title from 3.5.3 to 4.0.0 (#17645) Bumps [title](https://github.com/vercel/title) from 3.5.3 to 4.0.0. - [Release notes](https://github.com/vercel/title/releases) - [Commits](https://github.com/vercel/title/compare/3.5.3...4.0.0) --- updated-dependencies: - dependency-name: title dependency-type: direct:production update-type: version-update:semver-major ... 
Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * fix(route/idolmaster): fix doc (#17640) * feat(route): 円谷ステーション (#17650) * feat(route): m-78 * add updated * fix error tips * Update lib/routes/m-78/news.ts --------- * fix(route): add radar rules for ‘哈尔滨理工大学教务公告’ (#17657) * fix(/scu/scupi): optimize the layout (#17653) * chore(deps): bump tldts from 6.1.61 to 6.1.62 (#17659) Bumps [tldts](https://github.com/remusao/tldts) from 6.1.61 to 6.1.62. - [Release notes](https://github.com/remusao/tldts/releases) - [Changelog](https://github.com/remusao/tldts/blob/master/CHANGELOG.md) - [Commits](https://github.com/remusao/tldts/compare/v6.1.61...v6.1.62) --- updated-dependencies: - dependency-name: tldts dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump title from 4.0.0 to 4.0.1 (#17660) Bumps [title](https://github.com/vercel/title) from 4.0.0 to 4.0.1. - [Release notes](https://github.com/vercel/title/releases) - [Commits](https://github.com/vercel/title/compare/4.0.0...4.0.1) --- updated-dependencies: - dependency-name: title dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @opentelemetry/semantic-conventions (#17658) Bumps [@opentelemetry/semantic-conventions](https://github.com/open-telemetry/opentelemetry-js) from 1.27.0 to 1.28.0. - [Release notes](https://github.com/open-telemetry/opentelemetry-js/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-js/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-js/compare/v1.27.0...v1.28.0) --- updated-dependencies: - dependency-name: "@opentelemetry/semantic-conventions" dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * fix(/tongji/sem): Add the correct icon (#17662) * revert(route/xiaohongshu): add current time as pubDate (#17665) Reason for revert: The change did not meet the expected behavior or caused issues. * fix(route): xiaohongshu fulltext add cookie authentication (#17228) * fix(route/xiaohongshu) add cookie authentication * fix(route/xiaohongshu) add cookie authentication * fix(route/xiaohongshu) add cookie authentication * fix(route/xiaohongshu) add method annotation --------- Co-authored-by: Tony <[email protected]> * feat(route/xiaohongshu): merge notes route to user and enable cookie * feat(route/xiaohongshu): add fallback get notes logics * feat(route): thepaper user (#17666) * fix(core/cache): update cache key generation to include query limit (#17674) * fix(route/xueqiu): fix getting cookie logic (#17675) * fix(route/xueqiu): fix getting cookie logic * fix(route/xueqiu): fix according to review * refactor(route/xiaohongshu): merge helper methods to util * chore(deps): bump tldts from 6.1.62 to 6.1.63 (#17679) Bumps [tldts](https://github.com/remusao/tldts) from 6.1.62 to 6.1.63. 
- [Release notes](https://github.com/remusao/tldts/releases) - [Changelog](https://github.com/remusao/tldts/blob/master/CHANGELOG.md) - [Commits](https://github.com/remusao/tldts/compare/v6.1.62...v6.1.63) --- updated-dependencies: - dependency-name: tldts dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump discord-api-types from 0.37.105 to 0.37.107 (#17680) Bumps [discord-api-types](https://github.com/discordjs/discord-api-types) from 0.37.105 to 0.37.107. - [Release notes](https://github.com/discordjs/discord-api-types/releases) - [Changelog](https://github.com/discordjs/discord-api-types/blob/main/CHANGELOG.md) - [Commits](https://github.com/discordjs/discord-api-types/compare/0.37.105...0.37.107) --- updated-dependencies: - dependency-name: discord-api-types dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump eslint-plugin-n from 17.13.2 to 17.14.0 (#17678) Bumps [eslint-plugin-n](https://github.com/eslint-community/eslint-plugin-n) from 17.13.2 to 17.14.0. - [Release notes](https://github.com/eslint-community/eslint-plugin-n/releases) - [Changelog](https://github.com/eslint-community/eslint-plugin-n/blob/master/CHANGELOG.md) - [Commits](https://github.com/eslint-community/eslint-plugin-n/compare/v17.13.2...v17.14.0) --- updated-dependencies: - dependency-name: eslint-plugin-n dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * feat(route): add route for social science journals (#17656) * Sociology Studies Jounal * Sociology Stuides Journal * delete error description * fix category name --------- Co-authored-by: CNYoki <[email protected]> * feat(route/pixiv): add language attributes for novels (#17667) * feat(route/pixiv): add language tags for novels * rm redundant elements * feat(route): cybersecurityventures (#17677) * feat(route): cybersecurityventures * update feeds title * fix(twitter): set title to author (#17673) * feat(route): fix syosetu & add more routes (#17500) * fix(route): syosetu * feat: add search route & narou package * chore: cleanup * feat: cache search & art template * fix: __dirname * chore: cleanup * feat: add dev route * chore: cleanup * refactor: improve syosetu route params and search handling * feat: add ranking routes * feat: add radar items for syosetu rankings - Add BEST5 radar items for rankings - Standardize title format - Improve params naming * chore: add space * chore: cleanup * chore: add ranking options type hint & cleanup * refactor: change route parameters from path params to query strings - Change optional parameters from path params (/:params) to query strings (?limit=N) - Remove unnecessary cache - Extract ranking-related types and constants to separate files - Extract isekai ranking handling to a separate file - Enhance code structure and readability * feat: improve usability and short novel update handling - Add chapter display support - Fix chapter route radar - Optimize ranking and search limits - Use novelupdated_at as pubDate for short novels * fix: request more items to handle tensei/tenni duplicates * 
fix: limit ranking items to maximum of 300 * fix(route): linkresearcher (#17681) * fix(route): linkresearcher * Update lib/routes/linkresearcher/index.ts Co-authored-by: Tony <[email protected]> * Update lib/routes/linkresearcher/index.ts Co-authored-by: Tony <[email protected]> * Update lib/routes/linkresearcher/index.ts Co-authored-by: Tony <[email protected]> * Update lib/routes/linkresearcher/namespace.ts Co-authored-by: Tony <[email protected]> * feat: bilingual support * feat: add author and doi --------- * docs(route/syosetu): add URLs and improve ranking docs (#17686) * docs: add URLs and improve ranking docs * docs: capitalize Syosetu namespace * chore: use some lighter dependencies (#17685) * nolyfill * pin overrides version * pnpm install --no-frozen-lockfile * fix(cnki): deprecate author articles with `:code`, now use `:name` and `:company` instead (#17682) * router: fix author * format * Update lib/routes/cnki/author.ts * update --------- * chore: remove thunder client from devcontainer and gitpod configurations (#17692) * feat(dockerhub): 添加 DockerHub 仓库路由 (#17691) * feat(dockerhub): 添加 DockerHub 仓库路由 - 新增 DockerHub 仓库路由,支持获取指定用户的仓库列表 - 支持分页获取仓库信息,默认每页10条记录 * feat(dockerhub): add description for DockerHub repositories route - 添加 DockerHub 仓库路由的描述信息 * refactor(dockerhub): 优化 DockerHub 仓库路由配置 - 修改路由名称和示例路径以提高可读性 - 将 owner 参数转换为小写以确保一致性 - 从查询参数中解析 limit 并设置默认值为 10 * fix(route) ikea/cn/low-price (#17697) * fix: fix dataguidance news feed (#17695) * update dataguidance feed * fix link * Apply suggestions from code review --------- * fix(route/kcna): Remove juche date parsing (#17694) * fix(route/kcna): Remove juche date parsing * Update news.ts * Update news.ts * Update news.ts * fix(ieee): Restore author.ts (#17688) * fix(ieee): Restore author.ts * change function * I have addressed and implemented all suggestions and recommendations. * docs * Update lib/routes/ieee/author.ts * chore(deps-dev): bump got from 14.4.4 to 14.4.5 (#17700) Bumps [got](https://github.com/sindresorhus/got) from 14.4.4 to 14.4.5. - [Release notes](https://github.com/sindresorhus/got/releases) - [Commits](https://github.com/sindresorhus/got/compare/v14.4.4...v14.4.5) --- updated-dependencies: - dependency-name: got dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump @types/node from 22.9.1 to 22.9.3 (#17702) Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 22.9.1 to 22.9.3. - [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases) - [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node) --- updated-dependencies: - dependency-name: "@types/node" dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump tldts from 6.1.63 to 6.1.64 (#17704) Bumps [tldts](https://github.com/remusao/tldts) from 6.1.63 to 6.1.64. - [Release notes](https://github.com/remusao/tldts/releases) - [Changelog](https://github.com/remusao/tldts/blob/master/CHANGELOG.md) - [Commits](https://github.com/remusao/tldts/compare/v6.1.63...v6.1.64) --- updated-dependencies: - dependency-name: tldts dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @scalar/hono-api-reference from 0.5.160 to 0.5.161 (#17706) Bumps [@scalar/hono-api-reference](https://github.com/scalar/scalar/tree/HEAD/packages/hono-api-reference) from 0.5.160 to 0.5.161. - [Changelog](https://github.com/scalar/scalar/blob/main/packages/hono-api-reference/CHANGELOG.md) - [Commits](https://github.com/scalar/scalar/commits/HEAD/packages/hono-api-reference) --- updated-dependencies: - dependency-name: "@scalar/hono-api-reference" dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump hono from 4.6.11 to 4.6.12 (#17705) Bumps [hono](https://github.com/honojs/hono) from 4.6.11 to 4.6.12. - [Release notes](https://github.com/honojs/hono/releases) - [Commits](https://github.com/honojs/hono/compare/v4.6.11...v4.6.12) --- updated-dependencies: - dependency-name: hono dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * feat(route/pixiv): add more precise datetime and author name for NSFW… (#17698) * feat(route/pixiv): add more precise datetime and author name for NSFW novels * refactor code * add maintainer * update types * refactor code * fix!: revert #17667 This reduce the no. of HTTP requests by half. HTTP requests should be spent on fetching the most essential data like title and description. Doubling the no. of HTTP requests to serve one minor property is not elegant. --------- * fix(route/syosetu): HTML escaping in novel description & some minor changes (#17710) * fix(route/bilibili): fix manga updates (#17711) Closes https://github.com/DIYgod/RSSHub/issues/17690 * chore(route): add more popular routes * chore(route): add more social media popular routes * chore(route): add more new media routes * fix(route): daily.ts 骨朵日榜修复 (#17652) * Update daily.ts fix guduo data * Update daily.ts * Update lib/routes/guduodata/daily.ts 原地址已经被官方废弃,新地址是可以正常被访问的 * chore(route): add more new media routes * fix(route): aeon and bjp url * chore(deps-dev): bump discord-api-types from 0.37.107 to 0.37.108 (#17713) Bumps [discord-api-types](https://github.com/discordjs/discord-api-types) from 0.37.107 to 0.37.108. - [Release notes](https://github.com/discordjs/discord-api-types/releases) - [Changelog](https://github.com/discordjs/discord-api-types/blob/main/CHANGELOG.md) - [Commits](https://github.com/discordjs/discord-api-types/compare/0.37.107...0.37.108) --- updated-dependencies: - dependency-name: discord-api-types dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(route/xiaohongshu): set note as default type * chore(deps-dev): bump @types/node from 22.9.3 to 22.10.0 (#17717) Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 22.9.3 to 22.10.0. 
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases) - [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node) --- updated-dependencies: - dependency-name: "@types/node" dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump @typescript-eslint/parser from 8.15.0 to 8.16.0 (#17714) Bumps [@typescript-eslint/parser](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/parser) from 8.15.0 to 8.16.0. - [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases) - [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/parser/CHANGELOG.md) - [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.16.0/packages/parser) --- updated-dependencies: - dependency-name: "@typescript-eslint/parser" dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump @typescript-eslint/eslint-plugin (#17715) Bumps [@typescript-eslint/eslint-plugin](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/eslint-plugin) from 8.15.0 to 8.16.0. - [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases) - [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/eslint-plugin/CHANGELOG.md) - [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.16.0/packages/eslint-plugin) --- updated-dependencies: - dependency-name: "@typescript-eslint/eslint-plugin" dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump typescript from 5.6.3 to 5.7.2 (#17701) Bumps [typescript](https://github.com/microsoft/TypeScript) from 5.6.3 to 5.7.2. - [Release notes](https://github.com/microsoft/TypeScript/releases) - [Changelog](https://github.com/microsoft/TypeScript/blob/main/azure-pipelines.release.yml) - [Commits](https://github.com/microsoft/TypeScript/compare/v5.6.3...v5.7.2) --- updated-dependencies: - dependency-name: typescript dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump prettier from 3.3.3 to 3.4.0 (#17716) Bumps [prettier](https://github.com/prettier/prettier) from 3.3.3 to 3.4.0. - [Release notes](https://github.com/prettier/prettier/releases) - [Changelog](https://github.com/prettier/prettier/blob/main/CHANGELOG.md) - [Commits](https://github.com/prettier/prettier/compare/3.3.3...3.4.0) --- updated-dependencies: - dependency-name: prettier dependency-type: direct:development update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * fix(route): mwm namespace close #14590 * fix(route/dockerhub): new tag route * fix(api): rss3 network format * chore(deps-dev): bump prettier from 3.4.0 to 3.4.1 (#17724) Bumps [prettier](https://github.com/prettier/prettier) from 3.4.0 to 3.4.1. - [Release notes](https://github.com/prettier/prettier/releases) - [Changelog](https://github.com/prettier/prettier/blob/main/CHANGELOG.md) - [Commits](https://github.com/prettier/prettier/compare/3.4.0...3.4.1) --- updated-dependencies: - dependency-name: prettier dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump discord-api-types from 0.37.108 to 0.37.109 (#17728) Bumps [discord-api-types](https://github.com/discordjs/discord-api-types) from 0.37.108 to 0.37.109. - [Release notes](https://github.com/discordjs/discord-api-types/releases) - [Changelog](https://github.com/discordjs/discord-api-types/blob/main/CHANGELOG.md) - [Commits](https://github.com/discordjs/discord-api-types/compare/0.37.108...0.37.109) --- updated-dependencies: - dependency-name: discord-api-types dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump @bbob/types from 4.1.1 to 4.2.0 (#17726) Bumps [@bbob/types](https://github.com/JiLiZART/bbob) from 4.1.1 to 4.2.0. - [Release notes](https://github.com/JiLiZART/bbob/releases) - [Changelog](https://github.com/JiLiZART/BBob/blob/master/CHANGELOG.md) - [Commits](https://github.com/JiLiZART/bbob/commits) --- updated-dependencies: - dependency-name: "@bbob/types" dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @bbob/preset-html5 from 4.1.1 to 4.2.0 (#17727) Bumps [@bbob/preset-html5](https://github.com/JiLiZART/bbob) from 4.1.1 to 4.2.0. - [Release notes](https://github.com/JiLiZART/bbob/releases) - [Changelog](https://github.com/JiLiZART/BBob/blob/master/CHANGELOG.md) - [Commits](https://github.com/JiLiZART/bbob/commits) --- updated-dependencies: - dependency-name: "@bbob/preset-html5" dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @bbob/html from 4.1.1 to 4.2.0 (#17725) Bumps [@bbob/html](https://github.com/JiLiZART/bbob) from 4.1.1 to 4.2.0. - [Release notes](https://github.com/JiLiZART/bbob/releases) - [Changelog](https://github.com/JiLiZART/BBob/blob/master/CHANGELOG.md) - [Commits](https://github.com/JiLiZART/bbob/commits) --- updated-dependencies: - dependency-name: "@bbob/html" dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * fix(route): agirls topic list (#17731) * feat(routes/shu): add routes for SHU's Int'l Dept, Grad School, and Campus Highlights. 
(#17730) * feat(routes/shu): add routes for SHU's Int'l Dept, Grad School, and Campus Highlights - Corrected the root URL in `index.ts`. - Added routes for: - SHU's International Department (Int'l Dept). - Graduate School (Grad School). - Campus Highlights. - Noted the unavailability of the policy in `jwb.ts` with a comment in `index.ts`. * Update lib/routes/shu/index.ts Co-authored-by: Tony <[email protected]> * Update lib/routes/shu/jwb.ts Co-authored-by: Tony <[email protected]> * Apply camelCase to variable names across the project. * Refactor: change to use detailed request format for GET request. * feat: refine content extraction and fix gs.shu.edu.cn issues - Refactored content extraction to focus on specific descriptions. - Added exception handling for inaccessible gs1.shu.edu.cn links. - Fixed bug where gs.shu.edu.cn content could not be retrieved. - Fixed Code scanning/ESLint warning: replaced disallowed syntax with .toArray(). * fix: Resolve ESLint warnings and errors * Update lib/routes/shu/xykd.ts Co-authored-by: Tony <[email protected]> * fix: Resolve ESLint warnings and errors again * fix: Resolve ESLint warnings and errors --------- * style: auto format * fix(route): taptap (#17732) * fix(route): taptap * fix(route): correct language code formatting in TapTap routes * chore(route/thepetcity): update namespace language * chore(route/theverge): add popular new media routes * chore(deps): bump @hono/zod-openapi from 0.18.0 to 0.18.1 (#17737) Bumps [@hono/zod-openapi](https://github.com/honojs/middleware) from 0.18.0 to 0.18.1. - [Release notes](https://github.com/honojs/middleware/releases) - [Commits](https://github.com/honojs/middleware/compare/@hono/[email protected]...@hono/[email protected]) --- updated-dependencies: - dependency-name: "@hono/zod-openapi" dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @scalar/hono-api-reference from 0.5.161 to 0.5.162 (#17739) Bumps [@scalar/hono-api-reference](https://github.com/scalar/scalar/tree/HEAD/packages/hono-api-reference) from 0.5.161 to 0.5.162. - [Changelog](https://github.com/scalar/scalar/blob/main/packages/hono-api-reference/CHANGELOG.md) - [Commits](https://github.com/scalar/scalar/commits/HEAD/packages/hono-api-reference) --- updated-dependencies: - dependency-name: "@scalar/hono-api-reference" dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * feat(api/rss3): change platform to RSSHub * chore(deps-dev): bump @types/node from 22.10.0 to 22.10.1 (#17738) Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 22.10.0 to 22.10.1. - [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases) - [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node) --- updated-dependencies: - dependency-name: "@types/node" dependency-type: direct:development update-type: version-update:semver-patch ... 
* fix(wallstreetcn): crash when an article is deleted (#17734): `TypeError: Cannot read properties of undefined (reading 'display_name')`, plus a camelCase style fix (see the guard sketch after the commit list)
* feat(route/apple/podcast): add optional region parameter (#17741)
* feat(aeon): enhance category and type routes with detailed parameters and improved data fetching (#17745)
* feat(isct): add `<category>` for Isct news (#17744), with camelCase and /isct/news/en follow-up fixes
* feat(pinterest): add Pinterest (#17747)
* feat(route/twitter): add third-party twitter api support
* refactor(route/twitter): keep twitter graphql endpoints consistent
* fix(route/twitter): add a switch to enable the third-party api
* chore(deps-dev): bump discord-api-types from 0.37.109 to 0.37.110 (#17752)
* chore(deps): bump @hono/zod-openapi from 0.18.1 to 0.18.2 (#17753)
* feat(route): add UK Parliament Petitions (#17746)
* fix(route/newrank): wechat route error
* fix(route): bluesky allow empty (#17751)
* feat: Feature/foodtalks (#17718): removed unnecessary files, returned promises from the handler, and passed the Promises as items
* feat(picnob): cache user metadata & video playback in img_multi (#17756)
* feat(foodtalks): add param `:limit?` to set the number of articles (#17755): page size raised from 15 to 30, source authentication code added
  * Follow-ups: changed the description and read the limit via `ctx.req.query('limit')` (see the query-parameter sketch after the commit list).
* fix(latepost): `TypeError: Cannot read properties of undefined (reading 'title')` (#17759): same failure mode as the wallstreetcn fix above
* fix(route/fastbull): use another site and update the domain (#17765)
* feat(route): add 趣集盐选故事 (#17761)
* feat(route/qingting): return the first page of programs instead of 10
* feat(route): add LivePhoto video support for 小红书 (#17760)
* chore(deps): bump mailparser from 3.7.1 to 3.7.2 (#17769)
* chore(deps-dev): bump @vercel/nft from 0.27.6 to 0.27.7 (#17775)
* chore(deps-dev): bump eslint-plugin-yml from 1.15.0 to 1.16.0 (#17768)
* chore(deps): bump dawidd6/action-download-artifact from 6 to 7 (#17776)
* chore(deps): bump @hono/zod-openapi from 0.18.2 to 0.18.3 (#17773)
* chore(deps-dev): bump globals from 15.12.0 to 15.13.0 (#17774)
* chore(deps): bump tldts from 6.1.64 to 6.1.65 (#17772)
* chore(deps-dev): bump @eslint/js from 9.15.0 to 9.16.0 (#17771)
* chore(deps-dev): bump eslint from 9.15.0 to 9.16.0 (#17770)
* feat(route): add taiwanmobile rate-plans (#17766): removed title and date labels, used `.toArray()` before `.map()` …
* fix(ieee): Restore author.ts (a skeleton of the route's shape follows the commit list)
* change function
* I have addressed and implemented all suggestions and recommendations.
* docs
* Update lib/routes/ieee/author.ts
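The wallstreetcn and latepost crashes in the list above are the same failure mode: dereferencing a field on an object the API no longer returns once an article is deleted. Below is a minimal sketch of the usual guard; the interface is hypothetical, since the real response shapes are not shown in this log.

```ts
// Hypothetical response shape, for illustration only.
interface Article {
    title?: string;
    author?: { display_name?: string };
}

function renderAuthor(article?: Article): string {
    // Optional chaining plus a fallback keeps a deleted article from
    // throwing "Cannot read properties of undefined".
    return article?.author?.display_name ?? 'Unknown author';
}
```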
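The foodtalks `:limit?` commit reads its value with `ctx.req.query('limit')`, which is Hono's query-parameter API (RSSHub handlers receive a Hono context). A sketch of the parsing step, with the default of 30 taken from the page-size change mentioned in the same commit; the helper name is invented:

```ts
import type { Context } from 'hono';

// query() returns string | undefined, so parse defensively and
// fall back to the route's default page size.
function resolveLimit(ctx: Context, fallback = 30): number {
    const raw = ctx.req.query('limit');
    const parsed = raw === undefined ? Number.NaN : Number.parseInt(raw, 10);
    return Number.isNaN(parsed) ? fallback : parsed;
}

// Inside a handler: const items = allItems.slice(0, resolveLimit(ctx));
```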
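Two commits above (the shu refactor and taiwanmobile) replace direct `.map()` on a cheerio selection with `.toArray()` first, since the project's lint setup flags cheerio's own `.map()`. A small before/after sketch:

```ts
import * as cheerio from 'cheerio';

const $ = cheerio.load('<ul><li>First</li><li>Second</li></ul>');

// Flagged pattern: cheerio's .map() returns a cheerio collection and
// needs an extra .get() to become a plain array.
// const titles = $('li').map((_, el) => $(el).text()).get();

// Preferred pattern: convert to Element[] first, then use Array#map.
const titles = $('li')
    .toArray()
    .map((el) => $(el).text());

console.log(titles); // ['First', 'Second']
```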
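For context on what this PR restores: the sketch below shows the general shape of an RSSHub route module such as `lib/routes/ieee/author.ts`. The path parameters, metadata values, and handler body are illustrative guesses, not the contents of the actual file.

```ts
import { Route } from '@/types';

export const route: Route = {
    // Parameter names are assumptions; the restored file defines the real path.
    path: '/author/:id/:sortType?/:count?',
    example: '/ieee/author/123', // placeholder, not a real author id
    name: 'Author',
    maintainers: [],
    handler,
};

async function handler(ctx) {
    const { id } = ctx.req.param();
    // Fetch the author's publication list from IEEE Xplore and map each
    // paper to an RSS item (omitted in this sketch).
    return {
        title: `IEEE Author ${id}`,
        link: `https://ieeexplore.ieee.org/author/${id}`,
        item: [],
    };
}
```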
Involved Issue / 该 PR 相关 Issue
Close #
Example for the Proposed Route(s) / 路由地址示例
New RSS Route Checklist / 新 RSS 路由检查表
Puppeteer
Note / 说明