Neural Computing-Driven Signal Processing Frameworks for IoT-Enabled AR/VR and Robotic Systems: A VLSI-Centric Perspective
DOI: https://doi.org/10.31838/JVCS/07.01.26

Keywords: Neural signal processing, IoT, AR/VR, robotics, VLSI implementation, edge computing, adaptive hardware

Abstract
This article presents a VLSI-based neural signal processing framework designed to deliver high-performance, low-latency, and energy-efficient computation for next-generation IoT-enabled augmented/virtual reality (AR/VR) and robotic systems. The proposed architecture combines hardware/software co-design, neural model compression, and scalable VLSI implementation to enable real-time on-device intelligence. At its core, the framework employs an adaptive multi-stage pipeline that fuses multimodal sensor data (vision, motion, and environmental streams) through a hybrid neural signal processing stack composed of convolutional, recurrent, and spiking neural modules. Unlike conventional DSP or purely algorithmic accelerators, the system relies on VLSI-aware neural mapping, dataflow scheduling, and precision-adaptive arithmetic to minimize computation latency and power consumption under strict edge resource constraints. Implemented on a reconfigurable FPGA-VLSI platform, the design demonstrates substantial gains across AR/VR and robotic workloads, reducing system latency by up to 3.2x, improving throughput by up to 2.7x, and cutting energy consumption by up to 58 percent relative to baseline DSP and classical processing designs. These findings establish the framework as a unified, extensible platform for real-time, signal-driven intelligence that can advance immersive, autonomous, and edge-sensitive computing in smart robotics, wearable systems, and cyber-physical environments.
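To make the abstract's notion of a precision-adaptive, multi-stage pipeline concrete, the following is a minimal Python sketch, not the authors' implementation. All names (`choose_bits`, `quantize`, `conv_stage`, `recurrent_stage`, `pipeline`) and the bit-width policy are illustrative assumptions: the idea shown is that each stage inspects the dynamic range of its output and narrows the fixed-point datapath accordingly, trading arithmetic precision for lower latency and energy.

```python
import numpy as np

def quantize(x, bits, frac_bits):
    """Quantize a float array to signed fixed-point with the given total
    bit-width and fractional bits, then de-quantize for simulation."""
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), qmin, qmax) / scale

def choose_bits(x, budget_bits=8, min_bits=4):
    """Hypothetical precision-adaptation policy: spend fewer bits on
    low-dynamic-range signals, never exceeding a per-stage budget."""
    dyn = np.max(np.abs(x)) + 1e-12
    needed = int(np.ceil(np.log2(dyn + 1))) + 1   # sign + integer bits
    return max(min_bits, min(budget_bits, needed + 4))  # + fractional bits

def conv_stage(x, kernel):
    """1-D convolution standing in for the convolutional front end."""
    return np.convolve(x, kernel, mode="same")

def recurrent_stage(x, decay=0.9):
    """Leaky accumulator standing in for the recurrent module."""
    state, out = 0.0, np.empty_like(x)
    for i, v in enumerate(x):
        state = decay * state + (1.0 - decay) * v
        out[i] = state
    return out

def pipeline(sensor_frame):
    """Multi-stage pipeline that re-selects the datapath width per stage."""
    kernel = np.array([0.25, 0.5, 0.25])          # placeholder weights
    y = conv_stage(sensor_frame, kernel)
    bits = choose_bits(y)
    y = quantize(y, bits, frac_bits=bits - 2)     # narrow inter-stage datapath
    y = recurrent_stage(y)
    bits = choose_bits(y)
    return quantize(y, bits, frac_bits=bits - 2), bits

if __name__ == "__main__":
    frame = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.05 * np.random.randn(64)
    out, final_bits = pipeline(frame)
    print(f"output[:4] = {out[:4]}, final stage width = {final_bits} bits")
```

In a hardware realization, the per-stage bit-width decision would map to reconfigurable multiplier and accumulator widths on the FPGA-VLSI fabric rather than to simulated rounding, but the control flow is analogous.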



