https://vlsijournal.com/index.php/vlsi/issue/feed
Journal of VLSI Circuits and Systems
2026-01-06T07:03:31+03:00
Dr. A. Yamini (vlsi@sccts.org)
Open Journal Systems

<p>The <em>Journal of VLSI Circuits and Systems</em> is a peer-reviewed journal committed to publishing high-impact research in the field of Very-Large-Scale Integration (VLSI) design and systems engineering. The journal provides a platform for disseminating cutting-edge innovations that span the full spectrum of theoretical advances, simulation models, architecture design, physical implementations, and system-level integration in VLSI technology. (ISSN 2582-1458)</p>

<p>The journal invites original research papers, reviews, and application-driven studies that explore novel methodologies, tools, and trends across digital, analog, mixed-signal, and RF integrated circuits, as well as embedded and neuromorphic systems.</p>

<p><strong>The journal covers a broad spectrum of topics related to VLSI circuits and systems, including but not limited to:</strong></p>

<ol>
<li><strong>VLSI Circuit Design</strong></li>
</ol>
<ul>
<li>Low-power, high-speed digital circuit design methodologies.</li>
<li>Analog and mixed-signal integrated circuits (ADCs/DACs, PLLs, oscillators).</li>
<li>Emerging logic families: adiabatic, quantum-dot cellular automata (QCA), reversible logic.</li>
<li>Radiation-hardened and fault-tolerant circuit design.</li>
<li>Clocking strategies, synchronization circuits, and time-interleaved designs.</li>
</ul>
<ol start="2">
<li><strong>Design Automation and EDA Tools</strong></li>
</ol>
<ul>
<li>Hardware Description Languages (HDL), High-Level Synthesis (HLS), and Register Transfer Level (RTL) design.</li>
<li>Placement, routing, and layout optimization.</li>
<li>Logic and physical synthesis for power, performance, and area (PPA).</li>
<li>AI/ML-driven EDA and design-space exploration.</li>
<li>Formal verification, equivalence checking, and constraint-driven simulation.</li>
</ul>
<ol start="3">
<li><strong>VLSI System Architectures</strong></li>
</ol>
<ul>
<li>System-on-Chip (SoC), Network-on-Chip (NoC), and chiplet-based modular architectures.</li>
<li>Hardware/software co-design and hardware accelerators for edge and cloud computing.</li>
<li>Memory subsystems: SRAM, DRAM, eNVM, MRAM, ReRAM integration.</li>
<li>Application-specific architectures for AI, DSP, cryptography, and bioinformatics.</li>
</ul>
<ol start="4">
<li><strong>Emerging Trends and Technologies</strong></li>
</ol>
<ul>
<li>3D ICs, Through-Silicon Vias (TSVs), and heterogeneous integration.</li>
<li>Neuromorphic, brain-inspired, and spiking neural network hardware.</li>
<li>Quantum VLSI circuits and cryo-CMOS design challenges.</li>
<li>Photonic and plasmonic interconnects and optical VLSI.</li>
<li>Approximate computing and in-memory computation (IMC).</li>
</ul>
<ol start="5">
<li><strong>Hardware Security and Reliability</strong></li>
</ol>
<ul>
<li>Secure VLSI design, side-channel attack mitigation, and logic obfuscation.</li>
<li>Hardware Trojans, counterfeit detection, and Physically Unclonable Functions (PUFs).</li>
<li>Process variation analysis, aging-aware design, and reliability enhancement techniques.</li>
<li>Design-for-testability (DFT), built-in self-test (BIST), and fault modeling.</li>
</ul>
<ol start="6">
<li><strong>AI and Reconfigurable VLSI Systems</strong></li>
</ol>
<ul>
<li>FPGA/ASIC implementations of deep neural networks, transformers, and edge AI.</li>
<li>Real-time processing using dynamic partial reconfiguration.</li>
<li>Hardware-aware neural architecture search (NAS) and pruning techniques.</li>
<li>Custom tensor processors and systolic arrays for AI/ML inference and training.</li>
</ul>
<ol start="7">
<li><strong>Applications and Benchmarking</strong></li>
</ol>
<ul>
<li>VLSI solutions for biomedical implants, autonomous vehicles, IoT, AR/VR, and robotics.</li>
<li>Edge-computing accelerators with ultra-low power constraints.</li>
<li>Energy-harvesting and battery-less VLSI systems.</li>
<li>Benchmarking methodologies for performance, energy efficiency, and silicon area.</li>
</ul>

<p>The journal targets academic researchers, VLSI designers, industry professionals, and students, aiming to advance VLSI circuit and system design through high-quality research.<br /><br /><strong>Frequency</strong> - 2 issues per year<br /><strong>ISSN</strong> - 2582-1458</p>

https://vlsijournal.com/index.php/vlsi/article/view/252
Unified Multimodal 64-Bit Arithmetic Logic Unit for High-Performance Computing Architectures
2025-10-06T20:13:33+03:00
Shilpi Birla (shilpi.birla@jaipur.manipal.edu), Neha Singh (neha.singh@jaipur.manipal.edu), Jeevani G.S.N (g.229202080@muj.manipal.edu), Renu Kumawat (renu.kumawat@jaipur.manipal.edu), Avireni Srinivasulu (avireni@ieee.org)
<p>An Arithmetic Logic Unit (ALU) is the core component of any processing unit, performing the arithmetic and logical operations of modern computing. The design of ALUs for specific tasks, including integer and floating-point arithmetic, logical operations, data movement, and control functions, influences CPU architecture and digital system design. This work presents a unified ALU implemented in the Verilog hardware description language (HDL), capable of performing arithmetic and logic operations across diverse numerical representations. The ALU integrates a logic unit, a signed arithmetic processor, an unsigned arithmetic processor, and a floating-point arithmetic processor. Operation selection is governed by select-line signals, facilitating versatile user-driven functionality at the hardware level. To enhance computational efficiency when multiplying 64-bit signed and unsigned operands, which generates a 128-bit result, the proposed ALU architecture employs parallelism by processing the most significant and least significant bits simultaneously. A flexible selective-output mechanism enables the user to extract the desired segment of the result. By consolidating floating-point and fixed-point computation within a single ALU instance, the proposed architecture reduces silicon area, power dissipation, and computational latency while streamlining routing complexity. This high-throughput, multimodal ALU design is particularly suited for deployment in heterogeneous computing environments such as general-purpose processors, cryptographic accelerators, and machine intelligence hardware, where rapid processing of heterogeneous data types is essential for workload optimization and energy-efficient system operation.</p>
2025-11-05T00:00:00+03:00
Copyright (c) 2025 Journal of VLSI Circuits and Systems
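To make the parallel-multiplication scheme concrete: a 64×64-bit product decomposes into four independent partial products that hardware can compute simultaneously before recombination. The sketch below is a minimal Python behavioral model of that decomposition (unsigned case) plus a segment-select output; the 32-bit split and all function names are illustrative assumptions, not the paper's actual RTL.

```python
# Behavioral sketch: 64-bit x 64-bit -> 128-bit multiplication via 32-bit
# half decomposition. The four partial products are independent, so a
# hardware implementation can compute them in parallel before recombining.
MASK32 = (1 << 32) - 1
MASK64 = (1 << 64) - 1

def mul64_parallel(a: int, b: int) -> int:
    a_lo, a_hi = a & MASK32, (a >> 32) & MASK32
    b_lo, b_hi = b & MASK32, (b >> 32) & MASK32
    # Independent partial products (parallel multiplier blocks in hardware)
    ll = a_lo * b_lo
    lh = a_lo * b_hi
    hl = a_hi * b_lo
    hh = a_hi * b_hi
    return (ll + ((lh + hl) << 32) + (hh << 64)) & ((1 << 128) - 1)

def select_segment(result128: int, sel: int) -> int:
    """Selective output: sel=0 -> lower 64 bits, sel=1 -> upper 64 bits."""
    return (result128 >> (64 * sel)) & MASK64

r = mul64_parallel(0xFFFFFFFFFFFFFFFF, 0xFFFFFFFFFFFFFFFF)
assert select_segment(r, 1) == 0xFFFFFFFFFFFFFFFE  # upper half
assert select_segment(r, 0) == 0x0000000000000001  # lower half
```

In RTL, the four partial products would typically map to parallel multiplier blocks, with the shift-and-add recombination folded into an adder tree.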
https://vlsijournal.com/index.php/vlsi/article/view/281
VLSI Circuits–Oriented Gate-Length-Dependent DC–RF Compact Modeling of AlInN/AlN/GaN MISHEMTs with SSEC Extraction
2025-12-22T12:52:52+03:00
K. Nagabushanam (bhushanam18@gmail.com), Sriadibhatla Sridevi (sridevi@vit.ac.in)
<p>This work develops a VLSI-circuit-design-oriented compact analytical DC–RF model that accounts for gate-length scaling effects in RF circuit designs using AlInN/AlN/GaN MISHEMT technology. To accurately capture the drain current density, transconductance, and gate charge for RF and microwave circuit simulation in this technology, a two-dimensional electron gas (2-DEG) sheet-charge-based formulation has been developed that accounts for flat-band voltage and polarization charge effects. The model was validated against both TCAD simulations and experimental data over a range of gate lengths (0.1 to 0.3 μm), yielding a maximum drain current density of 2.35 A/mm and an estimated cut-off frequency of 125 GHz at a gate length of 0.1 μm. In addition, a refined small-signal equivalent circuit (SSEC) extraction methodology, integrating conventional and gradient-based optimization techniques, is introduced to improve parasitic de-embedding accuracy. Extracted S-parameters enable robust frequency-domain characterization, yielding f<sub>T</sub> = 170 GHz and f<sub>max</sub> = 183 GHz. The proposed compact model demonstrates scalable, bias-consistent DC and RF prediction, making it well suited for VLSI RF and microwave circuit design and simulation using GaN MISHEMT technologies.</p>
2025-12-26T00:00:00+03:00
Copyright (c) 2026 Journal of VLSI Circuits and Systems
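For readers outside device modeling, cut-off frequency figures like those quoted above follow from the standard small-signal relation f_T ≈ g_m / (2π(C_gs + C_gd)). The Python sketch below illustrates that relation with hypothetical parameter values chosen only to land near the reported magnitude; it is not the paper's extracted SSEC model.

```python
import math

# Illustrative small-signal estimate (not the paper's SSEC extraction):
# unity-current-gain frequency from transconductance and gate capacitances.
def cutoff_frequency(gm_S: float, cgs_F: float, cgd_F: float) -> float:
    """f_T ~ gm / (2*pi*(Cgs + Cgd)), the textbook first-order relation."""
    return gm_S / (2 * math.pi * (cgs_F + cgd_F))

# Hypothetical values picked to land near the reported f_T ~ 170 GHz.
gm = 0.45                        # S, illustrative
cgs, cgd = 0.35e-12, 0.07e-12    # F, illustrative
print(f"f_T ~ {cutoff_frequency(gm, cgs, cgd) / 1e9:.1f} GHz")
```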
https://vlsijournal.com/index.php/vlsi/article/view/274
AI-Optimized Low-Power VLSI Solutions for Implantable Biomedical Devices Integrating Neural Networks and Bio-Signal DSP
2025-12-16T07:46:13+03:00
S. Aarthi (cse.aarthi@gmail.com), Rustamov Ilhomidin (irustamov4444@gmail.com), Kuchkarov Voxid Alisherovich (voxidkuchkarov@gmail.com), Khakimov Zaynobiddin (zzaynnobiddin@gmail.com), Dilfuza Sadikova (d.sadikova@kiut.uz), Sayfullayev Mekhroj Sayfullayevich (m.sayfullayev@tsue.uz), M. Manju (manjuct21@gmail.com)
<p>Implantable biomedical devices increasingly rely on on-chip intelligence, low-power computing, and secure processing of physiological signals to enable continuous monitoring and closed-loop intervention. Conventional VLSI design flows cannot satisfy the stringent power, latency, and reliability requirements of long-term implantable systems, particularly as neural-network inference and bio-signal DSP pipelines become the norm in next-generation medical implants. This paper describes an AI-driven low-power VLSI design framework with neural inference engines, real-time physiological DSP, and adaptive power optimization tailored to implantable systems. Machine learning is incorporated throughout the design process to reduce dynamic switching energy, optimize arithmetic precision, and accelerate convolutional bio-signal processing. A multi-objective optimization framework is used to meet biomedical constraints on thermal safety, battery life, and biocompatible energy budgets. Physics-based simulations at 65 nm and 28 nm low-leakage CMOS nodes show significant reductions in energy usage, improved classification accuracy for neural and ECG signals, and greater robustness to signal artifacts. The proposed architecture is highly suitable for pacemakers, neural prosthetics, wearable-implant hybrids, and intelligent drug-delivery implants that require continuous low-power AI-supported functionality. The work establishes a common ground for combining neural inference, DSP, and biomedical safety considerations in next-generation implantable VLSI platforms.</p>
2025-12-14T00:00:00+03:00
Copyright (c) 2026 Journal of VLSI Circuits and Systems
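The multi-objective optimization step can be pictured as scoring candidate design points against power, latency, and thermal objectives, with thermal safety enforced as a hard constraint. Below is a minimal weighted-sum sketch in Python; the objective names, weights, and limits are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of weighted multi-objective design scoring with a hard
# biomedical (thermal) constraint. All names, weights, and limits are
# illustrative placeholders, not the paper's actual optimization setup.
from dataclasses import dataclass

@dataclass
class DesignPoint:
    power_mw: float      # average power
    latency_ms: float    # end-to-end inference latency
    temp_rise_c: float   # device temperature rise

def score(d: DesignPoint, w=(0.5, 0.3, 0.2),
          max_temp_rise_c: float = 1.0) -> float:
    """Lower is better; thermally infeasible points are rejected outright."""
    if d.temp_rise_c > max_temp_rise_c:  # hard safety constraint
        return float("inf")
    return w[0] * d.power_mw + w[1] * d.latency_ms + w[2] * d.temp_rise_c

candidates = [DesignPoint(0.8, 4.0, 0.6), DesignPoint(0.5, 9.0, 1.4)]
print(min(candidates, key=score))  # second point violates the thermal limit
```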
https://vlsijournal.com/index.php/vlsi/article/view/275
Machine Learning-Assisted Automated VLSI Design for Bioinformatics Hardware Accelerators with Embedded Cryptographic Security
2025-12-16T07:54:57+03:00
Gajraj Singh (gajrajsingh@ignou.ac.in), Umarov Shukhratjon (sht00357@gmail.com), Kamilova Sabohat Kavuljonovna (sabohatkamilova176@gmail.com), Ibrokhimjon N. Abdullayev (abdullayevibrohimjon108@gmail.com), Nayimov Shokhrukh (sh.nayimov@kiut.uz), Sandeep Dongre (Sandeep.dongre@sibmnagpur.edu.in), Enoch Arulprakash (enocharulprakash03@gmail.com)
<p>The growing demand for high-throughput bioinformatics computation, together with strict data-privacy requirements, has created the need for hardware accelerators that combine sophisticated processing with built-in cryptographic protection. This paper introduces a machine-learning-assisted automated VLSI design framework for creating next-generation bioinformatics accelerators with embedded security primitives. The proposed approach leverages learning-based design-space exploration, adaptive hardware synthesis, and on-chip encryption to accelerate genomic alignment, protein structure modeling, and multi-omics signal analysis. Reinforcement learning and trained prediction models automatically generate architectural choices that optimize datapaths, memory subsystems, and cryptographic blocks. Embedded lightweight AES, hash, and PUF authentication units guarantee confidentiality and integrity in biomedical workflows where regulatory compliance is paramount. The framework benefits both edge and cloud-connected biomedical systems by increasing design scalability and decreasing manual engineering overhead. Experimental evaluation shows greater design efficiency, lower power consumption, and higher computational throughput than traditional VLSI techniques. ML-directed optimization further shortens development cycles and maintains a security-performance balance across diverse bioinformatics kernels. This work introduces a single design paradigm that unifies automated VLSI synthesis, machine intelligence, and cryptographic protection for secure biomedical hardware acceleration.</p>
2025-12-15T00:00:00+03:00
Copyright (c) 2026 Journal of VLSI Circuits and Systems
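Learning-based design-space exploration of the kind described can be sketched as a bandit loop: sample a configuration, evaluate a slow and noisy synthesis cost, and update a running estimate. The toy below uses epsilon-greedy selection over hypothetical accelerator knobs; the configuration tuples and the cost model are stand-ins, not the paper's framework.

```python
# Toy sketch of learning-based design-space exploration: an epsilon-greedy
# bandit over discrete accelerator configurations with a synthetic, noisy
# cost model standing in for a real synthesis/PPA run. All illustrative.
import random

CONFIGS = [  # (datapath width, on-chip SRAM KB, AES cores) - hypothetical knobs
    (32, 64, 1), (64, 128, 1), (64, 256, 2), (128, 256, 2),
]

def synthesize_cost(cfg) -> float:
    """Stand-in for an expensive synthesis evaluation; noisy by design."""
    width, sram, aes = cfg
    return width * 0.02 + sram * 0.01 + aes * 0.5 + random.gauss(0, 0.1)

def explore(steps: int = 200, eps: float = 0.2):
    estimates = {c: float("inf") for c in CONFIGS}
    for _ in range(steps):
        cfg = (random.choice(CONFIGS) if random.random() < eps
               else min(estimates, key=estimates.get))
        cost = synthesize_cost(cfg)
        prev = estimates[cfg]
        # Exponential moving average smooths the evaluation noise
        estimates[cfg] = cost if prev == float("inf") else 0.9 * prev + 0.1 * cost
    return min(estimates, key=estimates.get)

random.seed(0)
print("best config:", explore())
```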
https://vlsijournal.com/index.php/vlsi/article/view/276
Privacy-Preserving Access Control for IoT Smart Homes Using Hyperledger Fabric Consortium Blockchain and Edge Computing on Raspberry Pi
2025-12-16T08:04:22+03:00
Aurangjeb Khan (aurangazeb.k@cmr.edu.in), K. Jayasudha (jayasudhakaliannan@gmail.com), M. Jagadeesan (jagadeesankec@gmail.com)
<p>Privacy and security remain a challenge in smart home Internet of Things (IoT) environments, in which heterogeneous devices share sensitive information and execute autonomous behaviors. This paper offers a privacy-preserving access-control architecture built on a Hyperledger Fabric consortium blockchain and a hardware-accelerated, VLSI-based edge security unit implemented on a Raspberry Pi gateway. Unlike traditional cloud-based authentication, the proposed co-design includes a low-power cryptographic accelerator, a secure identity engine based on a Physical Unclonable Function (PUF), and a hardware access-control pipeline synthesized in 65 nm CMOS technology. These hardware components accelerate blockchain-intensive functions such as SHA-256 hashing, AES-GCM encryption, certificate verification, and policy evaluation, minimizing the processing overhead commonly associated with blockchain-based IoT systems. Hyperledger Fabric provides tamper-evident, decentralized access registration, and the Raspberry Pi interfaces with the custom accelerator through a hybrid software-hardware execution flow. Experimental evaluation shows lower authentication latency, higher throughput efficiency, and lower energy consumption compared to purely software-based blockchain validation. The hardware accelerator achieves up to a 61% reduction in transaction validation delay and a 47% power reduction during cryptographic tasks. Privacy is improved because user identities, control commands, and authorization records are stored and authenticated using hardware-bound cryptographic primitives and a distributed ledger. Together, hardware acceleration, blockchain consensus, and edge intelligence provide a scalable, high-quality solution for smart-home access control.</p>
2025-12-11T00:00:00+03:00
Copyright (c) 2026 Journal of VLSI Circuits and Systems
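The two cryptographic operations the accelerator offloads, SHA-256 hashing and AES-GCM authenticated encryption, can be exercised in software for reference. The sketch below uses Python's standard hashlib and the widely used cryptography package; the command payload and key handling are illustrative only, not the paper's protocol.

```python
# Software reference for the two offloaded primitives: SHA-256 hashing and
# AES-GCM authenticated encryption. Payload and key handling illustrative.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

command = b'{"device":"lock01","action":"open","user":"u42"}'

# Tamper-evident digest, e.g. for anchoring an access record on the ledger
digest = hashlib.sha256(command).hexdigest()

# Confidentiality + integrity for the command in transit
key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)                     # 96-bit nonce per the AES-GCM spec
aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, command, digest.encode())
assert aesgcm.decrypt(nonce, ciphertext, digest.encode()) == command
```

Binding the digest as associated data ties the ciphertext to the ledger record: tampering with either causes decryption to fail.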
https://vlsijournal.com/index.php/vlsi/article/view/271
AI/ML-Driven Electronic Design Automation Framework for Quantum-Aware VLSI Circuit Synthesis and Optimization in High-Performance Computing Applications
2025-12-15T12:29:24+03:00
R. Shanthi (sha.raju2003@gmail.com), Sevinov Jasur Usmonovich (sevinovjasur@gmail.com), Nurmatov Mirzaakbar Mirzaaliyevich (Nurmatovm1986@gmail.com), Matkurbanov Tulkin Alimboevich (tulkinmatkurbanov2020@gmail.com), Sapaev Bayramdurdi (b.sapayev@afu.uz), Jurayev Khusan (xusan_jurayev@tues.uz), Lola Abduraximova (l.abduraximova@kiut.uz)
<p>The rapid growth of high-performance computing (HPC) has intensified the demand for new paradigms in VLSI circuit design as device scaling approaches limits where quantum effects become significant. Conventional electronic design automation (EDA) flows struggle with the nonlinear interactions among quantum tunneling, leakage currents, and probabilistic switching in deeply scaled technologies. To overcome these challenges, this paper proposes a comprehensive AI/ML-driven EDA architecture incorporating quantum-aware modeling, predictive synthesis, and adaptive optimization for next-generation HPC-oriented VLSI systems. The framework integrates machine-learning-based parametric estimation, reinforcement learning for layout exploration, and physics-guided neural models that capture non-classical effects in nanoscale transistors. Furthermore, the system uses generative learning algorithms to produce multi-objective design trade-offs across timing, power, area, and quantum reliability. A hybrid digital-quantum design flow is presented that allows classical EDA operations and quantum-inspired device evaluations to be interchanged easily. Validation on nanometer-scale benchmark circuits shows that synthesis efficiency, leakage-prediction accuracy, and optimization convergence rate are substantially improved relative to traditional EDA pipelines. The methodology emphasizes intelligent automation as the means of guiding VLSI research toward quantum awareness, the key to reliability, scalability, and energy efficiency in HPC systems.</p>
2026-01-23T00:00:00+03:00
Copyright (c) 2026 Journal of VLSI Circuits and Systems

https://vlsijournal.com/index.php/vlsi/article/view/272
Energy-Efficient Neural Network Accelerator Design for Real-Time DSP and Cryptographic Processing Using Advanced VLSI Architectures
2025-12-16T07:27:33+03:00
Bobomurodov Nasriddin Hasanovich (5850200@mail.ru), Matyokubov Utkir Karimovich (otkir_matyokubov89@mail.ru), A.R. Ismailov (azizbek-uz@mail.ru), I.B. Sapaev (sapaevibrokhim@gmail.com), Farrukh Sulaymonov (akbar_toyirov@tues.uz), Akbar Toyirov (akbar_toyirov@tues.uz), Isayev Fakhriddin (f.isayev@tsue.uz)
<p>The growth of artificial intelligence, real-time digital signal processing (DSP), and cryptographic workloads has been fueling the need for highly efficient neural-network accelerators embedded within state-of-the-art VLSI architectures. Traditional accelerators optimized for either DSP or cryptography alone can no longer support the power, latency, and throughput requirements of current embedded and high-performance computing systems. With the increasing complexity of neural workloads and the stronger security assurances needed for cryptographic operations, energy efficiency is becoming more important than ever. This paper presents a unified, energy-efficient accelerator design that combines neural processing, DSP kernels, and cryptographic primitives in a single VLSI system. The proposed framework utilizes hardware-aware quantization, systolic arrays, a reconfigurable DSP pipeline, and low-power cryptographic cores designed with the aid of machine-learning-driven design frameworks. Architectural design search is performed via reinforcement learning, and multi-objective optimization is guided by hardware-aware performance metrics for area, power, and latency. Improved throughput-per-watt, low-latency processing, and secure execution are demonstrated experimentally at 7 nm and 5 nm design nodes relative to current accelerator designs. The findings indicate that the proposed architecture can be effectively applied to AI-based embedded systems, secure IoT systems, and real-time edge intelligence requiring co-located DSP and cryptographic operation. This paper contributes a scalable, energy-aware VLSI accelerator design that addresses the increased computational and security requirements of next-generation intelligent systems.</p>
2026-01-23T00:00:00+03:00
Copyright (c) 2026 Journal of VLSI Circuits and Systems
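Hardware-aware quantization, one of the techniques the accelerator above relies on, reduces arithmetic precision so that multiply-accumulate units can operate on narrow integers. The following is a minimal symmetric INT8 sketch in Python with NumPy; the single global scale and tensor shape are simplifying assumptions, since production flows typically calibrate per layer or per channel.

```python
# Minimal sketch of hardware-aware symmetric INT8 quantization. A single
# global scale is used for simplicity; real flows calibrate per layer or
# per channel. Shapes and distributions are illustrative.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric linear quantization of weights to signed 8-bit."""
    scale = np.abs(w).max() / 127.0          # map max magnitude to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"max abs quantization error: {err:.5f} (scale={s:.5f})")
```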
https://vlsijournal.com/index.php/vlsi/article/view/277
AI-Optimized Design Automation and Quantum-Inspired Secure VLSI Architectures for Edge and Autonomous Computing
2025-12-17T12:57:12+03:00
P. Aravindan (aravindan@ksrct.ac.in), E. Mariappan (edev_mari@rediffmail.com), K. Sathiyasekar (sathiyasekark@ksrce.ac.in)
<p>The rapid growth of edge and autonomous systems demands real-time-optimized, energy-efficient, and secure hardware architectures. Conventional VLSI design flows cannot accommodate these requirements because design complexity is rising, security threats are increasing, and high-performance computing must be delivered under stringent power limitations. This paper describes a unified framework of AI-based design automation, quantum-inspired logic optimization, and hardware-security co-design for next-generation VLSI systems. The proposed approach accelerates design-space exploration, strengthens security, minimizes power and delay, and optimizes workload performance for edge and autonomous applications. Experiments show considerable improvements in PPA (power, performance, area), attack resistance, and inference efficiency. The methodology aligns with current VLSI trends and moves secure, optimized architectures toward real-world applicability in embedded intelligence.</p>
2026-01-23T00:00:00+03:00
Copyright (c) 2026 Journal of VLSI Circuits and Systems

https://vlsijournal.com/index.php/vlsi/article/view/259
Implementation of an Efficient RISC-V Processor Featuring a Novel Gshare Branch Prediction Technique
2025-11-18T12:32:23+03:00
Tri-Duc Ta (ductt@uit.edu.vn), Thanh-Phat Nguyen Nguyen (phatnt@uit.edu.vn), Quoc-Thinh Tran Tran (thinhtq@uit.edu.vn)
<p>RISC-V, characterized by its straightforward and open-source instruction set design, is becoming a compelling platform for contemporary IoT devices. Dynamic branch prediction, especially two-level methods and history/address hashing approaches such as Gshare, has demonstrated significant efficacy in alleviating control hazards in pipelined processors. This study introduces a 5-stage (IF–ID–EX–MEM–WB) RISC-V core incorporating a Branch Prediction Unit (BPU) that merges a Branch Target Buffer (BTB) and a Pattern History Table (PHT) of 256 entries with 2-bit saturating counters. The PHT index is derived from XOR(GHR, PC[9:2]), while the BTB is updated using the lower 8 bits of the PC. The design was taken from RTL to GDSII with Cadence Genus, Conformal, and Innovus on GPDK045 (45 nm) technology, demonstrating feasibility beyond research confined to RTL or FPGA, such as RVCoreP. RTL simulation verified correct execution of all 37 RV32I instructions and attained roughly 90.4% branch prediction accuracy on branch-intensive workloads. Post-layout results indicate that the design met the target frequency (exceeding 50 MHz), with a recorded maximum frequency of roughly 75 MHz. The overall power consumption of the core is around 15.078 mW across an area of roughly 0.69 mm², resulting in a core density of nearly 70%. These results validate the feasibility of employing two-level branch prediction in lightweight RISC-V microcontrollers.</p>
2026-01-23T00:00:00+03:00
Copyright (c) 2026 Journal of VLSI Circuits and Systems
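The Gshare organization described in the RISC-V abstract above is concrete enough to model behaviorally: a 256-entry PHT of 2-bit saturating counters indexed by XOR(GHR, PC[9:2]), and a BTB indexed by the low 8 bits of the PC. The Python sketch below follows that structure; the 8-bit GHR width, counter reset values, and update ordering are assumptions not stated in the abstract.

```python
# Behavioral sketch of the Gshare predictor described above: a 256-entry
# PHT of 2-bit saturating counters indexed by XOR(GHR, PC[9:2]), plus a
# direct-mapped BTB indexed by the low 8 bits of the PC. GHR width, reset
# values, and update ordering are assumptions.
PHT_ENTRIES = 256

class Gshare:
    def __init__(self):
        self.ghr = 0                     # 8-bit global history register
        self.pht = [1] * PHT_ENTRIES     # 2-bit counters, weakly not-taken
        self.btb = {}                    # BTB index -> branch target

    def _pht_index(self, pc: int) -> int:
        return (self.ghr ^ ((pc >> 2) & 0xFF)) & 0xFF   # XOR(GHR, PC[9:2])

    def predict(self, pc: int):
        taken = self.pht[self._pht_index(pc)] >= 2      # counter MSB
        target = self.btb.get(pc & 0xFF)                # low 8 bits of PC
        return taken and target is not None, target

    def update(self, pc: int, taken: bool, target: int):
        i = self._pht_index(pc)
        self.pht[i] = min(self.pht[i] + 1, 3) if taken else max(self.pht[i] - 1, 0)
        if taken:
            self.btb[pc & 0xFF] = target
        self.ghr = ((self.ghr << 1) | int(taken)) & 0xFF

bp = Gshare()
for _ in range(12):                      # train on a hot, always-taken branch
    bp.update(0x40, True, 0x10)
print(bp.predict(0x40))                  # (True, 0x10) once the GHR saturates
```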
https://vlsijournal.com/index.php/vlsi/article/view/273
Quantum-Inspired VLSI Architectures for Secure Cryptographic Signal Processing in Next-Generation AI-Enabled Hardware Systems
2025-12-16T07:37:48+03:00
N. Shanmugapriya (spriyanatrajan@gmail.com), Sotvoldiev Xusniddin Ibragimovich (Sotvoldiyevxusniddin82@gmail.com), Jumaboyeva Marhabo Rustamboyevna (jumaboyevamarhabo@gmail.com), Rakhmatullaev Ilkhom Rakhmatullayevich (ilhom9001@gmail.com), Azim Khalilov (azimxj1981@gmail.com), Shakhboz Meylikulov (shaxboz_meyliqulov@tues.uz), Ibragimkhodjaev Bakhodir (b.ibragimxodjayev@afu.uz)
<p>The rapid advent of AI-driven platforms and smart cyber-physical systems has placed growing demands on hardware frameworks capable of delivering quantum-inspired processing, secure cryptographic computation, and efficient signal processing simultaneously. Conventional VLSI architectures are constrained by deterministic logic models that do not reflect the probabilistic character of quantum-inspired algorithms and advanced cryptographic schemes. To fill this gap, this work introduces a unified quantum-inspired VLSI architecture optimized for secure signal processing in next-generation AI hardware. The proposed system combines approximate probabilistic computing blocks, reversible-logic embedded datapaths, and lightweight quantum-state emulation units to increase security, decrease power usage, and support parallel cryptographic transformations. Machine-learning-based optimization methods are applied to explore candidate architectures, allowing cryptographic, neural-inference, and signal-processing workloads to be reconfigured dynamically. Simulation studies at 5 nm and 7 nm technology nodes show substantial improvements in throughput-per-watt, encryption latency, and resistance to side-channel vulnerabilities. The architecture also minimizes signal-processing overhead and strengthens the cryptographic diffusion properties that are important for edge-AI deployments. This work contributes a scalable quantum-inspired design model that bridges the gap between traditional VLSI and emerging post-quantum computing requirements, enabling secure, energy-efficient, AI-driven hardware systems for defense, autonomous infrastructure, and the future Internet of Things.</p>
2026-01-24T00:00:00+03:00
Copyright (c) 2026 Journal of VLSI Circuits and Systems
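Reversible-logic datapaths, one ingredient of the architecture above, are built from gates whose truth tables are permutations, so no input information is destroyed. A minimal illustration is the Toffoli (CCNOT) gate, sketched below in Python; this is a textbook example, not a block from the paper.

```python
# Minimal sketch of a reversible-logic building block: the Toffoli (CCNOT)
# gate, a universal reversible gate. The bijectivity check below is exactly
# what makes the gate reversible.
from itertools import product

def toffoli(a: int, b: int, c: int):
    """CCNOT: target c flips only when both controls a and b are 1."""
    return a, b, c ^ (a & b)

# Reversibility: the 3-bit truth table is a permutation (no two inputs map
# to the same output), so no information, and in principle no energy tied
# to information erasure, is dissipated.
outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
assert len(outputs) == 8

print(toffoli(1, 1, 0))  # (1, 1, 1): AND of the controls XORed into target
```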