The volume of images produced on the internet has grown substantially in recent years, necessitating automated content management systems. Content-based image retrieval (CBIR) models were developed to reduce reliance on image retrieval based on textual annotations. A range of CBIR techniques built on feature-classifier combinations are available for analyzing the content of a query image and retrieving relevant images. While these techniques improve retrieval performance in single-class scenarios, semantic similarity between images of different classes causes a considerable loss of performance in multi-class search contexts. This research proposes a novel deep learning-based technique for a content-based image retrieval system. Initially, the input/query images were taken from three publicly available datasets: Kvasir, CIFAR-10, and Corel-1k. Pre-processing then applied noise reduction, contrast enhancement, and normalization to better expose the image features. Next, texture, color, and shape attributes were extracted with the SE-ResNeXt-101 technique for classification and similarity matching. These attribute vectors, computed from the query image and the database images, were then used to classify the dataset images with the Improved ShuffleNetV2 method, which measures the similarity between images to retrieve the database images most similar to the query. The Improved Mayfly Optimization Algorithm (IMOA) was applied to further increase retrieval performance. Experimental results show that the proposed approach outperformed state-of-the-art CBIR techniques in image retrieval.
Gulla, R., Rani, S. S., & Uppada, R. (2025). OptISNV2: An effective deep learning-based approach to content-based image retrieval systems. Current Applied Science and Technology, e0263395. https://doi.org/10.55003/cast.2025.263395
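
As a rough illustration of the retrieval step summarized above, the sketch below extracts deep features from a query image and a set of database images and ranks the database images by cosine similarity. It is not the authors' method: the paper uses SE-ResNeXt-101 for feature extraction, Improved ShuffleNetV2 for classification, and IMOA for optimization, whereas here a standard pretrained ResNeXt-101 from torchvision stands in for the feature extractor, and the preprocessing is generic ImageNet-style normalization rather than the paper's noise-reduction and contrast-enhancement stage. File paths in the usage comment are hypothetical.

```python
# Illustrative sketch only: deep-feature CBIR with cosine-similarity ranking.
# A pretrained ResNeXt-101 backbone stands in for the paper's SE-ResNeXt-101
# feature extractor; classification (Improved ShuffleNetV2) and IMOA tuning
# are not reproduced here.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Generic ImageNet-style pre-processing (stand-in for the paper's
# noise reduction, contrast enhancement, and normalization stage).
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with the classifier head removed, so the forward
# pass returns a 2048-d global-average-pooled feature vector per image.
backbone = models.resnext101_32x8d(weights="DEFAULT")
backbone.fc = nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(paths):
    """Return L2-normalized feature vectors for a list of image file paths."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    feats = backbone(batch)
    return nn.functional.normalize(feats, dim=1)

def retrieve(query_path, db_paths, top_k=5):
    """Rank database images by cosine similarity to the query image."""
    q = embed([query_path])        # (1, 2048)
    db = embed(db_paths)           # (N, 2048)
    sims = (q @ db.T).squeeze(0)   # cosine similarity (features are unit-norm)
    order = sims.argsort(descending=True)[:top_k]
    return [(db_paths[i], sims[i].item()) for i in order]

# Usage (hypothetical paths):
# print(retrieve("query.jpg", ["db1.jpg", "db2.jpg", "db3.jpg"], top_k=3))
```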

