Classification of 12 Thai Fabric Patterns Using Deep Learning Techniques
Keywords:
deep learning, convolutional neural networks, textile patterns

Abstract
Background and Objectives: Thai woven textiles are not merely garments but a vital component of the nation's tangible cultural heritage, weaving together the stories, history, and wisdom of local communities and reflecting their identity over a long period. Weaving in Thailand dates back to prehistoric times and uses natural materials such as cotton and silk. Each region has developed textile patterns with unique characteristics that reflect its environment, religious beliefs, and social structures. These patterns act as non-textual historical records, passing wisdom from generation to generation and making Thai textiles an invaluable cultural asset. However, the conservation, storage, and classification of these diverse and complex patterns remain a significant challenge. The knowledge and expertise needed to recognize complex patterns are often limited to a group of elderly weavers, and this knowledge risks being lost as their numbers decline. Furthermore, the variety of patterns and the similarity among them make accurate classification difficult for the general public. These obstacles not only limit the creation of a standardized database but also hinder education and commercial development. To address these challenges, this research presents a feasibility study on applying Artificial Intelligence (AI), particularly deep learning, to develop a classification system for Thai textile patterns. Deep learning and convolutional neural networks excel at recognizing and analyzing visual patterns, making them highly suitable for classifying the complex details of fabric patterns. Applying this technology aligns with the international trend of using AI for cultural heritage conservation and lays an important technological foundation for creating a digital database, preserving heritage, and disseminating knowledge more widely.
Methodology: The research utilized a dataset of 12 Thai textile patterns: Lai Krajap, Lai Kha Pia, Lai Khom Ha, Lai Dok Rak Ratchakanya, Lai Nok Yung, Lai Fong Nam, Lai Mun Si Khram, Lai Met Khao Phasom Dok Daoruang, Lai Wachiraphak, Lai Mi Khan Nak Boran, Lai Hae, and Lai Sarong, with 120 images per pattern for a total of 1,440 images. The entire dataset underwent data preparation to ensure its suitability and to enhance model robustness. All images were first resized to 224x224 pixels. Contrast was enhanced by applying histogram equalization to the luma (Y) channel to make the textile patterns more prominent. Noise was reduced with a bilateral filter, which preserves the sharpness of pattern edges. Edge sharpness was then enhanced with a kernel filter to help the model detect distinctive features more easily. Pixel values were normalized to the range 0.0 to 1.0. Data augmentation techniques such as rotation and flipping were also applied to prevent overfitting. The dataset was then split into a training and validation set of 80% (1,152 images) and a test set of 20% (288 images). Six deep learning models were evaluated: Custom_CNN, a model designed specifically for this research; VGG16, a model with a simple structure but a depth of 16 layers; ResNet50, a deep 50-layer model that uses residual connections to solve the vanishing gradient problem; MobileNetV2, a lightweight model designed for mobile devices; InceptionV3, a model that extracts features at multiple scales within the same layer; and DenseNet121, a model distinguished by dense connectivity that promotes feature reuse.
Main Results: Experimental results on the independent test set show that the Custom_CNN model achieved the highest performance, with an overall accuracy of 99.65%, followed by VGG16 at 99.31%. In contrast, ResNet50 yielded the lowest performance, with a markedly lower accuracy of 81.25%, corresponding to 54 incorrect predictions.
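A quick arithmetic check relates the reported accuracies to error counts on the 288-image test set. Note that the error counts for Custom_CNN and VGG16 below are inferred from the percentages, not stated in the source; only ResNet50's 54 errors are reported.

```python
TEST_SIZE = 288  # the 20% test split of the 1,440-image dataset


def accuracy_pct(errors, total=TEST_SIZE):
    """Percent accuracy given a number of misclassified test images."""
    return round(100 * (total - errors) / total, 2)


print(accuracy_pct(54))  # ResNet50: 81.25 (54 errors, as reported)
print(accuracy_pct(1))   # Custom_CNN: 99.65 (implies a single error)
print(accuracy_pct(2))   # VGG16: 99.31 (implies two errors)
```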
Conclusions: These findings demonstrate the importance of selecting a model appropriate to the characteristics of the dataset. That the Custom_CNN model, with its less complex structure, achieved the highest performance can be explained by the highly specialized, high-quality nature of the Thai fabric dataset. Overly complex models such as ResNet50, which was designed for larger and more diverse datasets like ImageNet, may suffer from overfitting: rather than learning the general features of the patterns, such a model may begin to memorize details of the training data. Its excessive depth and the high-level features learned from other datasets may also be ill-suited to the delicate texture-based features of fabric patterns, causing the model to adapt poorly to the new task. This research not only demonstrates a highly accurate classification system for 12 types of Thai textile patterns but also lays an important technological foundation for building a digital database for cultural heritage preservation and wider knowledge dissemination. It also underscores that the best model is not necessarily the most complex one, but the one best suited to the nature of the problem and the dataset.
Copyright (c) 2025 Faculty of Science, Burapha University

Burapha Science Journal is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, unless otherwise stated.
