Research on the application of computer-aided deep learning model in natural language processing
DOI: 10.23977/jeis.2024.090320
Author(s)
Xiaokai Jiang 1, Xuewen Ding 1, Chunyu Liu 1, Xinyi Li 1, Yuan Zhang 1, Shaosai Wang 1
Affiliation(s)
1 School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin, China
Corresponding Author
Xuewen Ding
ABSTRACT
With the advent of the big data era and significant advances in computing power, deep learning (DL) has achieved breakthrough progress in natural language processing (NLP), particularly in language understanding and generation. This study focuses on the application of neural networks and their variants, such as the Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM) network, and Gated Recurrent Unit (GRU), to NLP tasks, and on how the Transformer model leverages self-attention mechanisms to enable parallel processing and effectively capture long-distance dependencies. The research also explores the importance of pre-training and self-supervised learning in enhancing model generalization and reducing overfitting. In terms of specific applications, this paper provides a detailed analysis of DL models in text classification, sentiment analysis, machine translation, dialogue systems, and question-answering systems, demonstrating how these models significantly improve the efficiency and effectiveness of NLP tasks through automatic learning of complex features, strong generalization capabilities, and end-to-end learning.
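To make the self-attention mechanism mentioned above concrete, the following is a minimal sketch of single-head scaled dot-product attention in NumPy. It is an illustrative example only, not code from the paper; the function and parameter names (self_attention, Wq, Wk, Wv) are assumptions chosen for clarity.

```python
# Minimal sketch of scaled dot-product self-attention (NumPy only).
# Shapes and parameter names are illustrative, not taken from the paper.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a sequence X of shape (seq_len, d_model).

    Every position attends to every other position in one matrix product,
    which is what enables parallel processing and direct modelling of
    long-distance dependencies.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv      # project inputs to queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # pairwise similarity between all positions
    weights = softmax(scores, axis=-1)    # attention distribution per query position
    return weights @ V                    # weighted sum of value vectors

# Toy usage: 5 tokens, 8-dimensional embeddings, 4-dimensional projections.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

Because the attention weights are computed for all token pairs at once, no sequential recurrence is needed, which is the contrast with RNN/LSTM/GRU processing drawn in the abstract.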
KEYWORDS
Natural language processing; computer-aided; deep learning
CITE THIS PAPER
Xiaokai Jiang, Xuewen Ding, Chunyu Liu, Xinyi Li, Yuan Zhang, Shaosai Wang, Research on the application of computer-aided deep learning model in natural language processing. Journal of Electronics and Information Science (2024) Vol. 9: 153-159. DOI: http://dx.doi.org/10.23977/jeis.2024.090320.