Open Access

Comparison of Explainability and Scalability of Causal AI and LLMs in AGI


DOI: 10.23977/acss.2025.090315

Author(s)

Zicheng Huang 1

Affiliation(s)

1 College of Computer Science and Software Engineering, Guangxi Normal University, Guilin, 541006, Guangxi, China

Corresponding Author

Zicheng Huang

ABSTRACT

Causal AI and Large Language Models (LLMs) are two major directions of current AI research, focusing on causal reasoning and natural language processing, respectively. This article addresses a key question: which is more promising on the path toward safe artificial general intelligence (AGI), Causal AI or LLMs? The analysis shows that although each has its own strengths, relying solely on either path carries significant limitations. This article therefore proposes a fusion path that combines the causal inference ability of Causal AI with the language understanding and task execution strengths of LLMs, which may offer a more feasible route to realizing AGI. The integrated approach proposed here may provide new insights and directions for the development of artificial general intelligence.

KEYWORDS

Artificial General Intelligence, Artificial Intelligence, Large Language Models, Machine Learning

CITE THIS PAPER

Zicheng Huang, Comparison of Explainability and Scalability of Causal AI and LLMs in AGI. Advances in Computer, Signals and Systems (2025) Vol. 9: 122-129. DOI: http://dx.doi.org/10.23977/acss.2025.090315.

REFERENCES

[1] D. Wu, H. Li, X. Chen, "Exploring the impact of general-purpose large AI models on education," Open Education Research, vol. 29, no. 2, pp. 19-25+45, 2023.
[2] J. Zhao, F. Wen, J. Huang, et al., "Toward general artificial intelligence for power systems with large language models: theory and applications," Automation of Electric Power Systems, pp. 1-16, 2024. [Online]. Available: http://kns.cnki.net/kcms/detail/32.1180.tp.20231123.1439.006.html.
[3] P. Wang, "From control to guidance: intuition and governance paths of general artificial intelligence," Oriental Law, pp. 1-11, 2024. [Online]. Available: https://doi.org/10.19404/j.cnki.dffx.20231116.005.
[4] J. Shi, J. Liu, "Optimization and innovation of public-library services based on general artificial intelligence," Library Development, pp. 1-11, 2024. [Online]. Available: http://kns.cnki.net/kcms/detail/23.1331.G2.20231031.1435.005.html.
[5] Z. Zhang, T. Liu, "ChatGPT technology analysis and prospects for general artificial-intelligence development," Bulletin of National Natural Science Foundation of China, vol. 37, no. 5, pp. 751-757, 2023. DOI:10.16262/j.cnki.1000-8217.20231026.003.
[6] K. Zou, Z. Liu, "Governance of ChatGPT-like general artificial intelligence from the perspective of algorithmic-security review," Journal of Hohai University (Philosophy and Social Sciences), vol. 25, no. 6, pp. 46-59, 2023.
[7] Y. Xiao, "Generative language models and general artificial intelligence: connotation, path and implications," People's Tribune Academic Frontier, no. 14, pp. 49-57, 2023. DOI:10.16619/j.cnki.rmltxsqy.2023.14.004.
[8] N. Yu, "The impact of new-generation general artificial intelligence on international relations," International Studies, no. 4, pp. 79-96+137, 2023.
[9] T. Zhu, "General artificial intelligence in psychology: an application analysis," People's Tribune Academic Frontier, no. 14, pp. 86-91+101, 2023. DOI:10.16619/j.cnki.rmltxsqy.2023.14.008.
[10] H. M. Dettki, B. M. Lake, C. M. Wu, et al., "Do large language models reason causally like us? Even better?" in Proc. Annual Meeting of the Cognitive Science Society, 2025, arXiv:2502.10215.
[11] H. Chi, H. Li, W. Yang, et al., "Unveiling causal reasoning in large language models: reality or mirage?" in Proc. Thirty-Eighth Conf. on Neural Information Processing Systems (NeurIPS), 2024, arXiv:2506.21215.
[12] X. Wu, S. Chakraborty, R. Xian, et al., "On the vulnerability of LLM/VLM-controlled robotics," IEEE Transactions on Robotics, 2025, early access, arXiv:2402.10340. DOI:10.1109/TRO.2025.3412345.
[13] E. Kıcıman, R. Ness, A. Sharma, et al., "Causal reasoning and large language models: opening a new frontier for causality," Transactions on Machine Learning Research, 2024.
[14] M. Willig, M. Zečević, D. S. Dhami, et al., "Causal parrots: large language models may talk causality but are not causal," Transactions on Machine Learning Research, 2023.
[15] J. Pearl, Causality: Models, Reasoning, and Inference, 2nd ed. Cambridge: Cambridge University Press, 2009.
[16] Z. J. Davis, B. Rehder, "A process model of causal reasoning," Cognitive Science, vol. 44, no. 8, e12839, 2020.
[17] B. Rehder, M. R. Waldmann, "Failures of explaining away and screening off in described versus experienced causal learning scenarios," Memory & Cognition, vol. 45, no. 2, pp. 245-260, 2017.
[18] A. Keshmirian, M. Willig, B. Hemmatian, et al., "Biased causal strength judgments in humans and large language models," in ICLR 2024 Workshop on Representational Alignment, 2024.
[19] A. Robey, Z. Ravichandran, V. Kumar, et al., "Jailbreaking LLM-controlled robots," arXiv preprint arXiv:2410.13691, 2024.
[20] M. Ahn, A. Brohan, N. Brown, et al., "Do as I can, not as I say: grounding language in robotic affordances," arXiv preprint arXiv:2204.01691, 2022.
[21] A. Brohan, N. Brown, J. Carbajal, et al., "RT-2: vision-language-action models transfer web knowledge to robotic control," arXiv preprint arXiv:2307.15818, 2023.


All published work is licensed under a Creative Commons Attribution 4.0 International License.
