Buddhism and artificial intelligence
Buddhism and artificial intelligence concerns the relationship between Buddhist philosophy and artificial intelligence (AI), including how principles such as the reduction of suffering and ethical responsibility may influence AI development. Buddhist scholars and philosophers have explored questions such as whether AI systems could be considered sentient beings under Buddhist definitions, and how Buddhist ethics might guide the design and application of AI technologies. Some Buddhist scholars, including Somparn Promta and Kenneth Einar Himma, have analyzed the ethical implications of AI, emphasizing the distinction between satisfying sensory desires and pursuing the reduction of suffering. Other thinkers, such as Thomas Doctor and colleagues, have proposed applying the Bodhisattva vow, a commitment to alleviate suffering for all sentient beings, as a guiding principle for AI system design. Buddhist scholars and ethicists have also examined Buddhist ethical principles, such as nonviolence, in relation to AI, focusing on the need to ensure that AI technologies are not used to cause harm.

Context

Sentient beings

A major goal in Buddhist philosophy is the removal of suffering for all sentient beings, an aspiration often expressed in the Bodhisattva vow.[1] Discussions about artificial intelligence (AI) in relation to Buddhist principles have raised questions about whether artificial systems could be considered sentient beings, and about how such systems might be developed in ways that align with Buddhist concepts.[2] If AI systems were determined to be sentient under Buddhist definitions, their suffering would also need to be addressed and alleviated in accordance with the principles of Buddhist thought.

Buddhist principles in AI system design

Nonviolence and AI

The broadest ethical concern is that artificial intelligence should align with the Buddhist principle of nonviolence. From this perspective, AI systems should not be designed or used to cause harm.[3]

Instrumental and transcendental goals

The scholars Somparn Promta and Kenneth Einar Himma have argued that, from a Buddhist perspective, the advancement of artificial intelligence can only be considered instrumentally good, rather than good a priori.[4] They propose two main goals for AI designers and developers: to set ethical and pragmatic objectives for AI systems, and to fulfill these objectives in morally permissible ways.

Promta and Himma identify two potential purposes for creating AI systems. The first is to fulfill sensory desires and survival instincts, much as other tools do. They suggest that many AI developers implicitly prioritize this goal by focusing on technical details rather than broader purposes.[5] The second, and according to Buddhist teachings more important, goal is to transcend these desires and instincts. In texts such as the Brahmajāla Sutta and the shorter Malunkya Sutta, the Buddha teaches that sensory desires and survival instincts confine beings to suffering, and that eliminating suffering is the primary goal of human life.[6][7] Promta and Himma argue that AI has the potential to assist humanity in transcending suffering by helping individuals overcome survival-driven instincts.[5]

Intelligence as care

Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, and Michael Levin propose redefining intelligence through the concept of "intelligence as care", a framing they promote as a guiding slogan.
Inspired by the Bodhisattva vow, they suggest this principle could guide AI system design.[2] The Bodhisattva vow involves a formal commitment to alleviate suffering for all sentient beings, with four primary objectives:

- to liberate the innumerable sentient beings,
- to extinguish the inexhaustible defilements,
- to master the immeasurable dharma teachings, and
- to attain incomparable enlightenment.
This approach positions AI as a tool for exercising infinite care and alleviating stress and suffering for sentient beings. Doctor et al. emphasize that AI development should align with these altruistic principles.[2]

References