
【Article by IDAS】
AI-driven systems have profoundly transformed the conduct of warfare. This is particularly salient in the campaign to regulate AI weapons, with implications for the ethical and legal questions of human culpability. It was our honor to have Dr. Jeremy Moses, Associate Professor of Political Science and International Relations, dissect the landscape of political debate over AI weapons in his lecture “The Meaninglessness of ‘Meaningful Human Control’: Learning from Israel’s Use of Military AI,” held on 06 November 2025. The lecture was organized by National Chengchi University’s (NCCU) International Doctoral Program on Asia-Pacific Studies (IDAS).
Dr. Moses argues that the commonly invoked concept of “meaningful human control” over AI-enabled weapons is effectively devoid of substantive value. His central claim is that human control is meaningless in the face of a permissive policy on casualties: Israel's lenient policies regarding civilian harm are producing precisely the kind of war its political leaders want. A central case study in the lecture is Israel’s deployment of AI systems in military operations, particularly in Gaza, which has served as a laboratory for testing new Israeli weaponry. These technologies, tested in live combat, have enabled Israel's defense industry to assert global competitiveness. The systems integrate vast quantities of metadata to identify targets, often designating individuals as legitimate threats on the basis of tenuous associations. Dr. Moses contends that flawed algorithms not only manufacture “guilt” but also construct the very idea of “the enemy.” Moreover, permissive policies toward civilian casualties and the deliberate designation of populated areas as empty have contributed to accusations of genocidal intent.
For Dr. Moses, these AI systems do not control political actors; rather, they serve to legitimize policies already aligned with political objectives. He questions whether the concept of “meaningful human control” would actually minimize casualties, and he stresses the human role in designing and deploying AI systems: the problem lies not in the technology itself but in the human decision to wage war in a particular way. The existing campaign to regulate autonomous weapons systems thus rests on a confused position and a misdiagnosis of the problem, because the problem is human, not mechanical. The concept of meaningful control means nothing in a situation where war is being driven by genocidal politics.
The discussion also explored the implications for Taiwan. Dr. Moses highlighted the appeal of Israeli AI technologies for states engaged in asymmetric warfare, particularly for enhancing speed, surveillance, and targeting. However, he cautioned that close alignment with Israel carries reputational risks, as Israel is increasingly perceived internationally as a “toxic brand.” For Taiwan, a state that seeks to project a rights-respecting image, adopting such technologies may undermine its international legitimacy and, by extension, its security.
In the forum, when asked how the argument for meaningful control developed, Dr. Moses pointed to the dangers of the argument itself, questioning whether one way of inflicting casualties is more dignified than another, and what differentiates a person pulling the trigger from a machine. Subsequent questions concerned levels of oversight and responsibility. The core of the conversation focused on the abstract principle of responsibility, which historically has rested with leadership; Dr. Moses emphasized that debates about AI in warfare are less about technological efficiency and more about political intent. Turning to the construction of a regulatory framework, and in particular corporate participation, Dr. Moses noted that corporations take part largely through think tanks, as indicated by the crossover between corporate marketing and legal debate. When asked whether democracies might use such technologies more responsibly than authoritarian regimes, he argued that the most likely and effective employment of AI is not in warfare but as part of domestic social control.
In conclusion, the lecture offered a compelling set of conundrums for attendees to grapple with, as the ethical dilemmas of AI-assisted warfare and the broader societal consequences of delegating life-and-death decisions to algorithms become more pervasive. Students and participants gained deeper insight into the contested terrain of military innovation, responsibility, and human dignity. These discussions underscore the need for continued critical inquiry into the political and moral dimensions of regulating new technologies.