Introduction
In recent years, the rapid development of artificial intelligence (AI) technologies has sparked significant debates across various branches of law, including criminal law. In particular, the capacity of autonomous systems to make decisions without human intervention necessitates a reassessment of fundamental concepts such as fault, intent, and volition, which form the cornerstone of classical criminal liability. This article examines the issue of criminal liability concerning AI systems, discusses current doctrinal approaches, and evaluates potential solutions.
The Legal Status of Artificial Intelligence Systems
In order to discuss the criminal liability of AI systems, it is first necessary to determine their legal status. Under Turkish law, as well as in comparative law, AI systems have not been recognized as independent legal subjects. Accordingly, AI continues to be classified as a “thing” or an “auxiliary tool,” and its acts are attributed directly to the developer, user, or owner of the system.
However, with the development of “strong AI” or Artificial General Intelligence (AGI), the increasingly autonomous and unpredictable behaviors of these systems have raised concerns that existing liability frameworks may become insufficient.
Criminal Law Perspective on Liability
- The Requirement of Fault and Volition
One of the fundamental principles of the Turkish Penal Code (TPC) is that criminal liability requires fault, that is, an act attributable to the perpetrator through intent or negligence (TPC Articles 21-23). AI systems, however, possess neither legal volition nor consciousness. Recognizing AI as a direct criminal actor is therefore not possible under the current legal framework.
- Assessment Within the Scope of Instrument Liability
If an AI system is used merely as an instrument in committing an offense, the traditional rules on liability for acts committed through tools and instruments apply. As with a weapon or a motor vehicle, when AI serves as the means of committing an act, responsibility falls upon the culpable human actor.
Thus, harmful actions committed through AI are evaluated based on the fault of the programmer, operator, or party responsible for the system’s supervision.
- Indirect Perpetration and Liability by Omission
In highly automated systems where no human actor is directly involved at the time of the act, liability may arise under the framework of offenses committed by omission (TPC Articles 83 and 88), and individuals who breach their duty of care may be held criminally responsible, on the basis of negligence, for the harm resulting from their conduct (TPC Article 22).
For instance, if an autonomous vehicle causes a fatal accident due to a faulty software update, the software engineer or system owner may be held liable for “negligent homicide” (TPC Article 85) based on the breach of their duty of care.
International Approaches
In 2019, the High-Level Expert Group on AI established by the European Commission published the “Ethics Guidelines for Trustworthy AI,” framing accountability requirements for AI systems at the EU level. Earlier, in its 2017 resolution on Civil Law Rules on Robotics, the European Parliament had debated conferring a form of “electronic personality” on highly autonomous AI systems, although no binding regulation has yet been adopted.
In U.S. law, harm caused by AI is typically addressed through “product liability” mechanisms, and there is as yet no substantial discourse on the direct criminal liability of AI systems within the American legal system.
Evaluation and Conclusion
Recognizing AI systems as direct perpetrators under criminal law is incompatible with current legal structures and fundamental principles. However, the increasing unpredictability of these systems necessitates the reinforcement of indirect liability mechanisms and the development of new legal solutions.
In this context, the following proposals may be considered:
- Legislative Regulation: Enact clear statutory regulations that define duties of care concerning the development and use of AI and specify criminal consequences for breaches.
- Supervisory Mechanisms: Establish independent supervisory authorities to oversee the design, development, and deployment stages of AI systems.
- Insurance Mechanisms: Create schemes such as mandatory AI insurance to ensure compensation for damage caused by AI.
Ultimately, the incompatibility between the classical structure of criminal law, which relies on human volition, and the autonomous operation of AI systems will provoke deeper discussions and demands for reform in the near future.