The question for the 2024 6KBW Essay Competition will be released on 8 July 2024. Further information can be found here. To inspire any potential essayists, here are the three winning entries from the 2023 competition. The question was ‘Can a computer programme commit a crime?’ Congratulations once again to Nicole, Lewis and Francesca! 


1st Place: Nicole Chan

In the movie “I, Robot”, a robot named Sonny kills its creator at the creator’s direction. The detective investigating the case finds that Sonny cannot legally commit murder as it is a machine and was only acting according to its programming. 

When “I, Robot” was released in 2004, its plot seemed more fantasy than reality. However, in 2023, with the rise of artificial intelligence and machine learning, the question of whether a computer programme can commit a crime is no longer as far-fetched as it once was.

In this essay, I will explore this topic by first examining whether a computer programme can have mens rea. Second, I will briefly explore the possibility of a computer programme committing crimes where mens rea is not required. Third, I will argue that, even if computer programmes could have mens rea and could commit crimes, convicting them would undermine the principles and purpose of the criminal justice system.

Can a computer programme have mens rea?

For a crime to have been committed, the two fundamental elements, mens rea and actus reus, must be fulfilled. (Strict liability crimes will be discussed below.) An act by a computer programme may fulfil the actus reus of an offence, as in the case of an Uber self-driving car that hit a pedestrian and caused their death.[1] However, whether a computer programme can have mens rea, i.e. intention, recklessness, negligence or knowledge, is a complicated issue.

This is because computer programmes make decisions based on their programming and the datasets fed to them, which are entirely controlled by their human creators. In “I, Robot”, Sonny’s creator programmed it to kill him. In the case of the Uber self-driving car, a failure in programming led to the death of a pedestrian – the programme lacked “the capability to classify an object as a pedestrian unless that object was near a crosswalk.”[2] In another example, an artificial intelligence-based (AI-based) tool designed to recruit talent was trained on tainted data, leading to a heavy bias against women’s resumes.[3]

Even machine learning programmes, which require less direct input from programmers, merely mimic human interactions at the direction of their programmers, and often require human intervention. Tay, an AI chatbot launched by Microsoft, was designed to mimic casual speech patterns, which it taught itself through its interactions with users on Twitter. But after Tay posted a string of racist and inappropriate tweets, Microsoft took it offline and issued an apology.[4]

In each instance, the computer programme’s decision-making was dictated by its programming or influenced by its datasets. Computer programmes are not yet capable of making fully autonomous decisions, and it will likely be a long time before they become capable. Without choice, a computer programme cannot intend, be reckless or negligent, or know. As such, computer programmes cannot have mens rea, and thus cannot commit crimes.[5]

Strict liability

Theoretically, a computer programme could commit a strict liability crime. As discussed earlier, a decision made by a computer programme could constitute an actus reus. For example, a self-driving car without a passenger or driver could park in a no-parking zone, resulting in a parking offence. However, can a computer programme be punished? If so, how? We now turn from whether a computer programme can commit a crime to whether it should be held criminally liable.

Computer programmes and the criminal justice system

  1. Autonomy and responsibility

The principle of autonomy is one that is deeply embedded in criminal law. A person has autonomy and is therefore responsible for their actions. It is for this reason that animals, young children and the legally insane are not held criminally liable for their actions. As computer programmes are not yet capable of autonomous decision-making, this reasoning should extend to them as well. In the future, should computer programmes acquire this capability, their potential responsibility could perhaps be revisited.

In the meantime, responsibility for the actions of a computer programme should be borne by its user and/or creator. For example, the supervising driver of the Uber self-driving car was charged with and pled guilty to endangerment.[6] The family of the victim also settled with Uber out of court.[7] In the case of United States v Jitesh Thakkar, the defendant was charged with conspiracy and spoofing offences after his programme was used to manipulate trading activity and cause a “flash crash”.[8] Thakkar was later acquitted,[9] although Sarao, the co-conspirator who used Thakkar’s programme, pled guilty.[10]

In these cases, the computer programme is viewed as a tool that the perpetrator used to achieve their goal, much like a knife or a gun in the hands of the person wielding it.

  2. Culpability

The concept of culpability, or fault, is another tenet of the criminal justice system. Culpability presupposes an actor’s ability to differentiate right from wrong, and their ability to avoid doing wrong. But their inability to have mens rea makes it difficult to place blame on computer programmes. Gless, Silverman and Weigend (2016)[11] propose that it makes “little social sense” to attribute blame to a being that cannot self-reflect and evaluate its actions in accordance with a moral system.

However, the authors admit that, in the future, should a code of ethics be formalized and implemented for computer programmes, a concept of culpability adapted for artificial intelligence could be established.[12] A computer programme could therefore be “blamed” for any actions it took which it was able to evaluate as “negative”. This proposal mirrors the plot in “I, Robot”, where robots that violated a fictional code of ethics were deemed faulty and decommissioned.

  3. Punishment

Last but not least, even if a computer programme could make autonomous decisions, have mens rea, and have the ability to evaluate its actions in reference to a moral framework, it would still make little sense for it to be criminally liable if it could not be punished.

The existing sanctions in the criminal justice system are geared towards humans, and it is difficult to apply any of them to computer programmes in an effective way. Financial sanctions could only be paid by a programme’s legal owner, while physical imprisonment, corporal punishment and the death penalty would be impossible to impose on a computer programme. Further, the purposes of punishment would either not be achieved (retribution and deterrence), be frustrated (restitution), or be achievable only through reprogramming (rehabilitation).

Punishment would be effective only if programmes were taught to understand it and to draw the connection between the punishment and their prior actions. However, it would be far more efficient for any programmer simply to scrap the programme and write a new one that avoids its predecessor’s mistakes.

Conclusion

In conclusion, computer programmes of this era are incapable of committing crimes. This is because they are unable to make autonomous decisions, and therefore cannot fulfil the mens rea element of a crime. Theoretically, computer programmes could commit strict liability crimes, but it would be socially undesirable to hold them criminally liable. Doing so would undermine the tenets of criminal law – the principles of autonomy, individual responsibility, and culpability – and frustrate the purpose of punishment. Further, it would be an impractical endeavour, as existing sanctions cannot be applied to computer programmes.

In the present day, law enforcement should focus on regulating the users and creators of computer programmes. However, in the future, as computer programmes become more and more sophisticated (and as we approach the days of “I, Robot”), a very different criminal justice system will have to emerge to regulate their actions.

[1] Rory Cellan-Jones, ‘Uber’s self-driving operator charged over fatal crash’ (BBC, 16 September 2020) <https://www.bbc.co.uk/news/technology-54175359> accessed 3 September 2023.

[2] Phil McCausland, ‘Self-driving Uber car that hit and killed woman did not recognize that pedestrians jaywalk’ (NBC News, 9 November 2019) <https://www.nbcnews.com/tech/tech-news/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281> accessed 3 September 2023.

[3] Jeffrey Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’ (Reuters, 11 October 2018) <https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G> accessed 3 September 2023.

[4] Carrie Mihalcik, ‘Microsoft apologizes after AI teen Tay misbehaves’ (CNET, 25 March 2016) <https://www.cnet.com/science/microsoft-apologizes-after-ai-teen-tay-misbehaves/> accessed 4 September 2023.

[5] Sweet v Parsley [1970] AC 132.

[6] Cellan-Jones (n 1).

[7] McCausland (n 2).

[8] United States v Jitesh Thakkar [2019] <https://www.justice.gov/criminal-vns/united-states-v-jitesh-thakkar>.

[9] United States v Jitesh Thakkar [2019] <https://www.justice.gov/criminal-vns/case/jitesh-thakkar/update>.

[10] United States v Navinder Singh Sarao [2020] <https://www.justice.gov/criminal-vns/united-states-v-navinder-singh-sarao>.

[11] Sabine Gless, Emily Silverman and Thomas Weigend, ‘If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability’ (2016) 19(3) New Criminal Law Review 412, 422.

[12] Ibid, 423.


2nd Place: Lewis Hazeldine

Despite a continuing struggle to define Artificial Intelligence (AI), there remains an inevitability that the advancement of AI will transform our lives.[13] AI, when used correctly, can bring about enormous societal benefits, including accelerating the discovery of new medicines, piecing together the first complete image of a black hole[14] and, more generally, increasing efficiency in everyday tasks (as evidenced by the recent emergence of ChatGPT). Despite this, there are significant and vocal concerns surrounding the speed of development and the lack of control that humans may have over programmes that could, in time, become entirely autonomous. A recent example of such concern is the emergence of WormGPT, an alternative to ChatGPT which has been deliberately trained on inappropriate and extremist content. Such concerns have also been voiced by industry leaders in the form of a recent open letter calling for a pause in the training of advanced AI systems.[15]

At present, “malicious” programmes such as WormGPT adhere to the limitations established by their creators for a specific training objective. Therefore, in the event of misapplication, accountability will rest upon the human individuals responsible for their creation. However, one can envision that, in the future, with further advancement, AI systems will be capable of setting targets, assessing outcomes and adjusting their behaviour to increase the likelihood of success, all without human orders.[16]

This essay will therefore focus on the current liability (or lack thereof) of computer programmes, potential solutions to this, and whether any feasible consequences exist should the position on liability change.

“The act is not culpable unless the mind is guilty”

The general rule of criminal law in the UK is that “the act is not culpable unless the mind is guilty”;[17] therefore, generally, a mental state is required for an offence to be committed.[18] Unlike human beings, computers lack the mental state and intent necessary to commit crimes. Instead, they function according to programmed instructions and algorithms, devoid of the consciousness and subjective experience needed for culpability in criminal law. Presently, those who provide the instructions that cause damage through technology can and will be held criminally responsible under existing legislation.[19] It is worth reiterating: under current technological limitations, computers simply do not develop criminal intent on their own. Programmes (including artificial intelligence) lack the intentionality, culpability, motivation and autonomy necessary for criminal agency as we perceive it in humans.

Looking forward, it is suggested that existing legislation would be rendered inadequate should AI develop to an entirely autonomous state. Any AI system in the future is still likely to lack “intention” as per our current understanding, owing to a lack of capacity to deliberate and weigh reasons. Further, to punish AI despite this lack of capacity would go against a core principle of criminal law: that some mental element is required for culpability.

Crimes without intent

Strict Liability 

Despite this lack of intention, crimes committed without intention are still prosecuted, largely through two mechanisms: strict liability and the prosecution of corporate bodies. The existence of these mechanisms warrants a short discussion as to whether either might be an appropriate route for assigning liability to computer programmes.

Assigning liability for wrongful actions caused by computer programmes without the need to prove intent would seem the perfect solution. Strict liability is successfully used in the UK for offences such as outraging public decency, parking violations and contempt of court, the primary rationale being the protection of the public. Further, while some continue to argue against the use of strict liability on the grounds of fairness and incompatibility with the European Convention on Human Rights,[20] its use remains.

Despite this seemingly perfect fit for unlawful computer programmes, strict liability still requires the defendant to have committed a voluntary act that causes the prohibited state of affairs. Therefore, in order for strict liability to apply, a programme would have to commit a voluntary act that causes some harm or undesirable situation.[21] As established earlier, a computer programme has no intent, deliberation or reasoning; without these, no voluntary act can be committed and there can be no nexus between the actions of the programme and the resulting harm.[22] This is not to suggest that the situation may not change in the future, but given the current technological limitations of AI, strict liability is not a feasible means of assigning responsibility to computer programmes alone. The current law reflects this reality.

Corporate Liability and Regulatory Prosecutions

Alternatively, it is reasonable to examine whether the prosecution of computer programmes/artificial intelligence could realistically fall within corporate criminal liability[23] or regulatory contexts such as health and safety regulation. In both contexts, corporations can face prosecution for offences or breaches committed by their employees, agents or directors; a computer programme could therefore feasibly come under the umbrella of an agent.

However, in both proposals, the liability of the agent (in this scenario, a computer programme) flows back to the overall company management or the creator of the programme. The current law already accounts for the misuse of computer programmes when controlled by humans,[24] so an attempt to assign liability under corporate liability or health and safety regulation would be unnecessary.

Consequence

Finally, in circumstances in which a computer programme were found to have committed a crime, questions arise over the issue of consequence. Ultimately, as computer programmes cannot recognise their choices and actions, there can be no deterrence of the programme itself, and therefore no harm-reducing benefit. Despite this, Abbott correctly opines that punishing the programme could provide a general deterrent to the developers of AI, discouraging them from creating programmes that cause egregious types of harm without excuse.[25] This remains difficult, however, as where programmes have become autonomous, there will be difficulty in assigning responsibility to human actors: the programme may have developed into an entirely different being from what was originally intended.

There are also obvious practical difficulties in punishing computer programmes. The most obvious suggestion would be simply to delete the offending programme; however, does that punish the programme or its original creator? Further, where a programme has become semi-sentient, does deletion amount to the execution of a life form? Such questions are bound to cause difficulty in the future; thankfully, however, we are some time away from semi-sentient programmes.

Concluding Remarks 

There remain active discussions among legal experts and academics as to whether advances in autonomous systems and AI could eventually warrant the re-examination of certain principles of criminal law. Following a recent policy paper presented to Parliament by the Secretary of State for Science, Innovation and Technology[26] and the views expressed by Sir Geoffrey Vos,[27] the current consensus seems to be to see how AI develops, to address the application of AI output, and to regulate the training data used in AI so as to avoid harmful consequences. While some remain hopeful that more positive steps will be proposed at the first world AI safety summit in November,[28] it will remain extremely difficult to regulate and to propose major doctrinal shifts in criminal law without a firm understanding of the true capabilities and autonomous nature of these programmes.

[13] O Dowden MP: ‘[AI] will transform almost all areas of British life in the coming years and months in a “total revolution”’ (The Telegraph, 2023), https://www.telegraph.co.uk/politics/2023/08/11/artificial-intelligence-transform-british-life-dowden/, Accessed on 12th August 2023

[14] See https://www.nature.com/articles/d41586-019-01155-0, Accessed on 12th August 2023

[15] Future of Life Institute ‘Pause Giant AI Experiments: An Open Letter’ 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/, Accessed on 12th August 2023

[16] See R Abbott, Everything Is Obvious, 66 UCLA L. REV. 2, 23-28 (2019)

[17] “actus reus non facit reum nisi mens sit rea”

[18] Exceptions will be discussed.

[19] Largely prosecuted under the Computer Misuse Act 1990.

[20] See R v G [2006] EWCA Crim 821, in which the Court considered that a prosecution under strict liability may interfere with a defendant’s rights under Art.8 of the Convention.

[21] J Herring, Criminal Law: Text, Cases and Materials (2022) Tenth Edition, Accessed at https://blackwells.co.uk/extracts/9780199234325_herring.pdf on 28th August 2023

[22] J Child, ‘Defence of a Basic Voluntary Act Requirement in Criminal Law from Philosophies of Action’ (2020) New Criminal Law Review, vol 23, no.4, pg. 437

[23] For example, the Corporate Manslaughter and Corporate Homicide Act 2007

[24] See n 19.

[25] R Abbott & Others, ‘Punishing Artificial Intelligence: Legal Fiction or Science Fiction’ (2019) University of California, pg.345

[26] Secretary of State for Science, Innovation and Technology, ‘A pro-innovation approach to AI regulation’ (2023) Accessed at https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#part-7-conclusion-and-next-steps on 1st September 2023

[27] The Rt Honourable Sir Geoffrey Vos at the 20th Annual Law Lecture (21st June 2023), Accessed at https://www.barcouncil.org.uk/uploads/assets/789f134c-38d7-4ff5-9e2fba6c541eec8d/20230611-Bar-Council-Law-Reform-Lecture-2023-AI-Digital.pdf on 1st September 2023

[28] UK Government Press Release, ‘Iconic Bletchley Park to host UK AI Safety Summit in early November’ (2023), Accessed at https://www.gov.uk/government/news/iconic-bletchley-park-to-host-uk-ai-safety-summit-in-early-november on 1st September 2023


3rd Place: Francesca Jackson

Computer programmes have developed to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages. This branch of computer science is known as Artificial Intelligence (AI).[29] What distinguishes these artificially-intelligent systems from conventional ones is that the former operate under the control of a computer programme which allows them to, inter alia, make decisions autonomously, learn, reason, self-correct and decide on what basis they will act. A broad array of technologies, ranging from basic automation to advanced automation, falls within the ambit of AI. Basic automation encompasses AI applications such as data collection and voice, video and image recognition, whilst advanced automation includes AI applications such as Machine Learning and Smart Devices.[30]

The speed at which AI-generated computer programmes have developed is startling. Even more startling, however, is the increasing range of nefarious purposes for which these programmes can be used. For example, in California a 300-pound security robot ran over a toddler at a shopping centre after its programme failed to detect her.[31] In Switzerland, a robot was arrested for buying ecstasy online before later being released by police.[32] These AI programmes are also capable of causing fatalities: in Japan, a robot in a motorcycle factory killed a human worker,[33] while a driver was killed in a crash while inside an automated car moving on autopilot.[34]

Who, if anyone, is to be held responsible for these crimes? If a person uses a computer programme to commit a crime, they commit an offence.[35] For example, English law provides that a human perpetrator may be held criminally liable if an offence committed by an AI computer programme is an extension or consequence of the human perpetrator’s acts.[36] But what happens in the instances above, when an AI computer programme acts of its own volition, such as by making a decision to do something and then doing it without any external (i.e. human) influence? In these cases, the AI computer programme’s acts may not be a mere extension or consequence of a human perpetrator’s act, thus absolving the human perpetrator of liability. But nor can the computer programme itself be held liable, as the law applies only to natural and ‘legal persons’. This latter term encompasses any entity which has legally-recognised and enforceable rights and responsibilities. This raises the question: could AI computer programmes which make autonomous decisions of their own accord be considered legal persons? As the law stands, the answer is no, as there is currently no recognition of computer programmes as legal persons.[37]

There is an additional obstacle to recognising AI computer programmes as capable of committing crimes. Criminal liability requires the establishment of the actus reus (the physical element of a criminal act, or an omission to act) and the mens rea (the mental element: an intention to cause harm, or the awareness and knowledge, judged against the standard of a reasonable man, that harm was a likely consequence of one’s action). It is this mental element which represents the ‘sticking point’ in terms of holding computer programmes criminally liable.[38] Whilst it is ‘easy’ to attribute the actus reus to an AI programme which has acted to commit an offence or omitted to act where there is a duty to do so,[39] how do we know that the robot intended to do what it did? In other words, how can we establish the intention of an inanimate computer programme so as to satisfy the mens rea of a crime? For this reason, computer programmes are incapable of satisfying existing criminal law principles.[40]

Interestingly, because of the centrality of fault in our understanding of crime, Stewart argues that the absence of fault means that AI computer programmes are incapable of committing crimes, and that terms such as ‘malfunctioning’ should be used instead.[41] This raises the prospect of using the concept of strict liability as a framework for assigning computer programme liability. Strict liability has been defined as ‘crimes which do not require mens rea or even negligence as to one or more elements in the actus reus.’[42] This faultless liability, which is increasingly used for product liability in tort law,[43] would lead to liability being assigned to the faultless natural or legal person who deploys a computer – for example, through programming – despite the risk that it may conceivably perform a criminal act or omission. Many believe that strict liability is the most appropriate criminal liability model when it comes to computer programmes. For example, Kingston asserts that AI programmes could commit strict liability offences,[44] as does Hallevy, who speculates that if a self-driving car were found to be breaking the speed limit for the road it was on, the law might attribute criminal liability to the AI programme that was driving the car at the time, which may in turn render the AI programmer liable.[45] However, extending strict liability to computer programmes in the criminal context would be novel, and there has been considerable resistance to it,[46] since the establishment of mens rea with intent or knowledge ‘is central to the criminal law’s entitlement to censure’ and cannot be abandoned simply because it is difficult to prove.[47] Furthermore, the more serious a crime, the less likely it is that criminal law systems will allow for strict liability,[48] strongly implying that computer programmes could never commit offences like murder. Assigning strict liability in order to hold computer programmes liable for crimes – especially serious ones – would therefore be extremely contentious.

Problems arise further still when defences are considered. Where a programme ‘malfunctions’, to use Stewart’s terminology, as in the examples outlined earlier, could it claim the defences currently available to people, such as diminished responsibility, provocation, self-defence, necessity, mental disorder or intoxication? It appears not, as currently the malfunction is said to lie within the design, programming and manufacturing, rather than within the computer itself.[49] It is unclear whether the Trojan defence might also be raised by an accused if the AI programme had been taken over by a Trojan or similar malware programme, which committed offences using the AI programmed by the accused but without their knowledge. This defence was successfully raised in a UK case in which a computer containing indecent pictures of children was found to have eleven Trojan programmes on it,[50] and in another UK case in which a computer hacker’s defence to a charge of performing a denial of service attack was that the attack had been conducted from the accused’s computer by a Trojan programme, which had then eliminated itself before the computer was forensically analysed.[51] It remains to be seen whether this defence would be applicable in the context of AI-related criminal liability.

The fact that computer programmes cannot currently be held liable for the crimes they commit opens up a lacuna in the law, as it is also difficult to blame humans for the actions of an AI programme, since they are not always able to control the programme themselves.[52] The complexity of programming means that the designer, developer or user may neither know nor predict the computer’s criminal act or omission.[53] As the UK Government has recognised, many AI computer programmes are ‘opaque, which make it hard for an individual to take direct responsibility for them.’[54] For this reason, the terms of use for the AI-powered programme ChatGPT state that its owner, OpenAI, is not liable for any damages arising out of use of the model. In the criminal context, this effectively leaves victims of AI computer programme crimes without any sure means of redress – an unsatisfactory state of affairs given the sheer level of damage which these programmes can inflict.

Conclusion

In the American film ‘Terminator Salvation’, the AI system ‘Skynet’ deploys its ‘Terminator’ machines to annihilate human beings. Far from being the preserve of science fiction films, the commission of crimes by AI computer programmes is increasingly becoming reality. This analysis has shown that, whilst AI computer programmes are clearly capable of committing criminal acts, they lack the requisite mental element to be held liable for them. At the same time, the law is currently ineffective at holding human perpetrators liable for the criminal acts performed by AI programmes. All in all, if there is one key takeaway from all this, it is perhaps this: try not to be the victim of an AI computer programme crime – your chance of redress is slim.


[29] Mia Stewart, ‘Constructing ‘electronic liability’ for international crimes: transcending the individual in international criminal law’ (2023) 589, 590.

[30] Ibid.

[31] Steven Hoffer, ‘300-Pound Security Robot Runs Over Toddler at California Shopping Centre’ (13 July 2016, Huffington Post) accessed 5 September 2023.

[32] Jana Kasperkevic, ‘Swiss police release robot that bought ecstasy online’ (22 April 2015, The Guardian) accessed 5 September 2023.

[33] Robert Whymant, ‘From the archive, 9 December 1981: Robot kills factory worker’ (9 December 2014, The Guardian) accessed 5 September 2023.

[34] Danny Yadron and Dan Tynan, ‘Tesla driver dies in first fatal crash while using autopilot mode’ (1 July 2016, The Guardian) accessed 5 September 2023.

[35] Thomas C. King et al, ‘Artificial intelligence crime: an interdisciplinary analysis of foreseeable threats and solutions’ (2019) Science and Engineering Ethics 89.

[36] Ibid.

[37] Ibid.

[38] Stewart (n 29), 598.

[39] Gabriel Hallevy, Liability for Crimes Involving Artificial Intelligence (2014, Springer).

[40] Nora Osmani, ‘The Complexity of Criminal Liability of AI Systems’ (2020) Masaryk University Journal of Law and Technology 53, 57.

[41] Stewart (n 29), 598.

[42] David Ormerod and Karl Laird, Smith, Hogan and Ormerod’s Criminal Law (2021, 16th edn, OUP).

[43] King et al (n 35).

[44] John Kingston, ‘Artificial intelligence and legal liability’ (2016).

[45] Hallevy (n 39).

[46] Larry Alexander, ‘Is there a case for strict liability?’ (2018) Criminal Law & Philosophy 531.

[47] Andrew Ashworth, ‘Should strict criminal liability be removed from all imprisonable offences?’ (2010) Irish Jurist 1.

[48] Stewart (n 29), 598.

[49] Ibid.

[50] Susan W. Brenner et al, ‘The trojan horse defense in cybercrime cases’ (2004) Santa Clara High Technology Law Journal 1.

[51] Ibid.

[52] King et al (n 35).

[53] Ibid.

[54] Ibid.
