Are Legal Advancements at Pace with Technical Advancements?

by Bhumi Agrawal and Shweta Pandey


We live in a society where rapidly advancing technology is deeply embedded in our everyday lives. Robotics and artificial intelligence, offspring of human intelligence, have proven to be a helping hand for people at large. This nascent use of robotics has shown itself to be helpful and productive in many instances, emerging prominently in sectors such as healthcare, education and services. For example, in 2019 a murder case was solved using recordings from an Amazon Echo[1], and in an Indian restaurant, robots act as waiters[2].

But these devices can glitch and go wrong at any point in time, and there have been practical scenarios proving this. One such example is the malfunctioning of a Tesla Model S, which slammed into a parked fire truck, putting human lives at risk[3]. Further, when IBM Watson recommended inaccurate cancer treatments to patients, speculation arose as to whether such heavy reliance on robotics is worthwhile[4]. There is no settled answer as to which side of the scale is heavier. The argument about the positive and negative implications of AI in our lives is generally put to rest by stating that a machine does not have human intellect and cannot think and act like a human.

Technical Aspect of Robotics 

“As a programme is purely formal or syntactical and as minds contain mental or semantic elements, every attempt to produce a mind solely by means of computer programs excludes the basic characteristics of the mind.”[5]

There are two types of learning in a machine: an ‘expert system’[6], which works on a set of pre-programmed rules, and ‘self-learning’, which gathers information from interactions with humans and produces results based on the information so gathered[7]. Most modern machines work on the self-learning system: they are autonomous enough to function solely on the data they gather, without the need for human intervention. Robots designed for a specific task work on an expert system, whereas robots and AI designed for interaction with humans work on a self-learning system.
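The contrast between the two approaches can be sketched in a few lines of Python. This is a deliberately toy illustration, not real AI: the symptom strings, rules and example history are all invented for the sketch. The expert system answers only from its hand-written rules, while the self-learning function derives its answer from the examples it has seen.

```python
# Expert system: behaviour is fixed in advance by pre-programmed rules.
RULES = {"fever and cough": "suspect flu", "no symptoms": "healthy"}

def expert_system(symptoms):
    # Anything outside the rule base gets a fixed fallback answer.
    return RULES.get(symptoms, "refer to a doctor")

# Self-learning: behaviour comes from observed examples, not fixed rules.
def self_learning(history, symptoms):
    # Answer with the diagnosis most often paired with these symptoms so far.
    seen = [diagnosis for s, diagnosis in history if s == symptoms]
    return max(set(seen), key=seen.count) if seen else "unknown"

history = [("fever and cough", "flu"), ("fever and cough", "flu"),
           ("fever and cough", "cold")]
print(expert_system("fever and cough"))        # answer from the rule base
print(self_learning(history, "fever and cough"))  # answer learned from data
```

The practical difference is visible in how each system changes: the expert system only changes when a developer edits `RULES`, while the self-learning function changes its answers as `history` grows.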

The foundation of artificial intelligence lies in neural networks, machine learning and deep learning, built on data from different sources, including the Internet[8]. A neural network works much like the neurons in the human brain: it receives raw data, processes it and produces precise, useful output. Machine learning, on the other hand, is a subset of artificial intelligence that enables a programme to improve with experience, learning continually from the data available to it. Deep learning is a subset of machine learning that uses a multi-layered neural network to process the data[9].
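A single artificial neuron, the building block of the networks described above, can be sketched as a weighted sum of inputs passed through an activation function. This is a minimal illustration with made-up weights; real deep-learning systems stack many such layers and learn the weights from data rather than fixing them by hand.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed by a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A "deep" network is simply layers of such neurons feeding one another:
# two hidden neurons read the raw inputs, one output neuron reads them.
hidden = [neuron([0.5, 0.2], [0.4, -0.6], 0.1),
          neuron([0.5, 0.2], [0.9, 0.3], -0.2)]
output = neuron(hidden, [1.0, -1.0], 0.0)
print(round(output, 3))  # → 0.492
```

“Learning” in such a network means adjusting the weights and biases so that the output moves closer to the desired answer, which is what distinguishes it from the fixed rules of an expert system.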

This autonomous working of machinery can be considered a threat, and a time may come when such machinery outlasts humankind. Solutions are needed to arrive at a balance where humans and robots coexist cordially.

International Legislations For Robotics And Artificial Intelligence

Humans have made robotics and artificial intelligence such an integral part of our lives that we have forgotten when to pull back if anything goes wrong. Considering the instances of these devices malfunctioning, various nations have realized the need for stringent legislation to prevent such situations. The major question that arises is who shall be punished for the malfunctioning of these devices, since robots are not humans and cannot be tried in a court of law. Furthermore, if the developer of the machinery is punished, it would deter developers from coming up with new technologies and would defeat the whole idea of becoming a technologically advanced society.

As of now, three liability models have been established for punishing the wrongdoer in case of AI malfunctioning. They are:

  1. The Perpetration-by-Another Liability Model;
  2. The Natural Probable Consequence Liability Model; and
  3. The Direct Liability Model[10].

The European Union came up with the idea of ensuring trust and legal clarity through human-centric artificial intelligence. The General Data Protection Regulation (GDPR), along with the e-Privacy Directive, guarantees the free flow of personal data within the Union while respecting the privacy of all its citizens, thereby preventing any misuse of such data. Article 82 of the GDPR[11] provides for liability in case of data infringement. The European Commission was tasked with producing a report on the safety and liability implications of artificial intelligence, the Internet of Things and robotics, to be furnished to the European Parliament, the Council and the European Economic and Social Committee[12]. The main focus is on product safety instructions to developers for the benefit of users and consumers. According to the Commission's report, the developer is held responsible for the final product and must undertake a risk assessment before launching it. Under this framework, any product that does not meet the mandatory safety rules is declared defective, and compensation has to be paid accordingly. A further instrument, the Product Liability Directive, provides for strict liability, vicarious liability and contributory negligence of the developer.

Russia drafted the ‘Grishin Law’ (2015) to improve legal regulation in the field of robotics, under which robots were made legal persons. Russian law considers robots independent from humans, and the liability to be imposed on them is yet to be decided[13]. The draft imposes total liability on the developer and is still before the Russian Parliament for consideration[14].

In Saudi Arabia[15] and Japan[16], the robot Sophia and the chatbot Shibuya Mirai, respectively, were allotted citizenship. However, these robots do not qualify for citizenship under the citizenship norms of either country.

In 2017, the UK formed an Artificial Intelligence Committee, realising the need for AI regulation. It does not strive to give artificial intelligence a legal personality but instead to provide a legal definition for it[17].

In the United States, according to the National Conference of State Legislatures, twenty-eight states have adopted regulations relating to autonomous vehicles. Time and again, reports have been made concerning artificial intelligence, but as of now no binding regulations have been made[18].

Indian Perspective

Technology is increasingly permeating the day-to-day chores of our lives. Given the constant advancements on the technical front, legislation defining the liabilities of the parties involved is of supreme importance. The word “party” is used here deliberately, to raise the question of distinguishing between an artificially intelligent technology or robot and its developer or manufacturer. This distinction arises because the existing legal provisions are inadequate: Indian legislation contains no definition covering all aspects of AI and robots, nor any provision that specifically deals with AI and robotics.

After much deliberation on this gap in the law, the Ministry of Commerce and Industry of India was the first to take up facilitating growth and development in this uncharted sector. The Ministry constituted an 18-member task force drawn from relevant bodies such as NITI Aayog[19] and the Ministry of Electronics and Information Technology, which came together with suggestions on the areas the country needs to work on. The task force also formulated guidelines on the measures India needs to take in order to give recognition to artificial intelligence and robotics. Although the task force did make suitable suggestions, its plan of action was not adequate to address the questions raised.

A relevant issue at the intersection of AI and law is that of duty: the standard of care expected from an AI, and the implications when that standard is not met and loss or damage results. Determining the duty or standard of care owed by an AI is relevant for imputing responsibility for a supposedly negligent action.

At present, the law contemplates all such machines alike, as consumer products. In most cases, when an accident occurs, strict product liability standards apply. In other words, unless a consumer uses a product carelessly or overlooks a safety warning, the manufacturer (and those associated with the product) is generally considered to be at fault.
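The strict-liability standard described above can be expressed, in deliberately simplified form, as a small decision function. The two predicates are hypothetical labels for the sketch, not legal terms of art, and real liability analysis involves far more factors than this.

```python
def liable_party(consumer_careless: bool, warning_ignored: bool) -> str:
    """Simplified strict product liability: fault rests with the
    manufacturer unless the consumer misused the product or
    disregarded a safety warning."""
    if consumer_careless or warning_ignored:
        return "consumer"
    return "manufacturer"

print(liable_party(False, False))  # default: manufacturer is at fault
print(liable_party(True, False))   # careless use shifts liability
```

The point of the sketch is how little the current standard turns on the machine itself: nothing in the function asks whether the product was a self-learning AI or an ordinary appliance, which is precisely the gap the article identifies.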

This distinction matters for the financial consequences involved and affects the rate of technology adoption.

The Relevance of AI in Pandemic Circumstances

Machine learning and artificial intelligence (AI) are making rapid progress in the healthcare sector in India[20]. Along with global technology leaders, dozens of new Indian start-ups are offering promising AI solutions to meet the nation’s healthcare needs.

The COVID-19 pandemic has brought a prominent increase in the number of patients, bringing to the fore an otherwise neglected concern: the patient-doctor ratio in our nation. Key stakeholders such as the Prime Minister’s Office have also pushed for the integration of the Indian healthcare sector with AI.

Bengaluru-based start-up Niramai[21], partially funded by Flipkart co-founder Binny Bansal, is using AI to create accurate, painless and inexpensive breast cancer screening tests. Max Healthcare, a major hospital chain in North India, is deploying AI to monitor critical care[22]. According to the company, the technology has already reduced critical care costs by about 30 per cent through optimized use of ICU beds. LiveHealth, a Pune-based start-up that recently raised 1.1 million dollars in seed funding, is deploying AI across millions of ERP transactions and medical records, and is now planning products that help patients and doctors make informed decisions[23].

Artificial intelligence has helped the sector through disease-surveillance AI, virtual healthcare chatbots, diagnostic AI, thermal-scanning AI, intelligent drones and robots, information-verification AI and many more upcoming technologies, which would help in curbing the COVID-19 situation to some extent.


Instead of fearing the integration of AI into different areas, or going to the other extreme of allowing its blanket implementation without a well-defined legal framework, a balanced approach must determine where AI should be implemented and what is best left to the traditional way. A comprehensive legal framework of checks and balances should be developed to ensure compliance, so that the benefits of AI can be separated from its risks. It is time for Indian legislators and the judiciary to step in and pave the way for a successful transition of India’s technological sector into the era of artificial intelligence.

Among the hurdles facing the progress of AI in India are manpower and the limited availability of good-quality, clean data, as there is no institutional mechanism to maintain high-quality data. Another pressing concern is that, even with all the probable benefits of AI envisioned to help humans, people still have concerns about data privacy and are hesitant to share data. From these reports, it can be concluded that data privacy and security is one of the biggest impediments to the development of AI.

Another cultural challenge facing India is that the cost of failure is much higher here than in other nations. All these challenges indicate that, even with government funding and industry involvement, India has a long way to go with regard to AI and innovation.

Opinions expressed by the authors are personal.

[1] BBC News, Amazon asked to share Echo data in US murder case (May 4, 2020).

[2] The Times of India, In this Kerala restaurant, robot waiters serve food to customers (May 4, 2020).

[3] CNBC News, Tesla Model S crash result of user error and Autopilot design, says NTSB (May 5, 2020).

[4] STAT News, IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments, internal documents show (May 5, 2020).

[5] John R. Searle, Minds, Brains and Science (1984).

[6] What is an Expert System? (May 5, 2020).

[7] Self-Learning AI: This New Neuro-Inspired Computer Trains Itself (May 5, 2020).

[8] AI v. Machine Learning v. Deep Learning (May 5, 2020).

[9] How Deep Learning is Different from Machine Learning? (May 5, 2020).

[10] J.K.C. Kingston, Artificial Intelligence and Legal Liability (May 5, 2020).

[11] General Data Protection Regulation (EU) 2016/679 (2016).

[12] Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics (May 5, 2020).

[13] The Russian draft bill of “The Grishin Law” in terms of improving the legal regulation of relations in the field of robotics: critical analysis (May 5, 2020).

[14] A. Atabekov et al., Legal Status of Artificial Intelligence Across Countries: Legislation on the Move (May 5, 2020).

[15] Id., p. 776.

[16] Id.

[17] House of Lords Report of Session 2017–19, AI in the UK: ready, willing and able? (May 5, 2020).

[18] Regulation of Artificial Intelligence: The Americas and the Caribbean (May 5, 2020).

[19] NITI Aayog, National Strategy for Artificial Intelligence (May 6, 2020).

[20] Business Insider, Artificial Intelligence News (May 6, 2020).

[21] Non-Invasive Risk Assessment with Machine Intelligence (NIRAMAI) (May 5, 2020).

[22] AI plays a critical role in Max Healthcare’s digital journey (May 5, 2020).

[23] LiveHealth raises $1.1 mn in seed funding (May 6, 2020).
