Since the Internet became ubiquitous in the 1990s, few technologies have emerged that are far-reaching enough to raise comparable ethical and legal challenges. Artificial intelligence is one of those chosen few. As with any disruptive innovation, the enormous potential of AI-based technologies also raises many questions about the need for legal regulation and for defining the rules of the game governing their use. It is not for nothing that Mira Murati, the CTO of OpenAI (the company behind ChatGPT), has called on governments and regulators to regulate the field.
While complex debates on this topic are ongoing, it is worth examining the key regulatory developments in this area, both in Israel and internationally, and highlighting the key principles to consider in light of these developments, some of which already exist and are legally binding today.
Regulation in the international arena
Europe
The European Union has taken a significant initiative by developing and promoting comprehensive regulation of the various areas of AI. First published in April 2021, the proposed European law, the AI Act, lays down essential obligations for developers and users of AI technologies and imposes extremely high fines on companies that fail to comply (up to 30 million euros or 6% of annual turnover per violation, whichever is higher). Like the European General Data Protection Regulation (GDPR), the AI Act will also apply to non-European companies that develop and use AI-based products intended, among other markets, for use in Europe. As the first comprehensive law in this area, and given its extraterritorial reach, the AI Act has clear potential to become the standard for AI-specific regulation.
The proposed law divides artificial intelligence systems into four categories according to the level of threat they pose to individual rights, and tailors the requirements to each threat level. Among other things, it designates high-risk systems (such as an AI-based system for screening job applicants or an AI-based credit-rating system) as well as systems posing unacceptable risk, such as the social scoring of citizens practiced in China, which the European law will prohibit outright.
For the law to come into force, the European Parliament must approve it, after which final negotiations between the member states, the Parliament and the European Commission will begin. Those talks are expected to start in April, with the goal of giving final approval to this groundbreaking law by the end of the year.
United States
The United States does not yet have a comprehensive regulatory initiative, but we are seeing early indications in both Federal Trade Commission (FTC) policy guidance and state-level legislative initiatives.
The FTC published guidance on the development and use of AI technologies as early as 2020. These guidelines, most of which have since been reflected in Europe's AI Act, are backed by the FTC's enforcement activity over the past several years under various provisions of federal law, including the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA) and the FTC Act.
At the state and local level, legislators are addressing the use of AI in hiring processes via automated employment decision tools (AEDT), in addition to legislation on privacy and automated decision-making. For example, New York City's AEDT law requires, among other things, that AEDT systems undergo an annual audit for potential bias and discrimination and that the results of that audit be made publicly available.
Regulation in Israel
Israel has taken a different approach. In late 2022, the Ministry of Innovation, Science and Technology released a document for public comment entitled "Draft Policy for Regulations and Ethics in Artificial Intelligence". The document is the Israeli government's first comprehensive examination of the implications of AI systems entering widespread use and the challenges they pose. It promotes a policy of "soft regulation", calling on industry and government agencies to adopt a framework of comprehensive, non-binding ethical rules, and calls on the various regulators, each in its own field, to examine the need for concrete regulation.
Although there are significant differences between the various initiatives in Europe, the United States and Israel, all of these initiatives share several important principles related to the development and use of artificial intelligence.
Key principles in AI regulation
Transparency
Decision-making processes increasingly rely on AI-based algorithms, for example in approving financial transactions or screening job applicants. The transparency principle is intended to ensure that anyone who interacts with artificial intelligence, or is affected by its use, is informed of the decision-making process and of the parameters the system uses to reach its decisions.
Fairness and non-discrimination
The fairness principle obliges companies to ensure that the development and use of artificial intelligence takes into account the need for equality regardless of gender, race, religion and so on, and to eliminate bias in AI-based systems in order to minimize the risk of unjustified discrimination against individuals or groups.
Accuracy and reliability
To be able to rely on information and decisions produced by artificial intelligence, the accuracy of the technology is of paramount importance. One of the main concerns in this regard is that the system is insufficiently trained, leading to poor performance and reduced reliability. Another risk arises when the data fed into the AI system has changed relative to the data it was trained on but the system has not been retrained, which can lead to erroneous outputs and decisions.
Data protection and data security
AI-based systems often require extensive use of personal data. This use is subject to data protection regulation both during the training phase and during operational use. Organizations therefore need to ensure they comply with data protection laws, including the requirements to obtain data subjects' consent, transparency, data minimization, data erasure and so on. In addition, information security and maintaining the reliability of the technology are essential, both to meet regulatory requirements and to ensure that no unauthorized party gains access to personal data or interferes with the operation of the AI system.
Accountability and Risk Management
Who should be held accountable for AI decisions when hiring processes, credit ratings, and perhaps even legal proceedings are at stake? Is it justified to hold the companies that developed or used the technology accountable, even where the decision or its impact on the data subject could not be foreseen? To answer this complex question, it is clear that, at a minimum, internal risk management is required for the various types of technology in order to assess their risk potential. Accordingly, artificial intelligence that poses significant risks and implications (for example, a credit score that could prevent individuals from completing transactions, or the use of biometric information by law enforcement agencies to fight crime) requires internal controls and ongoing monitoring procedures to ensure, among other things, accuracy, reliability and the absence of bias and discrimination. These processes must be documented, in some cases within the framework set by the responsible supervisory authority. In this regard, it is worth noting that the proposed European law will oblige relevant companies to register high-risk AI systems in an EU database managed by the European Commission, in order to improve transparency and public oversight.
There is no doubt that the regulatory debate around artificial intelligence will continue to intensify as the technology evolves and penetrates ever more areas of life. It should be borne in mind, however, that legislative initiatives and extensive regulations with international impact already exist. Moreover, beyond the specific laws and regulations mentioned above, various provisions of existing law in Israel and internationally, including data protection laws, contract and tort law, consumer protection laws, labor laws and so on, are relevant and applicable to various uses of artificial intelligence.
Against this background, companies developing or purchasing AI-based products, as well as investors considering investing in such companies, should already be examining whether these companies can comply with the relevant regulations, and in particular with the five principles outlined above, which are expected to form the basis of AI regulations issued in the near future.