Artificial intelligence needs intelligent regulation

In last Sunday’s editorial, we looked at the impact the rapidly growing artificial intelligence (AI) phenomenon could have on the business process outsourcing (BPO) sector in the Philippines. This is no small concern, as the BPO sector is one of the key drivers of our economy, contributing more than 7 percent of GDP annually, and it also happens to be a sector where the use of AI can and will become widespread.

Of course, not all the consequences of AI adoption are bad; it can significantly improve the performance of the sector and make it even more competitive. But it can also lead to serious business disruption and worker displacement if not managed well, and we continue to urge the BPO industry and the government to work together to do just that.

Concerns about AI go well beyond any one economic sector, however, and the views of the man dubbed “AI’s godfather,” published in a story on Wednesday, should serve as a cautionary tale.

Geoffrey Hinton, the developer of a fundamental technology for AI and until recently a scientist at Google, told the New York Times in an interview that advances in AI pose “profound risks to society and humanity.”

Hinton reportedly left his position at Google last month so that he could speak freely about the potential dangers of unchecked AI development. To be clear, Hinton was not criticizing the concept of AI itself, but explained that competition among tech giants – including Google, OpenAI, Microsoft and IBM – has pushed them to develop new AI applications too quickly, without considering how those applications might disrupt jobs, be misused to spread disinformation, or otherwise harm others.

From our point of view, it is reckless to blindly jump into the use of AI without considering the possible consequences. A warning that has been repeated often lately is “regulate AI before it regulates you,” and while that may sound a bit glib, it should be taken seriously. As we have seen with other world-changing technologies such as the internet, the products of human ingenuity can and will grow beyond our control if we only react to their impact and fail to think critically ahead.

For the same reason, scare-mongering about the dangers of AI — an example being the recent call by tech entrepreneur Elon Musk and others to “pause” and regulate AI research “before it’s too late” — is just as misguided. Yes, it is obvious to some that regulation is necessary, but it is equally obvious that there is an almost unimaginable number of potential benefits from AI if it is applied for good. Regulation is a blunt instrument in the best of circumstances, and the prospects of applying it to good effect on a rapidly evolving technology that is not yet fully understood are in fact slim.

In order to articulate a regulatory approach in a suitably proactive manner, we would make three recommendations to the government. First, any notion of regulating AI research should be off the table; this would only serve to stifle innovation and prevent improvements in the technology. Instead, regulation should focus on its application.

Second, the government should formulate rules for the use of AI in its own agencies and processes. This will force the government to think carefully about how AI is and can be used, and will at least put some safeguards in place before its use gets out of hand. These rules can also be closely monitored and adjusted to real-world and real-time conditions without going through a time-consuming legislative process. Then, after a reasonable period to work out any problems that may arise, the rules can serve as a basis for drafting an actual law to govern AI.

Finally, companies operating in the Philippines should be required to publicly disclose their use of AI. There need be no limits on the use of AI by private companies – at least not initially, except in clear cases of illegal activity – but transparency is crucial, both for informed choices by consumers and other companies, and for government intelligence-gathering toward the eventual development of intelligent regulation.