Sundar Pichai, the chief executive of Alphabet and Google, has reiterated his stance on the need for regulation of artificial intelligence technologies in an op-ed published in The Financial Times. Emphasising the importance of regulating a technology as widespread and powerful as AI, Pichai pointed to Google’s AI principles, published in 2018, and argued that technology makers now need to work with policymakers to realise the full potential of technologies such as AI.
In recent times, AI, like most powerful innovations, has seen considerable misuse. It is this that warrants cross-sector and cross-industry collaboration, writes Pichai. He states, “(The) history is full of examples of how technology’s virtues aren’t guaranteed. These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to repressive uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.”
Pichai described the use of Google’s AI tools in critical sectors, taking examples from healthcare (use of AI by doctors in early detection of breast cancer symptoms), climate change (use of AI to pre-detect weather anomalies) and travel (use of AI to reduce flight delays). He further touched upon how Google tests its new AI tools for compliance with its own AI principles, stating, “Principles that remain on paper are meaningless. So we’ve also developed tools to put them into action, such as testing AI decisions for fairness and conducting independent human-rights assessments of new products. We have gone even further and made these tools and related open-source code widely available, which will empower others to use AI for good. We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes.”
However, many have, over time, criticised Google’s principles for not being transparent enough, and questioned the company’s stance on accountability for its own technologies. While Pichai speaks in the op-ed about testing his company’s technologies, critics have argued that Google needs to be more accountable for how its technology works, and that self-regulatory principles alone do not make the cut. For instance, Google’s recent work on using AI to detect deepfakes on the web only builds a database against which content can be checked; beyond the scope of this database, it provides no preventive measures to curb their spread. While arguments regarding infrastructural limitations will remain, a company of Google’s magnitude and technological prowess has time and again faced the criticism of not following through on the final stretch.
It is this that lends weight to Pichai’s push for government regulation, and for building uniform international rules. Official regulation typically develops at a far slower pace than technologies such as AI, which has progressed at a staggering speed. As a result, it is questionable how far the idea of uniform government regulation of the ethical and legal implementation of AI across sectors and across the world can be taken. While framing such regulations is technically possible, establishing them within a short span of time is very difficult, bringing the onus of self-regulation and transparency back to tech giants such as Google.
“AI has the potential to improve billions of lives, and the biggest risk may be failing to do so,” says Pichai in the FT op-ed. While few would disagree, what remains to be answered is how much of that responsibility lies on Google’s (and others’) own shoulders.