Why Aren’t Governments Paying More Attention to AI?

Governments around the world have no idea how well their countries' AI programs are coming along, but this knowledge could be crucial for everything from policymaking to fraud detection.

Written by Ari Joury, Ph.D.
Published on Oct. 06, 2021

AI has come a long way over the past decade. The compute resources dedicated to the technology have been increasing exponentially, doubling every four months on average. That corresponds to an increase of a whopping 30,000,000 percent since 2012.
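To see how those two figures relate, consider a quick back-of-the-envelope check. A 30,000,000 percent increase is a growth factor of roughly 300,000x, which is what a four-month doubling time produces over a window of about six years; the exact window below is my assumption, chosen to start at 2012.

```python
# Illustrative arithmetic only: how a fixed doubling period compounds.
def growth_factor(elapsed_months: float, doubling_months: float) -> float:
    """Total growth factor after elapsed_months at a fixed doubling period."""
    return 2 ** (elapsed_months / doubling_months)

factor = growth_factor(elapsed_months=72, doubling_months=4)  # ~6 years
print(f"Growth: {factor:,.0f}x = {100 * (factor - 1):,.0f} percent increase")
# Growth: 262,144x = 26,214,300 percent increase
```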

This stellar growth has enormous implications for the future of work and of society as a whole. Despite this potential, governments around the world don’t know exactly how much compute their country has for AI, how it’s used, or how to prevent it from being harmful.

Their ignorance is concerning, given that AI as a concept has been around for several decades. And for the last decade, a range of organizations have conducted massive research efforts aimed at deploying AI for industrial purposes. Such efforts could have a profound effect on people's lives.

Precise knowledge of the state of the art is crucial so that governments can put effective policies in place regarding AI and its implementation. Think about it this way: If governments didn’t know precisely what the inflation rate was at any given time, deciding on measures for growing the economy would be much harder. Likewise, information about traffic gives the government important clues on where to invest in infrastructure. Most recently, we’ve all been witness to how data on Covid-19 has led to different rules for public conduct.

Along these same lines, failing to collect relevant data about AI carries significant risk, as a recent paper by Jess Whittlestone and Jack Clark points out. Without data, governments will be in a much worse position to regulate AI and monetary investments will be more prone to fail. To get started, we need to learn from the past because some harm has already been done.


AI Has Already Caused Harm

Have you already used an AI today? If you’ve scrolled through your favorite social network, watched a video on Netflix or YouTube, or you’ve streamed a podcast or music on any major platform, then yes. AI algorithms are incredibly good at knowing your taste and giving you whatever you want at any given moment. Likewise, they’re very good at getting you addicted to whichever platform you’re using.

But digital addictions are only one of many ways in which AI can cause harm today. Less lighthearted examples like Clearview AI show how facial recognition could upend privacy rights as we know them. The company has scraped the web to create the largest database of images of people around the world, and it allows users to match a photo of any person with other photos around the web, thus potentially revealing their identities. Only now, years after the company's inception, has the European Union hit it with legal action. The U.S. and many other countries remain silent, however, because they lack equally strict data protection laws.

Similarly, deepfakes can abuse anyone’s images or videos to create misleading and defamatory content. Of course there are legal consequences under many existing laws, but they don’t aptly compensate victims for all the damage a deepfake can do.

Finally, computer vision and natural language processing algorithms exhibit as much human bias as their creators do, if not more. When a state-of-the-art computer vision algorithm labels images of people, for example, these labels are often misogynist, racist, ageist and ableist. Researchers are currently working on removing bias from such systems. Today, however, companies ranging from Google and Facebook to many others you've never heard of are using these flawed algorithms and causing harm.


Governments Need to Act Fast

While writing this article, it dawned on me that much of this harm was preventable. At some point during the AI research boom of the last decade, regulators could have taken a more active role. This intervention would likely have given them the expertise necessary for crafting appropriate legislation and penalizing actors who violate it. 

Because governments didn't step in sooner, though, they now have less leverage over companies whose AI causes harm. This situation makes public investment in AI all the more important. And if governments are to invest in the vast field of AI, they'll need data to guide them.

Although plenty of anecdotal evidence exists, quantifying how far along a country is in developing its AI capabilities is surprisingly difficult. For example, an algorithm that recognizes pedestrians for an autonomous vehicle might perform differently depending on the type of road it's on, the weather, and the time of day. Governments must have a protocol for checking this performance and for taking legal action if such an algorithm is faulty. Otherwise, if an autonomous vehicle hits a pedestrian, it would be impossible for justice to be served.
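To make that concrete, here is a minimal sketch of what such a checking protocol could look like: group recorded detection outcomes by operating condition and flag any condition whose detection rate falls below a mandated threshold. The conditions, logs and threshold are all hypothetical.

```python
# A minimal audit sketch (all names and numbers hypothetical): flag
# operating conditions where the pedestrian-detection rate is too low.
from collections import defaultdict

def audit_by_condition(results, threshold=0.95):
    """results: iterable of (condition, detected) pairs.
    Returns conditions whose detection rate falls below threshold."""
    hits, totals = defaultdict(int), defaultdict(int)
    for condition, detected in results:
        totals[condition] += 1
        hits[condition] += int(detected)
    return {c: hits[c] / totals[c]
            for c in totals if hits[c] / totals[c] < threshold}

# Hypothetical test-drive logs: (road type/weather/time of day, outcome).
logs = ([("highway/clear/day", True)] * 98 + [("highway/clear/day", False)] * 2
        + [("urban/rain/night", True)] * 88 + [("urban/rain/night", False)] * 12)
print(audit_by_condition(logs))  # {'urban/rain/night': 0.88}
```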

Absent regulation, trust in such technologies would quickly erode, and countries with better laws, or better algorithms, would gain a competitive advantage over those where accidents have happened. Of course, accidents need to be prevented in the first place. But if they do happen, they shouldn't have an outsized impact on the whole industry, which could set a country back by years.


Compute and Progress Aren’t Fully Understood

Intuitively, an algorithm will perform better with bigger data sets and faster computers to train it. But it’s not at all clear right now if this relationship between compute and performance is linear, exponential or something totally different. Equally unclear is how that relationship varies across different types of algorithms and different areas of application.

Despite its current opacity, understanding this link is important for anticipating future developments in AI. If, for example, one found that a computer with twice as much memory could run a model that's three times more efficient, this would indicate that investing in computer memory is a very good idea. At the moment, large studies that investigate compute, whether memory, nodes or another quantity, and its quantitative impact on AI are missing.
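One way such a study could probe the link is to fit candidate curves to measured compute-performance pairs. The sketch below fits a power law via linear regression in log-log space; the data points are invented purely for illustration.

```python
# Hypothetical sketch: fit performance = a * compute^b by regressing
# in log-log space. A straight line there supports a power law;
# systematic curvature would point to a different relationship.
import numpy as np

compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])  # e.g., training FLOPs
score = np.array([0.52, 0.61, 0.70, 0.78, 0.85])    # e.g., benchmark accuracy

b, log_a = np.polyfit(np.log(compute), np.log(score), deg=1)
print(f"Fitted exponent b = {b:.3f}")  # small b: compute buys modest gains
```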

That being said, the solution to a problem isn't always to throw more resources at it; sometimes it's to invent smarter algorithms that run on existing compute. Private companies, however, are already investing heavily in smart algorithms and tend to employ the top developers by offering very attractive salaries and benefits. Governments can't compete with this, nor should they. To develop laws that help a country's digital economy, they don't need to understand the mechanics of different algorithms in detail. They do, however, need to provide an environment where developing smart algorithms is possible. And that includes making sure enough compute is available for the right purposes.

If countries want to invest in their digital futures, then, they should focus on hardware. In light of the ongoing chip crisis, this is especially relevant. Plus, private companies are already taking care of the software. So, in order to provide the right resources and make legislation that stands the test of time, governments might want to investigate this link in more detail.


Identifying Key Areas for Policy Priorities

About 80 countries currently have AI policies in place, with vastly varying key priorities. Turkey, for example, has many initiatives in AI for healthcare, the U.K. has several directed at employment, and Japan focuses on corporate governance. This reflects the long-term priorities that different countries have set, and the fact that they are willing to throw a few million (or billion) dollars at AI to try to get the job done.

The thing is, placing bets without knowing your own cards is risky. I have no doubt that every country has done thorough research before choosing its priorities. As the OECD points out, however, there is currently no systematic way to assess what a country's AI resources are capable of. Measuring progress in a standardized way might make more targeted investment decisions possible in the future.

This is especially puzzling because measuring the effect of AI inside private businesses is common practice today. Such measures include the cost savings from an AI versus a traditional algorithm, how much faster the AI completes its assigned tasks, how much an internal AI helps managers and teams, and so on. And within a business, even a large one, it's relatively easy to count how many computers it has and check how much of that compute AI-related software is taking up.

It turns out, however, that measuring the public resources of a country is much harder. Countries tend to be larger than most businesses and often have quite archaic structures that serve many more purposes than those of an average business. OpenAI is among the few research groups that have already proposed measurements that might become standard one day. Among these is algorithmic efficiency, which quantifies the compute needed for a specific capability, such as spotting cats in a dataset of 100,000 images. Complementary measures include sample efficiency, training efficiency and inference efficiency, among others.
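In that spirit, a compute-to-capability measurement might look something like the sketch below, which trains until a target score is first reached and records the compute spent. The functions `train_one_step` and `evaluate` are hypothetical stand-ins for a real training loop, and the FLOPs-per-step figure is made up.

```python
# Sketch of a "compute to capability threshold" measurement, loosely in
# the spirit of algorithmic-efficiency metrics. All numbers are made up.

def compute_to_threshold(train_one_step, evaluate, target=0.90,
                         flops_per_step=1e12, max_steps=100_000):
    """Estimated FLOPs spent before evaluate() first reaches target."""
    for step in range(1, max_steps + 1):
        train_one_step()
        if evaluate() >= target:
            return step * flops_per_step
    return None  # capability never reached within the budget

# Toy usage: a simulated learner whose score climbs a little every step.
state = {"acc": 0.5}
flops = compute_to_threshold(
    train_one_step=lambda: state.update(acc=state["acc"] + 0.001),
    evaluate=lambda: state["acc"],
)
print(f"About {flops:.2e} FLOPs to reach the threshold")  # roughly 4e14
```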

Whatever measures AI researchers, the OECD, and governments finally settle on, the motivation is clear: The better a government knows how much compute its country has, and how much more its key priorities require, the better it can plan those expenses.


How Governments Can Use Measurements 

If legislators know how AI algorithms typically behave, then detecting anomalies and punishing rulebreakers will be much easier for them. This is no one-time action, either. The industry is growing more and more sophisticated, and algorithms are continually updated. Therefore, legislators must keep monitoring different algorithms in order to spot those that don’t conform to the law. 
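What might that monitoring look like in practice? One simple, admittedly naive approach: flag any newly reported metric that deviates sharply from a model's reporting history. The metric and the figures below are hypothetical.

```python
# Minimal monitoring sketch (illustrative only): flag a new report as
# anomalous if it sits more than z_limit standard deviations away from
# the mean of previous reports.
import statistics

def is_anomalous(history, new_value, z_limit=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(new_value - mean) > z_limit * stdev

# e.g., a model's monthly false-positive rate, as reported to a regulator.
reports = [0.021, 0.019, 0.022, 0.020, 0.021, 0.018]
print(is_anomalous(reports, 0.047))  # True: worth an audit
```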

This kind of monitoring will also enable public officials to spot important trends. It gives them the option to incentivize trends that benefit the greater good and penalize those they deem harmful, for example through grants or taxes. Much has been written, for instance, about the grueling working conditions at Amazon. In 2019, an article surfaced detailing how Amazon used an AI algorithm to automatically fire workers who didn't perform well enough. This is scary and unfair, and it has no place in the future of work. To prevent such events, governments could heavily tax these practices so that corporations have fewer incentives to treat workers like expendable resources unworthy of human dignity.

On the flip side, researchers at Microsoft managed to train a classifier that analyzes social media posts and detects whether a user suffers from a mental illness, such as depression. This is extremely useful for public health research because it would help estimate how many people are currently suffering. Another possible use of such a classifier might be an online test where people enter their profiles and find out whether they're likely to have depression. This could help individuals seek treatment faster and further lift the stigma around mental health. Government grants could enable more such research and put it to good use.
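To be clear about what such a system involves, the sketch below is not Microsoft's classifier, just a minimal stand-in for the general approach: turn posts into word-frequency features and train a standard classifier on labeled examples. The toy corpus stands in for real, ethically sourced training data.

```python
# Not Microsoft's system: a minimal text-classification sketch only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["I can't get out of bed, nothing matters anymore",
         "Great run this morning, feeling fantastic"]
labels = [1, 0]  # 1 = flagged for possible depression, 0 = not flagged

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)  # a real system needs far more data and care
print(model.predict(["so tired of everything, nothing helps"]))
```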

Measurements might also help establish industry standards. These could, and should, include safety, fairness, and robustness, i.e., the ability of an algorithm to produce comparable results in different environments. For example, the AI vision algorithm of a self-driving car must always be able to recognize a child jumping onto the road, whatever the weather. Absent this, the public may lose faith in AI, causing countries to lose out on substantial economic benefits.
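A robustness standard would need a concrete score behind it. One possible definition, my assumption rather than an established norm: the worst environment's accuracy divided by the average across environments, so that 1.0 means perfectly uniform behavior.

```python
# Assumed robustness score: worst-case accuracy relative to the mean.
def robustness_score(accuracy_by_env):
    values = list(accuracy_by_env.values())
    return min(values) / (sum(values) / len(values))

# Hypothetical child-detection accuracy under different weather conditions.
results = {"sunny": 0.97, "rain": 0.94, "fog": 0.81, "night": 0.90}
print(f"{robustness_score(results):.2f}")  # ~0.90; fog drags the score down
```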

Finally, measuring and monitoring AI performance can also give rise to early warnings for risk or opportunity. For example, we know that neural language models have been getting more and more compute resources in the private sector. We also know that the more compute such models get, the more benefits and harm (yes, both) they yield. This means that legislators need to watch out and start putting sensible policies for these models into place. Continually monitoring progress would allow governments to intervene earlier, foster good growth, and prevent harm from happening. 

Referring to the examples from the previous section, this means that the more advanced Amazon's AI gets, the more workers it could fire automatically. It's worth noting that Amazon has discontinued the use of this AI, but if it, or another company, deployed a better version in the future, that would be horrendous. Likewise, if Microsoft's depression-detection algorithm gets more advanced, it could help many more people. And if governments lose touch with where their countries stand on AI, they may fail to prevent workers from being fired automatically, and fail to foster progress on helping people with depression.


Pilot Projects, or AI Monitoring for Beginners

Although AI policy has been a hot subject of debate for years, there are so many unknowns that no gold standard for measuring compute resources exists as of now. Despite this, the European Union, a pioneer in this area, has drummed up an impressive piece of legislation. It has its critics, though, since many of its provisions are technically impossible to implement today. For example, the E.U. insists that algorithmic bias is a no-go. Despite lots of progress, however, nobody yet knows how to eliminate bias in AI entirely. Even detecting bias, extreme cases aside, can be tricky. And this is just one of many areas where the E.U. legislation is difficult to apply.

Only time will tell whether the European AI law will be future-proof. Having such a policy does set the E.U. apart from the U.S., though, which lacks this sort of law. The regulation also makes the E.U. more attractive as an AI hub for other countries. Although the E.U.'s law will certainly keep developing, its general direction and priorities are clear. This makes it easier for businesses to set up shop and plan for the long term. In contrast, AI businesses in the U.S. face much more uncertainty, since new laws could go one way or another.

Regardless of whether a region has early-stage legislation in place or not, governments must start exploring their options. Pilot projects might enable just this, as Clark and Whittlestone point out in their paper. For example, governments could start assessing biases in AI models and improving the data sets that biased models train on. The rationale is simple: A less biased data set yields less biased models. This could be achieved through in-house research and literature review, or by consulting with industry experts on a regular basis.
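As a concrete starting point for such a pilot, here is a minimal sketch of one common bias check, the demographic parity gap: the difference in positive-outcome rates between groups. The choice of metric and the toy data are my assumptions, not the paper's.

```python
# Minimal bias-check sketch: demographic parity gap on labeled data.
from collections import defaultdict

def parity_gap(samples):
    """samples: iterable of (group, positive_outcome) pairs."""
    pos, tot = defaultdict(int), defaultdict(int)
    for group, positive in samples:
        tot[group] += 1
        pos[group] += int(positive)
    rates = [pos[g] / tot[g] for g in tot]
    return max(rates) - min(rates)

data = ([("A", True)] * 70 + [("A", False)] * 30
        + [("B", True)] * 45 + [("B", False)] * 55)
print(f"Parity gap: {parity_gap(data):.2f}")  # 0.25: group A is favored
```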

A pilot project of monitoring AI progress in different economic sectors would help spot how AI is impacting different industries in a more detailed fashion. This would help the government prepare for the future. For example, this data could allow governments to create programs to retrain specific groups of workers whose jobs might disappear in a few years.

A third idea would be to host competitions in economically important domains, such as employment, housing, legal contracts, or cybersecurity. These contests would provide an objective view of the state of AI because contestants would likely use various state-of-the-art technologies. By repeatedly hosting the same type of competition, governments could also track progress from one year to the next and anticipate the speed of growth in the years to come.
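Tracking that progress could be as simple as computing the compound annual growth of winning scores across editions of the same contest. The numbers below are invented for illustration.

```python
# Illustrative sketch: compound annual growth rate of winning scores.
def annual_growth_rate(scores_by_year):
    years = sorted(scores_by_year)
    first, last = scores_by_year[years[0]], scores_by_year[years[-1]]
    return (last / first) ** (1 / (years[-1] - years[0])) - 1

winners = {2019: 61.0, 2020: 68.5, 2021: 77.0}  # hypothetical benchmark scores
print(f"{annual_growth_rate(winners):.1%} per year")  # ~12.4% per year
```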


Impending Doom?

Clark and Whittlestone paint a pretty dark picture of the future if governments don’t step up and start checking in on the state of AI. They predict that private companies will care more about their bottom line than about the greater good, thus making AI a means to exploitation and social injustice. 

In addition, in the moments when governments are forced to step in, they might make hurried and badly informed decisions, potentially leading to disastrous consequences. Finally, the private sector might set its own industry standards; these, however, would care less about the greater good and more about the power of individual companies.

These might be some dark forecasts, but they would likely only come to pass if governments do nothing at all. Fortunately, governments are waking up to the bell that the OECD has been ringing for a while now. As AI becomes more and more economically important, it’s unlikely that governments will continue to turn a blind eye to its developments.


Harnessing Progress Fairly

For the U.S. in particular, the time has come to take action. Chinese companies have historically worked closely with their government, so it seems unreasonable to assume that the Chinese government hasn't kept an eye on its many AI-powered companies. Europe has drafted an ambitious piece of legislation, which it plans to improve in the future.

AI might have incredibly good consequences for healthcare, carbon emissions, sustainable crops, and economic growth. But it can also do lots of harm, for example as a tool for repression and monitoring of civilians. The latter would mean that all power is concentrated in a small class of elites, instead of getting AI to empower each and every citizen. It’s not too late to act now and start some pilot projects. The U.S. government can use its results to draft legislation for the years to come and position itself as a global leader once again. 

But it needs to act. Now. The bell is ringing.
