Congressional Report on AI Regulation

Artificial intelligence comes with much promise, but the same capabilities that can do good can also be misused. While "narrow AI" assistants such as Siri or Alexa already prompt serious discussions about user privacy, those concerns pale in comparison to still-hypothetical general AI machines that could "pass" for (or even surpass) human interaction and decision-making. The Conversation discusses a recent report from a congressional committee on the regulation of these emerging technologies.

The report acknowledges that the U.S. cannot claim dominance in this field, owing to shrinking research budgets, while Russia and China are taking the lead and investing heavily in the development of AI technologies. Against this backdrop, the U.S. government is concerned about AI being used in surveillance to unlawfully access private information and spy on users. Even now, while the technology is in its infancy, algorithms have already been co-opted to sow confusion, particularly through targeted content on social media.

Yet, as The Conversation also notes, AI offers far more than doom and gloom, citing LipNet, an AI lip-reading program from Oxford that boasts an impressive 93.4 percent accuracy rate. Compared with the best human lip-readers, whose accuracy hovers between 20 and 60 percent, this development is welcome news for people with hearing and speech impairments. In the wrong hands, though, the same technology could be used to surveil both the public at large and specific targets.

Bias is another issue that rears its head in AI technology. The humans who create the inputs for these systems have biases of their own, and those biases are reflected in allegedly impartial AI-powered decisions. This is particularly worrisome because computer-generated decisions carry a veneer of fairness.

For instance, some courts use an AI program called COMPAS to help decide whether criminal defendants should be released on bail. Evidence suggests that the program exhibits a dangerous bias against black defendants. As these predictive technologies gain wider adoption, decision makers should be vigilant that attempts to eliminate existing bias do not simply end up reinforcing it, The Conversation notes.
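To make the notion of bias concrete, here is a minimal, hypothetical sketch (not COMPAS's actual methodology or data) of one way an auditor might check a risk tool's predictions for group-level disparity: comparing false positive rates, i.e., how often people who never reoffend are nonetheless flagged as high risk, across demographic groups.

```python
# Illustrative sketch only: made-up data and a simplified audit,
# not the COMPAS model or its real evaluation.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True, False),
    ("group_a", True, True),
    ("group_a", False, False),
    ("group_b", True, False),
    ("group_b", False, False),
    ("group_b", False, True),
]

false_positives = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high_risk:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.2f}")
```

If one group's false positive rate is consistently higher than another's, the tool is producing the kind of disparity described above, even if its overall accuracy looks acceptable.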

Congress faces the tightrope-walking task of regulating AI without hindering innovation. The Conversation concludes that the report walks that fine line, calling for more AI funding while urging legislators to exercise restraint in regulating developing technologies. Moreover, The Conversation adds, people should think more broadly about AI, examining bias and understanding what the technology can and cannot do.
