Retired four-star general Gustave Perna led Operation Warp Speed, the AI-enabled U.S. initiative to produce and distribute the first coronavirus vaccines. A journalist recently asked him what other government programs could be improved by incorporating AI. Perna spared no words: “Everything,” he said.
Operation Warp Speed was an emergency program, mounted in the midst of a global pandemic. It required the ability to process and sort nearly unimaginable amounts of data in record time, and AI was the key to untangling what otherwise would have been a logistical nightmare. The experience impressed Perna enough for him to recommend AI’s capabilities for all manner of government programs, from the IRS to the organ donation system.
The potential for AI to solve gnarly logistical problems is immense, and it may be true, as Perna says, that “everything” would be made better with AI support. But the technology also comes with serious concerns and dangers. Data privacy, market volatility, deepfakes and the erosion of our ability to discern reliable information from unreliable reports are just some of the risks of incorporating AI into the systems and structures that govern so much of our lives.
Tech companies that are integrating AI into their workflows and products are carrying much of the burden of implementing ethical and responsible practices around its use, and that is no small task. Built In asked industry leaders how they’re approaching responsible use of AI. They shared tips about guardrails, communication and explainability.
“Everything” might not be made better with AI, but these guardrails and guidelines can help us harness its power for good.
Analytics8 is a data and analytics consultancy that helps companies translate their data into meaningful and actionable information so they can stay ahead in a rapidly changing world.
What are the main practices you employ to create a responsible AI culture, whether that’s through transparent policies, employee training, etc.? How have these practices proven beneficial?
Education, policy, and communication.
First, employees must have a basic understanding of how AI systems function: what data is used, where the data is processed and stored, and who owns the results. In the case of generative AI, employees should also understand that results often require extra scrutiny before being accepted unconditionally.
Next, create a position on AI that reflects your company’s core values, and update your policies accordingly. Include diverse perspectives from all levels of the organization, both to ensure the needs of all use cases are represented and to make employees feel heard and invested in the process. This increases the likelihood of adoption.
Finally, communicate your AI policy to employees, but not in terms of additional rules and bureaucracy. Instead, come from a position of confidence and explain how a responsible AI culture aligns with and advances the company’s vision and values.
Why is it critical for your organization and others in your industry to establish a responsible AI culture?
As a leader in the data and analytics industry, Analytics8 has an obligation to set a precedent for how AI should be integrated and utilized responsibly. When working with customers, we emphasize that AI should not be thought of in isolation but rather as part of a broader data culture underpinned by a comprehensive data strategy. Such a strategy should adopt a holistic approach that prioritizes ethical considerations, governance and a blend of analytical capabilities.
“When working with customers, we emphasize that AI should not be thought of in isolation but rather as part of a broader data culture underpinned by a comprehensive data strategy.”
This approach ensures that the industry moves forward with a balanced perspective on AI, recognizing its potential while also acknowledging the broader context in which it operates. This is essential for fostering a culture that values responsible AI use, ensuring that its deployment benefits not just individual organizations but society as a whole.
What is the biggest lesson you or your team has learned as a result of establishing AI governance?
Many employees harbor distrust and apprehension toward AI, but we’ve witnessed this skepticism quickly diminish within customer organizations that implement and evangelize a data strategy and governance policies that speak to AI use cases. Employees often set aside their concerns, knowing they are operating in an environment with ethical, well-thought-out and well-communicated guardrails.
Integral Ad Science (IAS) is a leading global media measurement and optimization platform that delivers actionable data to drive superior results for advertisers, publishers, and media platforms.
What are the main practices you employ to create a responsible AI culture, whether that’s through transparent policies, employee training, etc.? How have these practices proven beneficial?
At IAS, we maximize ROI with AI-driven measurement and optimization. AI is revolutionizing advertising by enabling enhanced targeting, personalization, automation, content generation and data analysis. However, it's essential to ensure ethical use of AI and maintain a balance between automation and human creativity in advertising processes.
Our responsibility starts at the very first step of the AI solution-building process: usage of data from our customers and partners. We have strict policies and permissioning in place that are adhered to by our employees, who are regularly trained. The next step is to build ML models that are explainable (they allow us to understand and interpret the model’s predictions) and free of bias. It is extremely important for us to be able to explain to our customers how our models build the decision boundaries that affect the outcome.
“It is extremely important for us to be able to explain to our customers how our models build the decision boundaries that affect the outcome.”
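The interview doesn’t detail IAS’s explainability tooling, but one widely used model-agnostic technique is permutation importance, which scores each feature by how much randomly shuffling it degrades held-out accuracy. A minimal sketch in Python, assuming scikit-learn and entirely hypothetical stand-in features (viewability, dwell_time, page_depth):

```python
# Sketch of permutation importance as one explainability technique.
# Data and feature names are illustrative, not IAS's real inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=(n, 3))  # hypothetical: viewability, dwell_time, page_depth
y = (0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column and measure the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(
    ["viewability", "dwell_time", "page_depth"],
    result.importances_mean,
    result.importances_std,
):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Features whose shuffling barely moves accuracy contribute little to the decision boundary, which is the kind of customer-facing explanation described in the quote above.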
We also retrain our models regularly to overcome data and topic drift, avoid model quality degradation and take advantage of newer, state-of-the-art algorithms. In addition, we build in observability to track input data usage, model output usage and the effect of model outcomes on customer success, and to generate useful insights. We define goals and guardrails, and we strive to meet the goals we set for our customers without compromising quality, user experience or system health.
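The answer doesn’t name a specific drift metric, but one common way to operationalize this kind of monitoring is the population stability index (PSI), which compares a feature’s current distribution to its training-time baseline. A minimal sketch, with an illustrative 0.2 retraining threshold:

```python
# Sketch of drift detection via the population stability index (PSI).
# The threshold and synthetic data are illustrative only.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a recent sample of one feature."""
    # Bin edges come from the baseline (training-time) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Floor the fractions to avoid log(0).
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature at training time
recent = rng.normal(0.4, 1.2, 10_000)    # same feature after drift

score = psi(baseline, recent)
# A common rule of thumb: PSI above 0.2 suggests enough drift to retrain.
print(f"PSI = {score:.3f} -> {'retrain' if score > 0.2 else 'ok'}")
```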
Why is it critical for your organization and others in your industry to establish a responsible AI culture?
Training employees to be aware of data usage requirements helps us move fast on building solutions with those restrictions in mind. Making ML models explainable helps establish confidence, not only among our customers but among our scientists as well. Scientists’ confidence in the decisions the model makes allows us to calibrate it against unseen scenarios.
Estimation models inherently carry bias that is traded off against variance, but we keep that bias in check by using diverse and representative datasets so our solutions are inclusive and equitable. By implementing tracking for all our inputs and model outputs, we ensure our compliance teams can easily audit our data usage. Continuous evaluation and retraining of our models at a regular cadence helps us enhance the quality of our solutions and account for shifts in the input data caused by changing user behavior and seasonality.
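One concrete, hypothetical way to implement the kind of bias check described here is to evaluate the same model separately on each data slice and flag large gaps. A minimal sketch, assuming scikit-learn, with made-up region slices standing in for real segments:

```python
# Sketch of a per-slice bias check: score one model on each group and
# flag a wide spread. Slices, features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 6000
X = rng.normal(size=(n, 4))
group = rng.choice(["region_a", "region_b", "region_c"], size=n)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Per-slice accuracy; a wide spread would prompt a closer look at the data.
scores = {g: accuracy_score(y_te[g_te == g], model.predict(X_te[g_te == g]))
          for g in np.unique(g_te)}
spread = max(scores.values()) - min(scores.values())
print(scores, f"spread={spread:.3f}")
```

In an audit setting, the per-slice scores and the spread would be the artifacts logged for compliance review, alongside the input and output tracking the answer describes.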
Because the AI landscape is changing at a fast pace, we strive to use state-of-the-art AI models to help our customers leverage best-in-class solutions. These are critical steps any trustworthy AI-enabled firm should take to stay reputable and productive and to generate sustained revenue growth.