Solving 9 Challenges to Responsible AI With an IPO Framework

August 23, 2023

I recently had the opportunity to participate in a conversation with Nicole Alexander, head of web marketing at Meta, on stage at Black Tech Week.

The discussion was insightful, and we received some thoughtful questions from the audience. Afterward, I wrote down my thoughts from the conversation on building “Responsible AI”.

A Responsible AI framework often starts with a list of principles. (And yes, I generated this list with the help of GenAI tools, along with my own input.)

Responsible AI Principles

  • Fairness: AI systems should avoid bias and discrimination, and should be equally accessible and beneficial to all.
  • Transparency: AI systems should be transparent and explainable so users understand how decisions are being made.
  • Privacy and Security: AI systems should respect the privacy of individuals, and should have robust security measures in place to prevent misuse.
  • Reliability and Safety: AI systems should be reliable and safe to use, and any risks should be carefully managed and mitigated.
  • Accountability: Those who develop and use AI systems should be held accountable for their impact on individuals and society.
  • Inclusivity: AI systems should take into account the diversity of people, cultures, and contexts in which they will be used.

Challenges to implementing responsible AI

In practice, responsible AI is difficult to achieve. Here are the 9 key challenges we must address:

  1. Users don't know where, when and how they are creating and sharing their data.  
  2. Users have no transparency on how their data is processed and have no say in it.
  3. Regulation and oversight have not caught up yet. Many federal agencies are working on developing standards in the U.S., but there is no standard guidance yet because the topic is still evolving.
  4. There is little regulation addressing cross-border data sharing and transparency. 
  5. Systems that understand and generate responses often lack adequate historical and up-to-date data, so their answers are not timely or complete. Case in point—an AI system that doesn’t understand the complete history of the United States cannot meaningfully respond to “Black Lives Matter” topics without that context.
  6. Companies do not have an AI transparency policy on their sites or apps.
  7. Data sits in multiple formats. For example, a video includes text, audio, images, etc. Capturing some elements but not all could create discrepancies in the output of the AI system. 
  8. There is a big gap in AI literacy between the “doers” and those who want to regulate AI. Our congressional leaders asked deeply embarrassing questions and made uninformed comments in many of their Q&A sessions with industry leaders. For these leaders to make laws without understanding how AI works is folly.
  9. There is no objective way to quantify a ‘responsible AI system’ yet with a score or a coding system.

Solving issues with responsible AI using an IPO Framework

A way of addressing the problems above would be to employ an IPO framework—Input, Process, and Output. 

Input

Source data with permission. Is this data set growing, shrinking or the same over time? Have a data governance policy that addresses data timeliness. 
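
As a sketch of what a timeliness rule in a data governance policy might look like in code (the one-year threshold and function names are illustrative assumptions, not a standard):

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy threshold: records older than one year are flagged stale.
MAX_AGE = timedelta(days=365)

def is_timely(record_timestamp: datetime, now: datetime = None) -> bool:
    """Return True if a record is within the governance policy's age limit."""
    now = now or datetime.now(timezone.utc)
    return now - record_timestamp <= MAX_AGE
```

A check like this can run as part of ingestion, so stale records are surfaced before they ever reach the model.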

Ensure the data sourced is inclusive and diverse where relevant. Diverse teams bring diverse perspectives and diverse data, so who is sourcing the data matters as much as what you are sourcing.

Post an AI transparency policy on your site. This should be similar to a privacy policy that describes when and if you are using data in an AI system.

Know your data. We are producing data all the time, with every breath. Becoming data-aware and educated about what data you are creating and where it ends up is important.

Remove bias. Develop guidelines on what to exclude from input to remove bias.
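
For instance, one simple exclusion guideline is stripping protected attributes before data ever reaches the model. This is only a sketch; the field names are hypothetical:

```python
# Hypothetical list of protected fields to exclude from model input.
PROTECTED_FIELDS = {"race", "gender", "age", "religion", "zip_code"}

def scrub_record(record: dict) -> dict:
    """Return a copy of the record with protected fields removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

applicant = {"income": 85000, "credit_score": 710, "race": "X", "zip_code": "94110"}
print(scrub_record(applicant))  # -> {'income': 85000, 'credit_score': 710}
```

Note that exclusion alone is not sufficient—proxies like zip code can still encode protected attributes—which is why the testing step below matters too.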

Perform regular audits. Bring accountability by having regular audits of system IPO from data officers.

Promote a data-first culture. Proactively work to establish a culture within the company that values data and understands its importance. This includes training other employees on data-related topics, and ensuring data is used responsibly and ethically.

Process

Design transparently. Systems that process data should be able to distinguish, and be transparent about, whether a statement is a fact, an opinion, or generated content.

A fake ad can be identified as fake if its ownership is disclosed, along with whether the ad presents fact, opinion, or a GenAI-generated statement.

Security. Know who has access to the data, so rogue parties can't reach it.

Test for bias. A loan approval system should never include race as an input, and it should be tested to confirm that people of all races have similar outcomes. Ideally, we need third-party systems to test for bias and rate outputs.
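
A minimal version of such a test might compare approval rates across groups—a rough demographic-parity check. The groups and numbers here are made up for illustration:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs. Returns approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups (0.0 = parity)."""
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates)

# Group A is approved 2/3 of the time, group B 1/3 — a gap worth investigating.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(parity_gap(outcomes))
```

A third-party auditor could run exactly this kind of check against a system's logged decisions without needing access to the model itself.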

Qualify timeliness. State the time period from which the data was processed.

Data privacy. Ensure the company's data practices respect the privacy of customers and employees, and that data is stored and used securely.

Output

Quote your sources in the output.

Tag the output. Build a system to identify or tag a fact vs. a generated statement.
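
One way to sketch such tagging is a small provenance record attached to every statement. The schema below is a hypothetical illustration, not an established standard:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class TaggedStatement:
    text: str
    kind: str                     # "fact", "opinion", or "generated"
    source: Optional[str] = None  # citation for facts; None for generated text

def tag_fact(text: str, source: str) -> TaggedStatement:
    """Tag a statement that can be traced to a named source."""
    return TaggedStatement(text=text, kind="fact", source=source)

def tag_generated(text: str) -> TaggedStatement:
    """Tag a statement produced by a generative model, with no source."""
    return TaggedStatement(text=text, kind="generated")

tagged = tag_fact("Vyrill launched in 2017.", source="About the Author")
print(asdict(tagged))
```

Carrying a tag like this through the output pipeline is what makes the "quote your sources" step above enforceable rather than aspirational.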

Third-party testing. Develop and mention third-party testing to improve trust. I expect badges to come from third parties that perform testing and auditing of AI systems. 

Conclusion: Responsible innovation is the future

Overall, we need to improve AI literacy across users, companies and governments to create balanced policies that benefit all parties while encouraging innovation.

Knowing where your user data is being collected, processed and leveraged is going to lead to more responsible AI-led innovation.

I would love to hear your comments on the topic.

Please add to the conversation.  

About the Author

Ajay Bam is the CEO and Co-founder at Vyrill, a first-of-its-kind video intelligence company launched in 2017 through UC Berkeley’s Skydeck Incubator program. Vyrill helps brands and shoppers find the “moments that matter” inside videos. Its AI-powered “In-Video” search technology analyzes & shares insights hidden within videos to improve personalization, SEO, and conversion. Before Vyrill, Ajay launched Boston-based, mobile shopping app company Modiv Media. He is a proven and accomplished product management professional, entrepreneurial thinker, and innovator with more than 13 years of experience leading startups and world-class brands.
