Transparency in AI: Proposal for a Universal “Is This an AI System?” Consumer Signal

When interacting with a chatbot, it is hard to know whether there is a human on the other end or whether we are dealing with an Artificial Intelligence (AI) system. Or, if we are watching a video, is it a deepfake or authentic footage? This got me thinking that maybe there needs to be a universal “411” (i.e., request for information) command or signal that we can give to a chatbot, a voice recognition system, or a video that, in effect, says “Timeout. I want to know whether this is an AI system or a human, and whether what I am hearing or seeing is authentic vs. machine-generated.”

Maybe think of this as the universal “yellow card” used in soccer, one that consumers can pull out to stop play at any time with respect to an AI system. Or the equivalent of dialing 411 on the phone when you want information (at least that is what you do in the US), so this is the “411” for AI. Think of it as a way to trigger, in real time, the equivalent of the “right to know” that you have under GDPR or CPRA, except that instead of being tied to the collection and processing of personal data, this right applies to AI. Anyway, maybe someone will come up with a better analogy.

But let’s first step back and briefly discuss AI ethics vis-à-vis the concept of “Trustworthy AI,” and then drill down into what proposed laws are asking for in terms of humans interfacing with AI systems.

The field of AI ethics has emerged as a response to the growing concern regarding the impact of AI and the acknowledgment that while it can deliver great gifts, it could also represent a modern “Pandora’s box.” AI ethics is defined as the “psychological, social, and political impact of AI.” Ethical AI aims to utilize AI in a lawful manner that adheres to a set of ethical principles: respecting human dignity and autonomy, preventing harm, enabling fairness, and being transparent and explainable. It must also meet a robust set of technical and social requirements that help ensure AI performs safely, securely, and reliably, without causing unintended adverse impacts. The European Commission says that if an AI system meets this “overarching value framework” and takes a “human-centric approach” (i.e., designed, developed, and used for the betterment of humankind), then it can be considered “trustworthy.”

One of the requirements for Trustworthy AI is transparency. From an AI perspective, transparency means that the capabilities and purposes of AI systems need to be clearly communicated, and that their decisions and any output be explainable to those affected.

Drilling down even more, the European Commission’s “Ethics Guidelines for Trustworthy AI” recommends, specific to communication, that:

“AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system. This entails that AI systems must be identifiable as such. In addition, the option to decide against this interaction in favour of human interaction should be provided where needed to ensure compliance with fundamental rights. Beyond this, the AI system’s capabilities and limitations should be communicated to AI practitioners or end-users in a manner appropriate to the use case at hand. This could encompass communication of the AI system's level of accuracy, as well as its limitations.”

This recommendation was subsequently reflected in the EU’s Artificial Intelligence Act, proposed in 2021. Specifically, Article 52 says:

“Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system”

And

“Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.”

This is great, but no doubt what will happen is that this “AI notice” will either (a) be buried in the equivalent of a multi-page privacy notice that no one has time to read, (b) flash before us in small print and then disappear, or (c) have been communicated to us in a prior visit or use of the AI system, only to be forgotten by the time we return. Furthermore, there needs to be a way to get this information in real time.

So my proposal is that industry (or some regulator like the EU) should come up with some sort of universal “yellow card” that a consumer can pull out of their back pocket at any time to get the 411. Basically, this consumer-initiated signal would say, “Time out. Am I dealing with a human or authentic content, or am I dealing with an AI system or something that has been machine-generated?”

So in the case of a chatbot, i.e., something you interact with by typing text into the system, maybe at any time you can type

411ai

and the chatbot must stop and respond with whether or not there is really a human behind the scenes.
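
As a minimal sketch of how a text chatbot could honor such a signal (the “411ai” command is the proposal above; everything else, including the function names, is hypothetical):

```python
def generate_reply(text: str) -> str:
    # Stand-in for the chatbot's normal response logic.
    return f"Echo: {text}"


def handle_message(text: str) -> str:
    """Intercept the proposed universal '411ai' signal before any other processing."""
    if text.strip().lower() == "411ai":
        return ("You are interacting with an automated AI system, not a human. "
                "Reply 'human' if you would like to be connected to a person.")
    return generate_reply(text)


if __name__ == "__main__":
    print(handle_message("411ai"))
    print(handle_message("What is my account balance?"))
```

The key design point in this sketch is that the check happens before the message reaches the normal response logic, so the disclosure cannot be overridden or garbled by the AI itself.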

Or, if you are dealing with some sort of voice system and you are not sure whether it is really a human, then you can say at any time

411ai

and it must stop what it is doing and tell you whether or not it is an automated system.

It must answer whether it is a computer or a human that you are dealing with. Then it should give you the option to request to interact with a human. And maybe, if it is an AI system, it also asks whether you want to see the AI equivalent of a nutrition label (one geared towards consumers).
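
To make the “nutrition label” idea concrete, here is a hypothetical sketch of what a consumer-facing disclosure might contain in response to a 411ai request (the field names are illustrative only; no standard schema exists today):

```python
# Hypothetical consumer-facing "AI nutrition label" returned on a 411ai request.
ai_nutrition_label = {
    "is_ai_system": True,
    "provider": "Example Corp",
    "purpose": "Customer-service chatbot for billing questions",
    "human_in_the_loop": False,
    "human_handoff_available": True,
    "content_machine_generated": True,  # e.g., synthetic voice or video
    "limitations": "May give inaccurate answers; cannot take payments.",
}

# Render the label in plain language for the consumer.
for field, value in ai_nutrition_label.items():
    print(f"{field}: {value}")
```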

For videos, I would recommend adding a 411ai button or some universal symbol for it (or just the characters “411” as a symbol), à la the settings gear icon.

That’s my idea: a universal way for a consumer to throw a yellow card, stop AI in its tracks, and get the 411 lowdown on whether a human is in the loop, or whether a video is authentic or a deepfake. And if it turns out you are dealing with an AI system, it then gives you the option to say, “Hey, I want to speak with a human.”
