Trustworthiness
In this final blog of our Co-Creation series, we look at the role of trustworthiness in AI with regard to disinformation in media. In the media sector, accuracy was valued as the most important parameter for the trustworthiness of AI, prioritised by 53% of participants; this was followed by fairness with 25% and robustness close behind in third with 22%.
Accuracy
A total of 29 user requirements for accuracy were proposed by the participants in Austria, Greece and Lithuania, which were clustered into seven themes: user-friendliness, review and verification of data, continuous update of data foundation, legislation, AI's self-reflection, explainability & uniformity.
User-Friendliness
AI should help users utilise the technology effectively to obtain accurate results. This can be done by encouraging more detailed and precise prompts, which lead to more accurate outcomes. Additionally, participants proposed that AI tools should use natural language processing techniques so that they can comprehend the inputs of ordinary users and understand language nuances. Finally, AI should inform the user when it provides potentially sensitive material, which the user can then evaluate according to potential legal and social risks.
Review and Verification of Data
AI systems should have review processes built in to evaluate the reputation of sources and data. By publicising the sources used by AI, users can evaluate them themselves or defer to credentialed evaluators.
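As a purely illustrative sketch of how an AI answer could publish the sources behind it so that users or credentialed evaluators can check them, the short Python fragment below bundles each output with its references; the structure, field names and placeholder URL are assumptions made for illustration, not something proposed by participants.

```python
# Minimal sketch: return an AI answer together with the sources behind it,
# so readers can verify them. Field names and the placeholder URL are
# illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class SourcedAnswer:
    text: str                                          # the generated answer
    sources: list[str] = field(default_factory=list)   # citations or URLs used

    def verifiable(self) -> bool:
        """An answer with no published sources cannot be checked by the reader."""
        return bool(self.sources)

answer = SourcedAnswer(
    text="Example claim generated by the tool.",
    sources=["https://example.org/original-report"],
)
print(answer.verifiable())  # True: the reader can follow the citation
```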
Continuous Update of Data Foundation
The constantly changing information in news was a major concern for the media sector, requiring that AI algorithms be continuously updated (which is especially important with regard to new legal regulations and rulings). Additionally, AI would benefit from allowing users to update their own data and to see when the AI's data was last updated.
Legislation
Particularly with regard to disinformation, it is important that AI adheres to ethical standards, especially the European Standard Benchmarking. Liability is also an important consideration for AI use, such as systems reminding users that they remain liable for content based on AI.
AI's Self-Reflection
AI would benefit from presenting the level of accuracy it provides in a visual output, so that users can gauge the reliability of the tool.
Explainability
AI needs to be transparent, thereby allowing users to evaluate the accuracy of its output.
Explainability is understood as AI being fully transparent: sources are made available to the user, and the AI explains how sources and data have been weighted and cleaned, how bias has been identified, and the other reasoning behind its work.
Uniformity
Users want the AI to abide by a minimum data threshold. If the tool were able to reject a request when there is insufficient data, this would limit the risk of AI presenting disputable claims or being misleading.
Fairness
Participants identified nine user requirement categories for fairness, which were: update and review, compliance and media ethics, training data, explainability, legislation, digital literacy, accessibility, protection against outcome bias & features.
Update and Review
Considering the constantly changing nature of news and disinformation, it is essential that AI tools have mechanisms to regularly update and review information; additionally, users should be informed about these update and review processes.
Compliance and Media Ethics
AI tools must be developed using training data that complies with media ethics standards, such as intellectual property rights, duty of care, and privacy compliance.
Training Data
AI training data should be as diverse as possible, with relevant factors including diversity of age, gender, education, occupation, origin, and socio-economic status. Additionally, AI should receive information from diverse media forms such as videos, pictures, posts, and print in order to fully understand the media landscape. Finally, there should be full transparency about the tools' training data diversity.
Explainability
The user needs to have access to an explanation about how AI came to its conclusions, which could show arguments for or against, the sources used, the logic of the approach, what data has been used, and how the data has been weighted.
Legislation
Participants want AI to comply with the GDPR, ensuring that data is encrypted, allowing instant data downloads and allowing users to delete accounts. In order to enforce compliance, it was suggested that users be able to obtain redress in case of AI issues, by establishing an AI citizens' advocacy, in line with the way the European Ombudsman can investigate complaints.
Digital Literacy
To ensure digital literacy and safeguard users, people should first be educated on how to use AI and avoid manipulation. Secondly, AI users should learn that they bear responsibility for what they create with AI, since user prompts can generate misinformation. Finally, it is important that AI is accessible to all, including people in remote areas, the less privileged, and those with lower education levels.
Accessibility
Accessibility can be ensured through the use of plain language, multiple formats, and accessibility features.
Protection Against Outcome Bias
Participants want ethical guidelines implemented in AI tools to avoid reinforcing stereotypes that reproduce human bias, and suggest that AI should recognise unethical requests that could harm certain groups or influence sensitive topics.
Features
AI should be able to present different points of view on the requested topic and identify whether its output is based on sources that are in some way political and, if so, whether they sit on the right or left of the political spectrum. It should also be clear which output is generated by AI, and some AI output (e.g. images featuring children) should only be available to certain verified users. At the same time, AI should be able to consider the impact that the specific AI tool has on the sector.
Robustness
Participants identified eight user requirement categories for robustness, which were: code of conduct, data foundation and training, security and data safety, user-friendliness, regulations and legal consequences, transparency, changing versions & accessibility.
Code of Conduct
AI tools should comply with ethical rules to prevent malicious practices, with specific codes of conduct depending on the field of application.
Data Foundation and Training
AI tools need to be transparent about the limitations of their data, for example by revealing when the data was last updated, disclosing potential limitations stemming from the AI developers or the various sources used, and combating disinformation by drawing on different media sites. Participants suggest that AI should be able to identify AI-generated content and filter it out when training new AI, as well as use diverse and unexpected information in training data.
Security and Data Safety
AI systems should be able to mitigate vulnerabilities in case of cyber-attacks to avoid severe consequences for the user. Another aspect of data safety is privacy: AI systems should give users control over uploaded information, allowing them to delete any information and to control who has access to it. A further approach to data safety is establishing a dual data protection system, such as encryption, access controls, intrusion detection systems and secure data-sharing protocols, to avoid sensitive information being spread unintentionally.
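As a rough, non-prescriptive sketch of what two of these measures could look like in practice, the Python fragment below combines encryption at rest (via the cryptography package) with a simple role-based access check and user-initiated deletion; the roles, data store and function names are assumptions made for illustration.

```python
# Minimal sketch: encrypt user uploads at rest, gate reads by role, and let
# users delete their own data. The in-memory store and role names are
# illustrative assumptions; a real system would use a key vault and database.
from cryptography.fernet import Fernet

ALLOWED_ROLES = {"owner", "auditor"}   # hypothetical access-control policy

key = Fernet.generate_key()            # in practice, managed in a key vault
cipher = Fernet(key)
store: dict[str, bytes] = {}           # stand-in for an encrypted database

def upload(doc_id: str, text: str) -> None:
    """Encrypt user-provided text before it is stored."""
    store[doc_id] = cipher.encrypt(text.encode("utf-8"))

def read(doc_id: str, role: str) -> str:
    """Decrypt only for roles the user has authorised."""
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{role}' may not access {doc_id}")
    return cipher.decrypt(store[doc_id]).decode("utf-8")

def delete(doc_id: str) -> None:
    """Let users remove their own uploaded information."""
    store.pop(doc_id, None)

upload("draft-article", "unpublished source notes")
print(read("draft-article", "owner"))  # decrypts successfully for the owner
delete("draft-article")                # user-initiated deletion
```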
User-Friendliness
When considering robustness, user-friendliness is understood as AI systems providing instructions for the user on how data is stored and used, for what purpose, and who can access it.
Regulations and Legal Consequences
As well as complying with GDPR and the EU AI Act, AI should also comply with further global good practices from unions and parliaments. Standards should be clear and there should be legal consequences in case of data breaches. AI should always be updated with knowledge on new laws, regulations, and certifications.
Transparency
Transparency requires providing a clear statement to users about when a new AI version is applied and what the updates in a new version entail. The user should be made aware of the potential interests of large companies, to understand whether any lobbying is potentially taking place.
Changing Versions
With version changes, humans should be in the loop: able to switch between new and old versions, to manually assess the reliability of a new version, and to keep working in the old version if they determine it is preferable.
Accessibility
Participants want AI systems to make real-time updates, and the tools should adapt to people with dyslexia by applying readable fonts, larger spacing, and simple layouts and writing styles. Finally, participants suggested that the AI system should be able to work offline, which is especially critical in situations of internet outage or cyber-attacks.
That concludes our Co-Creation series, hearing valuable feedback from a range of individuals in multiple sectors and nations across the European Union. We hope you have found this series to be both intriguing and insightful.