By Scott Friedman
Big Valley’s “AI Disclosure and Transparency: Closing the Trust Gap” report was designed to raise awareness of AI dialogues and to inspire business, marketing, and communications leaders to engage in more conversation about the use of AI in their product and business decision-making.
One key question the report focused on: What expectations do customers have for brands to be transparent in disclosing how they use AI in their marketing and communications?
The answer to this question is crucial, not only for brands that seek to maintain trust with customers, employees, partners, and the broader public, but also for reinforcing the need for ethical and fair AI practices generally.
In this rapidly evolving moment, brands are unclear about how to navigate the complexities of GenAI content creation, and about how to deploy that content while safeguarding their brand reputation and fostering an environment of accountability and transparency. In parallel, there are no obvious industry best practices for being transparent about GenAI content creation, and the nascent dialogue on this topic will likely keep evolving for at least the next several years.
Big Valley’s report delivers data from a range of perspectives illustrating the need for more AI disclosure, and one thing is clear: many leaders and practitioners are asking how, where, and when to do it.
This is a huge question, because proactively labeling the use of AI in generating content comes with both potential benefits and unintended consequences. To address this challenge in the short term, Big Valley’s report also outlines our perspective on AI Disclosure Principles that can help business, communications, and marketing leaders navigate this complex scenario while maintaining, and even increasing, trust with stakeholders.
We hope these principles inspire good thinking and dialogue with your brand stakeholders, and we welcome additional thoughts about how companies can maintain, and even strengthen, trust in their brands by using AI in accountable and transparent ways.
Big Valley Marketing AI Disclosure Principles
Align Disclosure with Brand Strategy
Ensure that public disclosures about AI are framed in a way that enhances the brand strategy, demonstrating how GenAI content supports brand values, identity, and experiences. Make sure your stakeholders understand the value AI-generated content delivers for them within this context.
Reinforce Human Oversight
Companies should establish and publicly share a clear, overarching standard to ensure content is vetted by humans and adheres to brand guidelines and corporate standards – maintaining consistency and quality as more AI content is created.
Establish Disclosure Thresholds
Set internal thresholds or tiers for “substantial use” of generative AI to create alignment on when AI noticeably alters the meaning of content, such as by manipulating images or translating text, and use this alignment to establish guidelines for authoring and labeling AI content.
Drive Authorship Clarity
Decisions regarding whether and how to label AI-generated content can significantly influence stakeholder trust. Your approach to authorship labeling should emphasize clarity about the extent of AI involvement in content creation, using precise and consistent terminology, e.g., AI-generated, AI-influenced, AI-informed, AI-augmented, AI-manipulated. The choice of label(s) should align with your brand strategy, ideally strengthening, rather than undermining, your brand identity.
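To make the two preceding principles concrete, here is a minimal sketch, in Python, of how a team might encode disclosure tiers and authorship labels in one shared place so that authoring workflows apply them consistently. The tier names, thresholds, and the ai_share measure are hypothetical illustrations, not recommendations from the report.

```python
from dataclasses import dataclass
from enum import Enum


class DisclosureLabel(Enum):
    """Consistent authorship labels, drawn from the terminology above."""
    AI_GENERATED = "AI-generated"
    AI_AUGMENTED = "AI-augmented"
    AI_INFORMED = "AI-informed"
    HUMAN_AUTHORED = "Human-authored"  # below the disclosure threshold


@dataclass
class DisclosureTier:
    """One internal tier of 'substantial use' of generative AI."""
    name: str
    min_ai_share: float  # hypothetical measure: fraction of content produced by AI (0.0-1.0)
    label: DisclosureLabel


# Hypothetical tiers and thresholds -- each team would set and name its own.
TIERS = [
    DisclosureTier("substantial", 0.75, DisclosureLabel.AI_GENERATED),
    DisclosureTier("moderate", 0.25, DisclosureLabel.AI_AUGMENTED),
    DisclosureTier("light", 0.05, DisclosureLabel.AI_INFORMED),
    DisclosureTier("none", 0.0, DisclosureLabel.HUMAN_AUTHORED),
]


def label_for(ai_share: float) -> DisclosureLabel:
    """Return the disclosure label for a given share of AI involvement."""
    for tier in TIERS:  # tiers are ordered from highest threshold down
        if ai_share >= tier.min_ai_share:
            return tier.label
    return DisclosureLabel.HUMAN_AUTHORED


print(label_for(0.9).value)  # AI-generated
print(label_for(0.3).value)  # AI-augmented
```

The design point is that the thresholds live in a single, reviewable structure, so marketing, legal, and compliance can debate the tiers once rather than relabeling content case by case.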
Determine Disclosure Prominence
Know your audience. Determine how prominent your disclosures and labeling must be to meet your stakeholders’ expectations and to avoid any perception of deception. By thoughtfully considering how and where to label AI-generated content, companies can reinforce their commitment to honesty and integrity, further strengthening brand reputation and fostering trust.
Always Be Optimizing
Measure the performance of AI-generated content to gauge trust among consumers of the content. Companies should encourage feedback, answer questions, and involve the community in discussions about AI to foster trust and gain valuable human insights that can improve GenAI content.
Address Fairness and Non-Discrimination
Disclosures should include information about measures taken to prevent bias in AI content, helping companies demonstrate a commitment to ethical practices and social responsibility. This transparency not only helps build trust with audiences but also mitigates risks of legal and regulatory repercussions.
Ensure Regulatory Compliance
As regulation around the use of AI grows, it is important to understand the evolving compliance requirements related to AI-generated content, which can vary according to each industry’s rules, guidelines, and reporting obligations.
Strengthen Your Ethical Posture
Appropriate disclosure about the use of GenAI to create content can serve as a foundational example of transparency and ethical responsibility, setting a precedent for how AI should be utilized and disclosed in other areas such as customer service. By being transparent about AI’s role in content creation, companies can more easily extend this trust to other AI applications across the business.
Drive Accountability
Establish clear lines of human accountability for ensuring AI content meets both internal and external performance, ethical, and regulatory requirements. A cross-discipline accountability team, including members from marketing, legal, compliance, data science, ethics, and customer relations, can more comprehensively evaluate the implications of AI-generated content.