Big Valley Marketing

Navigating AI Ethics: Disinformation and Content


    By Ashley Paula-Legge | Innovation, Interviews + Events, Media + Influencers | 27 September, 2023

    While its impact can be over-hyped or exaggerated, AI is arguably fueling the largest technology transformation since the dot-com era, and it is forever changing the way we live and work.

    So it was quite fitting that Day Two of TechCrunch Disrupt devoted an entire industry stage to discussions about AI, and the audience was packed to standing room only for every session.

    Instead of recapping all discussions — which were incredibly interesting — I thought I’d highlight three topics very pertinent to our industry: AI ethics, the technology’s impact on the disinformation economy, and whether AI will take all of our jobs.

    Giving Shape To AI Ethics

    In the session “Can AI Be Ethical?”, TechCrunch’s Amanda Silberling moderated a lively discussion among Chloé Bakalar, chief ethicist and assistant professor, Meta; Kathy Baxter, principal architect, responsible AI & tech, Salesforce; and Camille Crittenden, executive director, CITRIS & the Banatao Institute.

    Silberling addressed the concerns head-on with her opening question, which panelists kept coming back to throughout the discussion: “AI models have been known to be racist, sexist, expensive and unreliable. Why do we still believe that AI is the future?”

    While all the panelists agreed that every party that touches AI — from developers to policy makers — has a responsibility to pay attention to and address these concerns, they also agreed that the technology will lead to positive impact and transformation beyond what we can currently imagine. “…it’s going to advance humanity in ways that we don’t even know about yet,” emphasized Crittenden. But, she added, “…the only way that we can ensure that something of this magnitude is done safely is if everyone is leaning in.”

    Bakalar agreed, stressing that the responsibility extends beyond the technology industry. “It’s nonprofits, it’s academia, it’s government, it’s civil rights groups. It’s so many other stakeholders. These are really challenging issues,” she said.

    Baxter added, “We need to make sure that all of the wonderful benefits that come with this technology are evenly distributed because too often it’s not just about who benefits but it’s about who pays.”

    In my opinion, AI ethics has been a topic of discussion for years, but it has always seemed peripheral. I hope that, with generative AI making the technology tangible and real to mainstream consumers, the discussion will continue to occupy center stage as organizations become more accountable and their bottom lines come to depend on it.

    AI’s Impact on the Disinformation Economy

    In addition to AI ethics, one of the ‘dark sides of AI’ that keeps me up at night is its potential to accelerate the disinformation economy beyond our control. TechCrunch’s Kyle Wiggers led a discussion on this exact issue, “How AI Can Both Accelerate and Slow Down the Disinformation Economy,” between Sarah Brandt, EVP of partnerships at NewsGuard, and Andy Parsons, senior director of the Content Authenticity Initiative at Adobe.

    NewsGuard tracks disinformation and the bad actors spreading it. “We do this using, believe it or not, old school human being journalists. We’re not using any AI to detect misinformation or to flag bad actors, but just basic reporting skills,” said Brandt. Since the public release of ChatGPT, much of NewsGuard’s focus has been on the role of generative AI in accelerating disinformation campaigns, and on helping companies like Microsoft fine-tune their AI models and provide safeguards against misinformation for their users.

    “When it comes to applying these [AI] tools to spread disinformation, the phrase that comes to mind is ‘force multiplier’,” explains Brandt. “[Bad actors can] create disinformation campaigns that are more compelling, more sophisticated, have higher volumes and are cheaper because you can just have one person put in a prompt into a large language model and pump out hundreds if not thousands of compelling articles. Whereas previously, you may have needed to employ an army of people to create that content.”

    “Nobody has this exactly figured out 100%,” said Parsons of the various tools and strategies for combating misinformation, such as watermarks and other technologies that can be used to prove provenance. He went on to predict that within the next five years we’ll have a new kind of media literacy, one that relies on understanding the context behind media and content before sharing it.

    I do hope we figure this out, because the consequences of not doing so keep me up at night. As Parsons so eloquently put it, “If every single thing you read, see, talk about can be called into question then we don’t have truth at all. There’s no common ground to have productive discussions or exchange ideas or, you know, maybe even be creative and have satire and community around those things.”

    Conversely, it is because of this risk that Parsons and Brandt are optimistic that governments, industry, and others will come together to solve the problem. Wiggers and I remain skeptics; time will tell.

    On AI Generated Content

    “AI Can Write Words — But Can It Understand Them?” This was the topic of discussion between May Habib, co-founder & CEO, Writer, and Ofir Krakowski, co-founder & CEO, Deepdub, moderated by TechCrunch’s Haje Jan Kamps.

    As in, is AI coming for all of our jobs? The answer, at least for now, is “NO.”

    The consensus is that AI can write words but can’t understand them, especially when it comes to different cultures and languages. “This is a great way to emphasize the limitation of AI,” said Krakowski. “For example, everybody in the audience would understand the phrase ‘hold your horses,’ but a straight translation into any language would be wrong.”

    But how long will this last? If you ask my colleague Charles Cooper, or “Coop,” he’d say not long: eventually AI will have enough reasoning and intellect to write a good news story with context. I disagree; I think we’ll always need a human involved in the writing process. (Big Valley explores this on the most recent episode of Pressing Matters.)

    Whether discussing AI ethics, disinformation, or AI-generated content, several speakers made the point that narrower use cases are easier problems to solve. That goes for customer data safety, for disinformation, and for empowering customers rather than replacing them.

    To that end, I’d be remiss not to point out the primary reason I attended TechCrunch Disrupt. Andy Byrne, CEO and co-founder of our client Clari, was a speaker on Thursday at a breakout session focused on the concept of revenue collaboration and governance. As with anything tied to business, AI is becoming a key piece of the equation. And like many of the speakers on the AI stage, Andy is extremely pragmatic and realistic about the technology and its impact.

    As Byrne told Diginomica in a recent interview, “The AI is only applicable to the use case we provide, so I think there’s less risk. But there’s no question that there needs to be an accelerated thinktank put together, given the speed at which AI is being deployed and offered out to the marketplace.”


