While its impact can be over-hyped or exaggerated, AI is arguably fueling the largest technology transformation since the dot-com era, and it is forever changing the way we live and work.
So it was quite fitting that Day Two of TechCrunch Disrupt devoted an entire industry stage to discussions about AI, a stage that was packed to standing room only for every session.
Instead of recapping all discussions — which were incredibly interesting — I thought I’d highlight three topics very pertinent to our industry: AI ethics, the technology’s impact on the disinformation economy, and whether AI will take all of our jobs.
Giving Shape To AI Ethics
In the session “Can AI Be Ethical?”, TechCrunch’s Amanda Silberling moderated a lively discussion among Chloé Bakalar, chief ethicist and assistant professor, Meta; Kathy Baxter, principal architect, responsible AI & tech, Salesforce; and Camille Crittenden, executive director, CITRIS & the Banatao Institute.
Silberling addressed the concerns head-on with her opening question, which panelists kept coming back to throughout the discussion: “AI models have been known to be racist, sexist, expensive and unreliable. Why do we still believe that AI is the future?”
While all the panelists agreed that every party that touches AI, from developers to policymakers, has a responsibility to pay attention to and address these concerns, they also agreed that the technology will lead to positive impact and transformation beyond what we can currently imagine. “…it’s going to advance humanity in ways that we don’t even know about yet,” emphasized Crittenden. But, she added, “…the only way that we can ensure that something of this magnitude is done safely is if everyone is leaning in.”
Bakalar agreed, stressing that it’s not just the responsibility of the technology industry. “It’s nonprofits, it’s academia, it’s government, it’s civil rights groups. It’s so many other stakeholders. These are really challenging issues,” she said.
Baxter added, “We need to make sure that all of the wonderful benefits that come with this technology are evenly distributed because too often it’s not just about who benefits but it’s about who pays.”
In my opinion, AI ethics has been a topic of discussion for years, but it has seemed peripheral. I hope that with generative AI making the technology more tangible and real to mainstream consumers, the discussion will continue to occupy center stage as organizations become more accountable and their bottom lines come to depend on it.
AI’s Impact on the Disinformation Economy
In addition to AI ethics, one of the ‘dark sides of AI’ that keeps me up at night is its potential to accelerate the disinformation economy beyond our control. TechCrunch’s Kyle Wiggers led a discussion between Sarah Brandt, EVP of partnerships at NewsGuard, and Andy Parsons, senior director of the Content Authenticity Initiative at Adobe, on this exact issue: “How AI Can Both Accelerate and Slow Down the Disinformation Economy.”
NewsGuard tracks disinformation and the bad actors spreading it. “We do this using, believe it or not, old school human being journalists. We’re not using any AI to detect misinformation or to flag bad actors, but just basic reporting skills,” says Brandt. Since the public release of ChatGPT, much of NewsGuard’s focus has been on the role of generative AI in accelerating disinformation campaigns and on helping companies like Microsoft fine-tune their AI models and provide safeguards for their users against misinformation.
“When it comes to applying these [AI] tools to spread disinformation, the phrase that comes to mind is ‘force multiplier’,” explains Brandt. “[Bad actors can] create disinformation campaigns that are more compelling, more sophisticated, have higher volumes and are cheaper because you can just have one person put in a prompt into a large language model and pump out hundreds if not thousands of compelling articles. Whereas previously, you may have needed to employ an army of people to create that content.”
“Nobody has this exactly figured out 100%,” said Parsons when discussing the different tools and strategies for combating misinformation, such as watermarks and other technologies that can be used to prove provenance. He went on to predict that in the next five years we’ll have a new kind of media literacy, one that relies on understanding the context behind media and content before sharing it.
I do hope we figure this out, because the consequences of not doing so keep me up at night! As Parsons so eloquently said, “If every single thing you read, see, talk about can be called into question then we don’t have truth at all. There’s no common ground to have productive discussions or exchange ideas or, you know, maybe even be creative and have satire and community around those things.”
It is because of this very risk that Parsons and Brandt are optimistic that governments, industry, and others will come together to solve the problem. Wiggers and I remain skeptics; time will tell.
On AI-Generated Content
“AI Can Write Words — But Can It Understand Them?” This was the topic of discussion between May Habib, co-founder & CEO, Writer, and Ofir Krakowski, co-founder & CEO, Deepdub, moderated by TechCrunch’s Haje Jan Kamps.
As in, is AI coming for all of our jobs? The answer, at least for now, is “NO.”
The consensus is that AI can write words but can’t understand them, especially when it comes to different cultures and languages. “This is a great way to emphasize the limitation of AI,” said Krakowski. For example, everybody in the audience would understand the phrase ‘hold your horses’, but a straight translation into another language would be wrong.
But how long will this last? If you ask my colleague Charles Cooper, or “Coop”, he’d say not long: eventually, AI will have enough reasoning and intellect to write a good news story with context. I disagree; I think we’ll always need a human involved in the writing process. (Big Valley explores this on the most recent episode of Pressing Matters.)
Whether discussing AI ethics, disinformation, or AI-generated content, several speakers spoke to the notion that when the use cases are narrower, the problem is easier to solve. This goes for customer data safety, disinformation, and empowering customers rather than replacing them.
To that end, I’d be remiss not to point out the primary reason I attended TechCrunch Disrupt. Andy Byrne, CEO and co-founder of our client Clari, was a speaker on Thursday at a breakout session focused on the concept of revenue collaboration and governance. As with anything tied to business, AI is becoming a key piece of the equation. And like many of the speakers on the AI stage, Andy is extremely pragmatic and realistic about the technology and its impact.
As Byrne told Diginomica in a recent interview, “The AI is only applicable to the use case we provide, so I think there’s less risk. But there’s no question that there needs to be an accelerated thinktank put together, given the speed at which AI is being deployed and offered out to the marketplace.”