Ex-Palantir staffer turned politician Alex Bores says AI deepfakes are a ‘solvable problem’ if we bring back a free, decades-old technique
The former Palantir staffer pointed to the 1990s, when people were skeptical about being able to do online banking in a safe and trustworthy way.
New York Assemblymember Alex Bores, a Democrat now running for Congress in Manhattan’s 12th District, argues that one of the most alarming uses of artificial intelligence—highly realistic deepfakes—is less an unsolvable crisis than a failure to deploy an existing fix.
“Can we nerd out about deep fakes? Because this is a solvable problem and one that I think most people are missing the boat on,” Bores said on a recent episode of Bloomberg’s Odd Lots podcast, hosted by Joe Weisenthal and Tracy Alloway.
Rather than training people to spot visual glitches in fake images or audio, Bores said policymakers and the tech industry should lean on a well-established cryptographic approach similar to what made online banking possible in the 1990s. Back then, skeptics doubted consumers would ever trust financial transactions over the internet. The widespread adoption of HTTPS—using digital certificates to verify that a website is authentic—changed that.
“That was a solvable problem,” Bores said. “That basically same technique works for images, video, and for audio.”
Bores pointed to a free, open-source metadata standard known as C2PA, developed by the Coalition for Content Provenance and Authenticity, which allows creators and platforms to attach tamper-evident credentials to files. The standard can cryptographically record whether a piece of content was captured on a real device or generated by AI, and how it has been edited over time.
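The core idea is straightforward even if the full C2PA specification is not. The sketch below is a deliberately simplified illustration of tamper-evident provenance metadata, not the actual C2PA format: real Content Credentials bind claims to content with digital signatures and X.509 certificate chains rather than a bare hash, and the claim fields here are made up for the example.

```python
# Simplified sketch of tamper-evident provenance metadata.
# NOT the real C2PA format: actual Content Credentials use digital
# signatures and certificate chains, not an unsigned SHA-256 digest.
import hashlib


def make_manifest(content: bytes, claims: dict) -> dict:
    """Bind provenance claims to content via its SHA-256 digest."""
    return {
        "claims": claims,  # e.g. capture device, AI involvement, edit history
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }


def verify(content: bytes, manifest: dict) -> bool:
    """True only if the content still matches the recorded digest."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]


photo = b"\x89PNG...raw image bytes..."  # stand-in for a real file
manifest = make_manifest(photo, {"captured_on": "camera", "ai_generated": False})

print(verify(photo, manifest))            # True: content is untouched
print(verify(photo + b"edit", manifest))  # False: any alteration breaks the check
```

In the real standard, the signature (rather than a plain hash) is what makes a manifest hard to forge: a tampered file fails verification, and a missing manifest is itself a signal to be skeptical.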
“The challenge is the creator has to attach it and so you need to get to a place where that is the default option,” Bores said.
In his view, the goal is a world where most legitimate media carries this kind of provenance data, so that if “you see an image and it doesn’t have that cryptographic proof, you should be skeptical.”
Bores said that, thanks to the shift from HTTP to HTTPS, consumers now instinctively know to distrust a banking site that lacks a secure connection. “It’d be like going to your banking website and only loading HTTP, right? You would instantly be suspect, but you can still produce the images.”
AI has become a central political and economic issue, with deepfakes emerging as a particular concern for elections, financial fraud, and online harassment. Bores said some of the most damaging cases involve non-consensual sexual images, including those targeting school-age girls, where even a clearly labeled fake can have real-world consequences. He argued that state-level laws banning deepfake pornography, including in New York, now risk being constrained by a new federal push to preempt state AI rules.