Michael Selig, chairman of the US Commodity Futures Trading Commission, said blockchain could play a key role in authenticating AI-generated content, claiming the technology could help distinguish authentic media from synthetic output as concerns about disinformation grow.
During an appearance on The Pomp Podcast on Thursday, Selig was asked by host Anthony Pompliano about the use of AI-generated memes and images in markets, and whether intent matters or such content should be restricted altogether. Selig told Pompliano:
The private markets have solutions – blockchain technology is great. If you can time stamp things and make sure there’s an identifier for each meme or AI-generated post, you can verify if it’s real or AI-generated… Having these technologies here in the US is critical.
He said regulators are focused on maintaining US leadership in crypto, adding that “you can’t have AI without blockchain.”
Source: The Pomp Podcast
Pompliano also asked how regulators are approaching AI agents, as autonomous trading becomes more prevalent in financial markets and authorities face pressure to distinguish automated tools from fully autonomous agents and to decide how the latter should be regulated. Selig responded:
I worry that we are overregulating and strangling some of the technology here in the US… I use a minimum effective dose of regulation, where we… make sure we regulate the actors… and not the software developers. It is the software developers who build the tools, but they do not actually handle the financial transactions.
Selig said the CFTC is reviewing how AI models are used in markets, emphasizing that enforcement should focus on participants engaged in financial activities.
Related: AI and stablecoins are winning despite the crypto market slump in 2026
Blockchain and proof-of-personhood tools for AI verification are emerging
A central challenge as the use of artificial intelligence grows is distinguishing real content from synthetic media. Selig’s comments echo a broader push among policymakers and developers to use blockchain for content verification and provenance.
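The provenance idea Selig describes, giving each piece of content a timestamped identifier that can be checked later, can be illustrated with a minimal sketch. A plain dictionary stands in for an onchain registry here, and the function names are hypothetical, not from any real protocol:

```python
import hashlib
import time

# A dict stands in for an onchain registry mapping content hashes
# to the time they were registered.
registry = {}

def register(content: bytes) -> str:
    """Hash the content and record it with a timestamp and short identifier."""
    digest = hashlib.sha256(content).hexdigest()
    registry[digest] = {"timestamp": time.time(), "id": digest[:12]}
    return digest

def verify(content: bytes) -> bool:
    """Return True only if this exact content was registered earlier."""
    return hashlib.sha256(content).hexdigest() in registry

original = b"authentic media file bytes"
register(original)

print(verify(original))                # True: matches the registered hash
print(verify(b"altered media bytes"))  # False: any change breaks the match
```

Because any modification to the bytes changes the hash, a match against the registry shows the content is identical to what was timestamped, which is the core of the verification Selig points to.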
One approach is proof-of-personhood systems, which aim to confirm that an account belongs to a real, unique human rather than a bot. The most prominent example is Sam Altman’s World, whose World ID protocol allows users to prove their humanity without revealing personal data. The system uses encrypted biometric iris scans stored on the user’s device, although criticism has been raised about privacy risks and possible coercion.
In March, World launched AgentKit, a toolkit that allows AI agents to prove they are linked to a verified human while interacting with online services. It integrates proof-of-personhood credentials with the x402 micropayment protocol developed by Coinbase and Cloudflare, allowing agents to pay for access while presenting cryptographic proof that a verified human backs them.
Ethereum co-founder Vitalik Buterin has proposed using cryptography and blockchain to make online systems more verifiable, including through zero-knowledge proofs and onchain timestamps that can help validate how content is generated and distributed without exposing sensitive data.
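One simple building block behind the pattern Buterin describes, timestamping content without exposing it, is a commit-and-reveal scheme. The sketch below is illustrative only and is not a zero-knowledge proof, which requires far heavier cryptography: a creator publishes only a salted hash, so the content stays private, yet the commitment can be timestamped publicly and opened later to prove priority.

```python
import hashlib
import secrets

def commit(content: bytes) -> tuple[str, bytes]:
    """Return a public commitment and the private salt needed to open it."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + content).hexdigest()
    return digest, salt

def open_commitment(commitment: str, content: bytes, salt: bytes) -> bool:
    """Check that content and salt reproduce the published commitment."""
    return hashlib.sha256(salt + content).hexdigest() == commitment

secret_content = b"unreleased model output"
public_commitment, salt = commit(secret_content)

# Later: the creator reveals content + salt, and anyone can verify
# the content existed when the commitment was published.
print(open_commitment(public_commitment, secret_content, salt))   # True
print(open_commitment(public_commitment, b"forged content", salt))  # False
```

The salt prevents guessing attacks against low-entropy content; until the reveal, the published hash discloses nothing useful about the data itself.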
The proposals come as US policymakers consider broader AI regulation. On March 20, the Trump administration released a national framework calling for a unified federal approach, warning that a patchwork of state laws could hinder innovation and competitiveness.
Magazine: Agent wastes 14 hours of scammers’ time, LLMs ‘poisoned’ by Iran: AI Eye
