The proliferation of AI-created deepfakes has led to the emergence of a relatively new industry: Trust as a Service (TaaS). Businesses themselves offer solutions that protect against such deepfakes. Vadim Novikov, Advisor to the President of Almaty Management University (AlmaU), discusses why this is better than government attempts to solve the problem.

"It's not me": what are the dangers of dipfakes

On Thursday, November 6, Berkshire Hathaway reported that AI-generated deepfake videos of the company's CEO, Warren Buffett, are circulating on YouTube. In them, he appears to comment on market conditions and even give investment advice. The release came with the headline "It's Not Me."

The image of the "Oracle of Omaha," as Buffett is called, is just one example of how scammers are using AI to create realistic videos of a famous personality. Previously, dipfakes of billionaire Elon Musk and Ark Invest fund founder Cathie Wood appeared online - they "advertised" a platform that stole users' cryptocurrency. In 2024, an Arup employee transferred $25.5 million to scammers by talking to dipfake copies of his management.

In the first quarter of 2025 alone, direct financial losses from deepfakes exceeded $200 million, according to the Deepfake Incident Report. Deloitte predicts that in the U.S., AI-driven fraud losses could grow from $12.3 billion to $40 billion by 2027.

Biometric Update predicts that about 9.9 billion deepfake checks will be performed by 2027, generating nearly $4.95 billion in revenue for the companies performing them.

The artificial intelligence revolution has created one serious problem: we are facing the phenomenon of "trust inflation". And it is important to understand that this is not quite the same old problem of "fraud" that old laws can solve.

Deepfakes have created two new, existential threats. First, the collapse of speed: the damage is done in minutes, before justice can react. A fake can reach thousands of people in minutes, a scale unattainable for the shell-game con artists of the past. Second, the collapse of provability, or the "liar's dividend": any genuine video can now be declared a fake, and a court can no longer confidently trust the evidence.

How do you fight it?

Government regulators often resort to blunt, indiscriminate bans. This is not a surgeon's precise scalpel that pinpoints the threat, but a blind "hammer" that hits the entire industry.

For example, Denmark plans to try a radical method: giving citizens "copyright to their own face". In June, the country introduced amendments to its copyright law that would give people rights to their own image. Danish Culture Minister Jakob Engel-Schmidt said these measures could take effect as early as winter 2025, Tech Policy Press reported.

How effective will this method be? Legally, the move is absurd. It conflates two different legal regimes: personal non-property rights (the right to one's own image, which is inalienable) and copyright (which is alienable and was created to protect the results of creativity).

This "hammer" is not only ineffective, it is dangerous: it creates the risk of turning the individual into a tradable asset.

Moreover, it creates global regulatory chaos. While Denmark is trying to stretch copyright to cover the "face," Kazakhstan's draft law "On Artificial Intelligence," adopted by parliament and awaiting signature, narrows it instead. In Kazakhstan, AI-generated works are not protected by copyright, but the draft would make the labeling of AI-generated images of people mandatory.

The Danish project amounts to excessive regulation: it strikes at the tool, the AI, rather than at the perpetrator.

The cost of regulatory error

Why is this "hammer" not just excessive, but economically dangerous? The answer lies in one of the most influential works of economic analysis of the law, Judge Frank Easterbrook's The Limits of Antitrust (1984).

Easterbrook described two types of regulatory error. The first, the false positive: intervening when one should not, mistakenly "killing" a useful innovation. The second, the false negative: failing to intervene when one should, missing a real abuse.

Easterbrook's main thesis is that these errors are fatally asymmetric. If the regulator commits the second kind of error and misses an abuse, the market corrects it itself; the damage is temporary. But if it commits the first kind, falsely intervening and "killing" an innovation, the innovation stays dead for as long as the ban lives. That damage can last for decades.
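One back-of-the-envelope way to see the asymmetry (my illustration, not a formula from Easterbrook's paper): let h be the per-period harm of an abuse the regulator misses, v the per-period value of an innovation it mistakenly bans, t_market the time the market needs to self-correct, and t_ban the lifetime of the ban.

```latex
% Illustrative only: rough expected costs of the two regulatory errors.
% C_II: cost of missing an abuse, bounded by the market's self-correction time.
% C_I:  cost of a false ban, which persists for the ban's entire lifetime.
C_{\mathrm{II}} \approx h \cdot t_{\mathrm{market}}, \qquad
C_{\mathrm{I}} \approx v \cdot t_{\mathrm{ban}}, \qquad
t_{\mathrm{ban}} \gg t_{\mathrm{market}} \;\Rightarrow\; C_{\mathrm{I}} \gg C_{\mathrm{II}}
```

A missed abuse costs roughly months of harm before the market routes around it; a false ban costs potentially decades of foregone value. Even when h and v are comparable, the second figure dominates.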

In this paradigm, "confidence inflation" is painful but temporary chaos. The market is already reacting to it by creating solutions. The government "hammer" is a mistake of the first kind, risking an irreversible stranglehold on AI innovation.

The evolution of market "scalpels"

This chaos is the market's self-healing mechanism. It makes trust an expensive, scarce asset. And the market reacts immediately, creating not one, but an entire ecosystem of solutions.

The first attempt was to "search for fakes" - solutions that detect the use of AI. But this is an arms race that is "impossible to win". Studies show that in real-world conditions (not in the lab), the accuracy of today's detectors drops by 45-50%. Academic research from 2025 only confirms this: dipfakes have already "learned heartbeats" by fooling older biometric detectors that checked for micro-changes in skin color.

So the industry has shifted its focus from looking for fakes to proving authenticity.

The first and most prominent answer was the C2PA (Content Credentials) standard, promoted by an alliance of giants (Adobe, Microsoft, Intel, BBC). Its essence is to embed a cryptographically signed "digital passport" in the file, recording the content's entire "chain of custody": who created it, when, and what edits were made.
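To make the "digital passport" idea concrete, here is a minimal sketch of a signed provenance manifest. This is not the actual C2PA format; the function names and manifest fields are hypothetical, and it assumes the Python `cryptography` package for Ed25519 signatures.

```python
# Sketch of a C2PA-style "digital passport": a manifest (creator, timestamp,
# edit history) bound to the file's hash and signed with a private key.
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # in practice: a CA-issued signing key
public_key = private_key.public_key()

def make_manifest(path: str, creator: str, edits: list[str]) -> dict:
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    manifest = {
        "asset_sha256": digest,   # binds the passport to these exact bytes
        "creator": creator,
        "created_at": int(time.time()),
        "edit_chain": edits,      # the "chain of custody"
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = private_key.sign(payload).hex()
    return manifest

def verify_manifest(path: str, manifest: dict) -> bool:
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
    except Exception:
        return False              # signature invalid: passport forged or altered
    actual = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return actual == manifest["asset_sha256"]  # any byte change fails the check
```

The key design point is that the signature covers the file hash: editing a single pixel invalidates the passport unless the editor re-signs and appends to the edit chain.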

Thus a new industry, Trust as a Service (TaaS), was born. But C2PA is not a panacea; rather, it is the first major solution among competing ideas. And as soon as C2PA showed vulnerabilities (platforms strip the provenance metadata from images, as Google researchers admit), the market responded with evolution, creating solutions for other niches:

- Real-time detection, or liveness: checking that a live person is actually present on the other side of the screen. Startups like Deepfake Guard (with its Deepfake Captcha for Zoom) and Netarx (with its "trust light," a visual indicator in the corner of the screen that turns red when a deepfake is suspected) address the problem of "live" fraud by alerting an employee to an attack during a call, rather than after the money has already left.

- Persistent watermarks. The goal is for AI generators (OpenAI, Google) to fuse an invisible "Made by AI" tag into synthetic content. Startup Steg.AI claims a watermark that survives even screen capture, solving the problem of "laundering" fakes through messenger compression.

- Hardware provenance: Sony builds a cryptographic signature directly into its cameras (Alpha 9 III).

- Decentralized provenance: Numbers Protocol and OpenOrigins use blockchain to create "digital passports" for content as an alternative to the centralized C2PA.

- TaaS platforms. The market is segmenting. Startups are emerging in insurtech (DeepSecure, insuring against direct financial losses from deepfake attacks), PR-tech (early-warning systems like PhonyEye that scan social networks for fakes of a CEO's face), and "digital notary" services like Attestiv, which store a "fingerprint" of authentic content on a blockchain for later use in court (a toy version of this idea is sketched after this list).

It is a chaotic but productive ecosystem created without the involvement of regulators.
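To illustrate the "digital notary" item above, here is a toy sketch of the fingerprint-and-ledger idea. All names are hypothetical; a real service like Attestiv anchors the records in a blockchain rather than an in-memory list.

```python
# Toy "digital notary": record a fingerprint of a file at capture time in an
# append-only ledger; later, anyone can re-hash the file and check the record.
import hashlib, time

ledger: list[dict] = []  # stand-in for an immutable blockchain ledger

def notarize(path: str) -> dict:
    fingerprint = hashlib.sha256(open(path, "rb").read()).hexdigest()
    entry = {"sha256": fingerprint, "notarized_at": int(time.time())}
    ledger.append(entry)  # on-chain, this write would be tamper-proof
    return entry

def is_authentic(path: str) -> bool:
    fingerprint = hashlib.sha256(open(path, "rb").read()).hexdigest()
    # A match proves these exact bytes existed at the recorded time;
    # any re-encode, crop, or deepfake edit changes the hash.
    return any(e["sha256"] == fingerprint for e in ledger)
```

Note what this does and does not prove: a match establishes that the exact file existed at notarization time, which is useful in court, but it says nothing about a file that was never notarized.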

Investment thesis: betting on the ecosystem

"Confidence inflation" is not the end of the world. It is a painful but temporary price of the second kind of error (non-intervention). It is the very "disease" that makes the economic organism produce "antibodies".

The prooftech market is not just part of the cybersecurity sector. It is an industry building the foundation for a new scarce asset: verifiable truth.

The investment thesis is simple. Companies are losing real money and are willing to pay for protection. And the entire AI industry is forced to invest in solutions that prove its safety, to avoid the total government regulation (the "hammer") whose cost, as Easterbrook showed, is irreversible.

It is not those trying to ban deepfakes who will win; evolutionary selection will. Yes, it is an imperfect solution: a chaos of standards, with platforms (Meta, TikTok) resisting adoption. But, as in the eternal, unwinnable battle against fraud, this competing ecosystem is the market's real "self-healing".

One bets on it not because it is perfect, but because it is less wrong. As economists say, it is better to be approximately right than exactly wrong, as in the case of the state "hammer".

We are witnessing this bustling TaaS ecosystem build a new economic reality, one in which trust is no longer the air everyone breathes, but pure oxygen sold in cylinders.

This article was AI-translated and verified by a human editor
