New Hampshire law creates private right of action for deepfake victims

This article is part of a series sponsored by IAT Insurance Group.
In recent years, there has been no shortage of examples of how deepfake technology has been used in shocking ways:
- A scammer recently used a deepfake video to impersonate the chief financial officer of a multinational company and persuaded an employee to transfer $25 million of company funds to the scammer.
- A disgruntled athletic director at a Maryland high school allegedly produced and circulated a false recording of the school’s principal that contained racist and anti-Semitic remarks.
- There have been reports across the country of deepfake images being used as tools for cyberbullying, such as through face-swapping and “undressing” apps.
These examples illustrate the three main types of deepfake content: video, audio, and imagery.
Concerns about deepfakes continue to grow as the technology advances and the damage to victims mounts. Recently, New Hampshire enacted a new law that could have ramifications across the United States.
New Hampshire: Creation of deepfakes could lead to civil and criminal lawsuits against perpetrators
Not mentioned above, but likely a tipping point for deepfake fears, was an incident in early 2024 when a deepfake recording of Joe Biden circulated in New Hampshire via robocall, urging voters not to participate in the state's presidential primary.
That incident prompted civil lawsuits against the message's creators and the telecommunications companies that distributed the calls. The New Hampshire Attorney General also brought a multi-count indictment against the alleged creator of the deepfake.
A few months later, the governor of New Hampshire signed HB 1432, the first state law to specifically allow deepfake victims to sue privately. Under the statute:
A person may bring an action against anyone who knowingly uses that person's likeness in video, audio, or any other media to create a deepfake for the purpose of embarrassing, harassing, enticing, defaming, extorting, or otherwise causing financial or reputational harm.
The statute also provides that the creator of a deepfake is guilty of a Class B felony if the person knowingly creates, distributes, or presents the likeness of an identifiable individual in video, audio, or any other media that constitutes a deepfake, for the purpose of embarrassing, harassing, enticing, defaming, extorting, or otherwise causing financial or reputational harm to that person.
The law takes effect on January 1, 2025.
New Hampshire law could serve as a guide for other states
Even in a divisive era, there are broad bipartisan incentives to enact more laws addressing the deepfake problem. No politician is immune to the risks deepfakes pose, and their constituents may be equally concerned about the harms deepfakes can cause.
As of June, 118 bills had been filed in 42 state legislatures containing provisions aimed at regulating election disinformation generated by artificial intelligence, according to the Voting Rights Lab.
It will be interesting to see whether the laws ultimately enacted are drafted broadly enough to cover conduct arising in non-political contexts, and whether they follow New Hampshire’s lead in allowing those harmed by deepfakes to sue privately. Legislation introduced this spring by New York Governor Kathy Hochul would provide for such a private right of action.
Insurance and risk implications
The words “private right of action” always attract the attention of liability insurance professionals. If civil lawsuits involving deepfakes surge, commercial general liability and homeowners policies, as well as various specialty lines, could be implicated.
General liability
In the general liability context, deepfake exposures should primarily be analyzed under Coverage B (Personal and Advertising Injury) of the ISO Commercial General Liability policy. The definition of “personal and advertising injury” in the ISO CG 00 01 base form includes the following two subparagraphs:
d. Oral or written publication, in any manner, of material that slanders or libels a person or organization or disparages a person’s or organization’s goods, products or services;
e. Oral or written publication, in any manner, of material that violates a person’s right of privacy.
It is easy to envision how deepfake-related offenses could support claims under these coverage parts. Coverage B, unlike Coverage A, may provide some level of coverage for intentional conduct, subject to exclusions. If a business disparages another party and/or violates its privacy rights through a deepfake, claims may well reach the business’s GL carrier.
Homeowners
Cyberbullying, which can give rise to civil claims for invasion of privacy, intentional infliction of emotional distress, and negligent entrustment, has been viewed as a homeowners insurance exposure since the early days of the internet. Most U.S. states have laws establishing parental liability for the wrongful acts of minors.
Because deepfakes (and other AI tools) are susceptible to abuse by teenagers, this exposure will only intensify as apps deploying the technology proliferate. Ultimately, whether homeowners coverage applies depends on the policy language in effect and the jurisdiction of the claim.
Specialty lines
Beyond general liability and homeowners insurance, specialty lines of business could also be significantly affected, including crime, cyber, and D&O policies. Excess policies may also be implicated if verdicts track recent social inflation trends and produce seven- or even eight-figure payouts.
Ultimately, as deepfake technology continues to improve, the barriers to entry fall: anyone with an internet connection can create a deepfake and expose themselves to liability. Given this dynamic, it is important for risk and insurance professionals to:
- Learn about the use cases for deepfakes and how artificial intelligence technology in general continues to evolve.
- Track how regulations and laws are being developed at the state and federal levels to address deepfakes.
- Be aware of how policy language responds in the event of a claim.