This post is part of a series sponsored by IAT Insurance Group.
There has been no shortage of examples of deepfake technology being used in surprising ways in recent years.
- A scammer posing as the CFO of a multinational company convinced an employee, over a deepfake video call, to transfer $25 million of company funds to fraudsters.
- A disgruntled athletic director at a Maryland high school created and disseminated a fake audio recording of the school principal containing racist and anti-Semitic comments.
- Reports continue to surface of deepfake images being deployed across the country as a cyberbullying tool, including through face-swapping and “undressing” apps.
These examples illustrate the three main types of content deepfakes are generated to exploit: video, audio, and images.
Concern about deepfakes continues to grow as technology advances and the number of victims increases. This recently culminated in the enactment of a new law in New Hampshire that could have implications nationwide.
New Hampshire: Generating deepfakes can expose perpetrators to civil and criminal liability
Not mentioned above, but perhaps the tipping point for deepfake fears: in early 2024, deepfake audio recordings of Joe Biden were spread throughout New Hampshire via robocalls urging voters not to participate in the state’s presidential primary.
This resulted in a civil lawsuit being filed against the source of the audio and the telecommunications companies that distributed the calls. The New Hampshire Attorney General also indicted the person who created the deepfakes on several charges.
A few months later, New Hampshire’s governor signed HB 1432 into law, the first state law to specifically recognize a private right of action for victims of deepfakes. From the statute:
A person may bring an action against any person who knowingly uses any likeness in video, audio, or any other media of that person to create a deepfake for the purpose of embarrassing, harassing, entrapping, defaming, extorting, or otherwise causing any financial or reputational harm to that person, for damages resulting from such use.
The statute also imposes criminal liability on a deepfake’s creator if the person “knowingly creates, distributes, or presents any likeness in video, audio, or any other media of an identifiable individual that constitutes a deepfake for the purpose of embarrassing, harassing, entrapping, defaming, extorting, or otherwise causing any financial or reputational harm to the identifiable individual.”
The law goes into effect on January 1, 2025.
New Hampshire’s law could provide a model for other states
It is no surprise that, even in divisive times, there is broad bipartisan motivation to advance further legislation addressing deepfakes. No politician is safe from the risks deepfakes pose, and voters are likely to be equally concerned about their negative effects.
As of June, the Voting Rights Lab counted 118 bills containing provisions aimed at regulating AI-generated election disinformation under consideration in 42 state legislatures.
What will be worth watching is whether the laws ultimately enacted are drafted broadly enough to capture behavior that occurs in non-political contexts and, following New Hampshire’s example, whether they recognize a private right of action for people affected by deepfakes. Legislation containing this kind of private right of action was enacted by New York Governor Kathy Hochul this spring.
Insurance and risk implications
“Private right of action” is a four-word phrase that captures the attention of liability insurance professionals. A surge in civil litigation involving deepfakes could implicate general liability and homeowners policies, as well as other specialty lines of business.
General liability
When it comes to general liability insurance, deepfake-related exposures should primarily be considered in the context of Coverage B – Personal and Advertising Injury – of the ISO Commercial General Liability Policy. The definition of “personal and advertising injury” in the ISO CG 00 01 form includes these two subparagraphs:
d. Oral or written publication, in any manner, of material that slanders or libels a person or organization or disparages a person’s or organization’s goods, products or services;
e. Oral or written publication, in any manner, of material that violates a person’s right of privacy.
It is certainly possible that deepfake-related offenses could give rise to claims under this portion of the coverage. Coverage B differs from Coverage A in that it may provide some level of coverage for intentional acts, depending on how the exclusions apply. If a business disparages others or violates their privacy rights through deepfakes, a complaint may land with that business’s GL carrier.
Homeowners
Cyberbullying can trigger civil lawsuits alleging claims such as invasion of privacy, intentional infliction of emotional distress, and negligent entrustment, and it has been a topic of discussion under homeowners insurance since the early days of the internet. Most U.S. states have laws establishing parental liability for the tortious acts of minors.
This risk will only grow as deepfakes (and other AI tools) become more readily available for abuse by youth and as more applications deploying this technology come to market. Ultimately, whether homeowners coverage responds depends not only on the jurisdiction of the case, but also on the language of the policy in place.
Specialty lines
In addition to general liability and homeowners insurance, more specialized lines of business such as crime, cyber, and D&O policies may also be significantly impacted. Excess policies could come into play as well if judgments track recent social inflation trends and result in seven- or eight-figure payouts.
As deepfake technology continues to improve, the barrier to entry will keep falling, allowing anyone with an internet connection to build a deepfake and, potentially, create liability. Given this landscape, it is important for risk and insurance professionals to:
- Understand the use cases for deepfakes, and how artificial intelligence technology in general continues to evolve.
- Track how regulations and laws are being created at both the state and federal level to address deepfakes.
- Pay attention to how insurance policy wording may respond when claims are made.