Resources

Client Alerts, News Articles, Blog Posts, & Multimedia


Protecting Your Image in the Age of AI-Generated “Deepfakes”

Client Alert

The rapid evolution of artificial intelligence (AI) has transformed how we create and consume digital content. While AI offers innovative solutions in business, entertainment, and communication, it also poses significant risks. Among the most troubling developments in AI is the proliferation of AI-generated fraudulent content, often called “deepfakes”.

A “deepfake” is created when malicious actors manipulate existing, legitimate images, videos, and audio recordings into new, fraudulent content. That content is then used to deceive and defraud audiences and to impersonate real individuals and brands. For example, our firm recently represented a business professional whose original video content was scraped from the internet, edited using AI, and re-uploaded to a platform where they did not have an account. The altered videos were then used to promote a fraudulent product falsely attributed to our client, leading to reputational harm and consumer confusion.

Currently, the capabilities of AI can be used to create fraudulent content such as:

1. “Deepfake” Videos that Misrepresent Endorsements or Beliefs

AI-generated “deepfake” videos can convincingly manipulate existing, legitimate video footage to make it appear as though a person in the footage is saying or doing something they never said or did. These fakes are now being used in:

  • Fake endorsements in which a person appears to promote a product or service they are not associated with
  • Manipulated interviews or speeches that falsely portray an individual as holding controversial or offensive opinions
  • Fraudulent ads in which an individual is inserted into a video to lend credibility to a product or scam

The result is not only reputational harm to the original party but also the potential for legal liability if consumers act on these “deepfakes”.

2. AI “Voice Clones” Used in Fraud and Impersonation

AI voice synthesis tools can now clone a person’s speech patterns, tone, and inflection with remarkable accuracy. These voice clones are now being used to:

  • Place scam calls in which the voice of a trusted colleague, family member, or executive is replicated
  • Create fake voicemails or recordings, such as fake customer service lines, political robocalls, or misleading audio snippets shared on social media
  • Bypass security checks, especially those using voice authentication systems

Because voice is such a personal and persuasive medium, these scams can be particularly effective and often difficult to detect.

3. Repackaged or Stolen Content Misused on Digital Platforms

In many cases, bad actors scrape legitimate, existing content such as videos, podcasts, social media posts, or livestreams from the internet and re-upload them—often out of context—making it seem as though the speaker supports a particular viewpoint or product. The content can also be re-uploaded with an AI narration or branding, suggesting affiliation with companies or causes the original party does not endorse. This not only infringes intellectual property rights but also misleads audiences and can divert income from the rightful content creator.

How to Detect AI “Deepfakes”

Despite rapid improvements in AI, fraudulent AI-generated video, audio, and other content may still display subtle flaws such as:

  • Awkward or unnatural facial movements
  • Lip-syncing issues (the words spoken do not match the way the person’s mouth is moving)
  • Flat, unnatural, or robotic speech patterns
  • Lighting or background inconsistencies
  • A lack of verification on official social media or websites from the person supposedly involved in the content

When in doubt, search for the original source and consult reputable news outlets and official pages.

What to Do Next

If you discover that your image, voice, or content has been used without authorization, you may have both legal and practical remedies. First, report the content to the hosting platform. If your original content has been copied or altered, copyright law may provide grounds for removal. In addition, make sure to preserve the evidence—take screenshots, save links, and document any public confusion, customer complaints, or reputational fallout. Depending on your situation, you may have claims under defamation law, the right of publicity, consumer protection statutes, and/or tort law.

For more information, please contact Susan A. Jacobsen at 216.298.1452 x848 or sajacobsen@bmdllc.com.


Department of Education Proposes Redefinition of “Professional Degree,” Excluding Nursing and Limiting Graduate Loan Borrowing

The U.S. Department of Education has issued a Notice of Proposed Rulemaking that would redefine “professional degree” programs under the One Big Beautiful Bill Act. The proposal excludes nursing from the recognized list and would impose new borrowing limits for graduate students while eliminating the Grad PLUS program. Public comments are due by March 2, 2026.

First-of-Its-Kind Federal Ruling Finds Use of Consumer AI Tool May Destroy Attorney-Client Privilege

On February 10, 2026, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York issued a first-of-its-kind ruling finding that documents generated by a criminal defendant using a consumer AI platform were not protected by attorney-client privilege after being shared with counsel. The court treated the AI tool as a third party, concluding that entering sensitive information into a publicly available platform may waive confidentiality. The ruling also suggests that the work product doctrine may not apply where AI-generated materials are created independently by a client rather than at counsel’s direction. The decision signals that parties should exercise caution when using consumer AI tools in connection with legal matters.

Your Golden Chance for H-1B Lottery Registration - March 2026

USCIS H-1B registration opens March 4–19, 2026. U.S.-based employees on valid nonimmigrant status are exempt from the $100,000 fee for change of status petitions. The new weighted lottery favors higher-skilled and higher-paid employees, improving odds for advanced degree holders and Wage Level 3 or 4 workers.

Invisible Algorithms: The Hidden Role of Artificial Intelligence in USCIS Immigration Processing

The Department of Homeland Security has confirmed that artificial intelligence and machine learning tools are now integrated into numerous operational functions within U.S. Citizenship and Immigration Services (USCIS). These tools are described as mechanisms to improve efficiency, reduce backlogs, and assist officers in managing an unprecedented volume of applications. DHS emphasizes that human adjudicators retain decision-making authority and that AI systems do not independently grant or deny immigration benefits. Find out how AI affects the U.S. immigration process.

OAAPN | Year In Review: 2026 Ohio Board of Nursing and Ohio Law Rules

Find out key changes to Ohio law and the Ohio Board of Nursing rules that have directly impacted APRN practice over the past year, including Psychiatric Inpatient Documents, Intimate Examinations, Signature Authority, Duties Related to Fetal Death, Retail IV Therapy Clinics, Release from Permanent Restrictions, Disciplinary Action, Course on Drugs and Prescriptive Authority, Overdose Reversal Drugs, Office Based Opioid Treatment, Withdrawal Management for Substance Use Disorder, Safe Haven Program, and more.