
“Houston, we have a problem with AI.” –The Chaos Triggered by AI-driven Infringements in Cyberspace

In the 1995 film ‘Apollo 13,’ Tom Hanks delivered the renowned line, “Houston, we have a problem,” encapsulating the perils of space exploration. In today’s AI era, brand owners whose intellectual property is threatened by AI in cyberspace might well echo, “Houston, we have a problem with AI.”

Types of AI Infringements and Misuse

There are various types of AI infringements and misuse, including:

AI clone websites and fake domains
Email, text message, and phone phishing
Social media infringements
Marketplace infringements
Digital ads scams
AI clones in livestreams
Now, let’s delve into how these infringements are carried out in the real world.

AI Clone Websites & Fake Domains  

Scammers can use AI to create fake websites that look like legitimate brand sites. These sites are designed to trick users into entering personal information, such as login credentials and credit card details, or into purchasing counterfeit goods. Among the various techniques, superfake websites and homograph phishing attacks are two serious cases that can easily evade human detection.

Superfake Websites

In the AI-driven era, traditional methods of creating look-alike websites with noticeable flaws are becoming obsolete. Infringers now have access to low-cost AI website builders like 10Web (10web.io) or Durable (durable.co), enabling them to produce AI clone sites that closely resemble official brand sites, under stealthy domain names, in a matter of minutes.

Homograph Phishing Attacks

While not themselves the product of AI algorithms, homograph phishing attacks serve as a key vehicle for directing your customers to flawless AI clone websites.

In addition to typosquatting and combosquatting, which rely on misspelled characters or word combinations, homograph phishing mimics legitimate domain names using visually deceptive characters, including non-standard ones. For instance, bad actors can deceive users intending to visit ‘Login.com’ by redirecting them to ‘Lᴑgin.com’, where the counterfeit site employs the Latin small letter sideways o ‘ᴑ’ (U+1D11), visually near-identical to the standard English letter ‘o’ (ASCII: 111).
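Because these look-alike characters carry different Unicode code points, they are easy to flag programmatically even when they fool the eye. As a minimal sketch (the function name and the crude policy of flagging every non-ASCII character are illustrative, not a production confusables check), one could scan a domain for characters outside plain ASCII:

```python
import unicodedata

def flag_non_ascii(domain: str) -> list[str]:
    """List every character in the domain that falls outside plain ASCII."""
    findings = []
    for ch in domain:
        if ord(ch) > 127:
            name = unicodedata.name(ch, "UNKNOWN")
            findings.append(f"U+{ord(ch):04X} {name}")
    return findings

print(flag_non_ascii("Lᴑgin.com"))  # flags U+1D11 LATIN SMALL LETTER SIDEWAYS O
print(flag_non_ascii("Login.com"))  # clean: []
```

A real-world check would go further, decoding punycode (`xn--`) labels and consulting the Unicode confusables tables, since legitimate internationalized domains also contain non-ASCII characters.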

Pharming/Website Redirect Scams

One of the techniques that helps spread AI-generated superfake sites is the website redirect. In this type of attack, phishers may install malicious programs such as pharming malware to intercept your customers en route to your official websites and redirect them to fraudulent ones without their knowledge. This manipulation is typically accomplished through DNS spoofing, a sneaky technique in which attackers trick customers’ devices into resolving your domain name to the wrong server.
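One coarse way to spot this kind of tampering is to compare what the local resolver returns against addresses obtained from an authoritative source. The sketch below is illustrative only: the allow-list uses documentation-reserved IPs, and a real deployment would fetch expected addresses via DNSSEC-validated or DNS-over-HTTPS lookups rather than hard-coding them.

```python
import socket

def resolved_addresses(hostname: str) -> set[str]:
    """Resolve via the local resolver -- the layer pharming malware tampers with."""
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

def looks_spoofed(addresses: set[str], expected: set[str]) -> bool:
    """Flag the resolution when none of the returned addresses is expected."""
    return not (addresses & expected)

# Hypothetical allow-list for a brand's site (documentation-range IPs, not real records):
EXPECTED = {"203.0.113.10", "203.0.113.11"}
print(looks_spoofed({"198.51.100.99"}, EXPECTED))  # True: no overlap, suspicious
print(looks_spoofed({"203.0.113.10"}, EXPECTED))   # False: matches the allow-list
```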

Email/Text Message/Phone Phishing

AI is employed to craft convincing phishing emails and even replicate voices to steal user data and funds. Below are instances of AI-driven infringements in email or mobile contexts:

AI-Generated Message

Bad actors use AI to automate mass creation of deceptive messages, mimicking brand images and tones at scale. This capability exponentially boosts the reach and effectiveness of malicious email or text campaigns aimed at your customers.

Spoofed Email Attack

While not itself generated by AI, email spoofing is a tactic frequently employed by malicious actors to lure your customers into opening AI-generated emails.

In email spoofing scenarios, attackers mask their own email address to resemble yours (e.g., order-update@intl.amazon.com), tricking your customers into divulging sensitive information and exploiting their login credentials for financial gain or unauthorized transactions. In more serious instances, attackers impersonate legitimate email addresses (e.g., service@paypal.com), embedding harmful links or false information to redirect your customers toward disclosing personal details or calling a fraudulent customer service number.
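A mismatch between the visible From address and the envelope Return-Path is one classic spoofing hint that can be checked with the standard library alone. The raw headers below are fabricated for illustration, and real triage would also consult SPF, DKIM, and DMARC results in the Authentication-Results header:

```python
from email import message_from_string
from email.utils import parseaddr

# Fabricated headers for illustration only -- not a real Amazon message.
RAW = """\
From: "Amazon Order Update" <order-update@intl.amazon.com>
Return-Path: <bounce@scam-mailer.example>
Subject: Your order has shipped
"""

def spoofing_hints(raw: str) -> list[str]:
    """Compare the visible From domain with the envelope Return-Path domain."""
    msg = message_from_string(raw)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, return_addr = parseaddr(msg.get("Return-Path", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    return_domain = return_addr.rsplit("@", 1)[-1].lower()
    hints = []
    if from_domain and return_domain and from_domain != return_domain:
        hints.append(f"From domain {from_domain} != Return-Path domain {return_domain}")
    return hints

print(spoofing_hints(RAW))
```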

Source: PayPal.com

Mobile Phishing

With the increasing use of smartphones, phishers are targeting mobile devices with SMS phishing (smishing) attacks. These attacks may include AI-crafted text messages containing links to malicious websites or prompting your customers to call fraudulent numbers.

AI Voice Scams

AI-powered voice synthesis can produce lifelike human voices that mimic your brand’s, empowering scammers to solicit sensitive account details from your clientele. While some attacks, such as text-to-speech scams, can be thwarted by exploiting technical tells (for example, the pause a machine needs before it can respond), other techniques, such as highly convincing voice deepfakes, pose a greater challenge.

In a notable instance on “60 Minutes,” security consultant Rachel Tobac employed software to flawlessly replicate the voice of Sharyn Alfonsi, one of the show’s correspondents, successfully deceiving a “60 Minutes” staff member into disclosing Ms. Alfonsi’s passport number.

In the realm of IP/brand protection, employing AI voice cloning to mimic the voices of your customer service teams could amount to intellectual property infringement, since it makes unauthorized use of the brand’s unique attributes and identity.


Social Media Scams

Infringers on social media aim to deceive consumers, distribute counterfeit goods, and compromise brand authenticity. Social media infringements begin with bots creating fake profiles using predetermined patterns and data inputs. Subsequently, infringers generate numerous AI-assisted posts that mimic the tones or voices of legitimate brand owners, deceiving customers with misinformation or promoting counterfeit goods.  

For instance, infringers may leverage social media scheduling tools such as Buffer’s AI assistant to craft posts that potentially mimic your brand’s identity.

Source: Buffer

AI Generated Social Media Ads

Bad actors leverage social media platforms to sell counterfeit goods using AI-generated advertisements resembling authentic brands. They exploit platforms such as Facebook, WeChat, and TikTok to feature products in regular posts and deploy targeted ads to reach a broader audience, tricking unsuspecting users. To combat AI-driven infringements on social media, it’s important to stay aware of the latest threats and take the necessary precautions.

Source: adcreative.ai

Marketplace Infringements

In marketplaces, the infringement of Intellectual Property remains a pressing issue, particularly concerning digital assets like images, games, software, videos, and audio files.  

The advent of AI-generated content exacerbates this problem. Software like Midjourney or Leonardo.ai can produce unauthorized images based on original files within minutes, adding to the complexity of the issue.

Digital Ads Scams

AI-driven fraud in digital ads is a growing concern for advertisers and publishers alike. Here are some examples of AI-driven infringements in digital ads:

Search Engine Optimization (SEO) Poisoning

Regardless of whether the approach involves organic or paid search strategies, bad actors may use AI SEO tools such as Jasper.ai or SEO.ai to create optimized blog posts or text-based ads, enhance searchability, or establish a network of fraudulent backlinks to boost the visibility of a fake website peddling counterfeit merchandise.

Celebrity Impersonation

AI can fabricate endorsements using synthetic images, videos, or voices resembling celebrities or influencers, deceiving users into thinking they endorse a product or service. This tactic leverages celebrities’ reputations to prompt users to interact with fake ads, surveys, or prize offers, enabling bad actors to sell counterfeits or steal credentials.

Take, for example, the counterfeit Le Creuset promotional campaign involving Taylor Swift. Using AI, scammers created a synthetic version of her voice, along with footage of her and clips showcasing Le Creuset Dutch ovens.

Source: Screenshot of Meta’s Ads & CBS News

Fake Shopping Scams

In the digital domain, nefarious entities flood the space with counterfeit digital advertisements, luring users with highly coveted e-merchandise, including virtual in-game items generated by AI software, at incredibly low costs.

By employing generative AI software like 3dfy.ai, infringers can produce lifelike 3D models of game characters resembling genuine brands. After collecting payment, however, they may vanish with the funds, leaving customers without the promised goods.

AI-Clones in Livestreams

AI clones for livestream shopping or services use methods like deepfakes or advanced natural language processing (NLP) to imitate human behavior for continuous broadcasting or on-demand services, such as AI psychotherapy. During a 15-hour snack sale hosted by Taiwanese star Calvin Chen (Yi Ru Chen/辰亦儒), viewers were astonished to discover that it wasn’t him but an AI clone, created using cutting-edge deepfake technology.

Brand protection teams overseeing AI clone-hosted livestreams or therapy sessions face risks including managing counterfeit goods, handling fraudulent orders, and potential harm to clients if AI clones fail to engage with live audiences or provide proper empathetic care.

How Can Hubstream Help You?

Hubstream offers AI-powered case management software designed specifically for brand protection teams. At Hubstream, we work hard to help brand protection teams derive meaningful insights from infringement activity and automate tasks wherever feasible.

Moreover, Hubstream facilitates seamless collaboration between humans and AI, enabling brand protection teams to combat AI-generated infringements more effectively. Our platform strives to enhance cooperation and streamline efforts towards safeguarding intellectual property rights.

Interested in learning more?