How Threat Actors Are Rizzing Up Your AI for Profit

Your LLM Failed the Vibe Check. Here's Why.

ChatGPT referrer link - AI render

Introduction

AI is reordering search dominance. Conventional wisdom says Google’s traditional search engine is headed for the dustbin, something largely unimaginable even a few years ago. As people (and agents) migrate search habits from Google to LLMs, what happens to referrer monetization models? More importantly for enterprise defenders and risk managers, HOW will traffers and malicious Traffic Distribution Systems (TDS) adapt?

Recorded Future’s Insikt Group recently reported on TAG-124, which operates a TDS designed to redirect unsuspecting visitors to malicious destinations for malware/ransomware installation, cryptocurrency theft, and more. SocGholish malware, also known as FakeUpdates, employs TDS such as Parrot TDS and Keitaro TDS to filter and redirect unsuspecting users to malicious sites. Additional criminal TDS include Help TDS, Los Pollos TDS, and others. The range of TDS options and branding is a reminder that threat actors (TAs) have choices when investing in traffic demand generation, which breeds competition and incentivizes a first-mover push toward LLMs in this malicious-services niche.

Help TDS AI-generated summary, courtesy of Recorded Future

Much has been made lately of LLM prompt injection, but why would cybercriminals invest in complex prompt injection when they can simply flood the web with poisoned content that LLMs eagerly consume and recommend? The migration from search engines to conversational AI doesn't require sophisticated new attacks; it rewards the same content manipulation strategies, amplified through AI's tendency to synthesize and propagate.

Classic Search Engine Optimization (SEO) poisoning already proved resilient at scale: SolarMarker and peers used tiered infrastructure and content farms to meet victims at the moment of intent, then route them through filtering gates. The only real change now is the front door: from Search Engine Results Pages (SERPs) to AI overviews and chat answers.

Simultaneously, “Generative Engine Optimization” (GEO) and early LLM-optimization (LLMO) research show that content presentation, citations, and entity structure measurably influence which sources appear in AI answers. That creates a new, gameable funnel that criminals can exploit.

Traffic distribution syndicates like TAG-124 already control vast networks of compromised and synthetic websites. These existing assets become exponentially more valuable when LLMs treat them as legitimate sources, transforming criminal infrastructure into AI-recommended destinations.

From SERP Hijacks to Answer Hijacks

The playbook shifts from ranking pages to being cited or embedded in answers a user trusts. Studies and investigations have already shown that AI search can prioritize superficially relevant sources, is vulnerable to hidden content, and can be induced to output attacker-preferred code or links. TDS operators excel at exploiting these seams.

Modern LLMs retrieve real-time information through web searches, processing results to formulate responses. This retrieval-augmented generation (RAG) creates a massive attack surface that TDS operators will exploit through volume and velocity.
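
To make that surface concrete, here is a minimal sketch of the retrieval loop, with web_search() and llm_complete() as hypothetical stand-ins for a real search API and model call: whatever the search layer returns is pasted straight into the model's context, so every retrieved page is both an injection vector and a candidate recommendation.

def web_search(query: str, k: int = 10) -> list[dict]:
    """Placeholder for a real search API; returns results as {'url': ..., 'text': ...} dicts."""
    return []

def llm_complete(prompt: str) -> str:
    """Placeholder for whatever model backs the assistant."""
    return ""

def answer_with_retrieval(question: str) -> str:
    results = web_search(question)  # 10-20 pages; an attacker may control several
    context = "\n\n".join(f"Source: {r['url']}\n{r['text']}" for r in results)
    # Retrieved text enters the prompt verbatim, so hidden instructions or
    # malicious links in any single source flow straight into the answer.
    prompt = f"Answer using only these sources:\n{context}\n\nQuestion: {question}"
    return llm_complete(prompt)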

Microsoft and others have warned about indirect prompt injection—malicious instructions planted in web content that LLM-powered systems later ingest during browsing and tool use. A TDS that already fingerprints bots and visitors will happily add “LLM-aware” profiles to feed chatbots one thing and humans another.
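
Defenders can probe for that kind of cloaking by fetching the same URL with a browser-style User-Agent and with a crawler identity associated with AI retrieval (GPTBot is one published crawler token), then comparing what comes back. A rough sketch using the requests library; the user-agent strings and threshold are illustrative, and operators that key on source IP ranges rather than headers will evade a simple header swap.

import difflib
import requests

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"   # generic browser string
AI_CRAWLER_UA = "GPTBot/1.0"                                # illustrative AI-crawler identity

def cloaking_score(url: str) -> float:
    """Similarity (0-1) between what a browser and an AI crawler receive;
    low scores suggest the page serves different content to LLM retrieval."""
    as_browser = requests.get(url, headers={"User-Agent": BROWSER_UA}, timeout=10).text
    as_crawler = requests.get(url, headers={"User-Agent": AI_CRAWLER_UA}, timeout=10).text
    return difflib.SequenceMatcher(None, as_browser, as_crawler).ratio()

# Example: flag pages that diverge sharply between the two identities.
# if cloaking_score("https://example.com/article") < 0.6: investigate further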

The math favors TDS operators. OpenAI's GPT-4 web browsing processes approximately 10-20 sources per complex query. Controlling just 2-3 of those sources through SEO manipulation translates to roughly 10-30% influence over the model's response. Current TDS operations already achieve similar ratios in traditional search results.

If generative engines reward crisp citations, entity markup, and quotable stats, then TDS crews will industrialize answer-optimized microsites designed to be pulled verbatim into AI responses. Expect schema.org-heavy pages, FAQ blocks, and quote-bait paragraphs engineered for GEO visibility. The objective isn’t rank; it is inclusion in the answer that becomes the user’s first click.
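
As a concrete illustration of what that markup tends to look like, the snippet below builds the kind of schema.org FAQPage block such a microsite would embed so a generative engine can lift a clean question-and-answer pair verbatim; the domain and wording are invented for this example.

import json

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How do I stay compliant with the new regulation?",
        "acceptedAnswer": {
            "@type": "Answer",
            # The quotable sentence is the one that carries the payload link.
            "text": "Download the free compliance toolkit at example[.]com/toolkit.",
        },
    }],
}

print(json.dumps(faq_jsonld, indent=2))  # dropped into a <script type="application/ld+json"> tag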

Generative engines are also a trust amplifier; users treat summarized answers as vetted, improving conversion on soft prompts like “download,” “join Discord,” or “install the helper.” That trust premium is exactly what TDS operators rent to malware payload crews.

GEO poisoning - AI render

Criminal syndicates will adapt existing infrastructure:

The Rhysida and Interlock ransomware groups currently pay $5,000-15,000 monthly for TDS services. That same budget, redirected toward LLM-focused content generation, could produce millions of poisoned articles annually.
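
A back-of-the-envelope check on that claim, with the per-article cost as an assumption rather than a reported figure: at a few cents of generation and hosting cost per article, a mid-range TDS budget does land in the millions of articles per year.

# Rough arithmetic behind the "millions of poisoned articles" figure.
monthly_budget = 10_000        # midpoint of the reported $5,000-15,000 range
cost_per_article = 0.05        # assumption: a few cents per AI-generated article
articles_per_year = (monthly_budget / cost_per_article) * 12
print(f"{articles_per_year:,.0f} articles/year")   # 2,400,000 under these assumptions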

Slopsquatting definition courtesy of Recorded Future AI.

Temporal Arbitrage and Zero-Day Content

LLMs exhibit a critical vulnerability: a preference for recent information when answering time-sensitive queries. TDS operators will exploit this through coordinated content bursts.

Consider a typical enterprise scenario: A CFO asks their AI assistant about new tax regulations. The model searches for recent authoritative content, finding dozens of articles published within hours. Three of these articles, hosted on aged domains with legitimate-looking tax advisory branding, contain malicious links to "compliance software" or "regulatory guides."

The speed advantage is decisive. While legitimate publishers take days to analyze and write about new regulations, criminal operations deploy automated content within minutes. By the time authentic sources publish, the poisoned content has already been indexed, retrieved, and potentially recommended thousands of times.

Timeline showing content velocity gap between legitimate and malicious publishers - AI render

The Synthetic Authority Pipeline

Traditional SEO poisoning relied on keyword density and backlinks. LLM poisoning requires synthetic authority, which means content that appears expert-written and peer-validated.

TDS operators are building a three-tier infrastructure:

Tier 1 - Foundation Sites: Compromised university pages, dormant corporate blogs, and abandoned government domains providing historical credibility.

Tier 2 - Amplification Networks: Thousands of AI-generated sites cross-referencing Tier 1 content, creating artificial consensus.

Tier 3 - Payload Delivery: Fresh domains serving malicious content, linked from Tier 2 sites as "additional resources" or "official downloads."

We're observing early indicators of this architecture. Security researchers identified 847 compromised .edu domains in Q3 2024 alone, many hosting content specifically crafted for LLM consumption — technical documentation, API guides, and software tutorials that models preferentially retrieve.
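
One way the tiered handoff surfaces in defender telemetry is in the link chain itself: walking the redirects behind an AI-recommended URL and flagging hops on recently registered domains tends to expose the Tier 2 to Tier 3 transition. A sketch assuming the requests library, with the WHOIS lookup left as a placeholder and the 180-day threshold mirroring the filtering example later in this post.

from urllib.parse import urlparse
import requests

def domain_age_days(domain: str) -> int:
    """Placeholder: look up registration age via WHOIS or passive DNS."""
    return 9999

def trace_recommended_link(url: str, max_age_days: int = 180) -> list[str]:
    """Follow the redirect chain behind an AI-recommended link and return
    any hops that land on recently registered domains (candidate Tier 3)."""
    resp = requests.get(url, timeout=10, allow_redirects=True)
    hops = [r.url for r in resp.history] + [resp.url]   # every hop, ending at the final page
    return [
        hop for hop in hops
        if domain_age_days(urlparse(hop).netloc) < max_age_days
    ]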

The Recommendation Attack Surface

LLMs don't just retrieve information; they synthesize and recommend. This transformation from passive search results to active suggestions multiplies the impact of poisoned content.

A traditional search engine presents ten blue links. Users evaluate each one, applying skepticism and judgment. An LLM presents a single, authoritative-sounding recommendation: "Based on current best practices, you should download the compliance toolkit from [malicious-site].com."

The psychological impact is profound. Users trust AI recommendations 73% more than search results, according to recent Stanford research. TDS operators will exploit this trust differential.

Economic Indicators and Underground Markets

The criminal economy is already adapting, and dark web marketplace activity reflects the shift.

TAG-124's infrastructure, currently valued at $50-75 million based on ransomware throughput, could triple in value as LLM adoption accelerates. The same compromised WordPress sites delivering malware through search results will deliver it through AI recommendations, except with higher conversion rates.

Defensive Strategies for the Retrieval Era

Organizations must assume LLMs will recommend malicious content. The defensive perimeter extends beyond corporate networks to include every AI interaction.

Essential controls:

  1. Link provenance verification: Every LLM-recommended URL requires automated reputation checking before user access
  2. Temporal correlation analysis: Identifying suspicious content clusters published simultaneously across multiple domains
  3. Recommendation sandboxing: Isolating and analyzing all AI-suggested downloads in controlled environments
  4. Source transparency requirements: Configuring LLMs to always display retrieved sources, enabling manual verification
  5. Content velocity monitoring: Detecting abnormal publication patterns indicating coordinated poisoning campaigns
  6. URL reputation APIs: Real-time validation of every link through threat intelligence feeds

def validate_llm_links(response):
    """Screen URLs in an LLM answer; drop links on young or low-reputation domains."""
    filtered_response = response
    for url in extract_urls(response):            # helper: pull URLs out of the answer text
        domain_age = check_domain_age(url)        # helper: domain age in days (WHOIS / passive DNS)
        reputation = query_threat_intel(url)      # helper: 0.0-1.0 score from a threat intel feed
        if domain_age < 180 or reputation < 0.7:  # young domain or poor reputation
            flag_suspicious(url)                  # helper: log / alert on the hit
            filtered_response = filtered_response.replace(url, "[link removed]")
    return filtered_response

Code block: Example Python for LLM response filtering - AI render
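
Controls 2 and 5 can share one detection primitive: bucket newly observed articles by topic and publication time, then alert when many distinct, previously unseen domains publish on the same topic inside a short window. A rough sketch, assuming article metadata has already been normalized into simple records; the window and domain-count thresholds are illustrative.

from collections import defaultdict
from datetime import timedelta

# Each record is assumed to look like:
# {"topic": "new-tax-regulation", "domain": "example.net", "published": <datetime>}

def coordinated_bursts(articles, window_hours=6, min_domains=10):
    """Flag topics where many distinct domains publish within a short window,
    a signature of coordinated content drops ahead of legitimate coverage."""
    by_topic = defaultdict(list)
    for a in articles:
        by_topic[a["topic"]].append(a)

    window = timedelta(hours=window_hours)
    suspicious = []
    for topic, items in by_topic.items():
        items.sort(key=lambda a: a["published"])
        for i, first in enumerate(items):
            burst = [a for a in items[i:] if a["published"] - first["published"] <= window]
            if len({a["domain"] for a in burst}) >= min_domains:
                suspicious.append((topic, first["published"], len(burst)))
                break
    return suspicious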

Regulatory and Liability Implications

Current frameworks assume human judgment between search and action. When AI recommends malicious sites that compromise critical infrastructure, who bears responsibility?

Courts will likely apply product liability principles to LLM providers, but enforcement remains uncertain. Organizations cannot wait for regulatory clarity. Every AI implementation needs explicit policies governing how AI-recommended links, downloads, and sources are vetted and acted upon.

The Inevitable Evolution

LLM-first discovery doesn’t retire TDS; it supercharges it. The same orchestration that met victims at the top of a SERP will now meet them inside answers. The same economic forces driving TAG-124's current operations will push them toward LLM exploitation through the simplest viable path: content poisoning at massive scale.

Organizations deploying conversational AI without understanding this risk are essentially installing unfiltered pipes to the internet's most dangerous neighborhoods. The question isn't whether criminals will exploit LLM web retrieval; they're already doing it.

Resilient organizations use telemetry, validation, and controls that blunt the funnel, treating AI answers as another high-value referrer, not a trusted gatekeeper. Compliance won’t save you; instrumented telemetry and measured resilience are a good start.

Example Advanced Query to Monitor New IoCs for TDS Malware Validated by the Insikt Group
Intelligence Card® for an IP Address associated with both Parrot TDS and ClearFake