The big picture
SmartKiller runs a quick, layered series of checks before your application code does anything. Good visitors pass through without noticing a thing. Bad ones — bots, scanners, rate abusers — get stopped quietly and shown a clear message with a countdown until they can try again.
Step 1 — Is this a search engine bot?
The very first check is whether the visitor is a legitimate crawler like Googlebot or Bingbot. These are the robots that index your site, so you want them: they're let through unconditionally, with no rate limits or further checks. SmartKiller recognises 30 crawlers: all of Google's bots, Bing, Yandex, DuckDuckBot, and social media previewers from Slack, Discord, Telegram, WhatsApp, and others.
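In essence this step is a User-Agent match against an allowlist. The sketch below is illustrative only — the signature strings and function name are assumptions, not SmartKiller's actual list of 30 crawlers:

```python
# Hypothetical subset of crawler signatures; the real list is larger.
KNOWN_CRAWLERS = (
    "googlebot", "bingbot", "yandexbot", "duckduckbot",
    "slackbot", "discordbot", "telegrambot", "whatsapp",
)

def is_search_engine_bot(user_agent: str) -> bool:
    """Return True when the User-Agent matches a known crawler."""
    ua = user_agent.lower()
    return any(sig in ua for sig in KNOWN_CRAWLERS)
```

Matching on the User-Agent alone is spoofable, so a stricter setup might additionally verify the crawler's IP, but the string check is the fast first pass.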
Step 2 — Is this a known attack tool?
After the good bots, SmartKiller checks for known malicious tools. Scanners like sqlmap (SQL injection), nikto (vulnerability scanning), and masscan (bulk scanning) all identify themselves in their request headers. SmartKiller spots these signatures and blocks them immediately — before they touch anything at all.
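Because these tools announce themselves in their headers, the check can be a simple signature match that short-circuits the rest of the pipeline. A minimal sketch, assuming a hypothetical signature list:

```python
# Illustrative scanner signatures; real deployments maintain a longer list.
SCANNER_SIGNATURES = ("sqlmap", "nikto", "masscan")

def is_attack_tool(user_agent: str) -> bool:
    """Return True when the User-Agent matches a known attack tool."""
    ua = user_agent.lower()
    return any(sig in ua for sig in SCANNER_SIGNATURES)
```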
Step 3 — What's the history of this IP?
SmartKiller keeps a record of every IP it has dealt with. A quick database lookup reveals the IP's current status:
| Status | What it means | What happens |
|---|---|---|
| Whitelisted | A trusted IP you've approved (e.g. your office) | Let through immediately, no checks at all |
| Tracked | Was blocked before, now being watched | Let through, but every action is logged |
| Blocked | Misbehaved recently — temporary restriction | Shown 429 block page; auto-released after 1 hour |
| Permanently limited | Repeated offender — 5+ violations | Always blocked; only removed manually by you |
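The table above is effectively a decision table keyed on the IP's stored status. A sketch of that dispatch, with hypothetical status and action names:

```python
from enum import Enum

class IpStatus(Enum):
    WHITELISTED = "whitelisted"
    TRACKED = "tracked"
    BLOCKED = "blocked"
    PERMANENT = "permanently_limited"
    UNKNOWN = "unknown"          # no history: fall through to rate checks

def decide(status: IpStatus) -> str:
    """Map a stored IP status to the action the table describes."""
    if status is IpStatus.WHITELISTED:
        return "allow"           # no checks at all
    if status is IpStatus.TRACKED:
        return "allow_and_log"   # let through, record every action
    if status is IpStatus.BLOCKED:
        return "show_429"        # temporary; auto-released after 1 hour
    if status is IpStatus.PERMANENT:
        return "show_429"        # stays until removed manually
    return "continue_checks"     # unknown IP proceeds to Step 4
```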
Step 4 — Is it sending too many requests?
For normal visitors with no history, SmartKiller checks three things independently:
- General rate limit — is this IP sending an unusually high number of requests in a short window?
- Download limit — has this IP downloaded more files than normal in the last hour?
- Refresh limit — is this IP reloading the same page over and over within seconds?
Keeping these three counters separate means normal visitors are never caught by a tight download limit, and someone refreshing a page too often doesn't get penalised if their total traffic is perfectly fine.
Step 5 — Does the URL look like an attack?
The URL is checked against a configurable list of suspicious patterns — SQL injection strings, path traversal attempts, common scanner probes. A real visitor would never type these. They're almost always automated.
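A configurable pattern list like this typically compiles to a set of case-insensitive regexes run against the request URL. The patterns below are illustrative examples, not the shipped list:

```python
import re

# Illustrative suspicious patterns; the real list is configurable.
SUSPICIOUS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"union\s+select",   # SQL injection probe
        r"\.\./",            # path traversal attempt
        r"/etc/passwd",      # classic scanner target
        r"<script",          # reflected XSS attempt
    )
]

def url_looks_malicious(url: str) -> bool:
    """Return True if the URL matches any suspicious pattern."""
    return any(p.search(url) for p in SUSPICIOUS_PATTERNS)
```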
What happens when someone gets blocked?
Blocked visitors see a clean page with a countdown until their block expires. The page is served with HTTP 429 (Too Many Requests) — the correct status code, which tells search engines this isn't real content to index. If a legitimate visitor thinks they were blocked by mistake, the page shows your contact info so they can reach out.
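A block response of this shape boils down to a 429 status, a standard `Retry-After` header for the countdown, and a small HTML body. A minimal sketch; the function name, body text, and contact address are placeholders:

```python
def blocked_response(seconds_left: int) -> tuple[int, dict, str]:
    """Build a sketch of the block page: status, headers, HTML body."""
    headers = {
        "Retry-After": str(seconds_left),   # standard retry hint (RFC 9110)
        "Content-Type": "text/html; charset=utf-8",
    }
    body = (
        "<h1>Too many requests</h1>"
        f"<p>You can try again in {seconds_left} seconds.</p>"
        "<p>Blocked by mistake? Contact admin@example.com</p>"
    )
    return 429, headers, body
```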