The Problem
In 2026, a simple contact form remains one of the most exploited attack surfaces on the web. Bots are more sophisticated, brute-force attacks more frequent, and a CSRF token alone is no longer sufficient. On this site, I implemented a defense-in-depth strategy with four complementary layers.
Layer 1: CSRF Token with Rotation
The classic approach, but correctly implemented. A unique token per session, regenerated after each successful submission, with server-side validation via hash_equals() to prevent timing attacks:
<?php

final class CsrfGuard
{
    private const TOKEN_TTL = 1800; // token lifetime in seconds (30 min, adjust to taste)

    public function generateToken(): string
    {
        $token = bin2hex(random_bytes(32));
        $_SESSION['csrf_token'] = $token;
        $_SESSION['csrf_time'] = time();
        return $token;
    }

    public function validateToken(string $submitted): bool
    {
        $stored = $_SESSION['csrf_token'] ?? '';

        // hash_equals('', '') returns true, so the empty case must be rejected
        // explicitly or a request with no token would pass when none is stored.
        if ($stored === '' || $submitted === '') {
            return false;
        }

        // Reject expired tokens (csrf_time is written by generateToken()).
        if (time() - ($_SESSION['csrf_time'] ?? 0) > self::TOKEN_TTL) {
            return false;
        }

        if (!hash_equals($stored, $submitted)) {
            return false;
        }

        // Invalidate after use (one-time token)
        unset($_SESSION['csrf_token'], $_SESSION['csrf_time']);
        return true;
    }
}
Using hash_equals() is essential: a standard comparison with === can leak timing information exploitable by an attacker.
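For context, here is a minimal sketch of the wiring on both sides. The hidden field name _token is an assumption; it matches what the middleware at the end of this post reads:

<!-- In the template -->
<input type="hidden" name="_token"
       value="<?= htmlspecialchars($csrfGuard->generateToken(), ENT_QUOTES) ?>">

// In the controller
if (!$csrfGuard->validateToken($_POST['_token'] ?? '')) {
    http_response_code(403);
    exit('Invalid or expired form token.');
}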
Layer 2: Invisible Honeypot
A CSS-hidden field (not display:none, which some bots detect) traps robots that automatically fill every form field:
<!-- In the template -->
<div style="position:absolute;left:-9999px" aria-hidden="true">
    <input type="text" name="website" tabindex="-1" autocomplete="off">
</div>

// In the controller
if (!empty($_POST['website'])) {
    // It's a bot: return a fake success
    http_response_code(200);
    exit;
}
The fake success response is intentional: returning a 4xx error would inform the bot it was detected, allowing it to adapt its strategy.
Layer 3: IP-Based Rate Limiting
Without a database, I use the filesystem to limit submissions per IP. Simple, effective, and dependency-free:
final class RateLimiter
{
    public function __construct(
        private readonly string $storageDir,
        private readonly int $maxAttempts = 3,
        private readonly int $windowSeconds = 3600,
    ) {}

    public function isAllowed(string $ip): bool
    {
        return count($this->recentAttempts($ip)) < $this->maxAttempts;
    }

    // Record a submission; call on every POST, even rejected ones.
    public function recordAttempt(string $ip): void
    {
        $attempts = $this->recentAttempts($ip);
        $attempts[] = time();
        $payload = json_encode(['attempts' => array_values($attempts)]);
        file_put_contents($this->filePath($ip), $payload, LOCK_EX);
    }

    private function recentAttempts(string $ip): array
    {
        $file = $this->filePath($ip);
        if (!file_exists($file)) {
            return [];
        }
        $data = json_decode((string) file_get_contents($file), true) ?: [];
        return array_filter(
            $data['attempts'] ?? [],
            fn (int $t) => $t > time() - $this->windowSeconds
        );
    }

    private function filePath(string $ip): string
    {
        // md5 here only derives a safe, fixed-length filename, not a security boundary
        return $this->storageDir . '/' . md5($ip) . '.json';
    }
}
In production, a cron job that deletes expired files completes the setup. For high-traffic sites, Redis or Memcached would be preferable.
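For reference, the cleanup job mentioned above can be a few lines of PHP (a sketch; the storage path and the hourly schedule are placeholders):

<?php
// cleanup.php, run hourly via crontab: 0 * * * * php /path/to/cleanup.php
$storageDir = __DIR__ . '/var/rate-limits'; // placeholder: must match RateLimiter's storageDir
$maxAge = 3600;                             // must match RateLimiter's windowSeconds

foreach (glob($storageDir . '/*.json') ?: [] as $file) {
    if (filemtime($file) < time() - $maxAge) {
        unlink($file);
    }
}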
Layer 4: Time Check (Fast-Bot Detection)
A human takes at least 3 to 5 seconds to fill out a form. A bot does it in milliseconds. We record the timestamp when the form is rendered and verify the duration server-side:
// When generating the form
$_SESSION['form_rendered_at'] = time();

// On submission
$elapsed = time() - ($_SESSION['form_rendered_at'] ?? time());
if ($elapsed < 3) {
    // Submission too fast (a missing timestamp also lands here): probably a bot
    Logger::warning('Bot detected: form submitted in {s}s', ['s' => $elapsed]);
    return $this->fakeSuccess();
}
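The fakeSuccess() helper is not shown in this post's snippets; a plausible sketch, assuming the site's own render() helper and Response type, is to return exactly what a real submission would:

// Hypothetical helper: the response must be indistinguishable from a real success
private function fakeSuccess(): Response
{
    // Same template and same 200 status as the genuine success path
    return $this->render('contact/success');
}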
Orchestration: The Validation Middleware
These four layers are orchestrated in a single middleware that executes them sequentially. If one fails, the request is rejected (or a fake success is returned for bots):
final class FormSecurityMiddleware
{
    public function __construct(
        private readonly RateLimiter $rateLimiter,
        private readonly CsrfGuard $csrf,
    ) {}

    public function process(Request $request): ValidationResult
    {
        // 1. Rate limit
        if (!$this->rateLimiter->isAllowed($request->ip())) {
            return ValidationResult::blocked('rate_limit');
        }
        // Count this attempt toward the window
        $this->rateLimiter->recordAttempt($request->ip());

        // 2. Honeypot
        if (!empty($request->post('website'))) {
            return ValidationResult::silent();
        }

        // 3. Time check
        if ($this->isSubmittedTooFast()) {
            return ValidationResult::silent();
        }

        // 4. CSRF
        if (!$this->csrf->validateToken($request->post('_token', ''))) {
            return ValidationResult::blocked('csrf');
        }

        return ValidationResult::passed();
    }

    private function isSubmittedTooFast(): bool
    {
        // Same check as Layer 4: a missing timestamp counts as "too fast"
        $elapsed = time() - ($_SESSION['form_rendered_at'] ?? time());
        return $elapsed < 3;
    }
}
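In the contact controller, consuming the result might look like this (a sketch: the isSilent()/isBlocked()/reason() accessors on ValidationResult and the mailer are assumptions):

$result = $this->security->process($request);

if ($result->isSilent()) {
    return $this->fakeSuccess(); // bot: pretend everything went fine
}
if ($result->isBlocked()) {
    // Legitimate rejection: surface the reason ('rate_limit' or 'csrf')
    return $this->render('contact/form', ['error' => $result->reason()]);
}

// All four layers passed: actually send the message
$this->mailer->send($request->post('message'));
return $this->render('contact/success');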
Conclusion
None of these layers is individually foolproof. A CSRF token alone doesn't block bots. A honeypot alone doesn't withstand targeted attackers. But combined, they form a defense-in-depth that makes exploitation prohibitively expensive. This is a fundamental principle of security: raise the cost of an attack until it's no longer profitable.
For a portfolio site without a database, this approach offers an excellent security-to-complexity ratio. For a critical application, add a WAF, progressive CAPTCHA validation, and centralized logging in a SIEM.