Storage (Cloudflare R2)

Complete guide for Cloudflare R2 object storage integration.

Overview

Eventify uses Cloudflare R2 for all object storage (images, videos, documents). R2 is S3-compatible with zero egress fees.

Key Information

  • Account ID: c0931e5b2b1b8ae8ed6e39fb2fc0a2f0
  • Endpoint: https://c0931e5b2b1b8ae8ed6e39fb2fc0a2f0.r2.cloudflarestorage.com
  • Region: auto (required for R2)
  • SDK: AWS SDK v3 (@aws-sdk/client-s3)

Custom Domains

| Environment | R2 Bucket                   | Custom Domain                          | Purpose   |
|-------------|-----------------------------|----------------------------------------|-----------|
| Development | eventify-storage-dev        | https://static.dev.eventify.today      | Local dev |
| Staging     | eventify-storage-staging    | https://static.staging.eventify.today  | Staging   |
| Production  | eventify-storage-production | https://static.eventify.today          | Live site |

Bucket Structure

eventify-storage-{env}/
├── user-profile/           # User profile images
│   └── temp/              # Temporary uploads (auto-cleanup)
├── premises/
│   └── static/
│       ├── types/         # Premise type preview images
│       └── map-bg.jpg     # Map background
├── marketing/
│   ├── advantages/        # B2B advantage cards
│   └── background-b2b.jpg # Hero background
├── company_logos/         # Organization logos
├── about/                 # About page documents
├── faq/                   # FAQ images by language
└── [venue-uploads]/       # User-uploaded venue/premise media

Upload Pattern: Two-Phase with Auto-Cleanup

Problem

Direct uploads cause storage bloat:

1. User selects file → Immediate upload to R2
2. User abandons form → File persists forever
3. Result: Every selection = permanent storage

Solution

Phase 1: Temporary Upload

When the user selects a file:

- Upload to: {group}/temp/{userId}/{uuid}.{ext}
- No database record is created
- The file is marked as temporary

Phase 2: Commit on Submit

When the form is successfully submitted:

- Copy from temp → permanent location
- Delete the temp file
- Save the permanent URL to the database

Auto-Cleanup

A Cloudflare Worker cron job (daily at 2 AM UTC):

- Scans all */temp/ folders
- Deletes files older than 24 hours
- Prevents storage bloat

Implementation

Upload to temp:

// src/lib/actions/file-upload/actions.ts
const { urls } = await uploadImageAction(
  { type: file.type, size: file.size },
  "user-profile",
  { tempUpload: true } // ← Uploads to temp folder
);

Commit on form submit:

// src/lib/actions/auth/update-user-profile.ts
if (image && image.includes("/temp/")) {
  const tempPath = new URL(image).pathname.slice(1);
  const filename = tempPath.split("/").pop();
  const permanentKey = `user-profile/${filename}`;
  
  // Copy to permanent location
  await s3Client.send(new CopyObjectCommand({
    Bucket: bucketName,
    CopySource: `${bucketName}/${tempPath}`,
    Key: permanentKey,
  }));
  
  // Delete temp file
  await s3Client.send(new DeleteObjectCommand({
    Bucket: bucketName,
    Key: tempPath,
  }));
}
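The path manipulation above can be extracted into a pure helper that is unit-testable without touching R2. This is a sketch, not code from the repository; `resolvePermanentKey` is a hypothetical name, and it assumes the permanent key is the group folder plus the filename, matching the commit logic shown above:

```typescript
// Hypothetical helper (not in the codebase): derive the temp path and the
// permanent key from a temp-upload URL such as
// https://static.dev.eventify.today/user-profile/temp/<userId>/<uuid>.webp
export function resolvePermanentKey(
  imageUrl: string,
): { tempPath: string; permanentKey: string } | null {
  const tempPath = new URL(imageUrl).pathname.slice(1); // strip leading "/"
  if (!tempPath.includes("/temp/")) return null; // not a temp upload

  const group = tempPath.split("/")[0]; // e.g. "user-profile"
  const filename = tempPath.split("/").pop()!; // e.g. "<uuid>.webp"
  return { tempPath, permanentKey: `${group}/${filename}` };
}
```

Keeping this logic pure means the copy/delete sequence only deals with already-validated keys.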

Cleanup worker:

// workers/cleanup-temp-files.ts
export default {
  async scheduled(event: ScheduledEvent, env: Env) {
    const bucket = env.R2;
    const now = Date.now();
    const MAX_AGE_MS = 24 * 60 * 60 * 1000; // 24 hours

    // Temp folders live under {group}/temp/ (e.g. user-profile/temp/),
    // so a top-level "temp/" prefix would match nothing. List everything
    // (paginated) and filter by key instead.
    let cursor: string | undefined;
    do {
      const listed = await bucket.list({ cursor });

      for (const object of listed.objects) {
        if (!object.key.includes("/temp/")) continue;

        const age = now - object.uploaded.getTime();
        if (age > MAX_AGE_MS) {
          await bucket.delete(object.key);
          console.log(`Deleted: ${object.key}`);
        }
      }

      cursor = listed.truncated ? listed.cursor : undefined;
    } while (cursor);
  }
};
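For the daily 2 AM UTC schedule, the worker's own wrangler config needs a cron trigger and an R2 binding named `R2` to match `env.R2` above. A sketch (the file layout and production bucket name are assumptions, adjust to your setup):

```jsonc
// workers/wrangler.jsonc (sketch)
{
  "name": "cleanup-temp-files",
  "main": "cleanup-temp-files.ts",
  "triggers": {
    "crons": ["0 2 * * *"] // daily at 2 AM UTC
  },
  "r2_buckets": [
    { "binding": "R2", "bucket_name": "eventify-storage-production" }
  ]
}
```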

File Upload Flow

1. Client-Side Upload

Components request presigned URL, then upload directly to R2:

// src/components/common/form/ProfileImagePicker/ProfileImagePicker.tsx
const handleFileSelect = async (file: File) => {
  // Request presigned URL from server
  const { urls } = await uploadImageAction(
    { type: file.type, size: file.size },
    "user-profile",
    { tempUpload: true }
  );

  // Upload directly to R2
  const publicUrl = await safeUploadToBucket({
    urls,
    files: [file],
  });

  setResourceSrc(publicUrl); // Display preview
};

2. Server Action

Generates presigned URL for direct client upload:

// src/lib/actions/file-upload/actions.ts
export async function uploadImageAction(
  data: ImageActionData,
  groupKey: GroupKey,
  options: { tempUpload?: boolean } = {},
) {
  const session = await getSession();
  const userId = session?.user?.id;

  // Build path (temp or permanent)
  const basePath = options.tempUpload && userId
    ? `${groupKey}/temp/${userId}`
    : groupKey;

  const key = `${basePath}/${uuid()}.${extension}`; // extension derived from data.type

  // Generate presigned PUT URL (15min expiry)
  const command = new PutObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME,
    Key: key,
    ContentType: data.type,
  });

  const presignedUrl = await getSignedUrl(s3Client, command, {
    expiresIn: 900, // 15 minutes
  });

  return { urls: [presignedUrl] };
}
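The action above signs whatever metadata the client sends. A server-side guard on type and size is worth running before signing; this is a hypothetical sketch (the allow-list and the 10 MB cap are assumptions, not values from the codebase):

```typescript
// Hypothetical validation (assumed limits, not from the codebase):
// reject unsupported MIME types and oversized files before signing.
const ALLOWED_TYPES = new Set(["image/jpeg", "image/png", "image/webp"]);
const MAX_SIZE_BYTES = 10 * 1024 * 1024; // assumed 10 MB cap

export function validateImageMetadata(data: {
  type: string;
  size: number;
}): { ok: true } | { ok: false; reason: string } {
  if (!ALLOWED_TYPES.has(data.type)) {
    return { ok: false, reason: `Unsupported type: ${data.type}` };
  }
  if (data.size <= 0 || data.size > MAX_SIZE_BYTES) {
    return { ok: false, reason: "File is empty or exceeds the size limit" };
  }
  return { ok: true };
}
```

`uploadImageAction` could call this first and return the `reason` to the client instead of a presigned URL.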

3. Client Upload Utility

Uploads file to presigned URL and returns public URL:

// src/lib/shared/utils/aws/index.ts
export async function safeUploadToBucket({
  urls,
  files,
}: {
  urls: string[];
  files: File[];
}) {
  const responses = await Promise.all(
    urls.map(async (presignedUrl, i) => {
      const file = files[i];

      // Upload to R2
      const response = await fetch(presignedUrl, {
        method: "PUT",
        body: file,
        headers: {
          "Content-Type": file.type, // REQUIRED for CORS
        },
      });

      if (!response.ok) {
        throw new Error(`Upload failed: ${response.status}`);
      }

      // Convert presigned URL to public URL
      const url = new URL(presignedUrl);
      const path = url.pathname.slice(1); // Remove leading /
      
      return `${process.env.R2_PUBLIC_URL}/${path}`;
    })
  );

  return responses.length === 1 ? responses[0] : responses;
}
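Presigned uploads can fail transiently (network blips, 5xx responses from R2). A generic retry wrapper, shown here as a sketch (`withRetry` is a hypothetical helper, not in the codebase), could guard each `fetch` in `safeUploadToBucket`:

```typescript
// Hypothetical retry helper with exponential backoff (not in the codebase).
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Back off: 250 ms, 500 ms, 1000 ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Usage would wrap the existing call site, e.g. `await withRetry(() => fetch(presignedUrl, { method: "PUT", body: file, headers: { "Content-Type": file.type } }))`. Note that retrying a PUT of a `File` body is safe because the upload is idempotent for a given key.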

Centralized URL Generation

All R2 public URLs use helper functions from src/lib/shared/config/r2-storage.ts:

// Generic helper
export function getR2PublicUrl(path: string): string {
  return `${process.env.R2_PUBLIC_URL}/${path}`;
}

// Domain-specific helpers
export function getPremiseTypePreviewUrl(type: string, extension = "webp") {
  return getR2PublicUrl(`premises/static/types/${type}.${extension}`);
}

export function getMarketingAssetUrl(filename: string) {
  return getR2PublicUrl(`marketing/${filename}`);
}

export function getCompanyLogoUrl(filename: string) {
  return getR2PublicUrl(`company_logos/${filename}`);
}

export function getAboutDocumentUrl(filename: string) {
  return getR2PublicUrl(`about/${filename}`);
}

export function getFaqImageUrl(path: string) {
  return getR2PublicUrl(`faq/${path}`);
}

Usage:

import { getPremiseTypePreviewUrl } from "@/lib/shared/config/r2-storage";

const imageUrl = getPremiseTypePreviewUrl("sports-hall", "webp");
// Returns: https://static.eventify.today/premises/static/types/sports-hall.webp

Configuration

S3 Client Setup

// src/s3client.ts
import { S3Client } from "@aws-sdk/client-s3";

export const s3Client = new S3Client({
  region: process.env.R2_REGION || "auto",
  endpoint: process.env.R2_ENDPOINT_URL,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

Environment Variables

Wrangler (wrangler.jsonc):

{
  "r2_buckets": [
    {
      "binding": "STORAGE_BUCKET",
      "bucket_name": "eventify-storage-dev"
    }
  ],
  "vars": {
    "R2_PUBLIC_URL": "https://static.dev.eventify.today",
    "R2_ENDPOINT_URL": "https://c0931e5b2b1b8ae8ed6e39fb2fc0a2f0.r2.cloudflarestorage.com",
    "R2_REGION": "auto",
    "R2_BUCKET_NAME": "eventify-storage-dev"
  }
}

Local Development (.dev.vars):

R2_ACCESS_KEY_ID="your-access-key"
R2_SECRET_ACCESS_KEY="your-secret-key"
R2_PUBLIC_URL="https://static.dev.eventify.today"
R2_ENDPOINT_URL="https://c0931e5b2b1b8ae8ed6e39fb2fc0a2f0.r2.cloudflarestorage.com"
R2_REGION="auto"
R2_BUCKET_NAME="eventify-storage-dev"

CORS Configuration

R2 buckets require CORS for client-side uploads.

Critical CORS Requirements

⚠️ Common Issue: Presigned URL uploads fail if the Content-Type header sent with the PUT doesn't match the one used to sign the URL.

Fix (already applied):

fetch(presignedUrl, {
  method: "PUT",
  body: file,
  headers: {
    "Content-Type": file.type, // ← REQUIRED
  },
})

CORS Policy (Cloudflare Dashboard)

  1. Go to R2 bucket → Settings → CORS Policy
  2. Add rule:

Development (permissive):

Allowed Origins: *
Allowed Methods: GET, PUT, POST, DELETE, HEAD
Allowed Headers: *
Expose Headers: ETag
Max Age: 3600

Production (restrictive):

Allowed Origins:
  - https://eventify.today
  - https://staging.eventify.today
  - https://dev.eventify.today
Allowed Methods: GET, PUT, POST, DELETE, HEAD
Allowed Headers: *
Expose Headers: ETag
Max Age: 3600

Rules:

- No trailing slashes: https://eventify.today ✅ not https://eventify.today/ ❌
- Exact matches only (protocol + domain + port)
- Use the wildcard * for development only
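If the policy is managed as JSON rather than through the dashboard form (R2 CORS rules follow the S3 shape), the production rule above corresponds roughly to this sketch:

```json
[
  {
    "AllowedOrigins": [
      "https://eventify.today",
      "https://staging.eventify.today",
      "https://dev.eventify.today"
    ],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]
```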

Build Configuration Fix

Problem

CI builds failed with:

Failed to start remote proxy session
You must be logged in to use wrangler dev

Cause: initOpenNextCloudflareForDev() tried to connect to Wrangler during builds.

Solution

Conditional initialization in next.config.mjs:

const skipCloudflareInit = 
  process.env.SKIP_CLOUDFLARE_INIT === 'true' || 
  process.env.npm_lifecycle_event?.includes('build');

if (!skipCloudflareInit) {
  import("@opennextjs/cloudflare").then(({ initOpenNextCloudflareForDev }) => {
    initOpenNextCloudflareForDev();
  }).catch((err) => {
    console.warn('Failed to initialize Cloudflare bindings:', err);
  });
}

Build scripts set SKIP_CLOUDFLARE_INIT=true:

{
  "build": "SKIP_CLOUDFLARE_INIT=true npx panda codegen && npx prisma generate && next build",
  "build:opennext": "SKIP_CLOUDFLARE_INIT=true opennextjs-cloudflare build"
}

Troubleshooting

CORS Errors

Symptoms:

- Access-Control-Allow-Origin header is missing
- 413 Payload Too Large

Solutions:

1. Verify the CORS policy allows your origin
2. Check that the Content-Type header matches the one used to sign the presigned URL
3. Ensure no trailing slashes in origin URLs

Upload Fails with 403

Cause: The presigned URL expired (15-minute limit) or the credentials are incorrect.

Solution:

- Regenerate the presigned URL
- Verify R2 credentials in environment variables

Files Not Appearing

Cause: Wrong bucket or custom domain not configured.

Solution:

1. Verify the bucket name in environment variables
2. Check the custom domain CNAME in Cloudflare DNS
3. Ensure the R2 bucket has public access (via the custom domain)

Temp Files Not Cleaned

Cause: Cleanup worker not running.

Solution:

1. Check the worker deployment: wrangler deploy workers/cleanup-temp-files.ts
2. Verify the cron trigger in wrangler.jsonc
3. Check worker logs: wrangler tail cleanup-temp-files