Pre-Signed URLs: Uploading Files Without Touching Your Servers
User uploads a 10MB image. The request hits your API server. Your server reads the entire file into memory, then forwards it to S3. Meanwhile, that server thread is blocked, your memory spikes, and three other requests are waiting.
I’ve seen a single bulk upload take down an API server. Not because of any bug, just because it ran out of memory buffering files it didn’t need to touch.
The Pre-Signed URL Pattern
Instead of proxying uploads through your server, let the client upload directly to object storage. Your server just generates a temporary, authenticated URL.
@GetMapping("/upload-url")
public UploadResponse getUploadUrl(@RequestParam String filename) {
    String key = "uploads/" + UUID.randomUUID() + "/" + filename;

    URL presignedUrl = s3Presigner.presignPutObject(
        PutObjectPresignRequest.builder()
            .signatureDuration(Duration.ofMinutes(15))  // URL expires after 15 minutes
            .putObjectRequest(b -> b.bucket("my-bucket").key(key))
            .build()
    ).url();

    return new UploadResponse(presignedUrl.toString(), key);
}
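One caveat with the snippet above: the filename comes straight from the client, so it deserves sanitizing before it lands in the object key. A minimal sketch of what that could look like (`sanitizeFilename` is a hypothetical helper, not an SDK method):

```java
public class FilenameSanitizer {
    // Strip path separators and exotic characters from a client-supplied
    // filename before embedding it in an S3 key.
    static String sanitizeFilename(String filename) {
        // Keep only the last path segment, defeating "../../" style traversal
        String name = filename.replace('\\', '/');
        name = name.substring(name.lastIndexOf('/') + 1);
        // Allow a conservative character set; replace everything else
        name = name.replaceAll("[^A-Za-z0-9._-]", "_");
        return name.isEmpty() ? "unnamed" : name;
    }
}
```

With that in place, `"uploads/" + UUID.randomUUID() + "/" + sanitizeFilename(filename)` can't be steered outside the uploads/ prefix by a crafted filename.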
Client gets the URL, uploads directly to S3. Your server never sees the file bytes. A 10MB upload that used to consume server memory and a thread for 30 seconds now costs you one lightweight API call.
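On the client side, the upload is just an HTTP PUT to the presigned URL; no AWS SDK or credentials required. A sketch using Java's built-in HttpClient (the URL string and the `DirectUpload` class are illustrative; in practice the bytes would come from a file via `BodyPublishers.ofFile`):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DirectUpload {
    // Build a PUT request against the presigned URL. If a Content-Type was
    // baked into the signature when presigning, it must match here.
    static HttpRequest buildPutRequest(String presignedUrl, byte[] bytes) {
        return HttpRequest.newBuilder(URI.create(presignedUrl))
            .header("Content-Type", "application/octet-stream")
            .PUT(HttpRequest.BodyPublishers.ofByteArray(bytes))
            .build();
    }

    // Send the upload; S3 responds 200 on success.
    static int upload(String presignedUrl, byte[] bytes) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<Void> resp = client.send(
            buildPutRequest(presignedUrl, bytes),
            HttpResponse.BodyHandlers.discarding());
        return resp.statusCode();
    }
}
```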
Upload-Then-Confirm
The client uploads first, then tells your server “I uploaded file X.” Your server validates the file exists in S3, saves the metadata to the database, and now the file is part of your system.
@PostMapping("/confirm-upload")
public void confirmUpload(@RequestBody ConfirmRequest req) {
    // Verify the file actually exists in S3 (throws NoSuchKeyException if not)
    s3Client.headObject(b -> b.bucket("my-bucket").key(req.getKey()));

    // Save metadata
    fileRepo.save(new FileRecord(req.getKey(), req.getUserId(), Instant.now()));
}
This two-step flow means your server only handles small JSON payloads. The heavy lifting (transferring bytes) happens between the client and S3 directly.
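One guard worth adding to the confirm endpoint: check that the submitted key actually lives under the prefix your server hands out, so a client can't "confirm" (and attach metadata to) some arbitrary object. A hypothetical check, matching the `uploads/<uuid>/<name>` shape generated above:

```java
public class UploadKeyCheck {
    // Keys handed out by /upload-url always look like uploads/<uuid>/<name>.
    // Reject anything else before touching S3 or the database.
    static boolean isValidUploadKey(String key) {
        if (key == null || !key.startsWith("uploads/")) return false;
        String[] parts = key.split("/");
        if (parts.length != 3 || parts[2].isEmpty()) return false;
        try {
            java.util.UUID.fromString(parts[1]);  // middle segment must be a UUID
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}
```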
Security Considerations
Pre-signed URLs are scoped: specific bucket, specific key, specific HTTP method, short TTL. A URL generated for uploading uploads/abc/photo.jpg can’t be used to overwrite config/production.yml. The 15-minute expiry means leaked URLs are useless quickly.
You still need rate limiting on the URL generation endpoint. Without it, someone can request thousands of upload URLs and flood your storage.
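Rate limiting here doesn't need anything fancy; a per-user token bucket in front of the URL-generation endpoint is enough. A minimal in-memory sketch (illustrative only; a production setup would more likely use a gateway-level limiter or a shared store like Redis):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UploadUrlRateLimiter {
    private final int capacity;           // max URLs in a burst
    private final double refillPerSecond; // steady-state allowed rate
    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    public UploadUrlRateLimiter(int capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
    }

    /** Returns true if this user may request another upload URL right now. */
    public boolean tryAcquire(String userId, long nowNanos) {
        Bucket b = buckets.computeIfAbsent(userId, id -> new Bucket(capacity, nowNanos));
        synchronized (b) {
            // Refill tokens for the time elapsed since the last call, capped at capacity
            double elapsedSec = (nowNanos - b.lastRefillNanos) / 1e9;
            b.tokens = Math.min(capacity, b.tokens + elapsedSec * refillPerSecond);
            b.lastRefillNanos = nowNanos;
            if (b.tokens >= 1.0) {
                b.tokens -= 1.0;
                return true;
            }
            return false;
        }
    }

    private static final class Bucket {
        double tokens;
        long lastRefillNanos;
        Bucket(double tokens, long nowNanos) {
            this.tokens = tokens;
            this.lastRefillNanos = nowNanos;
        }
    }
}
```

Each user gets a small burst (the capacity) plus a steady trickle, which is plenty for legitimate uploads and useless for someone trying to mint URLs in bulk.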
At Oracle, we used a similar pattern for log file collection from network functions. Originally, each service uploaded logs through a central collector. The collector was constantly memory-pressured. Moving to pre-signed URLs for direct S3 uploads eliminated the bottleneck entirely. The collector just tracked metadata.
What I’m Learning
Pre-signed URLs are one of those patterns where the “before” architecture seems reasonable until you see the alternative. Why proxy bytes through your server when you don’t need to? The less your API servers handle, the better they scale horizontally.
The pattern works for downloads too. Generate a read URL, hand it to the client. Your CDN and multi-level cache handle the rest.
How do you handle file uploads? Still proxying through your API layer?