How to Stream Large File Uploads to AWS S3 in Laravel

Eduar Bastidas • July 5, 2025


Handling multi‑gigabyte uploads in a stateless app is painful: TCP throughput caps slow single‑request uploads, server disks fill up, and Lambda containers vanish between requests. Modern teams therefore push the heavy bytes straight from the browser to Amazon S3. S3M, a lean wrapper around S3's multipart and presigned‑URL APIs, removes the boilerplate. It works with any JavaScript front end; the examples in this post use Vue so you can see the flow end to end, but nothing about the approach ties you to a specific framework.

Why multipart + presigned URLs?

Amazon limits a single PUT to 5 GB. Multipart uploads slice the object, let slices fly in parallel, and re‑assemble the completed object inside S3. Presigned URLs add a time‑boxed signature, allowing the browser to upload directly to S3 while your API remains stateless and credential‑free. In practice the flow has four distinct calls: initiate, sign, upload parts, complete.
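
S3M hides all of this behind a single helper call, but it helps to see what it is wrapping. Roughly, the four calls look like this with the bare AWS SDK for PHP; the bucket, key, ETag, and expiry here are placeholders rather than anything the package dictates:

use Aws\S3\S3Client;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

// 1. initiate: S3 returns an UploadId that ties all the parts together
$uploadId = $s3->createMultipartUpload(['Bucket' => $bucket, 'Key' => $key])['UploadId'];

// 2. sign: one short-lived URL per part, so the browser never sees your credentials
$command = $s3->getCommand('UploadPart', [
    'Bucket' => $bucket,
    'Key' => $key,
    'UploadId' => $uploadId,
    'PartNumber' => 1,
]);
$presignedUrl = (string) $s3->createPresignedRequest($command, '+30 minutes')->getUri();

// 3. upload parts: the browser PUTs each chunk to its URL and records the returned ETag

// 4. complete: S3 stitches the parts back into a single object
$s3->completeMultipartUpload([
    'Bucket' => $bucket,
    'Key' => $key,
    'UploadId' => $uploadId,
    'MultipartUpload' => ['Parts' => [['PartNumber' => 1, 'ETag' => $etag]]],
]);

Multiply steps 2 and 3 by the number of parts, add retry and abort handling, and you have the boilerplate S3M is meant to absorb.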

Prerequisites

You need a Laravel app with an S3 disk configured in config/filesystems.php and valid AWS credentials in your .env. The Blade example below also assumes Vite, which ships with Laravel by default.

1 – Install the helper

composer require mreduar/s3m

Add the Blade directive before your compiled JS so the global s3m() helper is injected:

{{-- resources/views/layouts/app.blade.php --}}
<!doctype html>
<html>
    <head>
        @s3m {{-- pushes the small JS bridge into the page --}}
        @vite('resources/js/app.js')
    </head>
    <body class="antialiased">
        @yield('content')
    </body>
</html>

The directive publishes a 3‑kB script that negotiates presigned URLs when you call s3m(file, options) on the client.

2 – Publish and tweak config

php artisan vendor:publish --provider="MrEduar\S3M\S3MServiceProvider"

config/s3m.php exposes sensible defaults—10 MB chunks, four parallel PUTs, three automatic retries per part. When your audience has slow upstream links, dial the part_size down (the minimum is 5 MB except for the last part) to shorten retry times.
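
To make that concrete, here is a sketch of the knobs described above; the key names are my own shorthand rather than a verbatim copy of the published file, so check your generated config/s3m.php for the real ones:

// config/s3m.php (illustrative values; key names are assumptions)
return [
    // 10 MB parts; S3 rejects anything under 5 MB except the final part
    'part_size' => 10 * 1024 * 1024,

    // number of parts uploaded in parallel from the browser
    'concurrency' => 4,

    // automatic retries per part before the upload is reported as failed
    'retries' => 3,
];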

3 – Gate uploads with a policy

S3M calls Laravel's authorization layer before handing out any presigned URLs. Create a policy if you don't already have one:

php artisan make:policy UserPolicy --model=User

Then add an uploadFiles method carrying your business rule:

// app/Policies/UserPolicy.php
public function uploadFiles(User $user): bool
{
    return $user->plan()->allows('large_upload');
}

This guarantees that nobody, attacker or otherwise, receives a signed URL unless the authenticated user satisfies your business rules.
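
If you want to confirm the wiring before pushing real bytes through it, you can evaluate the same ability anywhere in your app. The check below is a plain Laravel gate call against the policy method above, not something S3M requires you to add:

use App\Models\User;
use Illuminate\Support\Facades\Gate;

// Denies the request whenever UserPolicy::uploadFiles() returns false for the signed-in user
if (Gate::forUser($request->user())->denies('uploadFiles', User::class)) {
    abort(403, 'Your plan does not allow large uploads.');
}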

4 – Expose a controller endpoint

While S3M can wire routes for you, most teams prefer an explicit controller to attach domain metadata:

Route::post('/api/profile-photo', ProfilePhotoController::class);

Inside you can move the temporary object out of tmp/ after the browser confirms completion:

Storage::copy($request->key, Str::after($request->key, 'tmp/'));

You now hold the stable S3 key that maps to the uploaded file.
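
For completeness, here is a minimal sketch of an invokable ProfilePhotoController; the validation rules and the profile_photo_path column are assumptions for the example, so adapt them to your own schema:

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

class ProfilePhotoController extends Controller
{
    public function __invoke(Request $request)
    {
        $validated = $request->validate([
            'key' => ['required', 'string', 'starts_with:tmp/'],
            'name' => ['required', 'string'],
            'content_type' => ['required', 'string'],
        ]);

        // Promote the object out of tmp/ so the cleanup rule never purges it
        $finalKey = Str::after($validated['key'], 'tmp/');
        Storage::disk('s3')->copy($validated['key'], $finalKey);

        // Persist the stable key on the user (column name is an assumption)
        $request->user()->update(['profile_photo_path' => $finalKey]);

        return response()->json(['key' => $finalKey]);
    }
}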

5 – Front‑end example (Vue)

The helper works with any framework; swap the snippet for React, Alpine, or vanilla JS as needed. Below is a Vue Composition‑API component that streams the selected file:

<script setup>
import { ref } from 'vue'
import axios from 'axios'

const progress = ref(0)

function upload(e) {
    const file = e.target.files[0]

    s3m(file, {
        progress: p => progress.value = p
    }).then(({ uuid, key, bucket }) => axios.post('/api/profile-photo', {
        uuid, key, bucket,
        name: file.name,
        content_type: file.type,
    }))
}
</script>

<template>
    <input type="file" @change="upload" />
    <progress :value="progress" max="100" class="w-full" />
</template>

Under the hood s3m() performs initiate → get signed parts → parallel PUTs → complete in fewer than 150 lines of unobtrusive JavaScript.

6 – Make the upload permanent

Every object lands in tmp/ so abandoned uploads can be purged by an S3 lifecycle rule after 24 h. A service class might promote the file once your app accepts it:

public function promote(string $key): string
{
    $finalKey = Str::after($key, 'tmp/');
    Storage::disk('s3')->copy($key, $finalKey);

    return $finalKey; // stable key without the tmp/ prefix
}
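
The 24-hour purge is an S3-side setting rather than something this class does for you. If the rule doesn't exist yet, one way to create it is a one-off call through the AWS SDK for PHP; the rule ID below is a placeholder:

use Aws\S3\S3Client;

$s3 = new S3Client([
    'region' => env('AWS_DEFAULT_REGION'),
    'version' => 'latest',
]);

// Expire everything under tmp/ and abort incomplete multipart uploads after one day
$s3->putBucketLifecycleConfiguration([
    'Bucket' => env('AWS_BUCKET'),
    'LifecycleConfiguration' => [
        'Rules' => [[
            'ID' => 'purge-abandoned-uploads',
            'Status' => 'Enabled',
            'Filter' => ['Prefix' => 'tmp/'],
            'Expiration' => ['Days' => 1],
            'AbortIncompleteMultipartUpload' => ['DaysAfterInitiation' => 1],
        ]],
    ],
]);

The same rule can be created in the S3 console or via infrastructure as code; what matters is that tmp/ never grows unbounded.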

Pro tips for production

Keep part_size at or above the 5 MB minimum (only the final part may be smaller), pair the tmp/ prefix with the lifecycle rule above so abandoned uploads are purged automatically, and never hand out a signature without the policy check from step 3.

Closing thoughts

With S3M you glue a single Blade directive on the front end and one controller on the back end, yet you gain a resumable, parallel‑chunked upload pipeline that never blocks PHP workers and never exposes AWS credentials to the client. Adapt the Vue snippet to any framework—or even plain JavaScript—and you'll stream large files to S3 with confidence.