binarybeachio: presigned PUT for uploads (R2/B2 don't implement PostObject)

== WHY (KEEP THIS — IT'S WHY THE FORK EXISTS) ==

Vanilla Plane's upload flow uses AWS S3 PostObject (presigned POST +
multipart/form-data + signed-policy-document). Cloudflare R2 AND
Backblaze B2 — the two most common self-host S3-compatible backends —
both return HTTP 501 NotImplemented for PostObject. Empirically verified
2026-04-30 against B2 s3.us-west-004.backblazeb2.com from inside Plane's
own prod api container, replicating Plane's exact boto3 call:

  PUT against B2:  200 OK
  POST against B2: 501 NotImplemented "This API call is not supported."
  POST against R2: 501 NotImplemented (failure that started this thread)

The error code is `NotImplemented` (not `SignatureDoesNotMatch` etc.),
meaning the server rejects the verb itself — no boto3 config, addressing-
style flag, or signature variant fixes it. Tested both path-style and
virtual-hosted-style URLs against B2; both fail identically for POST.

This patch rewrites the upload flow to use presigned PUT, which is
universally supported (R2, B2, AWS S3 native, MinIO, Wasabi, etc).

== WHAT (THREE-FILE BACKEND, SEVEN-FILE FRONTEND) ==

Backend:
* apps/api/plane/settings/storage.py — S3Storage.generate_presigned_post
  now mints a presigned PUT URL via generate_presigned_url(HttpMethod="PUT").
  Method name kept for caller compat. Response shape:
  {url, method: "PUT", fields: {Content-Type, key}}.
* apps/api/plane/utils/openapi/responses.py — example response updated.
* apps/api/plane/tests/unit/settings/test_storage.py — 2 tests updated to
  assert the new boto3 call.
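In sketch form, the replacement body looks like this (the client handling, expiry, and parameter names are assumptions; the real Plane method lives on S3Storage and differs in detail):

```python
def generate_presigned_post(s3_client, bucket: str, key: str,
                            content_type: str, expires_in: int = 3600) -> dict:
    """BB-PATCH sketch: despite the legacy name (kept for caller compat),
    this now mints a presigned PUT URL via generate_presigned_url().
    PutObject is supported by R2, B2, AWS S3, MinIO, and Wasabi alike."""
    url = s3_client.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": bucket, "Key": key, "ContentType": content_type},
        ExpiresIn=expires_in,
        HttpMethod="PUT",
    )
    # Same top-level shape the frontend already consumes:
    # {url, method, fields: {Content-Type, key}}
    return {
        "url": url,
        "method": "PUT",
        "fields": {"Content-Type": content_type, "key": key},
    }
```

Because the method signature and response keys are unchanged apart from the added `method`, no API-view caller needs to change.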

Frontend:
* packages/types/src/file.ts — TFileSignedURLResponse.upload_data adds
  optional method?: "PUT" | "POST"; drops AWS POST-form-data fields.
* packages/services/src/file/helper.ts — generateFileUploadPayload now
  returns a TFileUploadRequest descriptor (url+method+body+headers) that
  dispatches on method. POST branch kept for upstream parity but the
  fork backend never emits POST.
* packages/services/src/file/file-upload.service.ts +
  apps/web/core/services/file-upload.service.ts — uploadFile signature
  changes from (url, FormData, progress?) to (payload, progress?).
* 5 caller sites updated (apps/web/core/services/file.service.ts x3,
  issue_attachment.service.ts x1, sites-file.service.ts x1).

== TRADEOFFS ACCEPTED ==

* Lost: signed `content-length-range` enforcement at the storage layer.
  Server-side validation in the API view still rejects oversized requests
  with 413 before minting the URL, so a determined client can over-upload
  only by misreporting the size, and even then is capped by the bucket's
  own size limit.
* Different request shape on the wire (PUT with raw binary body vs POST
  with multipart form). Externally invisible to users.
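A minimal sketch of the API-view backstop described in the first tradeoff (the limit and function name are placeholders, not Plane's actual view code):

```python
MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # illustrative limit, not Plane's real value

def check_declared_size(declared_size: int, limit: int = MAX_UPLOAD_BYTES) -> int:
    """Return the HTTP status the asset view would answer with: 413 when the
    client-declared size exceeds the limit, else 200 (proceed to mint the
    presigned PUT URL)."""
    return 413 if declared_size > limit else 200
```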

== ROLLBACK ==

If this becomes a maintenance nightmare:

  git revert <this-commit-sha>
  # rebuild + push images, swap compose tags, redeploy

After revert, uploads will only work against backends that implement
PostObject (MinIO, AWS S3 native). R2 and B2 will return 501 again.

== FULL DECISION RECORD ==

binarybeachio repo: docs/features/storage-upload-flow.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
binarybeach 2026-04-30 17:56:52 -10:00
parent 7c21b985d9
commit 9fb1ad44cd
10 changed files with 131 additions and 89 deletions

@@ -7,6 +7,8 @@
 import axios from "axios";
 // api service
 import { APIService } from "../api.service";
+// helpers
+import type { TFileUploadRequest } from "./helper";

 /**
  * Service class for handling file upload operations
@@ -21,18 +23,17 @@ export class FileUploadService extends APIService {
   }

   /**
-   * Uploads a file to the specified signed URL
-   * @param {string} url - The URL to upload the file to
-   * @param {FormData} data - The form data to upload
-   * @returns {Promise<void>} Promise resolving to void
-   * @throws {Error} If the request fails
+   * Uploads a file to the presigned URL using the request descriptor produced
+   * by `generateFileUploadPayload`. BB-PATCH: dispatches on `payload.method`
+   * (PUT for the fork default, POST kept for upstream-Plane parity).
    */
-  async uploadFile(url: string, data: FormData): Promise<void> {
+  async uploadFile(payload: TFileUploadRequest): Promise<void> {
     this.cancelSource = axios.CancelToken.source();
-    return this.post(url, data, {
-      headers: {
-        "Content-Type": "multipart/form-data",
-      },
+    return this.request({
+      method: payload.method,
+      url: payload.url,
+      data: payload.body,
+      headers: payload.headers,
       cancelToken: this.cancelSource.token,
       withCredentials: false,
     })

@@ -49,17 +49,52 @@ const validateFilename = (filename: string): string | null => {
   return null;
 };

+// BB-PATCH (binarybeachio fork): upload payload is now a request descriptor
+// (url+method+body+headers), not raw FormData. The fork mints presigned PUT
+// URLs because R2/B2 don't implement PostObject — see backend storage.py
+// docstring + docs/features/storage-upload-flow.md.
+export type TFileUploadRequest = {
+  url: string;
+  method: "PUT" | "POST";
+  body: File | FormData;
+  headers: Record<string, string>;
+};
+
 /**
- * @description from the provided signed URL response, generate a payload to be used to upload the file
+ * @description Build a request descriptor for uploading the file using the
+ * presigned URL returned by the API. Dispatches on `upload_data.method`:
+ * - "PUT" (fork default): raw file body + Content-Type header
+ * - "POST" (vanilla AWS S3 path, kept for upstream parity): multipart form
  * @param {TFileSignedURLResponse} signedURLResponse
  * @param {File} file
- * @returns {FormData} file upload request payload
+ * @returns {TFileUploadRequest}
  */
-export const generateFileUploadPayload = (signedURLResponse: TFileSignedURLResponse, file: File): FormData => {
-  const formData = new FormData();
-  Object.entries(signedURLResponse.upload_data.fields).forEach(([key, value]) => formData.append(key, value));
-  formData.append("file", file);
-  return formData;
+export const generateFileUploadPayload = (
+  signedURLResponse: TFileSignedURLResponse,
+  file: File
+): TFileUploadRequest => {
+  const data = signedURLResponse.upload_data;
+  if (data.method === "POST") {
+    const formData = new FormData();
+    Object.entries(data.fields).forEach(
+      ([key, value]) => value != null && formData.append(key, value as string)
+    );
+    formData.append("file", file);
+    return {
+      url: data.url,
+      method: "POST",
+      body: formData,
+      headers: { "Content-Type": "multipart/form-data" },
+    };
+  }
+  return {
+    url: data.url,
+    method: "PUT",
+    body: file,
+    headers: {
+      "Content-Type": file.type || data.fields["Content-Type"] || "application/octet-stream",
+    },
+  };
 };

 /**

@@ -88,7 +88,7 @@ export class SitesFileService extends FileService {
       .then(async (response) => {
         const signedURLResponse: TFileSignedURLResponse = response?.data;
         const fileUploadPayload = generateFileUploadPayload(signedURLResponse, file);
-        await this.fileUploadService.uploadFile(signedURLResponse.upload_data.url, fileUploadPayload);
+        await this.fileUploadService.uploadFile(fileUploadPayload);
         await this.updateAssetUploadStatus(anchor, signedURLResponse.asset_id);
         return signedURLResponse;
       })

@@ -20,19 +20,19 @@ export type TFileEntityInfo = {

 export type TFileMetaData = TFileMetaDataLite & TFileEntityInfo;

+// BB-PATCH (binarybeachio fork): upload now uses presigned PUT (not POST).
+// `method` and the trimmed `fields` shape reflect that. See backend
+// plane/settings/storage.py docstring + docs/features/storage-upload-flow.md
+// in binarybeachio for the full decision record.
 export type TFileSignedURLResponse = {
   asset_id: string;
   asset_url: string;
   upload_data: {
     url: string;
+    method?: "PUT" | "POST";
     fields: {
       "Content-Type": string;
       key: string;
-      "x-amz-algorithm": string;
-      "x-amz-credential": string;
-      "x-amz-date": string;
-      policy: string;
-      "x-amz-signature": string;
     };
   };
 };