Compare commits

..

No commits in common. "main" and "upstream" have entirely different histories.

19 changed files with 89 additions and 878 deletions


@@ -1,17 +0,0 @@
# bb-plane-fork local-test env — copy to `.env.bb-local` and fill in.
# Gitignored. Used by docker-compose.bb-local.yml.
# Bucket-4 trusted-JWT endpoint (apps/api/plane/authentication/views/app/trusted.py).
# Activated when this URL is set; unset → endpoint returns 404 (regression-safe
# default; vanilla upstream behavior preserved out of the box).
#
# Production points at the in-cluster bridge service:
# http://auth-bridge-<uuid>:3000/.well-known/bb-bridge.pub.pem
# Local dev typically points at a manually-served PEM (e.g. via `python3 -m http.server`)
# or at the production bridge for read-only key fetch testing:
# https://bridge.binarybeach.io/.well-known/bb-bridge.pub.pem
BB_BRIDGE_PUBLIC_KEY_URL=
# When BB_BRIDGE_PUBLIC_KEY_URL is unset, the trusted endpoint is disabled and
# Plane behaves like upstream-vanilla (email+password sign-in, the four
# stock OAuth providers). That's the right default for purely-local hacking.

.gitignore vendored

@@ -41,9 +41,6 @@ pnpm-debug.log*
 .env.test.local
 .env.production.local
-# binarybeachio fork-local test env (Zitadel OIDC client creds)
-.env.bb-local
-
 # Vercel
 .vercel
@@ -113,6 +110,3 @@ build/
 .react-router/
 temp/
 scripts/
-# binarybeachio: Cloudflare Wrangler local dev cache (when used for *.binarybeach.io DNS work)
-.wrangler/

BINARYBEACHIO.md

@@ -1,199 +0,0 @@
# bb-plane-fork — binarybeachio customizations of Plane
This file is the canonical contract between this fork and the binarybeachio platform repo. It exists so anyone (or any agent) on a fresh session can answer "what's customized, why, and how do I refresh from upstream" without reading code.
**Fork repo convention** (template — same shape for every Path B fork in binarybeachio):
```
upstream remote → original project on github.com (read-only, merge-source)
origin remote → git.binarybeach.io/binarybeach/bb-<name>-fork (where we push)
github mirror → github.com/binarybeachllc/bb-<name>-fork (push-mirror, off-site backup)
upstream branch — clean mirror of upstream's default branch, never modified
main branch — our customizations on top of latest upstream tag we've integrated
update/<v> — short-lived integration branch when pulling a new upstream version
```
`git log main..upstream` = "upstream changes I haven't pulled in"
`git log upstream..main` = "binarybeachio's customizations"
---
## Upstream
| Field | Value |
|---|---|
| Project | Plane (open-source project management) |
| Upstream repo | https://github.com/makeplane/plane |
| Upstream default branch | `preview` |
| Currently integrated upstream version | **v1.3.0** (release commit `cf696d2`) |
| License | AGPL-3.0-only (we MUST publish source of any deployed customizations — public Forgejo + push-mirror to GitHub satisfies this) |
## Why we forked (post-2026-05-04 platform-architecture pivot)
Plane's first-party OIDC support is gated behind the **Pro/Business commercial edition** (Pro tier minimum 25 users = $338+/mo). The community edition's `/god-mode/authentication/oidc` page is a frontend stub — the backend handler returns 404. Plane CE has GitHub/GitLab/Gitea/Google OAuth providers but no native OIDC, no SAML, and no trusted-proxy-header auth.
We integrate Plane into the binarybeachio platform via the **architecture's Bucket 4 pattern**: a single additive trusted-JWT endpoint that the platform's auth-bridge calls after oauth2-proxy validates a Zitadel session at the edge. See:
- `binarybeachio/docs/architecture/01-platform-architecture.md` for the bucket taxonomy and bridge contract
- `binarybeachio/docs/architecture/bridge-jwt-replay-protection.md` for the JWT replay-protection contract
- `binarybeachio/docs/services/plane/migration-plan.md` for the full per-service migration write-up
The previous shape of this fork (in-place patching of `provider/oauth/github.py` to repurpose GitHub OAuth as Zitadel OIDC) was reverted on 2026-05-04 in favor of the Bucket-4 additive endpoint, which has a smaller fork surface, fewer hot files to track on upstream merges, and centralizes OIDC handling in the auth-bridge instead of duplicating it per-app. The pre-revert source state is preserved on the `pre-migration-2026-05-04` branch for reference.
## What's customized (the inventory — keep current)
Three logical patch groups across the repo. Touch surface is intentionally minimal.
### Patch 1: Bucket-4 trusted-JWT entry-point (additive — 1 new file + 2 line additions)
| File | Change | Risk on upgrade |
|---|---|---|
| `apps/api/plane/authentication/views/app/trusted.py` | **New file**. Django `View` that validates a bridge-issued RS256 JWT, atomically claims its `jti` in shared-redis (replay protection), finds-or-creates the User, and calls `user_login(request, user, is_app=True)` to set the Django session cookie. PEM is fetched at runtime from `BB_BRIDGE_PUBLIC_KEY_URL` (avoids the env-PEM corruption issue Coolify has with backslash-escaped keys). Endpoint is implicitly disabled (returns 404) when the env is unset. | **Low.** Depends only on `User` model, `user_login`, `post_user_auth_workflow`, and `get_safe_redirect_url` — all stable upstream APIs. PyJWT and `requests` are existing deps. |
| `apps/api/plane/authentication/urls.py` | 1-line addition appending `path("sign-in-trusted/", TrustedSignInEndpoint.as_view(), name="sign-in-trusted")` to the urlpatterns list. | **Low.** Pure append; no existing routes modified. |
| `apps/api/plane/authentication/views/__init__.py` | 1-line addition exporting `TrustedSignInEndpoint`. | **Low.** Pure append. |
| `apps/api/plane/authentication/adapter/error.py` | Adds 7 error codes in the 6000-6099 range (reserved for fork additions). Pure dict additions; no existing entries renumbered. | **None.** |
The full bridge ↔ Plane contract:
- Bridge mints `RS256` JWT signed with `BRIDGE_SIGNING_KEY` (private). Claims: `iss=bb-bridge`, `aud=plane`, `iat`, `exp` (now+60s), `jti` (UUIDv4), `sub`, `email`, `first_name`, `last_name`, `tenant`.
- Bridge 302s the user's browser to `https://pm.<tenant>.binarybeach.io/auth/sign-in-trusted/?token=<jwt>&next_path=<rd>`.
- Plane's view: fetches public key from `BB_BRIDGE_PUBLIC_KEY_URL` (cached 5 min), verifies signature + claims, atomically `SETNX bb_bridge_jti:<jti>` in shared-redis with TTL = `exp - now + 30s`, finds-or-creates the User by email, calls `user_login()`, 302s to `next_path`.
- Replay protection is **fail closed**: if shared-redis is unavailable, the request is rejected. Operator break-glass uses the email+password sign-in (vanilla upstream code) which doesn't depend on either Redis or the bridge.
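The claim set above can be sketched as a tiny minting helper (illustrative Python; the real bridge mints in TypeScript via `signBridgeJwt`, and the RS256 signing step is omitted here):

```python
import time
import uuid

def mint_bridge_claims(sub: str, email: str, tenant: str, ttl_seconds: int = 60) -> dict:
    """Illustrative claim set per the bridge contract above; sign with RS256 in practice."""
    now = int(time.time())
    return {
        "iss": "bb-bridge",        # fixed issuer shared by every bridge adapter
        "aud": "plane",            # per-app audience
        "iat": now,
        "exp": now + ttl_seconds,  # 60s lifetime per the contract
        "jti": str(uuid.uuid4()),  # single-use id, consumed by SETNX on first use
        "sub": sub,
        "email": email,
        "first_name": "",
        "last_name": "",
        "tenant": tenant,
    }

claims = mint_bridge_claims("user-123", "dev@example.com", "binarybeach")
```

Signing these claims with `pyjwt.encode(claims, private_pem, algorithm="RS256")` yields a token the trusted endpoint will accept exactly once.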
### Patch 2: Presigned PUT for uploads (R2/B2 don't implement PostObject)
| File | Change | Risk on upgrade |
|---|---|---|
| `apps/api/plane/settings/storage.py` | `S3Storage.generate_presigned_post(...)` rewritten to mint a presigned PUT URL via `generate_presigned_url(HttpMethod="PUT")`. Method name preserved for caller compat. | **Medium.** If Plane's upload flow changes upstream, conflict surface grows. Candidate for upstream PR. |
| `apps/api/plane/utils/openapi/responses.py` | OpenAPI example response updated to PUT shape. | **Low.** |
| `apps/api/plane/tests/unit/settings/test_storage.py` | 2 tests retargeted to assert `generate_presigned_url` boto3 call. | **Low.** |
| `packages/types/src/file.ts` | `TFileSignedURLResponse.upload_data` adds `method?: "PUT" \| "POST"`, drops AWS POST-form-data fields. | **Low.** |
| `packages/services/src/file/helper.ts` | `generateFileUploadPayload(...)` returns a `TFileUploadRequest` descriptor; dispatches PUT/POST. | **Medium.** |
| `packages/services/src/file/file-upload.service.ts` + `apps/web/core/services/file-upload.service.ts` | `uploadFile(...)` signature changed to `(payload, progress?)`. Uses `axios.request({method, url, data, headers})`. | **Medium.** |
| `apps/web/core/services/file.service.ts`, `apps/web/core/services/issue/issue_attachment.service.ts`, `packages/services/src/file/sites-file.service.ts` | 5 caller sites updated to pass `TFileUploadRequest` to `uploadFile`. | **Low.** |
Decision record at `binarybeachio/docs/features/storage-upload-flow.md`. Patch 2 is independent of Patch 1 — `git revert <storage-PUT sha>` undoes it cleanly.
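To make the caller-side dispatch concrete, here is a hedged Python sketch of what the frontend helpers do with the presigned `upload_data` response (`{"url", "method", "fields"}` per the storage.py patch); the function name and dict shapes are illustrative, not the actual `helper.ts` API:

```python
def build_upload_request(upload_data: dict, file_bytes: bytes) -> dict:
    """Hypothetical dispatcher mirroring generateFileUploadPayload's PUT/POST split."""
    method = upload_data.get("method", "POST")  # no method key → legacy POST form
    if method == "PUT":
        # PUT sends the raw bytes; the signed Content-Type must be echoed
        # verbatim or the SigV4 signature won't match.
        return {
            "method": "PUT",
            "url": upload_data["url"],
            "headers": {"Content-Type": upload_data["fields"]["Content-Type"]},
            "body": file_bytes,
        }
    # POST sends multipart form-data: the signed fields, then the file part.
    return {
        "method": "POST",
        "url": upload_data["url"],
        "headers": {},
        "body": {**upload_data["fields"], "file": file_bytes},
    }

req = build_upload_request(
    {"url": "https://bucket.example/key", "method": "PUT",
     "fields": {"Content-Type": "image/png", "key": "key"}},
    b"\x89PNG",
)
```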
### Patch 3: Brand asset (kept as dormant; entry-point UX is Traefik-driven)
| File | Change | Risk on upgrade |
|---|---|---|
| `apps/web/app/assets/logos/binarybeach-logo.png` | New asset. Currently unreferenced; preserved for future AGPL §13 footer-link addition or other branding work. | **None.** |
The previous fork's GitHub-button rebrand patch (`apps/web/core/hooks/oauth/core.tsx`) was reverted on 2026-05-04. Sign-in entry-point UX is now driven by a Traefik `redirectregex` middleware applied to the per-tenant Plane router that 302s `/sign-in*`, `/sign-up*`, `/accounts/sign-in*` to `https://bridge.binarybeach.io/handoff?app=plane&tenant=<slug>&...`. Pure infrastructure config; no source modification needed for the redirect.
Files **not** changed (deliberately):
- `apps/api/plane/authentication/provider/oauth/github.py` — upstream-clean. Vanilla GitHub OAuth still works if configured via god-mode UI.
- `apps/api/plane/authentication/views/app/github.py` and the gitlab/gitea/google equivalents — all upstream-clean.
- `apps/admin/...` — god-mode UI unchanged.
- `apps/space/...` — public-share OAuth unchanged. Authenticated public boards continue to use email+password sign-in (vanilla upstream). When a tenant needs SSO for shared boards, add a sibling `views/space/trusted.py` (estimated ~80 LOC, mirrors the app/ view).
## Required runtime config
Set on the patched `plane-backend` container (binarybeachio sets these in `infrastructure/plane/.env`):
```bash
# Activates the trusted-JWT endpoint. URL points at the in-cluster bridge
# service's public-key endpoint. Unset → endpoint returns 404 (regression-safe).
BB_BRIDGE_PUBLIC_KEY_URL=http://auth-bridge-<uuid>:3000/.well-known/bb-bridge.pub.pem
```
Bridge-side configuration (in `binarybeachio/infrastructure/auth-bridge/.env`):
```bash
ADAPTER_PLANE_BINARYBEACH_BASE_URL=https://pm.binarybeach.binarybeach.io
# BRIDGE_SIGNING_KEY is loaded centrally by bridge-key.ts; the matching
# public key is served at /.well-known/bb-bridge.pub.pem and consumed by
# Plane via BB_BRIDGE_PUBLIC_KEY_URL.
```
Plane god-mode admin UI (`/god-mode/authentication/...`):
- All four upstream OAuth providers (GitHub/GitLab/Gitea/Google) can be left disabled. The trusted-JWT entry-point is the SSO front door.
- Email+password sign-in remains available as the break-glass admin path. Per-tenant `bb-admin` user is seeded with a permanent password from `_shared/.env.bb-admin`.
## Cross-fork conventions adopted
This fork pulls in binarybeachio's [session lifecycle convention](https://git.binarybeach.io/binarybeach/binarybeachio-platform/src/branch/main/docs/features/session-lifecycle.md) — 15-min idle timeout, slide-on-activity — applied automatically by `bootstrap.py` at deploy. To override for this fork specifically, set `SESSION_COOKIE_AGE` / `ADMIN_SESSION_COOKIE_AGE` / `SESSION_SAVE_EVERY_REQUEST` in `infrastructure/plane/.env` (per-app .env beats convention).
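As a rough sketch, the convention maps onto the stock Django session settings like this (assumed values; `bootstrap.py` owns the real plumbing, so treat this as illustrative, not the deployed config):

```python
# Assumed mapping of the 15-min idle / slide-on-activity convention onto
# Django's session framework.
SESSION_COOKIE_AGE = 15 * 60       # seconds until an idle session expires
SESSION_SAVE_EVERY_REQUEST = True  # session re-saved per request, so expiry slides
```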
## Refresh from upstream — the procedure
When a new Plane release lands and we want to integrate:
```bash
git fetch upstream
# Sync the upstream mirror branch (never touched by us)
git switch upstream
git reset --hard upstream/preview # or @v1.4.0 if we track tags
git push origin upstream
# Integration branch
git switch main
git switch -c update/v1.4.0
git merge upstream # likely conflict-free since fork is additive
# Hand-test:
# 1. Local stack via docker-compose.bb-local.yml — confirm sign-in works.
# 2. Trusted endpoint with a hand-minted JWT (helper script TBD; for now,
# mint via a node REPL using bridge-key.ts:signBridgeJwt).
# 3. Vanilla email+password regression test.
# Once happy:
git switch main
git merge --ff-only update/v1.4.0
git branch -d update/v1.4.0
git push origin main
# Then on laptop: rebuild + tag + push images (see "Build" below)
# Then in binarybeachio repo: bump tag in infrastructure/plane/docker-compose.yml
# Then: py infrastructure/_shared/bootstrap.py to trigger the Coolify deploy
```
## Build — which images to rebuild and how
Per binarybeachio architecture doc §7.4 ("only rebuild what we touched"), this fork only requires rebuilding **two of the six** Plane images:
| Image | Customized? | Source | Build target |
|---|---|---|---|
| `plane-backend` | YES (Patch 1 + Patch 2) | `apps/api/Dockerfile.api` | `git.binarybeach.io/binarybeach/plane-backend:v1.3.0-mine.<n>` |
| `plane-frontend` (aka web) | YES (Patch 2 frontend bits only) | `apps/web/Dockerfile.web` | `git.binarybeach.io/binarybeach/plane-frontend:v1.3.0-mine.<n>` |
| `plane-space` | no | upstream `makeplane/plane-space:v1.3.0` | (no rebuild) |
| `plane-admin` | no | upstream `makeplane/plane-admin:v1.3.0` | (no rebuild) |
| `plane-live` | no | upstream `makeplane/plane-live:v1.3.0` | (no rebuild) |
| `plane-proxy` | no | upstream `makeplane/plane-proxy:v1.3.0` | (no rebuild) |
Tag scheme per architecture §6 #7: `<upstream-version>-mine.<n>`. Push immutable tag + `:latest`:
```bash
# from C:\Users\maxwe\GitHubRepos\bb-plane-fork
docker build -t git.binarybeach.io/binarybeach/plane-backend:v1.3.0-mine.2 \
-t git.binarybeach.io/binarybeach/plane-backend:latest \
-f apps/api/Dockerfile.api .
docker push git.binarybeach.io/binarybeach/plane-backend:v1.3.0-mine.2
docker push git.binarybeach.io/binarybeach/plane-backend:latest
docker build -t git.binarybeach.io/binarybeach/plane-frontend:v1.3.0-mine.2 \
-t git.binarybeach.io/binarybeach/plane-frontend:latest \
-f apps/web/Dockerfile.web .
docker push git.binarybeach.io/binarybeach/plane-frontend:v1.3.0-mine.2
docker push git.binarybeach.io/binarybeach/plane-frontend:latest
```
`mine.<n>` resets to `mine.1` on every upstream version bump; increments per local rebuild within the same upstream version.
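The bump-and-reset rule can be pinned down with a small helper (hypothetical; tags are bumped by hand in practice, this is not a script in the repo):

```python
import re
from typing import Optional

def next_mine_tag(current: str, new_upstream: Optional[str] = None) -> str:
    """Increment the local suffix, or reset to mine.1 when the upstream version changes."""
    m = re.fullmatch(r"(v[\d.]+)-mine\.(\d+)", current)
    if not m:
        raise ValueError(f"not a <upstream-version>-mine.<n> tag: {current!r}")
    upstream_version, n = m.group(1), int(m.group(2))
    if new_upstream and new_upstream != upstream_version:
        return f"{new_upstream}-mine.1"   # upstream bump → counter resets
    return f"{upstream_version}-mine.{n + 1}"  # local rebuild → counter increments
```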
## License compliance
Plane is AGPL-3.0-only. The license requires us to provide the source of any modified version we deploy or offer over a network. Our compliance:
1. **Forgejo source** — `git.binarybeach.io/binarybeach/bb-plane-fork` is publicly readable.
2. **GitHub mirror** — push-mirror to `github.com/binarybeachllc/bb-plane-fork`.
3. **In-product source link** — TODO. AGPL §13 requires "prominent" notice to network users; a footer suffices. Tracked separately.
## Test plan (manual, until we have CI)
1. **Local build smoke**: both images build cleanly.
2. **Local stack**: `docker compose -f docker-compose.bb-local.yml --env-file .env.bb-local up -d` (with `BB_BRIDGE_PUBLIC_KEY_URL` unset) → vanilla email+password sign-in works (regression check).
3. **Trusted-JWT happy path**: with `BB_BRIDGE_PUBLIC_KEY_URL` pointing at production bridge, hand-mint a JWT (claims: `iss=bb-bridge`, `aud=plane`, valid `exp`, fresh `jti`, valid email), `GET /auth/sign-in-trusted/?token=<jwt>&next_path=/`, expect 302 to `/` with sessionid cookie set.
4. **Trusted-JWT replay rejection**: hit the same URL with the same token twice. First → 302 + sessionid. Second → 302 to error redirect with `TRUSTED_JWT_TOKEN_REPLAYED`.
5. **Trusted-JWT disabled regression**: unset `BB_BRIDGE_PUBLIC_KEY_URL`, hit `/auth/sign-in-trusted/`, expect 404.
6. **Production deploy**: bump tag in `binarybeachio/infrastructure/plane/docker-compose.yml` → `py infrastructure/_shared/bootstrap.py` → verify on `pm.binarybeach.binarybeach.io`.

apps/api/plane/authentication/adapter/error.py

@@ -71,17 +71,6 @@ AUTHENTICATION_ERROR_CODES = {
     "RATE_LIMIT_EXCEEDED": 5900,
     # Unknown
     "AUTHENTICATION_FAILED": 5999,
-    # binarybeachio fork addition (Bucket-4 trusted-JWT entry-point) — see
-    # views/app/trusted.py and BINARYBEACHIO.md. Codes 6000-6099 are reserved
-    # for fork additions to keep them outside the upstream-allocated 5000-5999
-    # range and reduce upstream-merge collision risk.
-    "TRUSTED_JWT_ENDPOINT_DISABLED": 6000,
-    "TRUSTED_JWT_TOKEN_MISSING": 6001,
-    "TRUSTED_JWT_TOKEN_INVALID": 6002,
-    "TRUSTED_JWT_TOKEN_EXPIRED": 6003,
-    "TRUSTED_JWT_TOKEN_REPLAYED": 6004,
-    "TRUSTED_JWT_REPLAY_STORE_DOWN": 6005,
-    "TRUSTED_JWT_KEY_FETCH_FAILED": 6006,
 }

apps/api/plane/authentication/urls.py

@@ -44,8 +44,6 @@ from .views import (
     GiteaOauthInitiateEndpoint,
     GiteaCallbackSpaceEndpoint,
     GiteaOauthInitiateSpaceEndpoint,
-    # binarybeachio fork addition — see views/app/trusted.py.
-    TrustedSignInEndpoint,
 )

 urlpatterns = [
@@ -152,7 +150,4 @@ urlpatterns = [
         GiteaCallbackSpaceEndpoint.as_view(),
         name="space-gitea-callback",
     ),
-    # binarybeachio fork addition — Bucket-4 trusted-JWT entry-point.
-    # See views/app/trusted.py and BINARYBEACHIO.md.
-    path("sign-in-trusted/", TrustedSignInEndpoint.as_view(), name="sign-in-trusted"),
 ]

apps/api/plane/authentication/views/__init__.py

@@ -41,7 +41,3 @@ from .space.password_management import (
     ResetPasswordSpaceEndpoint,
 )
 from .app.password_management import ForgotPasswordEndpoint, ResetPasswordEndpoint
-
-# binarybeachio fork addition (Bucket-4 trusted-JWT entry-point) — see
-# views/app/trusted.py and BINARYBEACHIO.md.
-from .app.trusted import TrustedSignInEndpoint

apps/api/plane/authentication/views/app/trusted.py

@@ -1,275 +0,0 @@
# Copyright (c) 2023-present Plane Software, Inc. and contributors
# SPDX-License-Identifier: AGPL-3.0-only
# See the LICENSE file for details.
#
# binarybeachio fork addition — see BINARYBEACHIO.md at repo root.
#
# Bucket-4 trusted-JWT entry-point. Validates a short-lived RS256 JWT signed
# by the binarybeachio auth-bridge (private key BRIDGE_SIGNING_KEY), enforces
# single-use replay protection via shared-redis SETNX (per the contract in
# `binarybeachio/docs/architecture/bridge-jwt-replay-protection.md`), then
# finds-or-creates the corresponding User and starts a Django session via
# the existing user_login() helper.
#
# Endpoint behavior when not configured:
# - If BB_BRIDGE_PUBLIC_KEY_URL env is unset → 404 (endpoint disabled).
# Vanilla upstream behavior is preserved out-of-the-box; the trusted-JWT
# entry-point only exists in deployments that explicitly opt in.
#
# Public-key transport:
# - Fetched at request time from BB_BRIDGE_PUBLIC_KEY_URL (typically
# `http://auth-bridge-<uuid>:3000/.well-known/bb-bridge.pub.pem`).
# - Cached in-process for 5 minutes; auto-refreshed on signature failure
# to handle bridge key rotation transparently.
# - This sidesteps the env-PEM corruption issue: putting RSA PEMs through
# Coolify's .env writer escapes backslashes (`\n` → `\\n`), which
# corrupts the multi-line PEM. HTTP fetch never traverses that path.
# See bb-activepieces-fork/.../trusted-jwt-verifier.ts module-doc for
# the original write-up.
#
# Replay protection:
# - Bridge mints with a UUIDv4 `jti` claim.
# - This view atomically SETNX `bb_bridge_jti:<jti>` in shared-redis with
# TTL = (exp - now) + 30s clock-skew tolerance.
# - Fail closed: if Redis is unavailable, REJECT. Auth correctness >
# auth availability; break-glass admin (email+password) covers operator
# access during a Redis outage.
import logging
import os
import time
from typing import Optional, Tuple
from urllib.parse import urlparse
import jwt as pyjwt
import redis
import requests
from django.http import HttpResponseRedirect, HttpResponseNotFound
from django.views import View
from plane.authentication.adapter.error import (
AUTHENTICATION_ERROR_CODES,
AuthenticationException,
)
from plane.authentication.utils.host import base_host
from plane.authentication.utils.login import user_login
from plane.authentication.utils.redirection_path import get_redirection_path
from plane.authentication.utils.user_auth_workflow import post_user_auth_workflow
from plane.db.models import User
from plane.settings.redis import redis_instance
from plane.utils.path_validator import get_safe_redirect_url
log = logging.getLogger("plane.authentication.trusted")
# Audience the bridge sets in JWTs minted for Plane (signBridgeJwt(..., audience: 'plane')).
_EXPECTED_AUDIENCE = "plane"
# Issuer the bridge sets (every adapter shares this).
_EXPECTED_ISSUER = "bb-bridge"
# Replay-store key prefix per bridge-jwt-replay-protection.md.
_JTI_KEY_PREFIX = "bb_bridge_jti:"
# Clock-skew tolerance applied to exp/iat checks.
_CLOCK_SKEW_SECONDS = 30
# Public-key cache (in-process). Keyed on URL so test/dev with multiple
# bridges per process is safe. _key_cache: {url: (pem, fetched_at_epoch)}.
_KEY_CACHE_TTL_SECONDS = 5 * 60
_key_cache: dict[str, Tuple[str, float]] = {}

def _bridge_public_key_url() -> Optional[str]:
    """Returns the configured bridge public-key URL, or None if disabled.

    The endpoint is implicitly disabled (returns 404) when this env is unset;
    that is the regression-safe default for builds shipped without the bridge
    wired up.
    """
    return os.environ.get("BB_BRIDGE_PUBLIC_KEY_URL") or None

def _fetch_bridge_public_key(url: str, *, force_refresh: bool = False) -> str:
    """Fetch (and cache) the bridge's public key PEM. Refetches on signature
    failure or after the cache TTL elapses. Falls back to stale cache if a
    refresh fails; a temporarily unreachable bridge shouldn't brick logins."""
    now = time.time()
    cached = _key_cache.get(url)
    if not force_refresh and cached and (now - cached[1]) < _KEY_CACHE_TTL_SECONDS:
        return cached[0]
    try:
        resp = requests.get(url, timeout=3.0, headers={"accept": "application/x-pem-file"})
        resp.raise_for_status()
        pem = resp.text
        if "-----BEGIN PUBLIC KEY-----" not in pem:
            raise ValueError(f"non-PEM body from {url} (first 80: {pem[:80]!r})")
        _key_cache[url] = (pem, now)
        return pem
    except Exception as exc:
        if cached:
            log.warning("bridge public-key fetch failed; using stale cache", extra={"url": url, "err": str(exc)})
            return cached[0]
        raise

def _consume_jti(jti: str, exp_epoch: int) -> Tuple[bool, Optional[str]]:
    """Atomically mark a `jti` consumed in shared-redis. Returns (first_use, error_code).

    - (True, None): not previously consumed; admit the request.
    - (False, code): either already consumed (TRUSTED_JWT_TOKEN_REPLAYED) or
      the replay store is unavailable (TRUSTED_JWT_REPLAY_STORE_DOWN). Either
      way, REJECT the request (fail closed).

    TTL = (exp - now) + 30s clock-skew tolerance, with a 30s minimum floor for
    edge cases where exp is already past at consumption time (signature still
    valid under clock-skew tolerance).
    """
    if not jti or not exp_epoch:
        return False, "TRUSTED_JWT_TOKEN_INVALID"
    try:
        client = redis_instance()
    except Exception as exc:
        log.error("replay store init failed", extra={"err": str(exc)})
        return False, "TRUSTED_JWT_REPLAY_STORE_DOWN"
    try:
        ttl = max(int(exp_epoch - time.time()) + _CLOCK_SKEW_SECONDS, 30)
        # SET key value NX EX ttl -- returns True on first-set, None if already set.
        ok = client.set(_JTI_KEY_PREFIX + jti, "1", nx=True, ex=ttl)
        if ok is None:
            return False, "TRUSTED_JWT_TOKEN_REPLAYED"
        return True, None
    except redis.RedisError as exc:
        log.error("replay store SETNX failed", extra={"err": str(exc), "jti": jti})
        return False, "TRUSTED_JWT_REPLAY_STORE_DOWN"

def _redirect_with_error(request, error_code: str, error_message: str, next_path: str) -> HttpResponseRedirect:
    """Surface the failure as a Plane-style redirect to the host with error params."""
    exc = AuthenticationException(
        error_code=AUTHENTICATION_ERROR_CODES[error_code],
        error_message=error_message,
    )
    return HttpResponseRedirect(
        get_safe_redirect_url(
            base_url=base_host(request=request, is_app=True),
            next_path=next_path,
            params=exc.get_error_dict(),
        )
    )

def _verify_with_retry(token: str, public_key_url: str) -> dict:
    """Verify the JWT, refetching the bridge key once on signature failure to
    transparently handle bridge key rotation. Other verify failures (expired,
    wrong issuer/audience, malformed) do NOT trigger a refetch; those are
    tampering or clock issues, not key drift."""
    pem = _fetch_bridge_public_key(public_key_url)
    try:
        return pyjwt.decode(
            token,
            pem,
            algorithms=["RS256"],
            audience=_EXPECTED_AUDIENCE,
            issuer=_EXPECTED_ISSUER,
            leeway=_CLOCK_SKEW_SECONDS,
            options={"require": ["exp", "iat", "sub", "email", "jti"]},
        )
    except pyjwt.InvalidSignatureError:
        log.warning("trusted-jwt signature failed; refetching bridge key", extra={"url": public_key_url})
        pem = _fetch_bridge_public_key(public_key_url, force_refresh=True)
        return pyjwt.decode(
            token,
            pem,
            algorithms=["RS256"],
            audience=_EXPECTED_AUDIENCE,
            issuer=_EXPECTED_ISSUER,
            leeway=_CLOCK_SKEW_SECONDS,
            options={"require": ["exp", "iat", "sub", "email", "jti"]},
        )

class TrustedSignInEndpoint(View):
    """GET /auth/sign-in-trusted/?token=<jwt>&next_path=<rel-path>

    The bridge 302s the browser here after a successful oauth2-proxy session
    is established. We verify the JWT, claim its `jti` to prevent replay,
    find-or-create the User, and call user_login() to set the Django session
    cookie. Then 302 the user to next_path on the same host.
    """

    def get(self, request):
        public_key_url = _bridge_public_key_url()
        if not public_key_url:
            # Endpoint disabled — bridge not wired up in this deployment.
            return HttpResponseNotFound()

        # Validate next_path on every exit — even error redirects honor it so
        # the user lands somewhere sensible. get_safe_redirect_url further
        # constrains to the trusted base host.
        next_path = request.GET.get("next_path") or "/"

        token = request.GET.get("token")
        if not token:
            return _redirect_with_error(request, "TRUSTED_JWT_TOKEN_MISSING", "TRUSTED_JWT_TOKEN_MISSING", next_path)

        try:
            claims = _verify_with_retry(token, public_key_url)
        except pyjwt.ExpiredSignatureError:
            return _redirect_with_error(request, "TRUSTED_JWT_TOKEN_EXPIRED", "TRUSTED_JWT_TOKEN_EXPIRED", next_path)
        except pyjwt.InvalidTokenError as e:
            log.warning("trusted-jwt invalid", extra={"err_class": e.__class__.__name__})
            return _redirect_with_error(request, "TRUSTED_JWT_TOKEN_INVALID", f"TRUSTED_JWT_TOKEN_INVALID: {e.__class__.__name__}", next_path)
        except Exception as e:
            log.error("trusted-jwt key fetch failed", extra={"err": str(e)})
            return _redirect_with_error(request, "TRUSTED_JWT_KEY_FETCH_FAILED", "TRUSTED_JWT_KEY_FETCH_FAILED", next_path)

        # Replay enforcement — atomic SETNX in shared-redis. Fail closed.
        first_use, replay_err = _consume_jti(claims.get("jti", ""), int(claims.get("exp", 0)))
        if not first_use:
            log.warning(
                "trusted-jwt rejected by replay-store",
                extra={"jti": claims.get("jti"), "sub": claims.get("sub"), "code": replay_err},
            )
            return _redirect_with_error(request, replay_err or "TRUSTED_JWT_TOKEN_REPLAYED", replay_err or "TRUSTED_JWT_TOKEN_REPLAYED", next_path)

        email = (claims.get("email") or "").strip().lower()
        if not email:
            return _redirect_with_error(request, "TRUSTED_JWT_TOKEN_INVALID", "TRUSTED_JWT_TOKEN_NO_EMAIL", next_path)

        # Find-or-create. Plane's User model uses email as a unique natural key;
        # other OAuth providers do the same lookup via the OauthAdapter base.
        # We mirror that behavior here without going through OauthAdapter — this
        # endpoint is a NEW entry-point, not a fifth OAuth provider.
        user, created = User.objects.get_or_create(
            email=email,
            defaults={
                "first_name": claims.get("first_name") or claims.get("given_name") or "",
                "last_name": claims.get("last_name") or claims.get("family_name") or "",
                "is_password_autoset": True,
            },
        )

        # Plane's existing post-auth workflow (default workspace, invitations, etc.)
        post_user_auth_workflow(user=user, is_signup=created, request=request)

        # Set Django session cookie via the existing helper.
        user_login(request=request, user=user, is_app=True)

        # NOTE: do NOT name extra keys after LogRecord built-in attributes
        # (`name`, `created`, `levelname`, `module`, `message`, etc.) —
        # Logger.makeRecord raises KeyError("Attempt to overwrite %r in LogRecord")
        # on collision. Use is_signup instead of created.
        log.info(
            "trusted-jwt sign-in",
            extra={
                "jti": claims.get("jti"),
                "sub": claims.get("sub"),
                "email": email,
                "tenant": claims.get("tenant"),
                "is_signup": created,
            },
        )

        target = next_path or get_redirection_path(user=user)
        return HttpResponseRedirect(
            get_safe_redirect_url(
                base_url=base_host(request=request, is_app=True),
                next_path=target,
                params={},
            )
        )

apps/api/plane/settings/storage.py

@@ -63,58 +63,40 @@ class S3Storage(S3Boto3Storage):
         )

     def generate_presigned_post(self, object_name, file_type, file_size, expiration=None):
-        """Generate a presigned URL for browser-direct upload.
-
-        BB-PATCH (binarybeachio fork): method name preserved for caller
-        compat, but this now mints a presigned PUT URL, not POST.
-
-        Why: Cloudflare R2 and Backblaze B2, the two most common self-host
-        S3-compatible backends, do NOT implement S3 PostObject. Both return
-        HTTP 501 NotImplemented for the bucket-form POST verb that vanilla
-        Plane uses. Confirmed empirically against both backends 2026-04-30.
-        Rolling our own backend support isn't tractable; PUT is universally
-        supported (R2, B2, AWS S3, MinIO, Wasabi, etc.).
-
-        Tradeoff: we lose signed enforcement of `content-length-range`. Size
-        is still validated server-side at presign time via the `file_size`
-        param (see callers: 413 raised before we get here), so a determined
-        client could only over-upload by misreporting the size; they'd be
-        capped by the bucket's max-file-size at worst.
-
-        See docs/features/storage-upload-flow.md in the binarybeachio repo
-        for the full decision record + rollback procedure (`git revert` this
-        commit and rebuild the images).
-        """
+        """Generate a presigned URL to upload an S3 object"""
         if expiration is None:
             expiration = self.signed_url_expiration

-        # Default to application/octet-stream when caller passes empty/None.
-        # The file-type library Plane's frontend uses returns "" for unsniffable
-        # formats (plain text, .json, etc.), which would sign a presigned URL
-        # with `Content-Type=""`. Browsers can't reliably send an empty
-        # Content-Type header, so the SigV4 signature would never match and PUT
-        # would 403. We resolve this by signing a definite default; the
-        # frontend then sends the signed value verbatim (see helper.ts).
-        signed_content_type = file_type or "application/octet-stream"
+        fields = {"Content-Type": file_type}
+
+        conditions = [
+            {"bucket": self.aws_storage_bucket_name},
+            ["content-length-range", 1, file_size],
+            {"Content-Type": file_type},
+        ]
+
+        # Add condition for the object name (key)
+        if object_name.startswith("$(unknown)"):
+            conditions.append(["starts-with", "$key", object_name[: -len("$(unknown)")]])
+        else:
+            fields["key"] = object_name
+            conditions.append({"key": object_name})
+
+        # Generate the presigned POST URL
         try:
-            url = self.s3_client.generate_presigned_url(
-                "put_object",
-                Params={
-                    "Bucket": self.aws_storage_bucket_name,
-                    "Key": object_name,
-                    "ContentType": signed_content_type,
-                },
+            # Generate a presigned URL for the S3 object
+            response = self.s3_client.generate_presigned_post(
+                Bucket=self.aws_storage_bucket_name,
+                Key=object_name,
+                Fields=fields,
+                Conditions=conditions,
                 ExpiresIn=expiration,
-                HttpMethod="PUT",
             )
+        # Handle errors
         except ClientError as e:
-            print(f"Error generating presigned PUT URL: {e}")
+            print(f"Error generating presigned POST URL: {e}")
             return None
-        return {
-            "url": url,
-            "method": "PUT",
-            "fields": {"Content-Type": signed_content_type, "key": object_name},
-        }
+        return response

     def _get_content_disposition(self, disposition, filename=None):
         """Helper method to generate Content-Disposition header value"""
apps/api/plane/tests/unit/settings/test_storage.py

@@ -63,15 +63,13 @@ class TestS3StorageSignedURLExpiration:
         )
     @patch("plane.settings.storage.boto3")
     def test_generate_presigned_post_uses_default_expiration(self, mock_boto3):
-        """Test that generate_presigned_post uses the configured default expiration
-        BB-PATCH: generate_presigned_post now mints a presigned PUT URL under
-        the hood (R2/B2 don't implement PostObject). Test asserts the
-        underlying generate_presigned_url call rather than generate_presigned_post.
-        """
+        """Test that generate_presigned_post uses the configured default expiration"""
         # Mock the boto3 client and its response
         mock_s3_client = Mock()
-        mock_s3_client.generate_presigned_url.return_value = "https://test-url.com"
+        mock_s3_client.generate_presigned_post.return_value = {
+            "url": "https://test-url.com",
+            "fields": {},
+        }
         mock_boto3.client.return_value = mock_s3_client
         # Create S3Storage instance
@@ -81,10 +79,9 @@ class TestS3StorageSignedURLExpiration:
         storage.generate_presigned_post("test-object", "image/png", 1024)
         # Assert that the boto3 method was called with the default expiration (3600)
-        mock_s3_client.generate_presigned_url.assert_called_once()
-        call_kwargs = mock_s3_client.generate_presigned_url.call_args[1]
+        mock_s3_client.generate_presigned_post.assert_called_once()
+        call_kwargs = mock_s3_client.generate_presigned_post.call_args[1]
         assert call_kwargs["ExpiresIn"] == 3600
-        assert call_kwargs["HttpMethod"] == "PUT"
     @patch.dict(
         os.environ,
@@ -99,14 +96,13 @@ class TestS3StorageSignedURLExpiration:
         )
     @patch("plane.settings.storage.boto3")
     def test_generate_presigned_post_uses_custom_expiration(self, mock_boto3):
-        """Test that generate_presigned_post uses custom expiration from env variable
-        BB-PATCH: see test_generate_presigned_post_uses_default_expiration for
-        why this asserts generate_presigned_url instead of generate_presigned_post.
-        """
+        """Test that generate_presigned_post uses custom expiration from env variable"""
         # Mock the boto3 client and its response
         mock_s3_client = Mock()
-        mock_s3_client.generate_presigned_url.return_value = "https://test-url.com"
+        mock_s3_client.generate_presigned_post.return_value = {
+            "url": "https://test-url.com",
+            "fields": {},
+        }
         mock_boto3.client.return_value = mock_s3_client
         # Create S3Storage instance with SIGNED_URL_EXPIRATION=60
@@ -116,10 +112,9 @@ class TestS3StorageSignedURLExpiration:
         storage.generate_presigned_post("test-object", "image/png", 1024)
         # Assert that the boto3 method was called with custom expiration (60)
-        mock_s3_client.generate_presigned_url.assert_called_once()
-        call_kwargs = mock_s3_client.generate_presigned_url.call_args[1]
+        mock_s3_client.generate_presigned_post.assert_called_once()
+        call_kwargs = mock_s3_client.generate_presigned_post.call_args[1]
         assert call_kwargs["ExpiresIn"] == 60
-        assert call_kwargs["HttpMethod"] == "PUT"
     @patch.dict(
         os.environ,
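The `call_args[1]` assertion pattern these tests rely on can be exercised standalone with `unittest.mock`. The client name and parameters below are illustrative stand-ins for the boto3 client being patched:

```python
from unittest.mock import Mock

# Hypothetical stand-in for the patched boto3 S3 client.
s3 = Mock()
s3.generate_presigned_url.return_value = "https://test-url.com"

# The code under test would make a call like this:
s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "b", "Key": "k", "ContentType": "image/png"},
    ExpiresIn=3600,
    HttpMethod="PUT",
)

# Assertion style from the diff: inspect the keyword arguments of the call.
s3.generate_presigned_url.assert_called_once()
call_kwargs = s3.generate_presigned_url.call_args[1]  # kwargs dict
assert call_kwargs["ExpiresIn"] == 3600
assert call_kwargs["HttpMethod"] == "PUT"
```

`call_args[1]` is the kwargs mapping of the most recent call; `call_args[0]` would hold the positional `"put_object"` argument.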

View file

@@ -415,11 +415,12 @@ GENERIC_ASSET_UPLOAD_SUCCESS_RESPONSE = OpenApiResponse(
     name="Generic Asset Upload Response",
     value={
         "upload_data": {
-            "url": "https://s3.amazonaws.com/bucket-name/workspace-id/uuid-filename.pdf?X-Amz-Signature=...",
-            "method": "PUT",
+            "url": "https://s3.amazonaws.com/bucket-name",
             "fields": {
-                "Content-Type": "application/pdf",
                 "key": "workspace-id/uuid-filename.pdf",
+                "AWSAccessKeyId": "AKIA...",
+                "policy": "eyJ...",
+                "signature": "abc123...",
             },
         },
         "asset_id": "550e8400-e29b-41d4-a716-446655440000",

Binary file not shown.


View file

@@ -6,8 +6,6 @@
 import type { AxiosRequestConfig } from "axios";
 import axios from "axios";
-// plane services
-import type { TFileUploadRequest } from "@plane/services";
 // services
 import { APIService } from "@/services/api.service";
@@ -18,18 +16,16 @@ export class FileUploadService extends APIService {
     super("");
   }
-  // BB-PATCH: dispatches on payload.method (PUT for fork default, POST kept
-  // for upstream-Plane parity). See packages/services/src/file/helper.ts.
   async uploadFile(
-    payload: TFileUploadRequest,
+    url: string,
+    data: FormData,
     uploadProgressHandler?: AxiosRequestConfig["onUploadProgress"]
   ): Promise<void> {
     this.cancelSource = axios.CancelToken.source();
-    return this.request({
-      method: payload.method,
-      url: payload.url,
-      data: payload.body,
-      headers: payload.headers,
+    return this.post(url, data, {
+      headers: {
+        "Content-Type": "multipart/form-data",
+      },
       cancelToken: this.cancelSource.token,
       withCredentials: false,
       onUploadProgress: uploadProgressHandler,

View file

@@ -83,7 +83,11 @@ export class FileService extends APIService {
       .then(async (response) => {
         const signedURLResponse: TFileSignedURLResponse = response?.data;
         const fileUploadPayload = generateFileUploadPayload(signedURLResponse, file);
-        await this.fileUploadService.uploadFile(fileUploadPayload, uploadProgressHandler);
+        await this.fileUploadService.uploadFile(
+          signedURLResponse.upload_data.url,
+          fileUploadPayload,
+          uploadProgressHandler
+        );
         await this.updateWorkspaceAssetUploadStatus(workspaceSlug.toString(), signedURLResponse.asset_id);
         return signedURLResponse;
       })
@@ -156,7 +160,11 @@
       .then(async (response) => {
         const signedURLResponse: TFileSignedURLResponse = response?.data;
         const fileUploadPayload = generateFileUploadPayload(signedURLResponse, file);
-        await this.fileUploadService.uploadFile(fileUploadPayload, uploadProgressHandler);
+        await this.fileUploadService.uploadFile(
+          signedURLResponse.upload_data.url,
+          fileUploadPayload,
+          uploadProgressHandler
+        );
         await this.updateProjectAssetUploadStatus(workspaceSlug, projectId, signedURLResponse.asset_id);
         return signedURLResponse;
       })
@@ -182,7 +190,7 @@
       .then(async (response) => {
         const signedURLResponse: TFileSignedURLResponse = response?.data;
         const fileUploadPayload = generateFileUploadPayload(signedURLResponse, file);
-        await this.fileUploadService.uploadFile(fileUploadPayload);
+        await this.fileUploadService.uploadFile(signedURLResponse.upload_data.url, fileUploadPayload);
         await this.updateUserAssetUploadStatus(signedURLResponse.asset_id);
         return signedURLResponse;
       })

View file

@@ -55,7 +55,11 @@ export class IssueAttachmentService extends APIService {
       .then(async (response) => {
         const signedURLResponse: TIssueAttachmentUploadResponse = response?.data;
         const fileUploadPayload = generateFileUploadPayload(signedURLResponse, file);
-        await this.fileUploadService.uploadFile(fileUploadPayload, uploadProgressHandler);
+        await this.fileUploadService.uploadFile(
+          signedURLResponse.upload_data.url,
+          fileUploadPayload,
+          uploadProgressHandler
+        );
         await this.updateIssueAttachmentUploadStatus(workspaceSlug, projectId, issueId, signedURLResponse.asset_id);
         return signedURLResponse.attachment;
       })

View file

@@ -1,216 +0,0 @@
# bb-plane-fork local-test compose — binarybeachio
# ---------------------------------------------------------------------------
# Spins up a Plane stack on the laptop using:
# - OUR PATCHED images (plane-backend, plane-frontend) built from this fork
# - Upstream-vanilla images for the other 4 services (per architecture
# doc §7.4 — only build what we touched)
# - Ephemeral local Postgres + Redis + RabbitMQ + MinIO (NOT shared-postgres;
# this is a destructible dev stack — `docker compose down -v` wipes everything)
# - Hosted Zitadel (auth.binarybeach.io) for the OIDC flow
#
# Build first, then run:
#
# # Build patched images locally
# docker build -t plane-backend:bb-local -f apps/api/Dockerfile.api apps/api/
# docker build -t plane-frontend:bb-local -f apps/web/Dockerfile.web .
#
# # Bring up
# docker compose -f docker-compose.bb-local.yml --env-file .env.bb-local up -d
#
# # Watch logs
# docker compose -f docker-compose.bb-local.yml logs -f api worker
#
# # Visit http://localhost:8888 — log in with email+password (break-glass-style)
# # or, with BB_BRIDGE_PUBLIC_KEY_URL set, exercise the trusted-JWT endpoint
# # by hand: GET http://localhost:8888/auth/sign-in-trusted/?token=<jwt>
#
# Required env (.env.bb-local — gitignored):
# BB_BRIDGE_PUBLIC_KEY_URL= # leave unset for vanilla email+password testing
# ---------------------------------------------------------------------------
x-db-env: &db-env
PGHOST: plane-db
PGDATABASE: plane
POSTGRES_USER: plane
POSTGRES_PASSWORD: plane
POSTGRES_DB: plane
POSTGRES_PORT: 5432
PGDATA: /var/lib/postgresql/data
x-redis-env: &redis-env
REDIS_HOST: plane-redis
REDIS_PORT: 6379
REDIS_URL: redis://plane-redis:6379/
x-mq-env: &mq-env
RABBITMQ_HOST: plane-mq
RABBITMQ_PORT: 5672
RABBITMQ_DEFAULT_USER: plane
RABBITMQ_DEFAULT_PASS: plane
RABBITMQ_DEFAULT_VHOST: plane
RABBITMQ_USER: plane
RABBITMQ_PASSWORD: plane
RABBITMQ_VHOST: plane
x-minio-env: &minio-env
MINIO_ROOT_USER: access-key
MINIO_ROOT_PASSWORD: secret-key
x-aws-s3-env: &aws-s3-env
AWS_REGION: ""
AWS_ACCESS_KEY_ID: access-key
AWS_SECRET_ACCESS_KEY: secret-key
AWS_S3_ENDPOINT_URL: http://plane-minio:9000
AWS_S3_BUCKET_NAME: uploads
x-proxy-env: &proxy-env
APP_DOMAIN: localhost:8888
FILE_SIZE_LIMIT: 5242880
CERT_EMAIL: ""
# Plane proxy's Caddy parser requires a syntactically valid CA URL even
# when not actually using ACME (we serve plain HTTP locally).
CERT_ACME_CA: https://acme-v02.api.letsencrypt.org/directory
CERT_ACME_DNS: ""
LISTEN_HTTP_PORT: 80
LISTEN_HTTPS_PORT: 443
BUCKET_NAME: uploads
SITE_ADDRESS: ":80"
x-live-env: &live-env
API_BASE_URL: http://api:8000
LIVE_SERVER_SECRET_KEY: bb-local-test-live-secret-do-not-reuse
x-app-env: &app-env
WEB_URL: http://localhost:8888
CORS_ALLOWED_ORIGINS: http://localhost:8888
DEBUG: 1
GUNICORN_WORKERS: 1
USE_MINIO: 1
DATABASE_URL: postgresql://plane:plane@plane-db/plane
SECRET_KEY: bb-local-test-django-secret-do-not-reuse-anywhere-real
AMQP_URL: amqp://plane:plane@plane-mq:5672/plane
API_KEY_RATE_LIMIT: 60/minute
MINIO_ENDPOINT_SSL: 0
LIVE_SERVER_SECRET_KEY: bb-local-test-live-secret-do-not-reuse
# === binarybeachio fork: Bucket-4 trusted-JWT entry-point ===
# When BB_BRIDGE_PUBLIC_KEY_URL is set, /auth/sign-in-trusted/ is enabled
# and verifies bridge-issued JWTs against the URL-served PEM. When unset,
# the endpoint returns 404 and Plane behaves like upstream-vanilla.
# See apps/api/plane/authentication/views/app/trusted.py.
BB_BRIDGE_PUBLIC_KEY_URL: ${BB_BRIDGE_PUBLIC_KEY_URL:-}
# === binarybeachio session-lifecycle convention (15 min idle, slide-on-activity) ===
# Canonical: binarybeachio/infrastructure/_shared/.env.session-convention
SESSION_COOKIE_AGE: ${SESSION_COOKIE_AGE:-900}
ADMIN_SESSION_COOKIE_AGE: ${ADMIN_SESSION_COOKIE_AGE:-900}
SESSION_SAVE_EVERY_REQUEST: ${SESSION_SAVE_EVERY_REQUEST:-1}
services:
api:
image: plane-backend:bb-local
command: ./bin/docker-entrypoint-api.sh
environment:
<<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
depends_on:
- plane-db
- plane-redis
- plane-mq
worker:
image: plane-backend:bb-local
command: ./bin/docker-entrypoint-worker.sh
environment:
<<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
depends_on:
- api
beat-worker:
image: plane-backend:bb-local
command: ./bin/docker-entrypoint-beat.sh
environment:
<<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
depends_on:
- api
migrator:
image: plane-backend:bb-local
command: ./bin/docker-entrypoint-migrator.sh
restart: "no"
environment:
<<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
depends_on:
- plane-db
- plane-redis
web:
image: plane-frontend:bb-local
depends_on:
- api
- worker
space:
image: makeplane/plane-space:v1.3.0
depends_on:
- api
- worker
- web
admin:
image: makeplane/plane-admin:v1.3.0
depends_on:
- api
- web
live:
image: makeplane/plane-live:v1.3.0
environment:
<<: [*live-env, *redis-env]
depends_on:
- api
- web
plane-db:
image: postgres:15.7-alpine
command: postgres -c 'max_connections=1000'
environment:
<<: *db-env
volumes:
- bb-local-pgdata:/var/lib/postgresql/data
plane-redis:
image: valkey/valkey:7.2.11-alpine
volumes:
- bb-local-redisdata:/data
plane-mq:
image: rabbitmq:3.13.6-management-alpine
environment:
<<: *mq-env
volumes:
- bb-local-rmqdata:/var/lib/rabbitmq
plane-minio:
image: minio/minio:latest
command: server /export --console-address ":9090"
environment:
<<: *minio-env
volumes:
- bb-local-minio:/export
proxy:
image: makeplane/plane-proxy:v1.3.0
environment:
<<: *proxy-env
ports:
- "8888:80"
depends_on:
- web
- api
- space
- admin
- live
volumes:
bb-local-pgdata:
bb-local-redisdata:
bb-local-rmqdata:
bb-local-minio:
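The regression-safe default described in the compose comments (URL set, endpoint on; URL unset, upstream-vanilla 404) reduces to a truthiness check on the env var. A minimal sketch, with the helper name invented for illustration rather than taken from the fork's code:

```python
import os

def trusted_endpoint_enabled(env=os.environ) -> bool:
    """Feature gate convention: on iff BB_BRIDGE_PUBLIC_KEY_URL is non-empty."""
    return bool(env.get("BB_BRIDGE_PUBLIC_KEY_URL"))

# Unset or empty: /auth/sign-in-trusted/ returns 404, upstream-vanilla behavior.
assert trusted_endpoint_enabled({}) is False
assert trusted_endpoint_enabled({"BB_BRIDGE_PUBLIC_KEY_URL": ""}) is False

# Set: the endpoint verifies bridge-issued JWTs against the URL-served PEM.
assert trusted_endpoint_enabled(
    {"BB_BRIDGE_PUBLIC_KEY_URL": "http://auth-bridge:3000/key.pem"}
) is True
```

Treating the empty string the same as unset matches the `${BB_BRIDGE_PUBLIC_KEY_URL:-}` default in the compose file, which passes `""` through when the variable is absent.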

View file

@@ -7,8 +7,6 @@
 import axios from "axios";
 // api service
 import { APIService } from "../api.service";
-// helpers
-import type { TFileUploadRequest } from "./helper";
 /**
  * Service class for handling file upload operations
@@ -23,17 +21,18 @@ export class FileUploadService extends APIService {
   }
   /**
-   * Uploads a file to the presigned URL using the request descriptor produced
-   * by `generateFileUploadPayload`. BB-PATCH: dispatches on `payload.method`
-   * (PUT for the fork default, POST kept for upstream-Plane parity).
+   * Uploads a file to the specified signed URL
+   * @param {string} url - The URL to upload the file to
+   * @param {FormData} data - The form data to upload
+   * @returns {Promise<void>} Promise resolving to void
+   * @throws {Error} If the request fails
    */
-  async uploadFile(payload: TFileUploadRequest): Promise<void> {
+  async uploadFile(url: string, data: FormData): Promise<void> {
     this.cancelSource = axios.CancelToken.source();
-    return this.request({
-      method: payload.method,
-      url: payload.url,
-      data: payload.body,
-      headers: payload.headers,
+    return this.post(url, data, {
+      headers: {
+        "Content-Type": "multipart/form-data",
+      },
       cancelToken: this.cancelSource.token,
       withCredentials: false,
     })

View file

@@ -49,58 +49,17 @@ const validateFilename = (filename: string): string | null => {
   return null;
 };
-// BB-PATCH (binarybeachio fork): upload payload is now a request descriptor
-// (url+method+body+headers), not raw FormData. The fork mints presigned PUT
-// URLs because R2/B2 don't implement PostObject — see backend storage.py
-// docstring + docs/features/storage-upload-flow.md.
-export type TFileUploadRequest = {
-  url: string;
-  method: "PUT" | "POST";
-  body: File | FormData;
-  headers: Record<string, string>;
-};
 /**
- * @description Build a request descriptor for uploading the file using the
- * presigned URL returned by the API. Dispatches on `upload_data.method`:
- * - "PUT" (fork default): raw file body + Content-Type header
- * - "POST" (vanilla AWS S3 path, kept for upstream parity): multipart form
+ * @description from the provided signed URL response, generate a payload to be used to upload the file
  * @param {TFileSignedURLResponse} signedURLResponse
  * @param {File} file
- * @returns {TFileUploadRequest}
+ * @returns {FormData} file upload request payload
  */
-export const generateFileUploadPayload = (
-  signedURLResponse: TFileSignedURLResponse,
-  file: File
-): TFileUploadRequest => {
-  const data = signedURLResponse.upload_data;
-  if (data.method === "POST") {
-    const formData = new FormData();
-    Object.entries(data.fields).forEach(
-      ([key, value]) => value != null && formData.append(key, value as string)
-    );
-    formData.append("file", file);
-    return {
-      url: data.url,
-      method: "POST",
-      body: formData,
-      headers: { "Content-Type": "multipart/form-data" },
-    };
-  }
-  // Content-Type MUST exactly match what the backend signed in the presigned
-  // PUT URL — the AWS SigV4 signature includes Content-Type as a signed header.
-  // If we send the browser's `file.type` (which guesses from extension) but
-  // the backend signed `fileMetaData.type` (from the file-type library, which
-  // sniffs file magic bytes), they often disagree and R2 returns 403
-  // SignatureDoesNotMatch. Always prefer the signed value.
-  const signedContentType =
-    data.fields["Content-Type"] || file.type || "application/octet-stream";
-  return {
-    url: data.url,
-    method: "PUT",
-    body: file,
-    headers: { "Content-Type": signedContentType },
-  };
-};
+export const generateFileUploadPayload = (signedURLResponse: TFileSignedURLResponse, file: File): FormData => {
+  const formData = new FormData();
+  Object.entries(signedURLResponse.upload_data.fields).forEach(([key, value]) => formData.append(key, value));
+  formData.append("file", file);
+  return formData;
+};
 /**
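The fork-side helper removed in this hunk dispatches on `upload_data.method`: PUT sends the raw body with the signed Content-Type, POST keeps upstream's multipart form. A hypothetical Python analogue of that dispatch (function and argument names invented for illustration, not the fork's actual code):

```python
def generate_file_upload_payload(upload_data: dict, body: bytes, browser_type: str) -> dict:
    """Build a request descriptor (url/method/body/headers) from signed upload data."""
    if upload_data.get("method") == "POST":
        # Upstream-parity path: multipart form carrying the policy fields plus the file.
        form = {k: v for k, v in upload_data["fields"].items() if v is not None}
        form["file"] = body
        return {
            "url": upload_data["url"],
            "method": "POST",
            "body": form,
            "headers": {"Content-Type": "multipart/form-data"},
        }
    # Fork default: raw PUT body. Prefer the *signed* Content-Type, since the
    # browser's guess may disagree with the value baked into the signature.
    content_type = (
        upload_data["fields"].get("Content-Type")
        or browser_type
        or "application/octet-stream"
    )
    return {
        "url": upload_data["url"],
        "method": "PUT",
        "body": body,
        "headers": {"Content-Type": content_type},
    }

req = generate_file_upload_payload(
    {
        "url": "https://r2.example/signed",
        "method": "PUT",
        "fields": {"Content-Type": "image/png", "key": "ws/img.png"},
    },
    b"\x89PNG",
    "application/octet-stream",
)
assert req["method"] == "PUT"
assert req["headers"]["Content-Type"] == "image/png"  # signed value wins over the browser's
```

The key design choice mirrored here is that the uploader never picks the Content-Type itself; it always echoes whatever the backend signed.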

View file

@@ -88,7 +88,7 @@ export class SitesFileService extends FileService {
       .then(async (response) => {
         const signedURLResponse: TFileSignedURLResponse = response?.data;
         const fileUploadPayload = generateFileUploadPayload(signedURLResponse, file);
-        await this.fileUploadService.uploadFile(fileUploadPayload);
+        await this.fileUploadService.uploadFile(signedURLResponse.upload_data.url, fileUploadPayload);
         await this.updateAssetUploadStatus(anchor, signedURLResponse.asset_id);
         return signedURLResponse;
       })

View file

@@ -20,19 +20,19 @@ export type TFileEntityInfo = {
 export type TFileMetaData = TFileMetaDataLite & TFileEntityInfo;
-// BB-PATCH (binarybeachio fork): upload now uses presigned PUT (not POST).
-// `method` and the trimmed `fields` shape reflect that. See backend
-// plane/settings/storage.py docstring + docs/features/storage-upload-flow.md
-// in binarybeachio for the full decision record.
 export type TFileSignedURLResponse = {
   asset_id: string;
   asset_url: string;
   upload_data: {
     url: string;
-    method?: "PUT" | "POST";
     fields: {
       "Content-Type": string;
       key: string;
+      "x-amz-algorithm": string;
+      "x-amz-credential": string;
+      "x-amz-date": string;
+      policy: string;
+      "x-amz-signature": string;
     };
   };
 };
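The two `upload_data` shapes this type covers can be compared directly. Field values below are placeholders, not real credentials; the point is that `method` is optional, so its absence selects the upstream POST flow:

```python
# Fork shape: presigned PUT, trimmed fields, explicit method.
fork_upload_data = {
    "url": "https://r2.example/signed-put",
    "method": "PUT",
    "fields": {"Content-Type": "image/png", "key": "ws/img.png"},
}

# Upstream shape: presigned POST with the full policy field set, no method key.
upstream_upload_data = {
    "url": "https://s3.amazonaws.com/bucket-name",
    "fields": {
        "Content-Type": "image/png",
        "key": "ws/img.png",
        "x-amz-algorithm": "AWS4-HMAC-SHA256",
        "x-amz-credential": "...",
        "x-amz-date": "...",
        "policy": "...",
        "x-amz-signature": "...",
    },
}

# Optional `method` defaulting to POST keeps old responses parseable:
assert fork_upload_data.get("method", "POST") == "PUT"
assert upstream_upload_data.get("method", "POST") == "POST"
```

Defaulting the missing key rather than requiring it is what lets the fork's frontend still talk to an unpatched backend.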