[Website] Support CURLFile uploads#3341
Merged
adamziel merged 16 commits into WordPress:trunk (Mar 6, 2026)
Conversation
PHP's curl sends `Expect: 100-continue` for POST bodies larger than 1024 bytes (e.g. `CURLFile` uploads). The `fetch()` API does not support this header and rejects the request with "expect header not supported". This commit strips the `Expect` header in `parseRequestHeaders()` before creating the fetch `Request` object.

It also fixes header value parsing to correctly handle values containing ": " (e.g. URLs in `Location` headers) by using `indexOf` instead of `split`.

Adds tests for `CURLFile` uploads in both the Node.js networking layer and the browser's `TCPOverFetchWebsocket`, including a test that simulates the `Expect: 100-continue` delayed-body pattern.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
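The header-parsing fix can be sketched roughly as follows. The function name `parseHeaderLine` is hypothetical; the actual logic lives in `parseRequestHeaders()` in `tcp-over-fetch-websocket.ts`:

```typescript
// Hypothetical sketch of the fix: split each header line on the FIRST
// ": " only, so values that themselves contain the delimiter (e.g. URLs
// in a Location header) survive intact. Splitting with String.split
// would truncate the value at the next occurrence of the delimiter.
function parseHeaderLine(line: string): { name: string; value: string } {
	const separatorAt = line.indexOf(': ');
	if (separatorAt === -1) {
		// Malformed line; treat the whole thing as a name with no value.
		return { name: line.trim(), value: '' };
	}
	return {
		name: line.slice(0, separatorAt).toLowerCase(),
		value: line.slice(separatorAt + 2).trim(),
	};
}
```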
Contributor
Pull request overview
Fixes PHP `CURLFile` uploads in the browser runtime by preventing `fetch()` from receiving the unsupported `Expect: 100-continue` header.
Changes:
- Make HTTP header parsing more robust (split on the first delimiter rather than using `String.split`).
- Strip the `Expect` header before issuing requests via `fetch()`.
- Add multipart upload tests (including an `Expect: 100-continue` scenario) for web + node runtimes.
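The `Expect` handling described in the bullets above can be sketched like this (function and constant names are hypothetical, not the actual identifiers in `tcp-over-fetch-websocket.ts`):

```typescript
// Hypothetical sketch: detect "Expect: 100-continue", strip it so that
// fetch() accepts the request, and hand back an interim 100 Continue
// response for the caller to write to curl's socket.
const CONTINUE_RESPONSE = 'HTTP/1.1 100 Continue\r\n\r\n';

function stripExpectHeader(headers: Map<string, string>): string | null {
	const expect = headers.get('expect');
	if (expect?.toLowerCase() === '100-continue') {
		headers.delete('expect'); // fetch() rejects requests carrying Expect
		return CONTINUE_RESPONSE; // unblocks curl so it sends the body
	}
	return null;
}
```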
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| packages/php-wasm/web/src/lib/tcp-over-fetch-websocket.ts | Updates request header parsing and removes Expect to avoid fetch() rejection. |
| packages/php-wasm/web/src/lib/tcp-over-fetch-websocket.spec.ts | Adds /upload test endpoint and multipart upload coverage (incl. Expect scenario). |
| packages/php-wasm/node/src/test/php-networking.spec.ts | Adds a node-side upload server and a PHP CURLFile upload regression test. |
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
PHP's `CURLFile` uploads through the CORS proxy were hanging due to four interconnected bugs in the networking stack.

1. PHP curl sends `Expect: 100-continue` for POST bodies larger than 1024 bytes, then pauses, waiting for a server response before sending the body. Our code waited for the body before fetching, creating a deadlock. We now detect this header, strip it, and send back `HTTP/1.1 100 Continue` to unblock curl.
2. When the full request body arrived together with the headers in a single chunk (common for small POST bodies), the body stream never closed because `pull()` kept waiting for more upstream data. We now close the stream immediately once `Content-Length` is satisfied.
3. `teeRequest` converted bodies to `ReadableStream` branches, and Chrome's streaming upload failed with `ERR_ALPN_NEGOTIATION_FAILED` against the CORS proxy. We now buffer the body into an `ArrayBuffer` before the CORS proxy retry.
4. The CORS proxy PHP read from `php://input`, which is empty for `multipart/form-data` requests since PHP auto-parses them. We now reconstruct the body from `$_POST` and `$_FILES` using `CURLFile`.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
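The second fix, closing the body stream as soon as `Content-Length` is satisfied, can be sketched as below. `makeBodyStream` and its shape are hypothetical; the real code wires this into the TCP-over-fetch request parser:

```typescript
// Hypothetical sketch: track how many body bytes have arrived and close
// the ReadableStream the moment Content-Length is reached, even when the
// body came in the same chunk as the headers. Previously the stream's
// pull() kept waiting for more upstream data that would never come.
function makeBodyStream(contentLength: number) {
	let received = 0;
	let controllerRef!: ReadableStreamDefaultController<Uint8Array>;
	const stream = new ReadableStream<Uint8Array>({
		start(controller) {
			controllerRef = controller;
		},
	});
	return {
		stream,
		push(chunk: Uint8Array) {
			received += chunk.byteLength;
			controllerRef.enqueue(chunk);
			if (received >= contentLength) {
				controllerRef.close(); // the consumer sees EOF immediately
			}
		},
	};
}
```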
When reconstructing a `multipart/form-data` body from `$_POST`/`$_FILES`, the CORS proxy was forwarding the original `Content-Type` header from the browser, which contained a boundary that didn't match the newly generated body. PHP curl generates its own boundary when using `CURLOPT_POSTFIELDS` with an array, so the original `Content-Type` and `Content-Length` must be stripped to let curl set matching headers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
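To illustrate why the stale header breaks parsing, here is a minimal sketch of building a multipart body with a freshly generated boundary. The function is hypothetical (the actual reconstruction happens in `cors-proxy.php` via curl); the point is that the `Content-Type` header must advertise the same boundary as the body it accompanies:

```typescript
// Hypothetical sketch: a regenerated multipart body needs a Content-Type
// that carries the SAME boundary, otherwise the receiving server splits
// the body on a boundary string that never appears in it.
function buildMultipartBody(fields: Record<string, string>) {
	const boundary = '----sketch' + Math.random().toString(16).slice(2);
	const parts = Object.entries(fields).map(
		([name, value]) =>
			`--${boundary}\r\n` +
			`Content-Disposition: form-data; name="${name}"\r\n\r\n` +
			`${value}\r\n`
	);
	return {
		contentType: `multipart/form-data; boundary=${boundary}`,
		body: parts.join('') + `--${boundary}--\r\n`,
	};
}
```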
Instead of always buffering the request body into an `ArrayBuffer` before sending to the CORS proxy, try streaming first. This works in production, where the CORS proxy supports HTTP/2. In development, where Vite proxies over HTTP/1.1, Chrome rejects streaming uploads with `ERR_ALPN_NEGOTIATION_FAILED`; in that case, we fall back to buffering. This avoids exhausting memory on large file uploads.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
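A minimal sketch of that retry strategy, with the function name and injectable `fetchImpl` parameter being assumptions made here so the strategy can be exercised without a network:

```typescript
// Hypothetical sketch: attempt a streaming upload first, and fall back to
// a fully buffered ArrayBuffer body when the streaming request is rejected
// (as Chrome does over HTTP/1.1).
async function postWithFallback(
	url: string,
	streamBody: () => BodyInit,
	bufferBody: () => Promise<ArrayBuffer>,
	fetchImpl: typeof fetch = fetch
): Promise<Response> {
	try {
		// duplex: 'half' is required for ReadableStream request bodies.
		return await fetchImpl(url, {
			method: 'POST',
			body: streamBody(),
			duplex: 'half',
		} as RequestInit);
	} catch {
		// Streaming failed (e.g. ERR_ALPN_NEGOTIATION_FAILED): buffer and retry.
		return await fetchImpl(url, { method: 'POST', body: await bufferBody() });
	}
}
```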
Chrome requires HTTP/2 for streaming request bodies (`ReadableStream` with `duplex: 'half'`). Vite's dev server only speaks HTTP/1.1, so streaming uploads through it fail with `ERR_ALPN_NEGOTIATION_FAILED`. This adds a small Node.js HTTP/2 reverse proxy that sits in front of the PHP CORS proxy server. On first run, it generates a self-signed certificate for localhost and caches it. The dev and CI CORS proxy URL now points to this HTTP/2 server (https://localhost:5264) instead of going through Vite's proxy. With this change, we no longer need to buffer request bodies into `ArrayBuffer`s before sending to the CORS proxy: streaming works for both small and large uploads.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The PHP built-in server responds with `Transfer-Encoding: chunked`, which is an HTTP/1.1 connection-specific header forbidden in HTTP/2 (RFC 9113, Section 8.2.2). Strip connection-specific headers from upstream responses before forwarding them over HTTP/2.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
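The header stripping can be sketched as a small filter. The function name and the exact header list are assumptions here; RFC 9113 names `Connection`, `Proxy-Connection`, `Keep-Alive`, `Transfer-Encoding`, and `Upgrade` as connection-specific:

```typescript
// Hypothetical sketch: remove HTTP/1.1 connection-specific headers before
// forwarding an upstream response over HTTP/2, where they are forbidden.
const CONNECTION_SPECIFIC = new Set([
	'connection',
	'keep-alive',
	'proxy-connection',
	'transfer-encoding',
	'upgrade',
]);

function stripConnectionHeaders(
	headers: Record<string, string>
): Record<string, string> {
	return Object.fromEntries(
		Object.entries(headers).filter(
			([name]) => !CONNECTION_SPECIFIC.has(name.toLowerCase())
		)
	);
}
```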
Add `ignoreHTTPSErrors` to the shared Playwright config so Firefox also accepts the self-signed certificate on the HTTP/2 CORS proxy. Chromium already had `--ignore-certificate-errors`, but Firefox needs Playwright's `ignoreHTTPSErrors` setting instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
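The configuration change might look roughly like this (a sketch of a `playwright.config.ts` fragment, not the repository's actual config file):

```typescript
// playwright.config.ts (fragment, sketch)
import { defineConfig } from '@playwright/test';

export default defineConfig({
	use: {
		// Firefox has no --ignore-certificate-errors equivalent, so this
		// makes all browsers accept the self-signed certificate served by
		// the HTTP/2 CORS proxy at https://localhost:5264.
		ignoreHTTPSErrors: true,
	},
});
```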
adamziel
commented
Mar 6, 2026
return $len;
}

if (in_array($name, ['strict-transport-security', 'content-security-policy', 'upgrade-insecure-requests'], true)) {
Collaborator
Author
Hmm this matters for the local dev server but not so much in production.
…TP/1.1

The dev CORS proxy no longer needs an HTTP/2 reverse proxy with self-signed certificates. Instead, request bodies are buffered into an `ArrayBuffer` before calling `fetch()` when the target is HTTP/1.1, which avoids the streaming-body limitations that originally motivated the HTTP/2 proxy.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed a83722c to 65d252b.
Body buffering was happening inside `parseHttpRequest()`, which blocked before returning. But `100 Continue` is sent by the callers AFTER `parseHttpRequest()` returns. This caused a deadlock: `parseHttpRequest()` waited for body data that wouldn't arrive until `100 Continue` was sent.

Move body buffering to `fetchOverHTTP()`, after the `100 Continue` response, so curl can send the body before we try to buffer it.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
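The ordering constraint can be demonstrated with a tiny sketch. `handleRequest` and its two callbacks are hypothetical stand-ins for the real `fetchOverHTTP()` flow:

```typescript
// Hypothetical sketch of the ordering fix: the interim 100 Continue must
// go out BEFORE we start buffering the body, because curl holds the body
// back until it sees that response. Reversing these two lines reproduces
// the deadlock described above.
async function handleRequest(
	sendContinue: () => void,
	readBody: () => Promise<Uint8Array>
): Promise<Uint8Array> {
	sendContinue(); // unblocks curl first...
	return await readBody(); // ...so the body can actually arrive
}
```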
What it does
Fixes CURLFile uploads (and large POST requests in general) hanging indefinitely in the browser.
Rationale
PHP curl sends an `Expect: 100-continue` header for POST bodies larger than 1024 bytes. It pauses after the headers and waits for a `100 Continue` response before transmitting the body. The TCP-over-fetch layer, which translates PHP's raw HTTP bytes into browser `fetch()` calls, didn't understand this protocol. It sat waiting for the body that curl was holding back, creating a deadlock that hung the page forever.

On top of that, the CORS proxy couldn't forward multipart uploads at all. PHP's built-in server consumes `php://input` when it parses `multipart/form-data` into `$_POST` and `$_FILES`, so the proxy was reading an empty stream.

Implementation
- TCP-over-fetch layer (`tcp-over-fetch-websocket.ts`): `Expect: 100-continue` support. When PHP curl sends this header, the layer now detects it, strips it (the `fetch()` API doesn't support it), and sends back `HTTP/1.1 100 Continue` to unblock curl before issuing the actual `fetch()`.
- Body buffering for HTTP/1.1. Chrome does not support using a `ReadableStream` request body with HTTP/1.1 requests. If we just always set `duplex: 'half'`, we'll get an `ERR_ALPN_NEGOTIATION_FAILED` error, as Chrome will refuse to use duplex over HTTP/1.1 and will switch to HTTP/2. An HTTP/1.1-only server, however, will still reply with an HTTP/1.1 response, causing that ALPN error. We do not know upfront what kind of server we're talking to, so we make a guess. Most servers do not support HTTP >= 2 without TLS, so we can assume that anything starting with `http://` requires buffering the body stream. This solves the ALPN negotiation problem on the local dev server. There will, inevitably, be some ancient HTTP/1.1+TLS servers on the internet that will fall into the `duplex: 'half'` trap. This is not a big problem, though, since those requests will fail and be retried over the CORS proxy, which runs alongside Playground and speaks either HTTP/1.1 in the local dev server or HTTP/2+ in production.
- `multipart/form-data` requests in the CORS proxy (`cors-proxy.php`). It no longer relies on `php://input`, which isn't populated for multipart requests, but reconstructs the body from `$_POST` and `$_FILES` using `CURLFile`.
- Cache only GET/HEAD responses in `offline-mode-cache.ts`, otherwise `cache.put()` throws errors.

Testing instructions
Run the TCP-over-fetch unit tests:

`npx nx test php-wasm-web --testFile=tcp-over-fetch-websocket.spec.ts`

Tests cover body stream closing, `Expect: 100-continue` handling, and HTTP/1.1 body buffering.
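The HTTP/1.1 buffering guess from the implementation notes above can be sketched as a one-line predicate (the function name is hypothetical):

```typescript
// Hypothetical sketch of the heuristic: plaintext http:// targets almost
// certainly speak HTTP/1.1 (HTTP/2 is rarely offered without TLS), so
// buffer their request bodies; https:// targets get the streaming
// attempt first and fall back via the CORS proxy if it fails.
function shouldBufferBody(url: string): boolean {
	return new URL(url).protocol === 'http:';
}
```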