feat(jsonrpc): add resource restrict for jsonrpc #6728
317787106 wants to merge 15 commits into tronprotocol:develop from
Conversation
| private int maxSubTopics = 1000;
| private int maxBlockFilterNum = 50000;
| private int maxBatchSize = 100;
| private int maxResponseSize = 25 * 1024 * 1024;
[SHOULD] Use a memory-size config type for maxResponseSize
private int maxResponseSize = 25 * 1024 * 1024 is a byte-quantity field, but it is read as a raw int so the config file has to spell out 26214400 instead of a human-readable 25M / 25MiB. The project's config conventions call for getMemorySize() for size-class settings — keeping int here makes the value error-prone for operators (the inline comment // 25 MB = 25 * 1024 * 1024 B in config.conf is an early symptom). maxBatchSize and maxAddressSize are count-class and int is fine for them.
Suggestion: change maxResponseSize to a String field and parse it with getMemorySize(), so HOCON values like 25M work; keep the count-class fields as int.
Using getMemorySize() increases the cognitive burden for users; using explicit integer values better conveys the intended meaning.
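For readers weighing the two options, a minimal sketch of the getMemorySize() variant the reviewer describes, assuming Typesafe Config (the HOCON library behind config.conf); the path name, class name, and default handling here are illustrative, not the merged code:

```java
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class MaxResponseSizeConfig {
  // 25 MB default, matching the current int constant.
  private static final long DEFAULT_MAX_RESPONSE_SIZE = 25L * 1024 * 1024;

  // Reads the limit as a memory size so HOCON values like "25MiB" work,
  // while a plain byte count such as 26214400 is still accepted.
  static long readMaxResponseSize(Config config) {
    String path = "node.jsonrpc.maxResponseSize"; // illustrative path
    return config.hasPath(path)
        ? config.getMemorySize(path).toBytes()
        : DEFAULT_MAX_RESPONSE_SIZE;
  }

  public static void main(String[] args) {
    Config cfg = ConfigFactory.parseString("node.jsonrpc.maxResponseSize = 25MiB");
    System.out.println(readMaxResponseSize(cfg)); // prints 26214400
  }
}
```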
| BufferedResponseWrapper bufferedResp = new BufferedResponseWrapper(
|     resp, parameter.getJsonRpcMaxResponseSize());
|
| try {
[SHOULD] Wrapping every handler exception as IOException breaks the JSON-RPC over HTTP 200 contract
catch (Exception e) throw new IOException("RPC execution failed", e) rethrows every RuntimeException from rpcServer.handle. The parent RateLimiterServlet.service only catches ServletException | IOException and re-throws, so the servlet container emits an HTTP 500 with no JSON-RPC body. jsonrpc4j's ErrorResolver would normally convert internal exceptions into a structured error response on HTTP 200 — that contract is now lost. ETH-compatible clients (web3.js / ethers / web3j) treat HTTP 500 as a transport failure and will retry, amplifying load on the node under stress.
Suggestion: drop the catch (let the original IOException path from rpcServer.handle propagate so jsonrpc4j's structured error path stays intact); or log the cause and emit -32603 Internal error via writeJsonRpcError so the HTTP 200 + JSON-RPC error contract is preserved.
Add -32603 Internal error for RuntimeException, other IOException will be rethrown.
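A rough sketch of that agreed shape, written as a fragment of the servlet's handling method; rpcServer, logger, writeJsonRpcError, and BufferedResponseWrapper are the PR's own members as quoted in this thread, while JsonRpcError.INTERNAL_ERROR is a hypothetical constant for -32603:

```java
// Illustrative fragment, not the merged code.
try {
  rpcServer.handle(cachedReq, bufferedResp);
} catch (RuntimeException e) {
  // Unexpected handler failures stay on the HTTP 200 + JSON-RPC error contract.
  logger.error("RPC execution failed", e);
  writeJsonRpcError(resp, JsonRpcError.INTERNAL_ERROR, "Internal error", null); // -32603
  return;
}
// IOException from handle() is not caught here, so it still propagates to the
// container the same way it did before this PR.
// Overflow check and bufferedResp.commitToResponse() follow as in the PR.
```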
| try {
|   body = readBody(req.getInputStream());
|   rootNode = MAPPER.readTree(body);
| } catch (IOException e) {
[SHOULD] Empty or whitespace-only body can make readTree return null and NPE the next line
For zero-length / whitespace-only input, MAPPER.readTree(byte[]) can return null (depending on Jackson version and parser feature flags) rather than a MissingNode. Line 99 then dereferences rootNode.isArray() and throws NullPointerException. The NPE is not caught by the IOException clause at line 95 — it bubbles into the catch (Exception e) at line 109, gets wrapped as IOException, and the client sees HTTP 500 instead of the -32700 Parse error the parse path was supposed to return.
Suggestion: after MAPPER.readTree(body), treat rootNode == null || rootNode.isMissingNode() as a parse error and emit -32700 via writeJsonRpcError.
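A minimal sketch of that guard, reusing the servlet's MAPPER, writeJsonRpcError, and JsonRpcError.PARSE_ERROR as they appear in the diff (names taken from the quoted code, not verified against the final revision):

```java
rootNode = MAPPER.readTree(body);
// Empty or whitespace-only input may yield null (or a missing node) instead of
// throwing, so fail as a parse error before rootNode.isArray() can NPE.
if (rootNode == null || rootNode.isMissingNode()) {
  writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null); // -32700
  return;
}
```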
| if (rootNode.isArray() && batchSize > 0 && rootNode.size() > batchSize) {
|   writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
|       "Batch size " + rootNode.size() + " exceeds the limit of " + batchSize, null);
|   return;
[SHOULD] Batch-size rejection returns a single error object, not a JSON-RPC 2.0 batch array
When rootNode.isArray() is true and the batch exceeds maxBatchSize, the current code calls writeJsonRpcError(...) which writes a single object response. JSON-RPC 2.0 requires that a batch request be answered with a JSON array of responses. Standard ETH-compatible clients (web3.js, ethers.js, web3j) parsing a batch response as an array will fail or silently drop the entire result, instead of surfacing the structured -32005 error to the caller.
Suggestion: when rejecting an over-sized batch, write a single-element array [{jsonrpc:"2.0", error:{code:-32005, message:...}, id:null}] so it round-trips through batch-aware clients. Same applies to the response-too-large path on line 117 if the original request was an array.
Good catch, use a single-element array when batch exceeds maxBatchSize or response is too large.
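One way the single-element array response could be built with Jackson; this is an illustrative standalone helper under the assumptions of this thread (error code -32005, null id), not the PR's writeJsonRpcError:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

final class BatchErrorWriter {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  // Writes [{"jsonrpc":"2.0","error":{"code":-32005,"message":...},"id":null}]
  // so batch-aware clients still receive an array for a batch request.
  static void writeBatchLimitError(HttpServletResponse resp, int actual, int limit)
      throws IOException {
    ObjectNode error = MAPPER.createObjectNode();
    error.put("code", -32005);
    error.put("message", "Batch size " + actual + " exceeds the limit of " + limit);

    ObjectNode item = MAPPER.createObjectNode();
    item.put("jsonrpc", "2.0");
    item.set("error", error);
    item.putNull("id");

    ArrayNode batch = MAPPER.createArrayNode().add(item);

    resp.setStatus(HttpServletResponse.SC_OK);
    resp.setContentType("application/json");
    MAPPER.writeValue(resp.getOutputStream(), batch);
  }
}
```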
| }
|
| @Override
| public void setStatus(int sc) {
[SHOULD] Override getStatus and intercept setHeader/addHeader for Content-Length
Header capture currently only covers setStatus, setContentType, setContentLength(int|long). Two gaps:
- getStatus() is not overridden. The inherited HttpServletResponseWrapper.getStatus() returns the underlying response's status (still SC_OK until commitToResponse runs). Any logging filter / metrics interceptor that reads status via the wrapper before commit will see a stale value.
- setHeader(name, value) / addHeader(name, value) pass through to the underlying response. jsonrpc4j currently uses setContentLength so this is latent — but any downstream filter or library upgrade that writes Content-Length via setHeader would commit a Content-Length to the actual response before the size check runs.
Suggestion: override getStatus() to return this.status; intercept setHeader / addHeader for Content-Length (case-insensitive) so they go through the same buffering / overflow check as setContentLength.
Thanks for your review:
- getStatus() is overridden.
- Added an additional check of Content-Length for setHeader and addHeader. Really, it will be overridden by actual.setContentLength(buffer.size()); so there is little necessity.
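For illustration, a sketch of the two overrides inside the response wrapper, assuming the wrapper's buffered status field and the existing setContentLength buffering path discussed above (field and class names are the thread's, not verified):

```java
@Override
public int getStatus() {
  // Report the buffered status instead of the still-uncommitted underlying one.
  return this.status;
}

@Override
public void setHeader(String name, String value) {
  if ("Content-Length".equalsIgnoreCase(name)) {
    // Route through the same buffered path as setContentLength so the real
    // response never gets a Content-Length before the size check runs.
    setContentLength(Integer.parseInt(value)); // malformed values throw NumberFormatException
    return;
  }
  super.setHeader(name, value);
}

@Override
public void addHeader(String name, String value) {
  if ("Content-Length".equalsIgnoreCase(name)) {
    setContentLength(Integer.parseInt(value));
    return;
  }
  super.addHeader(name, value);
}
```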
| public int jsonRpcMaxBlockFilterNum = 50000;
| @Getter
| @Setter
| public int jsonRpcMaxBatchSize = 100;
[SHOULD] Validate non-negative range for the new size-limit fields at config load
The three new fields (jsonRpcMaxBatchSize, jsonRpcMaxResponseSize, jsonRpcMaxAddressSize) are read via Args.applyNodeConfig with no range validation. The > 0 guards in the call sites mean a negative value silently becomes a permanent 'no limit' state — that is fine if <= 0 is the documented contract, but neither reference.conf nor config.conf says so explicitly, only > 0 otherwise no limit. Operators reading the comment may assume only 0 disables the limit; setting -1 (a common 'unset' sentinel) silently has the same effect, while Integer.MIN_VALUE is also accepted with no warning.
Suggestion: validate value >= 0 in Args.applyNodeConfig (reject startup with a clear error on negative values), and update the reference/config comments to spell out the exact 'disabled' semantics — e.g. # 0 disables the limit; negative values are rejected at startup.
The comment specifies it already, but I can optimize it to "<=0 means no limit" in config.conf.
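For completeness, a minimal sketch of the reviewer's suggested startup validation, assuming Typesafe Config; the method name and its placement in Args are illustrative, and note the author instead opted to document <=0 as "no limit" rather than reject negatives:

```java
// Illustrative helper in the spirit of the suggestion; not the merged code.
static int readNonNegativeInt(Config config, String path, int defaultValue) {
  if (!config.hasPath(path)) {
    return defaultValue;
  }
  int value = config.getInt(path);
  if (value < 0) {
    // Fail fast at startup with an explicit message instead of silently
    // treating -1 (or Integer.MIN_VALUE) as "unlimited".
    throw new IllegalArgumentException(
        path + " must be >= 0 (0 disables the limit), but was " + value);
  }
  return value;
}
```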
| }
|
| @Override
| public ServletInputStream getInputStream() {
[SHOULD] getInputStream() and getReader() should be mutually exclusive per servlet spec
Servlet 3.1 spec (§ 5.4 / § 5.5) requires that once one of getInputStream() / getReader() has been called on a request, the other must throw IllegalStateException. This wrapper returns a fresh stream/reader from the cached byte array on every call and allows arbitrary interleaving. jsonrpc4j only calls one today, so the divergence is latent — but any future filter that reads the body through the other accessor would silently double-read with no error, which is exactly the kind of bug the spec wants to prevent.
Suggestion: track which accessor was used first (boolean field) and throw IllegalStateException on the second.
At present, jsonrpc4j only invokes one of them; this is a potential issue rather than an existing bug. Adding the relevant checks is actually redundant, but I will try it.
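A self-contained sketch of what the exclusivity rule could look like; this is not the PR's CachedBodyRequestWrapper (class name, constructor, and body capture here are illustrative), only the first-accessor-wins check:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

class ExclusiveCachedBodyRequest extends HttpServletRequestWrapper {
  private final byte[] cachedBody;
  private boolean inputStreamCalled;
  private boolean readerCalled;

  ExclusiveCachedBodyRequest(HttpServletRequest request, byte[] cachedBody) {
    super(request);
    this.cachedBody = cachedBody;
  }

  @Override
  public ServletInputStream getInputStream() {
    if (readerCalled) {
      throw new IllegalStateException("getReader() has already been called");
    }
    inputStreamCalled = true;
    ByteArrayInputStream source = new ByteArrayInputStream(cachedBody);
    // Minimal replay stream over the cached body.
    return new ServletInputStream() {
      @Override public int read() { return source.read(); }
      @Override public boolean isFinished() { return source.available() == 0; }
      @Override public boolean isReady() { return true; }
      @Override public void setReadListener(ReadListener readListener) { }
    };
  }

  @Override
  public BufferedReader getReader() {
    if (inputStreamCalled) {
      throw new IllegalStateException("getInputStream() has already been called");
    }
    readerCalled = true;
    return new BufferedReader(
        new InputStreamReader(new ByteArrayInputStream(cachedBody), StandardCharsets.UTF_8));
  }
}
```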
| *
| * <p>Header-mutating methods ({@code setStatus}, {@code setContentType}) are buffered here and
| * only forwarded to the real response via {@link #commitToResponse()}, preventing a timed-out
| * handler thread from racing with the timeout error writer.
[NIT] Class javadoc references a timeout race that has no implementation
The class javadoc says headers are buffered to prevent 'a timed-out handler thread from racing with the timeout error writer' — but this PR has no timeout / cancellation logic, and overflow is not volatile. The comment implies a concurrency guarantee that the code does not provide, which is misleading for future maintainers who might rely on it.
Suggestion: drop the timeout-race wording, or convert the implication into an actual constraint (single-threaded handler assumption documented; or volatile flag if multi-threaded use is intended).
Good catch, remove the timeout-related doc. I will add volatile to overflow though it's not necessary.
| body = readBody(req.getInputStream());
| rootNode = MAPPER.readTree(body);
| } catch (IOException e) {
|   writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
[NIT] Don't collapse transport IOException with JSON parse errors
The single catch (IOException e) at line 95 maps every IO failure to -32700 Parse error. readBody can throw IOException for legitimate transport issues (client aborted, socket reset, read timeout); none of those are parse errors. Only Jackson's JsonProcessingException (an IOException subclass) should map to -32700. Mixing them makes server-side logs less useful for diagnosing real client/network issues.
Suggestion: catch JsonProcessingException separately for -32700, and either let other IOExceptions propagate or map them to a distinct code (and log).
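A sketch of the split, keeping readBody, MAPPER, and writeJsonRpcError as they appear in the quoted diff; the logger call is illustrative:

```java
try {
  body = readBody(req.getInputStream());
  rootNode = MAPPER.readTree(body);
} catch (JsonProcessingException e) {
  // Only malformed JSON maps to the JSON-RPC parse error.
  writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null); // -32700
  return;
} catch (IOException e) {
  // Transport failures (client abort, reset, read timeout) are not parse errors:
  // log them and let the container's normal error handling apply.
  logger.warn("Failed to read JSON-RPC request body", e);
  throw e;
}
```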
| # The maximum number of allowed topics within a topic criteria, default: 1000, >0 otherwise no limit
| maxSubTopics = 1000
| # Allowed maximum number for blockFilter
| # Allowed maximum number for blockFilter, default: 50000, >0 otherwise no limit
[NIT] Two small follow-ups on the new jsonrpc config comments
A pair of small documentation issues introduced in the new jsonrpc block:
- Duplicated default: 100. The maxBatchSize line reads "# Allowed batch size, default: 100, default: 100, >0 otherwise no limit" — the default: 100 clause is repeated.
- Inline math comment is easy to misread. "maxResponseSize = 26214400 // 25 MB = 25 * 1024 * 1024 B" — operators skimming this line may read the value as 25 rather than 26214400. Also, the surrounding keys all use # comments; // is the only one of its kind in this file.
Suggestion: drop the duplicated default: 100; convert // to #; consider aligning all three new keys' comments with the <=0 means no limit wording used by neighbours like maxBlockRange.
Drop the duplicated comments. Using <=0 means no limit. Thanks for your careful concern.
| return writer;
| }
|
| public void commitToResponse() throws IOException {
[NIT] Make commitToResponse idempotent or fail-fast on second call
After commitToResponse(), the wrapper still holds the buffered bytes; calling it a second time would write the same body twice. The current call site only commits once so there's no live bug, but the contract is implicit and a future refactor could trip on it.
Suggestion: either clear the buffer at the end of commitToResponse, or set a committed flag and throw IllegalStateException on a second call.
Add a boolean variable committed to indicate whether the response has already been written, though writing twice will never happen in jsonrpc.
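A fragment shaped after the thread, showing the fail-fast variant; "actual", "buffer", and "status" stand for the wrapper's existing target response, ByteArrayOutputStream, and buffered status referenced above:

```java
private boolean committed;

public void commitToResponse() throws IOException {
  if (committed) {
    throw new IllegalStateException("commitToResponse() has already been called");
  }
  committed = true;
  // Existing logic: forward the buffered status and headers, then the body.
  actual.setStatus(status);
  actual.setContentLength(buffer.size());
  buffer.writeTo(actual.getOutputStream());
}
```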
| private static final ObjectMapper MAPPER = new ObjectMapper();
|
| enum JsonRpcError {
[NIT] Make JsonRpcError enum visibility explicit
enum JsonRpcError { ... final int code; } has no explicit visibility modifier; both the enum and code default to package-private. If there's no reason to expose this enum outside the class, tighten it to private. If tests in the same package will assert against JsonRpcError.RESPONSE_TOO_LARGE.code, keep package-private but add a one-line comment so future readers know it's intentional.
Suggestion: mark the enum and code private, or document the package-private decision in a one-line comment.
There is no related test case now, so adding the private modifier is OK.
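A sketch of the enum with explicit private visibility; the constants mirror the error codes discussed in this PR, and the getCode() accessor is only illustrative:

```java
private enum JsonRpcError {
  PARSE_ERROR(-32700),
  EXCEED_LIMIT(-32005),
  RESPONSE_TOO_LARGE(-32003);

  private final int code;

  JsonRpcError(int code) {
    this.code = code;
  }

  int getCode() {
    return code;
  }
}
```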
| BufferedResponseWrapper bufferedResp = new BufferedResponseWrapper(
|     resp, parameter.getJsonRpcMaxResponseSize());
|
| try {
[NIT] Document the user-visible behavior change of wrapping rpcServer.handle exceptions
Independent of the structural concern (separate [SHOULD] comment about HTTP 200 vs 500), the wrap-as-IOException change also bypasses the existing RateLimiterServlet.service's catch (Exception unexpected) { logger.error(...) } path, so the standard Http Api {}, Method:{} error log line no longer fires. This is a quiet observability regression worth at least a log line and a PR-description bullet.
Suggestion: log the original cause inside the catch before rethrowing, and add a one-line note to the PR description about the new error-mapping.
Add the log using logger.error("RPC execution failed", e);. But there may be too many error stack traces if the node is attacked.
What does this PR do?
Adds configurable resource limits to the JSON-RPC endpoint to prevent memory exhaustion and abuse from oversized requests or responses. Closes #6632
Changes:
- Batch size limit (node.jsonrpc.maxBatchSize, default: 100)
  - Batches over the limit are rejected with error -32005 (exceed limit).
  - maxBatchSize ≤ 0 means no limit.
- Response size limit (node.jsonrpc.maxResponseSize, default: 25 MB)
  - BufferedResponseWrapper: intercepts getOutputStream() and getWriter() writes into an in-memory buffer. When a write would exceed the configured limit, it sets an overflow flag and resets the buffer instead of continuing to accumulate bytes, bounding worst-case memory usage to at most maxResponseSize (sketched just after this list).
  - CachedBodyRequestWrapper: replays the pre-read request body via both getInputStream() and getReader(), so the body can be inspected before being forwarded to JsonRpcServer.
  - After the handler returns, the servlet checks isOverflow() and — if set — discards the partial buffer and returns error code -32003 (response too large).
- Address list limit (node.jsonrpc.maxAddressSize, default: 1000)
  - In LogFilter, validates the address array length in eth_getLogs / eth_newFilter requests; violations raise JsonRpcInvalidParamsException.
- Structured JSON-RPC error responses
  - writeJsonRpcError uses ObjectMapper to build error responses safely, avoiding JSON injection from error messages.
  - Error codes: -32700 parse error, -32005 exceed limit, -32003 response too large.
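The overflow behavior above can be pictured with a trimmed, self-contained sketch; this is not the PR's BufferedResponseWrapper (which also wraps the servlet response, its status, and headers), only the bounded-buffer idea:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

final class BoundedBuffer extends OutputStream {
  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private final int maxSize;
  private volatile boolean overflow;

  BoundedBuffer(int maxSize) {
    this.maxSize = maxSize;
  }

  @Override
  public void write(int b) throws IOException {
    if (overflow) {
      return; // already over the limit: drop further bytes
    }
    if (maxSize > 0 && buffer.size() + 1 > maxSize) {
      overflow = true;
      buffer.reset(); // discard the partial body, bounding memory to maxSize
      return;
    }
    buffer.write(b);
  }

  boolean isOverflow() {
    return overflow;
  }

  byte[] toByteArray() {
    return buffer.toByteArray();
  }
}
```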
Why are these changes required?
- Bounds worst-case memory usage to maxResponseSize and fails fast rather than buffering the entire response before checking.
Configuration
This PR has been tested by:
- Unit tests (BufferedResponseWrapperTest)