
feat(jsonrpc): add resource restrict for jsonrpc #6728

Open
317787106 wants to merge 15 commits into tronprotocol:develop from 317787106:hotfix/restrict_jsonrpc_size

Conversation

@317787106
Collaborator

@317787106 317787106 commented Apr 28, 2026

What does this PR do?

Adds configurable resource limits to the JSON-RPC endpoint to prevent memory exhaustion and abuse from oversized requests or responses. Closes #6632

Changes:

  1. Batch size limit (node.jsonrpc.maxBatchSize, default: 100)

    • Validates the array length of batch JSON-RPC requests before dispatching.
    • Requests exceeding the limit are rejected with error code -32005 (exceed limit).
    • The check is skipped when maxBatchSize ≤ 0 (no limit).
  2. Response size limit (node.jsonrpc.maxResponseSize, default: 25 MB)

    • Introduces BufferedResponseWrapper: intercepts getOutputStream() and getWriter() writes into an in-memory buffer. When a write would exceed the configured limit, it sets an overflow flag and resets the buffer instead of continuing to accumulate bytes, bounding worst-case memory usage to at most maxResponseSize.
    • Introduces CachedBodyRequestWrapper: replays the pre-read request body via both getInputStream() and getReader(), so the body can be inspected before being forwarded to JsonRpcServer.
    • After the handler returns, the servlet checks isOverflow() and — if set — discards the partial buffer and returns error code -32003 (response too large).
  3. Address list limit (node.jsonrpc.maxAddressSize, default: 1000)

    • In LogFilter, validates the address array length in eth_getLogs / eth_newFilter requests.
    • Requests exceeding the limit are rejected with JsonRpcInvalidParamsException.
  4. Structured JSON-RPC error responses

    • writeJsonRpcError uses ObjectMapper to build error responses safely, avoiding JSON injection from error messages.
    • Error codes follow the JSON-RPC 2.0 spec: -32700 parse error, -32005 exceed limit, -32003 response too large.
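
The overflow-and-reset behavior described in item 2 can be sketched as follows. This is a minimal stand-in, not the PR's actual `BufferedResponseWrapper` (which additionally wraps `HttpServletResponse` and intercepts `getOutputStream()`/`getWriter()`); the class and method names here are illustrative only.

```java
import java.io.ByteArrayOutputStream;

// Minimal sketch of the overflow-capping buffer from item 2 above.
// Worst-case memory is bounded: once a write would exceed maxSize,
// the partial body is discarded and further bytes are dropped.
class BoundedBody {
  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private final int maxSize;
  private boolean overflow = false;

  BoundedBody(int maxSize) {
    this.maxSize = maxSize;
  }

  void write(byte[] data) {
    if (overflow) {
      return; // once over the limit, drop all further bytes
    }
    if (buffer.size() + data.length > maxSize) {
      overflow = true;
      buffer.reset(); // free the partial body instead of accumulating it
      return;
    }
    buffer.write(data, 0, data.length);
  }

  boolean isOverflow() {
    return overflow;
  }

  int size() {
    return buffer.size();
  }
}
```

After the handler returns, the servlet would check `isOverflow()` and answer with -32003 instead of committing the buffer.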

Why are these changes required?

  • Without limits, a client can send an arbitrarily large batch, trigger an expensive query with many addresses, or force the node to serialize a massive response — all of which cause unbounded memory growth.
  • The response buffer caps worst-case allocation to maxResponseSize and fails fast rather than buffering the entire response before checking.

Configuration

node {
  jsonrpc {
    # Max JSON-RPC batch array size; 0 = no limit
    maxBatchSize = 100
    # Max response body in bytes (default 25 MB)
    maxResponseSize = 26214400
    # Max address entries in eth_getLogs / eth_newFilter
    maxAddressSize = 1000
  }
}

This PR has been tested by:

  • Unit tests (BufferedResponseWrapperTest)
  • Manual testing

@halibobo1205 halibobo1205 added this to the GreatVoyage-v4.8.2 milestone Apr 29, 2026
@halibobo1205 halibobo1205 added topic:json-rpc topic:api rpc/http related issue labels Apr 29, 2026
private int maxSubTopics = 1000;
private int maxBlockFilterNum = 50000;
private int maxBatchSize = 100;
private int maxResponseSize = 25 * 1024 * 1024;
Collaborator

[SHOULD] Use a memory-size config type for maxResponseSize

private int maxResponseSize = 25 * 1024 * 1024 is a byte-quantity field, but it is read as a raw int so the config file has to spell out 26214400 instead of a human-readable 25M / 25MiB. The project's config conventions call for getMemorySize() for size-class settings — keeping int here makes the value error-prone for operators (the inline comment // 25 MB = 25 * 1024 * 1024 B in config.conf is an early symptom). maxBatchSize and maxAddressSize are count-class and int is fine for them.

Suggestion: change maxResponseSize to a String field and parse it with getMemorySize(), so HOCON values like 25M work; keep the count-class fields as int.
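
For reference, Typesafe Config's `getMemorySize()` accepts HOCON size strings, so the suggestion would allow a config like the following (a sketch of the proposed format, not the PR's actual file; both `25MiB` and `25M` parse as 25 * 1024 * 1024 bytes):

```hocon
node {
  jsonrpc {
    # read via config.getMemorySize("node.jsonrpc.maxResponseSize").toBytes()
    # "25MiB" (or "25M") parses to 26214400 bytes
    maxResponseSize = 25MiB
  }
}
```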

Collaborator Author

Using getMemorySize() increases the cognitive burden for users; using explicit integer values better conveys the intended meaning.

BufferedResponseWrapper bufferedResp = new BufferedResponseWrapper(
resp, parameter.getJsonRpcMaxResponseSize());

try {
Collaborator

[SHOULD] Wrapping every handler exception as IOException breaks the JSON-RPC over HTTP 200 contract

catch (Exception e) throw new IOException("RPC execution failed", e) rethrows every RuntimeException from rpcServer.handle. The parent RateLimiterServlet.service only catches ServletException | IOException and re-throws, so the servlet container emits an HTTP 500 with no JSON-RPC body. jsonrpc4j's ErrorResolver would normally convert internal exceptions into a structured error response on HTTP 200 — that contract is now lost. ETH-compatible clients (web3.js / ethers / web3j) treat HTTP 500 as a transport failure and will retry, amplifying load on the node under stress.

Suggestion: drop the catch (let the original IOException path from rpcServer.handle propagate so jsonrpc4j's structured error path stays intact); or log the cause and emit -32603 Internal error via writeJsonRpcError so the HTTP 200 + JSON-RPC error contract is preserved.
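
The second option can be sketched as below. This is a hypothetical helper with the servlet API elided; the real code would log the cause and write the body through `writeJsonRpcError` rather than return a `String`.

```java
// Sketch of mapping handler RuntimeExceptions to a JSON-RPC -32603 body,
// preserving the HTTP 200 + structured-error contract described above.
// Hypothetical helper: the real servlet writes via writeJsonRpcError.
class InternalErrorMapper {
  static String handle(Runnable rpcCall) {
    try {
      rpcCall.run();
      return null; // success: jsonrpc4j already wrote the response body
    } catch (RuntimeException e) {
      // log the cause server-side, but answer the client on HTTP 200
      return "{\"jsonrpc\":\"2.0\","
          + "\"error\":{\"code\":-32603,\"message\":\"Internal error\"},"
          + "\"id\":null}";
    }
  }
}
```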

Collaborator Author

Added -32603 Internal error for RuntimeException; other IOExceptions will be rethrown.

try {
body = readBody(req.getInputStream());
rootNode = MAPPER.readTree(body);
} catch (IOException e) {
Collaborator

[SHOULD] Empty or whitespace-only body can make readTree return null and NPE the next line

For zero-length / whitespace-only input, MAPPER.readTree(byte[]) can return null (depending on Jackson version and parser feature flags) rather than a MissingNode. Line 99 then dereferences rootNode.isArray() and throws NullPointerException. The NPE is not caught by the IOException clause at line 95 — it bubbles into the catch (Exception e) at line 109, gets wrapped as IOException, and the client sees HTTP 500 instead of the -32700 Parse error the parse path was supposed to return.

Suggestion: after MAPPER.readTree(body), treat rootNode == null || rootNode.isMissingNode() as a parse error and emit -32700 via writeJsonRpcError.
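
The suggested guard, sketched with a stand-in for `MAPPER.readTree` (which may return `null` for blank input on some Jackson versions; the real code should also check `isMissingNode()`):

```java
// Sketch of the null-root guard suggested above. parse() is a stand-in
// for MAPPER.readTree so the example has no Jackson dependency.
class ParseGuard {
  static Object parse(String body) {
    // stand-in: Jackson may return null for empty/whitespace-only input
    return body == null || body.trim().isEmpty() ? null : body;
  }

  static String check(String body) {
    Object rootNode = parse(body);
    if (rootNode == null) {
      // emit -32700 instead of letting an NPE become an HTTP 500
      return "{\"jsonrpc\":\"2.0\","
          + "\"error\":{\"code\":-32700,\"message\":\"Parse error\"},\"id\":null}";
    }
    return null; // parse OK: proceed to the batch-size check
  }
}
```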

if (rootNode.isArray() && batchSize > 0 && rootNode.size() > batchSize) {
writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
"Batch size " + rootNode.size() + " exceeds the limit of " + batchSize, null);
return;
Collaborator

[SHOULD] Batch-size rejection returns a single error object, not a JSON-RPC 2.0 batch array

When rootNode.isArray() is true and the batch exceeds maxBatchSize, the current code calls writeJsonRpcError(...) which writes a single object response. JSON-RPC 2.0 requires that a batch request be answered with a JSON array of responses. Standard ETH-compatible clients (web3.js, ethers.js, web3j) parsing a batch response as an array will fail or silently drop the entire result, instead of surfacing the structured -32005 error to the caller.

Suggestion: when rejecting an over-sized batch, write a single-element array [{jsonrpc:"2.0", error:{code:-32005, message:...}, id:null}] so it round-trips through batch-aware clients. Same applies to the response-too-large path on line 117 if the original request was an array.
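
The array-wrapped rejection could look like the sketch below. The helper name and the `id:null` choice are assumptions, and the real code would build the body with `ObjectMapper` to avoid JSON injection; plain concatenation here is only for illustration.

```java
// Sketch of answering an over-sized batch with a JSON-RPC 2.0 array.
// The real implementation should serialize via ObjectMapper, not concat.
class BatchError {
  static String tooLarge(boolean isBatch, int code, String message) {
    String error = "{\"jsonrpc\":\"2.0\","
        + "\"error\":{\"code\":" + code + ",\"message\":\"" + message + "\"},"
        + "\"id\":null}";
    // a batch request must be answered with an array of response objects
    return isBatch ? "[" + error + "]" : error;
  }
}
```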

Collaborator Author

Good catch; will use a single-element array when the batch exceeds maxBatchSize or the response is too large.

}

@Override
public void setStatus(int sc) {
Collaborator

[SHOULD] Override getStatus and intercept setHeader/addHeader for Content-Length

Header capture currently only covers setStatus, setContentType, setContentLength(int|long). Two gaps:

  1. getStatus() is not overridden. Inherited HttpServletResponseWrapper.getStatus() returns the underlying response's status (still SC_OK until commitToResponse runs). Any logging filter / metrics interceptor that reads status via the wrapper before commit will see a stale value.

  2. setHeader(name, value) / addHeader(name, value) pass through to the underlying response. jsonrpc4j currently uses setContentLength so this is latent — but any downstream filter or library upgrade that writes Content-Length via setHeader would commit a Content-Length to the actual response before the size check runs.

Suggestion: override getStatus() to return this.status; intercept setHeader / addHeader for Content-Length (case-insensitive) so they go through the same buffering / overflow check as setContentLength.
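
Both points can be sketched without the servlet API as follows. Class and method names are illustrative; the real wrapper extends `HttpServletResponseWrapper` and forwards the buffered values in `commitToResponse()`.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of buffering status and intercepting Content-Length headers,
// per the two gaps above. Forwarding to the real response is elided.
class HeaderBufferingResponse {
  private int status = 200;
  private final Map<String, String> bufferedHeaders = new HashMap<>();

  void setStatus(int sc) {
    this.status = sc;
  }

  int getStatus() {
    // return the buffered value, not the underlying response's stale SC_OK
    return status;
  }

  void setHeader(String name, String value) {
    if ("content-length".equalsIgnoreCase(name)) {
      // case-insensitive match so the size check can't be bypassed
      bufferedHeaders.put("Content-Length", value);
      return;
    }
    bufferedHeaders.put(name, value); // sketch: buffer everything pre-commit
  }

  String bufferedHeader(String name) {
    return bufferedHeaders.get(name);
  }
}
```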

Collaborator Author

Thanks for your review:

  • getStatus() is overridden;
  • Added an additional check of Content-Length for setHeader and addHeader. In practice it will be overridden by actual.setContentLength(buffer.size()), so there is little necessity.

public int jsonRpcMaxBlockFilterNum = 50000;
@Getter
@Setter
public int jsonRpcMaxBatchSize = 100;
Collaborator

[SHOULD] Validate non-negative range for the new size-limit fields at config load

The three new fields (jsonRpcMaxBatchSize, jsonRpcMaxResponseSize, jsonRpcMaxAddressSize) are read via Args.applyNodeConfig with no range validation. The > 0 guards in the call sites mean a negative value silently becomes a permanent 'no limit' state — that is fine if <= 0 is the documented contract, but neither reference.conf nor config.conf says so explicitly, only > 0 otherwise no limit. Operators reading the comment may assume only 0 disables the limit; setting -1 (a common 'unset' sentinel) silently has the same effect, while Integer.MIN_VALUE is also accepted with no warning.

Suggestion: validate value >= 0 in Args.applyNodeConfig (reject startup with a clear error on negative values), and update the reference/config comments to spell out the exact 'disabled' semantics — e.g. # 0 disables the limit; negative values are rejected at startup.
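
The suggested fail-fast check could be as simple as the sketch below (method name and message wording are assumptions; it would be called from Args.applyNodeConfig for each of the three fields):

```java
// Sketch of the startup-time range validation suggested above.
// Under this contract 0 disables the limit and negatives are rejected.
class LimitValidator {
  static int requireNonNegative(String key, int value) {
    if (value < 0) {
      throw new IllegalArgumentException(
          key + " must be >= 0 (0 disables the limit), got " + value);
    }
    return value;
  }
}
```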

Collaborator Author

The comment specifies it already, but I can optimize it as <=0 means no limit in config.conf.

}

@Override
public ServletInputStream getInputStream() {
Collaborator

[SHOULD] getInputStream() and getReader() should be mutually exclusive per servlet spec

Servlet 3.1 spec (§ 5.4 / § 5.5) requires that once one of getInputStream() / getReader() has been called on a request, the other must throw IllegalStateException. This wrapper returns a fresh stream/reader from the cached byte array on every call and allows arbitrary interleaving. jsonrpc4j only calls one today, so the divergence is latent — but any future filter that reads the body through the other accessor would silently double-read with no error, which is exactly the kind of bug the spec wants to prevent.

Suggestion: track which accessor was used first (boolean field) and throw IllegalStateException on the second.
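
The suggested tracking can be sketched like this, with the servlet types replaced by plain `String` so the example is self-contained:

```java
// Sketch of the mutual exclusion required by Servlet 3.1 §5.4/§5.5:
// once one body accessor is used, the other throws IllegalStateException.
class CachedBody {
  private final String body;
  private boolean streamUsed = false;
  private boolean readerUsed = false;

  CachedBody(String body) {
    this.body = body;
  }

  String getInputStream() {
    if (readerUsed) {
      throw new IllegalStateException("getReader() has already been called");
    }
    streamUsed = true;
    return body; // real code: a ServletInputStream over the cached bytes
  }

  String getReader() {
    if (streamUsed) {
      throw new IllegalStateException("getInputStream() has already been called");
    }
    readerUsed = true;
    return body; // real code: a BufferedReader over the cached bytes
  }
}
```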

Collaborator Author

At present, jsonrpc4j only invokes one of them; this is a potential issue rather than an existing bug. Adding the relevant checks is somewhat redundant, but I will try it.

*
* <p>Header-mutating methods ({@code setStatus}, {@code setContentType}) are buffered here and
* only forwarded to the real response via {@link #commitToResponse()}, preventing a timed-out
* handler thread from racing with the timeout error writer.
Collaborator

[NIT] Class javadoc references a timeout race that has no implementation

The class javadoc says headers are buffered to prevent 'a timed-out handler thread from racing with the timeout error writer' — but this PR has no timeout / cancellation logic, and overflow is not volatile. The comment implies a concurrency guarantee that the code does not provide, which is misleading for future maintainers who might rely on it.

Suggestion: drop the timeout-race wording, or convert the implication into an actual constraint (single-threaded handler assumption documented; or volatile flag if multi-threaded use is intended).

Collaborator Author

@317787106 317787106 May 6, 2026

Good catch; removed the timeout-related doc. I will add volatile to overflow, though it's not strictly necessary.

body = readBody(req.getInputStream());
rootNode = MAPPER.readTree(body);
} catch (IOException e) {
writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
Collaborator

[NIT] Don't collapse transport IOException with JSON parse errors

The single catch (IOException e) at line 95 maps every IO failure to -32700 Parse error. readBody can throw IOException for legitimate transport issues (client aborted, socket reset, read timeout); none of those are parse errors. Only Jackson's JsonProcessingException (an IOException subclass) should map to -32700. Mixing them makes server-side logs less useful for diagnosing real client/network issues.

Suggestion: catch JsonProcessingException separately for -32700, and either let other IOExceptions propagate or map them to a distinct code (and log).
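
Since `JsonProcessingException` extends `IOException`, the fix is a matter of catch ordering, sketched below. `JsonParseFailure` stands in for Jackson's exception so the example has no external dependency; the distinct code for transport failures is an assumption.

```java
import java.io.IOException;

// Sketch of separating parse failures from transport failures.
// JsonParseFailure stands in for Jackson's JsonProcessingException,
// which also extends IOException — the subclass must be caught first.
class ErrorSplit {
  static class JsonParseFailure extends IOException {}

  static int classify(IOException thrown) {
    try {
      throw thrown;
    } catch (JsonParseFailure e) {
      return -32700; // genuine parse error
    } catch (IOException e) {
      return 0; // transport issue: log and propagate, don't mislabel
    }
  }
}
```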

# The maximum number of allowed topics within a topic criteria, default: 1000, >0 otherwise no limit
maxSubTopics = 1000
# Allowed maximum number for blockFilter
# Allowed maximum number for blockFilter, default: 50000, >0 otherwise no limit
Collaborator

[NIT] Two small follow-ups on the new jsonrpc config comments

A pair of small documentation issues introduced in the new jsonrpc block:

  1. Duplicated default: 100. The maxBatchSize line reads # Allowed batch size, default: 100, default: 100, >0 otherwise no limit — the default: 100 clause is repeated.

  2. Inline math comment is easy to misread. maxResponseSize = 26214400 // 25 MB = 25 * 1024 * 1024 B — operators skimming this line may read the value as 25 rather than 26214400. Also, the surrounding keys all use # comments; // is the only one of its kind in this file.

Suggestion: drop the duplicated default: 100; convert // to #; consider aligning all three new keys' comments with the <=0 means no limit wording used by neighbours like maxBlockRange.

Collaborator Author

Dropped the duplicated comments. Using <=0 means no limit. Thanks for your careful concern.

return writer;
}

public void commitToResponse() throws IOException {
Collaborator

[NIT] Make commitToResponse idempotent or fail-fast on second call

After commitToResponse(), the wrapper still holds the buffered bytes; calling it a second time would write the same body twice. The current call site only commits once so there's no live bug, but the contract is implicit and a future refactor could trip on it.

Suggestion: either clear the buffer at the end of commitToResponse, or set a committed flag and throw IllegalStateException on a second call.
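
The committed-flag variant can be sketched as follows (the real `commitToResponse()` also forwards status and headers to the actual response; returning the bytes here is only so the example is self-contained):

```java
import java.io.ByteArrayOutputStream;

// Sketch of the fail-fast commit contract suggested above: a second
// commit throws instead of silently writing the body twice.
class CommittableBuffer {
  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private boolean committed = false;

  void write(byte[] data) {
    buffer.write(data, 0, data.length);
  }

  byte[] commitToResponse() {
    if (committed) {
      throw new IllegalStateException("response already committed");
    }
    committed = true;
    return buffer.toByteArray(); // real code: actual.getOutputStream().write(...)
  }
}
```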

Collaborator Author

Added a boolean variable committed to specify whether it has been written, though writing twice will never happen in jsonrpc.


private static final ObjectMapper MAPPER = new ObjectMapper();

enum JsonRpcError {
Collaborator

[NIT] Make JsonRpcError enum visibility explicit

enum JsonRpcError { ... final int code; } has no explicit visibility modifier; both the enum and code default to package-private. If there's no reason to expose this enum outside the class, tighten it to private. If tests in the same package will assert against JsonRpcError.RESPONSE_TOO_LARGE.code, keep package-private but add a one-line comment so future readers know it's intentional.

Suggestion: mark the enum and code private, or document the package-private decision in a one-line comment.

Collaborator Author

@317787106 317787106 May 6, 2026

There is no related test case now; adding the private modifier is OK.

BufferedResponseWrapper bufferedResp = new BufferedResponseWrapper(
resp, parameter.getJsonRpcMaxResponseSize());

try {
Collaborator

[NIT] Document the user-visible behavior change of wrapping rpcServer.handle exceptions

Independent of the structural concern (separate [SHOULD] comment about HTTP 200 vs 500), the wrap-as-IOException change also bypasses the existing RateLimiterServlet.service's catch (Exception unexpected) { logger.error(...) } path, so the standard Http Api {}, Method:{} error log line no longer fires. This is a quiet observability regression worth at least a log line and a PR-description bullet.

Suggestion: log the original cause inside the catch before rethrowing, and add a one-line note to the PR description about the new error-mapping.

Collaborator Author

Added the log using logger.error("RPC execution failed", e);. But there may be too many error stack traces if the node is attacked.


Labels

topic:api rpc/http related issue topic:json-rpc

Projects

Status: No status

Development

Successfully merging this pull request may close these issues.

[Feature] Introduce resource limits for JSON-RPC (batch size, response size, address size, timeout)

4 participants