perf - Reduce TSO waker churn #529

Open

mingley wants to merge 4 commits into tikv:master from mingley:mingley/tso-waker-criterion
Conversation

@mingley

@mingley mingley commented Feb 6, 2026

Summary

This PR hardens TSO sender/response coordination in src/pd/timestamp.rs to remove a lock-contention lost-wake risk while preserving throughput optimizations.

Quality objective:

  • eliminate possible sender stalls under lock contention,
  • keep wake/waker overhead low on hot paths,
  • preserve timestamp correctness and backpressure behavior.

Rationale

Why this change exists

The prior optimization path used a custom lock-contention handshake (try_lock + shared waiting flag + custom wake coordination). That reduced contention overhead but still left an interleaving where sender progress could depend on a future external wake, creating stall risk.

This change removes that fragile handshake and relies on the async mutex waiter path for lock-contention wakeup guarantees.

Deep Dive: Concurrency Model

There are two coordination loops:

  • Sender loop: TsoRequestStream::poll_next
  • Response loop: handle_tso_responses

Shared state:

  • pending_requests: Mutex<VecDeque<RequestGroup>>
  • sending_future_waker: AtomicWaker

Updated behavior:

  1. Lock contention in the sender path now uses pending_requests.lock().poll(cx).
     This places the task on the mutex wait queue and gives a deterministic wakeup when the lock becomes available.

  2. The response-path wake policy remains conditional:

     • compute was_full before allocate_timestamps,
     • wake the sender only on a full -> non-full transition.

  3. No lock-contention side-channel flag is used anymore.
     This removes a race-prone protocol and narrows wake logic to queue-capacity transitions.
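The transition-only wake policy above can be sketched as a small model. This is illustrative only: MAX_PENDING_COUNT's value and the helper names here are stand-ins, not the actual items in src/pd/timestamp.rs.

```rust
use std::collections::VecDeque;

const MAX_PENDING_COUNT: usize = 4; // stand-in; the real bound lives in src/pd/timestamp.rs

// Wake the sender only when draining crosses the full -> non-full boundary.
fn should_wake(was_full: bool, is_full_now: bool) -> bool {
    was_full && !is_full_now
}

// Models the response path: snapshot fullness, consume one group, decide on wake.
fn drain_one(pending: &mut VecDeque<u64>) -> bool {
    let was_full = pending.len() >= MAX_PENDING_COUNT;
    pending.pop_front(); // stands in for allocate_timestamps consuming a request group
    should_wake(was_full, pending.len() >= MAX_PENDING_COUNT)
}

fn main() {
    let mut q: VecDeque<u64> = (0..MAX_PENDING_COUNT as u64).collect();
    assert!(drain_one(&mut q)); // full -> non-full: wake the sender once
    assert!(!drain_one(&mut q)); // already non-full: no redundant wake
    println!("wake policy ok");
}
```

Under this policy the sender is woken at most once per capacity transition rather than once per drained response, which is what keeps wake overhead off the hot path.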

Why this does not break backward compatibility

  • No public API/type/signature changes.
  • Request/response protobuf semantics unchanged.
  • Timestamp allocation logic unchanged (allocate_timestamps invariants are preserved).
  • Backpressure contract unchanged (MAX_PENDING_COUNT still bounds in-flight groups).
  • Background task error/termination semantics remain explicit and unchanged from current branch behavior.

Net effect: internal scheduling becomes safer under contention without changing externally observable client API behavior.

Benchmark Results

Measured with a temporary standalone Rust microbenchmark harness (/tmp/pr529_tso_bench) comparing policy-level overhead of the pre-PR logic (before) against the latest PR logic (after). The harness was run in --release mode and is not committed to this repository.

Benchmark configuration:

  • 40 measured samples (8 warmup samples)
  • 30,000,000 iterations per sample
  • Full-queue transition period: 1 in 1024 iterations
  • Measurement: ns/op (median, p05, p95)
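Since the harness itself is not committed, here is a sketch of how per-sample ns/op readings can be reduced to median/p05/p95; the nearest-rank percentile method and all names are assumptions, not the harness's actual code.

```rust
// Nearest-rank percentile over a sorted slice of ns/op samples.
fn percentile(sorted: &[f64], p: f64) -> f64 {
    let idx = ((sorted.len() - 1) as f64 * p).round() as usize;
    sorted[idx]
}

// Reduce one benchmark run (e.g. 40 measured samples) to (median, p05, p95).
fn summarize(mut ns_per_op: Vec<f64>) -> (f64, f64, f64) {
    ns_per_op.sort_by(|a, b| a.partial_cmp(b).unwrap());
    (
        percentile(&ns_per_op, 0.50),
        percentile(&ns_per_op, 0.05),
        percentile(&ns_per_op, 0.95),
    )
}

fn main() {
    let samples: Vec<f64> = (1..=40).map(f64::from).collect();
    let (median, p05, p95) = summarize(samples);
    assert_eq!((median, p05, p95), (21.0, 3.0, 38.0));
    println!("median={median} p05={p05} p95={p95}");
}
```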

Response Wake Policy

Variant                    Median ns/op   p05 ns/op   p95 ns/op
before (always wake)           1.726378    1.553182    1.897283
after (conditional wake)       0.243229    0.237556    0.258467

Speedup (median): 7.10x

Self-Waker Registration Policy

Variant                           Median ns/op   p05 ns/op   p95 ns/op
before (always register)              2.422287    2.358501    2.468619
after (register only when full)       0.255310    0.234999    0.266172

Speedup (median): 9.49x

Notes:

  • These numbers isolate internal wake/waker policy overhead; they are not end-to-end PD/TSO RPC latency.
  • Results validate directional impact of hot-path scheduling changes.

File Scope

  • src/pd/timestamp.rs
  • Cargo.toml (branch-level context change already present in PR)

Testing Done

Executed locally:

  1. cargo +1.93.0 test pd::timestamp --lib -> pass (14 passed)
  2. cargo +1.93.0 test -> pass (67 unit passed, 49 doc passed)
  3. Benchmark harness rerun (cargo +1.93.0 run --release in /tmp/pr529_tso_bench) -> metrics above

Focused concurrency coverage in src/pd/timestamp.rs includes:

  • poll_next_waits_on_mutex_when_lock_is_contended
  • poll_next_registers_self_waker_when_pending_queue_is_full
  • poll_next_does_not_register_self_waker_when_queue_not_full
  • handle_tso_responses_wakes_sender_when_queue_transitions_from_full
  • handle_tso_responses_does_not_wake_sender_when_queue_was_not_full
  • handle_tso_responses_wakes_sender_once_for_each_full_to_non_full_transition
  • timestamp allocation invariant and error-path tests

Compatibility

No public API surface change is introduced by this PR.

Summary by CodeRabbit

  • Tests

    • Added comprehensive unit tests for timestamp request handling, queue behavior, and edge cases.
  • Chores

    • Optimized async task management and response processing for improved reliability.

@ti-chi-bot

ti-chi-bot bot commented Feb 6, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign you06 for approval. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot added contribution This PR is from a community contributor. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. dco-signoff: no Indicates the PR's author has not signed dco. first-time-contributor Indicates that the PR was contributed by an external member and is a first-time contributor. labels Feb 6, 2026
@ti-chi-bot

ti-chi-bot bot commented Feb 6, 2026

Welcome @mingley!

It looks like this is your first PR to tikv/client-rust 🎉.

I'm the bot to help you request reviewers, add labels and more, See available commands.

We want to make sure your contribution gets all the attention it needs!



Thank you, and welcome to tikv/client-rust. 😃

@coderabbitai

coderabbitai bot commented Feb 6, 2026

Warning

Rate limit exceeded

@mingley has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 5 minutes and 16 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 10f685c and 7530dbf.

📒 Files selected for processing (1)
  • src/pd/timestamp.rs

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

A trailing newline is added to Cargo.toml. The TSO timestamp module is refactored to replace inline response handling with a dedicated handler function, adjust wake-up semantics to only wake on queue state transitions, introduce batch observation, and add comprehensive unit tests covering new behavior.

Changes

  • Configuration (Cargo.toml): Added trailing newline to end of file.
  • TSO Response Handling Refactor (src/pd/timestamp.rs): Replaced spawning a background thread with an async task. Introduced handle_tso_responses function to consolidate response processing logic. Modified wake-up semantics to conditionally wake sender only on queue fullness state transitions. Added observe_tso_batch invocation during batch formation. Enhanced lifecycle management to check for zero remaining pending requests on termination. Expanded test coverage with extensive unit tests for wake semantics, lock contention, batch observation, and edge cases.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 A thousand lines of rabbit care,
Where threads once spawned, tasks now fly fair,
Wake-ups wake wisely, not every time—
Batches observed in async rhyme,
Tests sprouted thick like garden fare! 🌱

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 42.42%, which is below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title 'perf - Reduce TSO waker churn' directly reflects the main change: refactoring the TSO sender/response coordination mechanism to reduce waker churn while preserving throughput.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/pd/timestamp.rs (1)

78-82: ⚠️ Potential issue | 🟡 Minor

Background task errors are silently discarded due to missing JoinHandle handling.

The Result<()> return type and explicit Ok(()) at line 116 align cleanly with the ? usage inside the function. However, the JoinHandle returned by tokio::spawn(run_tso(...)) at line 62 is never stored or awaited. Since errors can occur at both the pd_client.tso() call (line 99) and within allocate_timestamps() (line 105), failures in the background task will go unnoticed and the connection closure will only be discovered when callers receive a channel-closed error instead of the root cause.

Consider storing the JoinHandle and handling its potential error, or spawn a task that logs/propagates failures.

🧹 Nitpick comments (3)
benches/tso_waker_policy.rs (2)

20-36: The "old" and "new" response benchmarks have asymmetric work, which is expected but worth noting.

In response_policy_old, wake() is called unconditionally on every iteration, while in response_policy_new, it's called only on the full→non-full transition (~once per 1024 iterations). The reported speedup primarily measures the cost of not calling wake(), rather than the overhead of the conditional check itself. This is fine for validating the optimization's effect, but the doc and PR description should be clear that the speedup reflects the amortized skip rate under this specific simulation pattern.

Also applies to: 38-57
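That amortization can be made concrete with a small cost model. The numbers are illustrative only, loosely echoing the medians reported in this PR; they are not outputs of the committed benchmark.

```rust
// Expected per-iteration cost when wake() fires with probability `wake_prob`
// and the transition check costs `cost_check` on every iteration.
fn amortized_ns(cost_wake: f64, cost_check: f64, wake_prob: f64) -> f64 {
    wake_prob * cost_wake + cost_check
}

fn main() {
    // Hypothetical costs: wake() ~1.7 ns, check ~0.24 ns, full -> non-full 1 in 1024.
    let old = amortized_ns(1.7, 0.24, 1.0); // unconditional wake every iteration
    let new = amortized_ns(1.7, 0.24, 1.0 / 1024.0); // transition-only wake
    assert!(new < old / 5.0);
    println!("old={old:.3} ns/op, new={new:.3} ns/op");
}
```

This makes explicit that the reported speedup scales with the skip rate (1 - wake_prob), not with the cost of the conditional check itself.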


9-11: Consider documenting what FULL_EVERY and FULL_WINDOW represent.

These simulation parameters control how often the queue becomes full in the benchmark, directly affecting the measured speedup ratio. A brief comment explaining their role would help future readers understand and tune the benchmark.

doc/tso_waker_criterion.md (1)

1-58: Consider noting that results should be re-run when the benchmark or production code changes.

Hardcoded benchmark results in committed documentation risk becoming stale as the code evolves. Consider adding a note that these numbers are a point-in-time snapshot and should be re-collected after significant changes to timestamp.rs or the benchmark itself.

@mingley mingley force-pushed the mingley/tso-waker-criterion branch from c2fdbf5 to a66b5e6 on February 6, 2026 05:47
@ti-chi-bot ti-chi-bot bot added dco-signoff: yes Indicates the PR's author has signed the dco. and removed dco-signoff: no Indicates the PR's author has not signed dco. labels Feb 6, 2026
@mingley
Author

mingley commented Feb 6, 2026

Addressed the CodeRabbit feedback in commit cd6aa8f:

  1. Background TSO task errors are now surfaced via logs
     • Updated src/pd/timestamp.rs so the spawned task wraps run_tso(...) and logs any Err(...) with context (warn!) instead of silently discarding it.
  2. Benchmark assumptions are now explicit
     • Added comments in benches/tso_waker_policy.rs for FULL_EVERY / FULL_WINDOW.
     • Added inline notes clarifying old vs new wake behavior (unconditional vs transition-only wake).
  3. Benchmark doc now includes interpretation/staleness guidance
     • Updated doc/tso_waker_criterion.md to state these numbers are point-in-time and should be re-run after changes to src/pd/timestamp.rs or the benchmark.
     • Added a note that the response benchmark speedup reflects amortized skipped wake calls under this simulation pattern.

@mingley
Author

mingley commented Feb 6, 2026

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 6, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@mingley
Author

mingley commented Feb 6, 2026

Addressed local review finding #1 in bba9a1c:

  • src/pd/timestamp.rs: changed stream loop from while let Some(Ok(resp)) to while let Some(resp) + let resp = resp?; so gRPC stream errors are propagated out of run_tso and logged by the spawned-task wrapper instead of being silently treated as normal termination.

@mingley mingley changed the title Reduce TSO waker churn and quantify impact with Criterion perf - Reduce TSO waker churn and quantify impact with Criterion Feb 6, 2026
@ti-chi-bot ti-chi-bot bot added dco-signoff: no Indicates the PR's author has not signed dco. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. dco-signoff: yes Indicates the PR's author has signed the dco. labels Feb 9, 2026
@mingley mingley force-pushed the mingley/tso-waker-criterion branch from 839c263 to 8d5bb72 on February 9, 2026 20:13
@ti-chi-bot ti-chi-bot bot added dco-signoff: yes Indicates the PR's author has signed the dco. dco-signoff: no Indicates the PR's author has not signed dco. and removed dco-signoff: no Indicates the PR's author has not signed dco. dco-signoff: yes Indicates the PR's author has signed the dco. labels Feb 9, 2026
@mingley mingley force-pushed the mingley/tso-waker-criterion branch from 558b5b5 to cd0eaa5 on February 11, 2026 12:43
@ti-chi-bot ti-chi-bot bot added dco-signoff: yes Indicates the PR's author has signed the dco. and removed dco-signoff: no Indicates the PR's author has not signed dco. labels Feb 11, 2026
@mingley mingley force-pushed the mingley/tso-waker-criterion branch from cd0eaa5 to 5e37adb on February 11, 2026 13:22
Signed-off-by: Michael Ingley <michael.ingley@gmail.com>
@mingley mingley force-pushed the mingley/tso-waker-criterion branch from 5e37adb to a1312ac on February 11, 2026 13:22
@mingley mingley changed the title perf - Reduce TSO waker churn and quantify impact with Criterion perf - Reduce TSO waker churn Feb 13, 2026
Michael Ingley and others added 2 commits February 12, 2026 20:46
Signed-off-by: Michael Ingley <mingley@linkedin.com>
Signed-off-by: Michael Ingley <michael.ingley@gmail.com>
@mingley
Author

mingley commented Feb 24, 2026

@coderabbitai review

Please run a fresh review on the latest head (). Focus on lock-contention wake safety and the new response-loop wake tests.

@mingley
Author

mingley commented Feb 24, 2026

@coderabbitai review

Please run a fresh review on the latest head: 10f685c. Focus on lock-contention wake safety and the new response-loop wake tests.

@coderabbitai

coderabbitai bot commented Feb 24, 2026

@mingley: I'll run a fresh review on the latest head, with specific focus on:

  1. Lock-contention wake safety - examining the sender_waiting_on_lock flag coordination between TsoRequestStream::poll_next and run_tso to ensure no missed wakeups when sender is blocked on mutex contention
  2. New response-loop wake tests - reviewing the unit test coverage for wake behavior, particularly the scenarios testing queue fullness transitions and lock-contention signaling
✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai

coderabbitai bot commented Feb 24, 2026

@mingley: I'll perform a fresh review of commit 10f685c, focusing on:

  1. Lock-contention wake safety - examining the sender_waiting_on_lock coordination between TsoRequestStream::poll_next and run_tso
  2. New response-loop wake tests - validating the test coverage for the wake semantics and edge cases
✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/pd/timestamp.rs`:
- Around line 61-67: TimestampOracle::new currently calls tokio::spawn directly
(spawning run_tso with cluster_id, pd_client, request_rx) which will panic if no
Tokio runtime exists; change TimestampOracle::new to guard the spawn by calling
tokio::runtime::Handle::try_current() and, if a handle is present, use
handle.spawn(...) to run run_tso, otherwise return a clear error (or change the
constructor to return a Result) or document the runtime requirement; ensure you
update the TimestampOracle::new signature/return path and callers accordingly so
missing runtime becomes a controlled error rather than a panic.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8d5bb72 and 10f685c.

📒 Files selected for processing (2)
  • Cargo.toml
  • src/pd/timestamp.rs
✅ Files skipped from review due to trivial changes (1)
  • Cargo.toml


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/pd/timestamp.rs (1)

161-168: ⚠️ Potential issue | 🔴 Critical

Lost-wake risk: stack-local MutexLockFuture drops its waiter from the mutex queue between polls.

Each poll_next call creates a new MutexLockFuture on the stack (line 161). When the lock is contended, poll(cx) returns Pending, registers a waiter in tokio's Mutex queue, and the function returns at line 167. As the function returns, the stack-local future is dropped — and with it, tokio's Mutex removes the waiter from its queue.

Consequence: when the response handler releases the lock, there is no waiter to notify. The stream has no registered waker — poll_recv was never called (so the channel has no waker), and self_waker was not registered either. The stream stalls indefinitely until an unrelated external event re-polls it.

The existing test (poll_next_waits_on_mutex_when_lock_is_contended, line 468) does not catch this because it manually re-polls the stream after dropping the guard, rather than verifying that the waker was automatically invoked when the lock was released.

Fix: persist the lock future across polls so the waiter remains in the queue. Store it as a field (e.g., Option<Pin<Box<dyn Future<Output = MutexGuard<…>>>>>) or refactor to a two-phase state machine using tokio::sync::Mutex::lock_owned with a pinned-boxed future field.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/pd/timestamp.rs` around lines 161 - 168, The code creates a stack-local
Mutex lock future inside poll_next (the local pending_requests/MutexLockFuture
created at let pending_requests = this.pending_requests.lock();
pin_mut!(pending_requests);) which is dropped between polls and causes
lost-wake; persist the lock future across polls instead of recreating it each
call: add a field on the stream (e.g., Option<Pin<Box<dyn Future<Output =
tokio::sync::OwnedMutexGuard<...>>>>> or Option<Pin<Box<dyn Future<Output =
tokio::sync::MutexGuard<...>>>>>), move the pending_requests lock future into
that field when first polling, poll that pinned boxed future on subsequent
poll_next invocations, and only take the resulting guard out of the field when
it is Ready; alternatively implement a two-phase state machine using
tokio::sync::Mutex::lock_owned with a stored pinned future to ensure the waiter
remains registered across returns from poll_next.
♻️ Duplicate comments (1)
src/pd/timestamp.rs (1)

61-67: tokio::spawn without runtime guard — previously flagged.

The prior review noted that calling tokio::spawn in a synchronous function without verifying an active Tokio runtime will panic if none exists. This concern remains unaddressed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/pd/timestamp.rs` around lines 61 - 67, The background task uses
tokio::spawn directly which will panic if there's no active Tokio runtime;
change the call to first check for an existing runtime with
tokio::runtime::Handle::try_current() and, if present, use handle.spawn(async
move { if let Err(err) = run_tso(...).await { warn!(...) } }); if try_current()
returns Err, create a dedicated runtime (e.g.
tokio::runtime::Builder::new_current_thread().enable_all().build()) and use that
runtime.spawn(...) or run the future on that runtime inside a std::thread so the
task runs without requiring a caller-provided runtime; reference run_tso and the
current tokio::spawn usage when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/pd/timestamp.rs`:
- Around line 467-487: The test poll_next_waits_on_mutex_when_lock_is_contended
currently manually re-polls the stream after dropping the lock and so doesn't
assert that the Mutex actually fired the stored waker; update the test to drop
the lock_guard and then assert that wake_counter.wakes has increased (i.e., the
waker was invoked) instead of relying solely on an immediate manual poll;
specifically, in poll_next_waits_on_mutex_when_lock_is_contended use
wake_counter (from wake_counter_context) to wait/assert the atomic wakes count
increments after drop(lock_guard) (optionally with a short spin/timeout to avoid
flakiness) and then finally poll the pinned stream to confirm readiness — this
will validate the automatic wake-up behavior of the Mutex/MutexLockFuture rather
than a manual re-poll.

---

Outside diff comments:
In `@src/pd/timestamp.rs`:
- Around line 161-168: The code creates a stack-local Mutex lock future inside
poll_next (the local pending_requests/MutexLockFuture created at let
pending_requests = this.pending_requests.lock(); pin_mut!(pending_requests);)
which is dropped between polls and causes lost-wake; persist the lock future
across polls instead of recreating it each call: add a field on the stream
(e.g., Option<Pin<Box<dyn Future<Output = tokio::sync::OwnedMutexGuard<...>>>>>
or Option<Pin<Box<dyn Future<Output = tokio::sync::MutexGuard<...>>>>>), move
the pending_requests lock future into that field when first polling, poll that
pinned boxed future on subsequent poll_next invocations, and only take the
resulting guard out of the field when it is Ready; alternatively implement a
two-phase state machine using tokio::sync::Mutex::lock_owned with a stored
pinned future to ensure the waiter remains registered across returns from
poll_next.

---

Duplicate comments:
In `@src/pd/timestamp.rs`:
- Around line 61-67: The background task uses tokio::spawn directly which will
panic if there's no active Tokio runtime; change the call to first check for an
existing runtime with tokio::runtime::Handle::try_current() and, if present, use
handle.spawn(async move { if let Err(err) = run_tso(...).await { warn!(...) }
}); if try_current() returns Err, create a dedicated runtime (e.g.
tokio::runtime::Builder::new_current_thread().enable_all().build()) and use that
runtime.spawn(...) or run the future on that runtime inside a std::thread so the
task runs without requiring a caller-provided runtime; reference run_tso and the
current tokio::spawn usage when making the change.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8d5bb72 and 10f685c.

📒 Files selected for processing (2)
  • Cargo.toml
  • src/pd/timestamp.rs
✅ Files skipped from review due to trivial changes (1)
  • Cargo.toml

Signed-off-by: Michael Ingley <michael.ingley@gmail.com>
@mingley
Author

mingley commented Feb 24, 2026

@coderabbitai please re-review the latest updates in 7530dbf; both major findings are addressed and threads are resolved.

@coderabbitai

coderabbitai bot commented Feb 24, 2026

@mingley: I'll perform a fresh review of commit 7530dbf. Thank you for addressing the major findings and resolving the threads.

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.
