Conversation
PR Summary (Cursor Bugbot): Medium Risk

Overview: Updates streaming + persistence to carry per-block timing.

Reviewed by Cursor Bugbot for commit 7f3b7ed.
Greptile Summary

This PR introduces a collapsible thinking-block UI that surfaces LLM reasoning to users during and after streaming. Timestamps are threaded through the full content-block lifecycle — streaming context, persistence, and display — so elapsed time can be calculated and blocks shorter than 3 seconds can be suppressed as visual noise.

Confidence Score: 5/5. Safe to merge; only P2 style findings remain. All remaining findings are P2 (a stale comment that says "5s" while the constant is 3000 ms, and overflow-y-scroll vs overflow-y-auto). No logic errors, data-loss risks, or broken code paths were found. Timing propagation, flush ordering, and auto-scroll extension all look correct.

Important Files Changed: thinking-block.tsx (stale "5s" comment and overflow-y-scroll style choice)
Sequence Diagram

```mermaid
sequenceDiagram
    participant Stream as Stream (go/stream.ts)
    participant Handler as tool.ts / handlers
    participant Context as StreamingContext
    participant UseChat as use-chat.ts
    participant DB as Database
    participant MsgContent as message-content.tsx
    participant ThinkBlock as ThinkingBlock
    Stream->>Context: text event (channel=thinking) → ensureThinkingBlock()
    Note over Context: currentThinkingBlock accumulates content + timestamp
    Stream->>Handler: tool or subagent lifecycle event
    Handler->>Context: flushThinkingBlock() — stampBlockEnd + push to contentBlocks
    UseChat->>UseChat: toRawPersistedContentBlockBody (type=text, channel=thinking)
    UseChat->>UseChat: withBlockTiming adds timestamp/endedAt
    UseChat-->>DB: persist contentBlocks with timing
    Note over DB: On reload
    DB-->>UseChat: PersistedContentBlock (lane, channel, timestamp, endedAt)
    UseChat->>MsgContent: toDisplayBlock → ContentBlockType.thinking
    MsgContent->>MsgContent: parseBlocks → ThinkingSegment (startedAt, endedAt)
    MsgContent->>MsgContent: elapsedMs ≤ 3000ms? → return null
    MsgContent->>ThinkBlock: render ThinkingBlock(isActive, isStreaming, startedAt, endedAt)
    ThinkBlock->>ThinkBlock: thresholdReached gate (3 s for active blocks)
    ThinkBlock->>ThinkBlock: rAF expand animation once threshold hit
```
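The accumulate-then-flush lifecycle in the diagram can be sketched as follows. This is a minimal illustration of the described flow, not the PR's actual code: the class and method names come from the diagram, but the field shapes (e.g. `content`) are assumptions.

```typescript
// Hypothetical sketch of the streaming-context lifecycle described above.
type ContentBlock = {
  type: "text"
  channel?: "thinking"
  content: string
  timestamp: number // block start, epoch ms
  endedAt?: number // block end, epoch ms; stamped on flush
}

class StreamingContext {
  contentBlocks: ContentBlock[] = []
  currentThinkingBlock: ContentBlock | null = null

  // Called for each text event on the thinking channel: open a block
  // on first delta, then accumulate content into it.
  ensureThinkingBlock(delta: string, eventMs: number): void {
    if (!this.currentThinkingBlock) {
      this.currentThinkingBlock = {
        type: "text",
        channel: "thinking",
        content: "",
        timestamp: eventMs,
      }
    }
    this.currentThinkingBlock.content += delta
  }

  // Called on tool/subagent lifecycle events: stamp the end time once
  // and move the block into the list that gets persisted.
  flushThinkingBlock(eventMs: number): void {
    if (!this.currentThinkingBlock) return
    if (this.currentThinkingBlock.endedAt === undefined) {
      this.currentThinkingBlock.endedAt = eventMs
    }
    this.contentBlocks.push(this.currentThinkingBlock)
    this.currentThinkingBlock = null
  }
}
```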
Reviews (1): Last reviewed commit: "fix lint"
Cursor Bugbot has reviewed your changes and found 1 potential issue.
```typescript
const stampBlockEnd = (block: ContentBlock | undefined, ts?: string) => {
  if (block && block.endedAt === undefined) block.endedAt = toEventMs(ts)
}
```
Premature endedAt stamp breaks subagent end matching
Low Severity
The stampBlockEnd helper in ensureTextBlock and ensureThinkingBlock indiscriminately stamps endedAt on whatever the previous block is. When the previous block is a subagent marker, this sets a premature endedAt. Later, the subagent span-end handler's backward search for blocks matching b.endedAt === undefined fails to find the marker, so the subagent block retains an incorrect endedAt timestamp (time of the first content event rather than the actual subagent end time). The same issue exists in stream.ts. While parseBlocks doesn't currently use endedAt on subagent markers for display, the persisted timing data is inaccurate.
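One possible shape of a fix, sketched under assumptions: guard the stamp so it skips subagent markers, leaving their `endedAt` undefined until the span-end handler's backward search finds them. The `kind` discriminant is illustrative; the PR's real marker representation may differ.

```typescript
// Hedged sketch of a guarded stampBlockEnd, not the PR's actual fix.
type Block = {
  kind: "text" | "thinking" | "subagent-marker"
  endedAt?: number
}

const stampBlockEnd = (block: Block | undefined, eventMs: number): void => {
  if (!block) return
  // Subagent markers must stay endedAt === undefined so the
  // span-end handler's backward search can still match them.
  if (block.kind === "subagent-marker") return
  if (block.endedAt === undefined) block.endedAt = eventMs
}
```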


Summary
Added thinking blocks. This lets users follow the LLM's reasoning process and reassures them that work is actually running in the background while thinking is displayed.
Thinking blocks are only displayed after 3 seconds of thinking.
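The 3-second gate described above can be sketched as a small predicate. The constant and helper name are illustrative, not the PR's identifiers; the threshold comes from the 3000 ms check mentioned in the review.

```typescript
// Hypothetical sketch of the 3-second suppression gate.
const THINKING_MIN_MS = 3000

// A completed block uses its endedAt; a still-streaming block
// (endedAt undefined) is measured against the current time.
function shouldDisplayThinking(
  startedAt: number,
  endedAt?: number,
  nowMs: number = Date.now()
): boolean {
  const elapsedMs = (endedAt ?? nowMs) - startedAt
  return elapsedMs > THINKING_MIN_MS
}
```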
Type of Change
Testing
Checklist
Screenshots/Videos