fix(ci): surface refresh-baselines HTTP error body and status code #161
Conversation
Replace `curl -s -f` (silent + fail-with-no-body) with explicit HTTP code capture and a response body dump. The workflow has been failing every 6h with exit 22 (HTTP >= 400), but the root cause was hidden. The next failure will print the HTTP status and the Supabase error message in the logs.
📝 Walkthrough

Replaces a fail-fast curl invocation with a temp-file-based flow in the GitHub Actions workflow: the response body and HTTP status code are captured and logged.

Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ 3 passed
Pull request overview
Updates the scheduled “Refresh community baselines” workflow to surface actionable HTTP diagnostics (status code + response body) when the Supabase Edge Function call fails, addressing opaque curl exit code 22 failures in CI.
Changes:
- Capture HTTP status via `curl -w "%{http_code}"` while saving the response body to a temp file.
- Print the status code and response body to the workflow logs.
- Explicitly fail the job when `http_code >= 400`.
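A minimal sketch of that pattern, with a placeholder URL (`example.invalid`) rather than the real Supabase endpoint — note that curl itself prints `000` for `%{http_code}` when no HTTP response was received:

```shell
#!/bin/sh
# Sketch: capture the HTTP status via -w while saving the body to a
# temp file, then fail explicitly instead of relying on curl -f.
# The URL here is a reserved placeholder and will never resolve.
response=$(mktemp)
http_code=$(curl -sS --connect-timeout 5 --max-time 10 \
  -o "$response" -w "%{http_code}" \
  "https://example.invalid/refresh-baselines" 2>/dev/null) || true
echo "HTTP ${http_code:-none}"
case "$http_code" in
  ''|000|*[!0-9]*) status=fail ;;  # transport error: no HTTP code received
  *) if [ "$http_code" -ge 400 ]; then status=fail; else status=ok; fi ;;
esac
if [ "$status" = "fail" ]; then
  echo "Response body:"
  cat "$response"
fi
rm -f "$response"
echo "status=${status}"
```

Because the placeholder host never resolves, running this sketch as-is exercises the transport-failure path and ends with `status=fail`.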
```diff
           SECRET: ${{ secrets.REFRESH_SECRET }}
         run: |
-          curl -s -f -X POST \
+          response=$(mktemp)
```
The mktemp response file is never removed. Add a trap (e.g., on EXIT) to `rm -f "$response"` so repeated scheduled runs don't accumulate temp files on the runner (and to keep logs/behavior consistent if the script exits early).
```diff
 response=$(mktemp)
+trap 'rm -f "$response"' EXIT
```
```bash
echo "Response body:"
cat "${response}"
echo
if [ "${http_code}" -ge 400 ]; then
```
The script always prints the full response body, even on success. Consider only dumping the body when http_code is >= 400 (or truncating it), to avoid noisy logs and potential accidental leakage if the endpoint ever returns verbose/internal error details on 2xx responses.
```diff
-echo "Response body:"
-cat "${response}"
-echo
 if [ "${http_code}" -ge 400 ]; then
+  echo "Response body:"
+  cat "${response}"
+  echo
```
```bash
if [ "${http_code}" -ge 400 ]; then
  exit 1
```
If curl fails before producing an HTTP code (e.g., DNS/TLS error), http_code may be empty/non-numeric and [ "${http_code}" -ge 400 ] will error with “integer expression expected”. Guard for a numeric code (or treat empty as failure) so the step fails with a clearer message.
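One way to implement that guard is a small validation helper (a hypothetical name, not part of the PR) that rejects empty, non-numeric, and `000` codes before any integer comparison is attempted:

```shell
#!/bin/sh
# check_http_code: treat empty, non-numeric, or 000 status values as
# failures so that [ "$code" -ge 400 ] never sees a bad operand and
# DNS/TLS failures produce a clear message instead of
# "integer expression expected".
check_http_code() {
  code="$1"
  case "$code" in
    ''|000|*[!0-9]*)
      echo "No valid HTTP status from curl (got: '${code}')" >&2
      return 1
      ;;
  esac
  if [ "$code" -ge 400 ]; then
    echo "Request failed with HTTP ${code}" >&2
    return 1
  fi
}
```

Usage in the workflow step would then be `check_http_code "$http_code" || exit 1`, which fails the job with a readable log line in every abnormal case.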
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/refresh-baselines.yml:
- Around lines 17-31: The current script calls curl as http_code=$(curl -sS -X POST -o "${response}" -w "%{http_code}" ...) but does not surface transport errors (DNS/TLS/timeout) when curl fails; change the call to also capture curl's error output (redirect stderr to a second temp file), keep the shell from aborting prematurely, then check curl's exit code ($?) after the call; if curl failed, print "Response body:" and the contents of "${response}" as well as the stderr file, then exit 1; keep using "${FUNCTION_URL}", "${ANON_KEY}", "${SECRET}" and the http_code variable for valid HTTP responses.
- Around lines 18-24: The curl command used to populate the http_code variable (the substitution that POSTs to "${FUNCTION_URL}/refresh-baselines") has no explicit timeouts; add options such as --connect-timeout <seconds> and --max-time <seconds> to this curl call (e.g. --connect-timeout 10 --max-time 30) so the scheduled job does not hang on network latency or a slow function.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 55b36692-a393-46be-90da-014d50832452
📒 Files selected for processing (1)
.github/workflows/refresh-baselines.yml
```diff
 response=$(mktemp)
 http_code=$(curl -sS -X POST \
   -o "${response}" \
   -w "%{http_code}" \
   "${FUNCTION_URL}/refresh-baselines" \
   -H "Authorization: Bearer ${ANON_KEY}" \
   -H "x-refresh-secret: ${SECRET}" \
-  -H "Content-Type: application/json"
+  -H "Content-Type: application/json")
 echo "HTTP ${http_code}"
 echo "Response body:"
 cat "${response}"
 echo
 if [ "${http_code}" -ge 400 ]; then
   exit 1
 fi
```
🧩 Analysis chain
🏁 Script executed:
```bash
# First, let's locate and examine the workflow file
find . -name "refresh-baselines.yml" -type f
```
Repository: mmmprod/claude-eta
Length of output: 103
🏁 Script executed:
```bash
# Check the full context of the workflow file
if [ -f ".github/workflows/refresh-baselines.yml" ]; then
  cat -n ".github/workflows/refresh-baselines.yml"
fi
```
Repository: mmmprod/claude-eta
Length of output: 1177
🏁 Script executed:
```bash
# Verify the shell syntax of the proposed patch by checking if it's valid bash
python3 << 'EOF'
import subprocess
import tempfile

# Test the proposed patch logic
test_script = '''#!/bin/bash
set -e
response=$(mktemp)
trap 'rm -f "${response}"' EXIT
set +e
http_code=$(curl -sS -X POST \
  -o "${response}" \
  -w "%{http_code}" \
  "http://httpbin.org/status/200" \
  -H "Content-Type: application/json")
curl_exit=$?
set -e
echo "HTTP ${http_code}"
echo "Response body:"
cat "${response}"
echo
if [ "${curl_exit}" -ne 0 ]; then
  echo "curl transport error: exit ${curl_exit}"
  exit "${curl_exit}"
fi
if [ "${http_code}" -ge 400 ]; then
  exit 1
fi
'''

# Check if bash can parse it
result = subprocess.run(['bash', '-n'], input=test_script.encode(), capture_output=True)
if result.returncode == 0:
    print("✓ Proposed patch syntax is valid")
else:
    print("✗ Syntax error in proposed patch:")
    print(result.stderr.decode())
EOF
```
Repository: mmmprod/claude-eta
Length of output: 94
Handle curl transport errors explicitly to preserve diagnostics.
If curl hits a transport error (DNS, TLS, timeout), the script aborts immediately before printing the logs. That eliminates exactly the diagnostics this PR aims to expose.
Proposed patch

```diff
-response=$(mktemp)
-http_code=$(curl -sS -X POST \
+response=$(mktemp)
+trap 'rm -f "${response}"' EXIT
+set +e
+http_code=$(curl -sS -X POST \
   -o "${response}" \
   -w "%{http_code}" \
   "${FUNCTION_URL}/refresh-baselines" \
   -H "Authorization: Bearer ${ANON_KEY}" \
   -H "x-refresh-secret: ${SECRET}" \
   -H "Content-Type: application/json")
+curl_exit=$?
+set -e
 echo "HTTP ${http_code}"
 echo "Response body:"
 cat "${response}"
 echo
+if [ "${curl_exit}" -ne 0 ]; then
+  echo "curl transport error: exit ${curl_exit}"
+  exit "${curl_exit}"
+fi
 if [ "${http_code}" -ge 400 ]; then
   exit 1
 fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
response=$(mktemp)
trap 'rm -f "${response}"' EXIT
set +e
http_code=$(curl -sS -X POST \
  -o "${response}" \
  -w "%{http_code}" \
  "${FUNCTION_URL}/refresh-baselines" \
  -H "Authorization: Bearer ${ANON_KEY}" \
  -H "x-refresh-secret: ${SECRET}" \
  -H "Content-Type: application/json")
curl_exit=$?
set -e
echo "HTTP ${http_code}"
echo "Response body:"
cat "${response}"
echo
if [ "${curl_exit}" -ne 0 ]; then
  echo "curl transport error: exit ${curl_exit}"
  exit "${curl_exit}"
fi
if [ "${http_code}" -ge 400 ]; then
  exit 1
fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/refresh-baselines.yml around lines 17-31: The current script calls curl as http_code=$(curl -sS -X POST -o "${response}" -w "%{http_code}" ...) but does not surface transport errors (DNS/TLS/timeout) when curl fails; change the call to also capture curl's error output (redirect stderr to a second temp file), keep the shell from aborting prematurely, then check curl's exit code ($?) after the call; if curl failed, print "Response body:" and the contents of "${response}" along with the stderr file, then exit 1; keep using "${FUNCTION_URL}", "${ANON_KEY}", "${SECRET}" and the http_code variable for valid HTTP responses.
```diff
 http_code=$(curl -sS -X POST \
   -o "${response}" \
   -w "%{http_code}" \
   "${FUNCTION_URL}/refresh-baselines" \
   -H "Authorization: Bearer ${ANON_KEY}" \
   -H "x-refresh-secret: ${SECRET}" \
-  -H "Content-Type: application/json"
+  -H "Content-Type: application/json")
```
🧩 Analysis chain
🏁 Script executed:
```bash
cat -n .github/workflows/refresh-baselines.yml
```
Repository: mmmprod/claude-eta
Length of output: 1177
🌐 Web query:
curl timeouts best practices GitHub Actions workflow
💡 Result:
Best practices for handling curl timeouts in GitHub Actions workflows:

1. Use curl's built-in timeout flags:
   - `--connect-timeout SECONDS`: limits connection time (e.g. 10s to connect; 5-30s recommended depending on the network).
   - `--max-time SECONDS`: total operation timeout (e.g. 60s for APIs, longer for large downloads).

   Example:
   ```yaml
   - name: Fetch data
     run: curl --connect-timeout 10 --max-time 60 --retry 3 --retry-delay 5 https://example.com/api/data
   ```
2. Add retries with exponential backoff using `--retry` and `--retry-delay`.
3. Set step-level timeouts:
   ```yaml
   - name: Curl step
     timeout-minutes: 5
     run: curl ...
   ```
4. Set job-level timeouts:
   ```yaml
   jobs:
     fetch:
       timeout-minutes: 30
       steps:
         - run: curl ...
   ```
5. For large downloads or slow networks, increase `--max-time` and use job timeouts. The default job timeout is 360 minutes.
6. Use dedicated curl actions like indiesdev/curl@v1, which support timeout and retries natively:
   ```yaml
   - uses: indiesdev/curl@v1
     with:
       url: https://example.com
       timeout: 30000  # ms
       retries: 3
   ```

curl has no timeout by default, so always specify one. Test values based on expected response times plus network variability. Monitor logs for timeout error 28.
Citations:
- 1: https://everything.curl.dev/usingcurl/timeouts.html
- 2: https://stackoverflow.com/questions/59073731/set-default-timeout-on-github-action-pipeline
- 3: https://everything.curl.dev/usingcurl/timeouts
- 4: https://gist.github.com/yidas/467968d25cd7424cb5ea98500300680f
- 5: https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions
- 6: https://thelinuxcode.com/prevent-timeouts-curl/
- 7: https://github.com/indiesdev/curl
- 8: Support setting job `timeout-minutes` globally for a workflow (actions/runner#1449)
- 9: Add configurable timeout and retry for git network operations (actions/checkout#2390)
- 10: https://notes.kodekloud.com/docs/GitHub-Actions/GitHub-Actions-Core-Concepts/Timeout-for-Jobs-and-Steps/page
- 11: https://oneuptime.com/blog/post/2025-12-20-github-actions-debug-timeout-issues/view
Add explicit timeouts to curl.
Without --connect-timeout and --max-time, this scheduled job can hang indefinitely on network latency or a slow function. Per curl best practices, external calls should always have explicit time limits.
Proposed patch

```diff
-http_code=$(curl -sS -X POST \
+http_code=$(curl -sS -X POST \
+  --connect-timeout 10 \
+  --max-time 60 \
   -o "${response}" \
   -w "%{http_code}" \
   "${FUNCTION_URL}/refresh-baselines" \
   -H "Authorization: Bearer ${ANON_KEY}" \
   -H "x-refresh-secret: ${SECRET}" \
   -H "Content-Type: application/json")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
http_code=$(curl -sS -X POST \
  --connect-timeout 10 \
  --max-time 60 \
  -o "${response}" \
  -w "%{http_code}" \
  "${FUNCTION_URL}/refresh-baselines" \
  -H "Authorization: Bearer ${ANON_KEY}" \
  -H "x-refresh-secret: ${SECRET}" \
  -H "Content-Type: application/json")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/refresh-baselines.yml around lines 18-24: The curl command used to populate the http_code variable (the substitution that POSTs to "${FUNCTION_URL}/refresh-baselines") has no explicit timeouts; add options such as --connect-timeout <seconds> and --max-time <seconds> to this curl call (e.g. --connect-timeout 10 --max-time 30) so the scheduled job does not hang on network latency or a slow function.
Address CodeRabbit review feedback on PR #161:
- Add EXIT trap to clean up the mktemp response file
- Capture curl exit code; surface a clear error when curl fails before producing an HTTP code (DNS, TLS, connection reset)
- Validate http_code is numeric before the integer comparison to avoid an obscure "integer expression expected" failure
- Only dump the response body on HTTP >= 400 to avoid noisy logs and reduce the risk of leaking verbose payloads on success
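Taken together, the review points can be sketched as a single step script. This is a hedged reconstruction, not the committed workflow: the `run_refresh` function name is introduced here for illustration, and `FUNCTION_URL`, `ANON_KEY`, and `SECRET` are assumed to come from the step's env block as in the PR.

```shell
#!/bin/sh
# Sketch of the hardened step: temp-file cleanup via trap, explicit
# timeouts, transport-error capture, numeric status validation, and
# dumping the body only on HTTP >= 400.
run_refresh() {
  response=$(mktemp)
  # Cleanup fires at shell exit (not function return in POSIX sh).
  trap 'rm -f "$response"' EXIT

  curl_exit=0
  http_code=$(curl -sS -X POST \
    --connect-timeout 10 \
    --max-time 60 \
    -o "$response" \
    -w "%{http_code}" \
    "${FUNCTION_URL}/refresh-baselines" \
    -H "Authorization: Bearer ${ANON_KEY}" \
    -H "x-refresh-secret: ${SECRET}" \
    -H "Content-Type: application/json") || curl_exit=$?

  if [ "$curl_exit" -ne 0 ]; then
    echo "curl transport error: exit ${curl_exit}"
    return "$curl_exit"
  fi

  # Guard against empty/non-numeric codes (DNS/TLS failures).
  case "$http_code" in
    ''|*[!0-9]*) echo "Invalid HTTP code: '${http_code}'"; return 1 ;;
  esac

  echo "HTTP ${http_code}"
  if [ "$http_code" -ge 400 ]; then
    echo "Response body:"
    cat "$response"
    echo
    return 1
  fi
}
```

In the workflow the step body would simply be `run_refresh || exit 1` (or the function body inlined); returning instead of exiting keeps the logic testable outside CI.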
♻️ Duplicate comments (1)
.github/workflows/refresh-baselines.yml (1)

20-26: ⚠️ Potential issue | 🟠 Major

Add explicit timeouts to curl to keep the scheduled job from blocking for long periods.

The network call (lines 20 to 26) still has no --connect-timeout/--max-time. Under latency or a degraded service, the workflow can stay blocked for an unnecessarily long time.

Proposed patch

```diff
-http_code=$(curl -sS -X POST \
+http_code=$(curl -sS -X POST \
+  --connect-timeout 10 \
+  --max-time 60 \
   -o "${response}" \
   -w "%{http_code}" \
   "${FUNCTION_URL}/refresh-baselines" \
   -H "Authorization: Bearer ${ANON_KEY}" \
   -H "x-refresh-secret: ${SECRET}" \
   -H "Content-Type: application/json") || curl_exit=$?
```

```bash
#!/bin/bash
set -euo pipefail
file=".github/workflows/refresh-baselines.yml"
echo "Checking for curl timeouts in ${file}"
rg -n --no-heading --fixed-strings 'curl -sS -X POST' "${file}"
rg -n --no-heading --fixed-strings -- '--connect-timeout' "${file}" || true
rg -n --no-heading --fixed-strings -- '--max-time' "${file}" || true
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/refresh-baselines.yml around lines 20 - 26, The curl invocation that posts to "${FUNCTION_URL}/refresh-baselines" (setting http_code, output file "${response}" and capturing curl_exit) lacks explicit timeouts and can hang; update that curl command to include at minimum --connect-timeout (e.g. a few seconds) and --max-time (overall timeout) flags so the job fails fast on network issues, and ensure the existing || curl_exit=$? behavior still captures non-zero exits; reference the curl command that uses -sS -X POST and the variables FUNCTION_URL, ANON_KEY, SECRET, response, and curl_exit when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In @.github/workflows/refresh-baselines.yml:
- Around line 20-26: The curl invocation that posts to
"${FUNCTION_URL}/refresh-baselines" (setting http_code, output file
"${response}" and capturing curl_exit) lacks explicit timeouts and can hang;
update that curl command to include at minimum --connect-timeout (e.g. a few
seconds) and --max-time (overall timeout) flags so the job fails fast on network
issues, and ensure the existing || curl_exit=$? behavior still captures non-zero
exits; reference the curl command that uses -sS -X POST and the variables
FUNCTION_URL, ANON_KEY, SECRET, response, and curl_exit when making the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: ca35c1a7-c559-4aea-8a59-10af21e3879e
📒 Files selected for processing (1)
.github/workflows/refresh-baselines.yml
Summary

- `Refresh community baselines` fails every 6h with `Process completed with exit code 22` (curl HTTP >= 400).
- `curl -s -f` swallows the response body and HTTP status, hiding the root cause.
- `%{http_code}` capture + body dump so the next failure prints actionable diagnostics.

Test plan

- `gh workflow run refresh-baselines.yml`

🤖 Generated with Claude Code
Summary by CodeRabbit
Release notes