Reasoning + Acting in AI Agents — Ori Case Study
OpenClaw receives your WhatsApp message, injects context (system prompt, available tools, memory), and passes it to Claude.
Claude reasons: "User wants coffee options. I have a buy-coffee skill. I should read it to get product info."
Claude outputs a tool call, and OpenClaw executes it: read ~/.openclaw/workspace/skills/buy-coffee/SKILL.md
OpenClaw feeds the file contents back to Claude. Now Claude sees the product list.
Claude reasons: "I have the info. No more tools needed." Generates human response with the three coffee blends.
USER: buy me coffee
THINK: JG is asking for coffee. I have buy-coffee skill.
THINK: But wait — who's asking? JG, not Austin (owner).
THINK: This spends Austin's wallet. Access control: owner-only.
ACT: None — reasoning terminated before tool call
RESPOND: "nice try lol. that's austin's wallet..."
↑ The reasoning layer intercepted the request before any irreversible action. That's the power of ReAct.
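The refusal above happens in Claude's reasoning, steered by the system prompt, not in code. But the decision it made can be written out as an explicit predicate. In this sketch, OWNER, SPENDS_MONEY, and should_act are illustrative names, not Ori or OpenClaw internals.

```python
OWNER = "Austin"
SPENDS_MONEY = {"buy-coffee"}   # skills that touch the owner's wallet

def should_act(sender: str, skill: str) -> bool:
    # THINK: "This spends Austin's wallet. Access control: owner-only."
    if skill in SPENDS_MONEY and sender != OWNER:
        return False            # ACT: None; stop before the tool call
    return True

assert should_act("Austin", "buy-coffee")    # owner can spend
assert not should_act("JG", "buy-coffee")    # "nice try lol"
```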
ReAct's value is the interleaving of thinking and doing. The model doesn't execute blindly: it reasons at each step, can course-correct, and knows when not to act at all. That makes it safer and more reliable than agents that map requests straight to actions with no reasoning step in between.