Thought: Internal Reasoning and the Re-Act Approach
Thoughts represent the Agent’s internal reasoning and planning processes to solve the task.
This relies on the agent’s Large Language Model (LLM) capacity to analyze the information presented in its prompt.
Think of it as the agent’s internal dialogue, where it considers the task at hand and strategizes its approach.
The Agent’s thoughts are responsible for assessing current observations and deciding what the next action(s) should be.
Through this process, the agent can break down complex problems into smaller, more manageable steps, reflect on past experiences, and continuously adjust its plans based on new information.
Here are some examples of common thoughts:
| Type of Thought | Example |
|---|---|
| Planning | “I need to break this task into three steps: 1) gather data, 2) analyze trends, 3) generate report” |
| Analysis | “Based on the error message, the issue appears to be with the database connection parameters” |
| Decision Making | “Given the user’s budget constraints, I should recommend the mid-tier option” |
| Problem Solving | “To optimize this code, I should first profile it to identify bottlenecks” |
| Memory Integration | “The user mentioned their preference for Python earlier, so I’ll provide examples in Python” |
| Self-Reflection | “My last approach didn’t work well, I should try a different strategy” |
| Goal Setting | “To complete this task, I need to first establish the acceptance criteria” |
| Prioritization | “The security vulnerability should be addressed before adding new features” |
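
In practice, these thoughts usually appear as plain text inside the agent’s generations. As a minimal sketch (the Thought/Action layout and tool name below are illustrative assumptions, not a fixed standard; exact formats vary by framework), here is how a Planning thought might surface in one cycle of an agent’s prompt:

```python
# Illustrative sketch: how a "Thought" is surfaced each cycle.
# The format and the fetch_sales_data tool are hypothetical examples.
SYSTEM_PROMPT = """You are an agent that solves tasks step by step.
At each step, first write a Thought analyzing the situation,
then choose an Action, then wait for the Observation.

Format:
Thought: your reasoning about what to do next
Action: the tool call to perform
"""

history = [
    "Task: generate a quarterly sales report",
    "Thought: I need to break this task into three steps: "
    "1) gather data, 2) analyze trends, 3) generate report",
    "Action: fetch_sales_data(quarter='Q3')",
]

# The concatenated prompt is what the LLM sees on the next step.
print(SYSTEM_PROMPT + "\n".join(history))
```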
Note: In the case of LLMs fine-tuned for function-calling, the thought process is optional. If you’re not familiar with function-calling, we’ll cover it in more detail in the Actions section.
The Re-Act Approach
A key method is the ReAct approach, which is the concatenation of “Reasoning” (Think) with “Acting” (Act).
ReAct is a simple prompting technique that appends “Let’s think step by step” before letting the LLM decode the next tokens.
Indeed, prompting the model to think “step by step” steers the decoding process toward tokens that form a plan rather than a final solution, since the model is encouraged to decompose the problem into sub-tasks.
This lets the model work through the sub-steps in more detail, which in general leads to fewer errors than trying to generate the final solution directly.
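
As a rough illustration of this technique, here is a minimal sketch using the `huggingface_hub` `InferenceClient`; the model name and question are assumptions for the example, and any instruct-tuned text-generation model would do:

```python
from huggingface_hub import InferenceClient

# Assumption: this model is available via the Inference API;
# substitute any instruct-tuned model you have access to.
client = InferenceClient(model="meta-llama/Llama-3.1-8B-Instruct")

question = "A bakery packs 12 muffins per tray. How many trays are needed for 150 muffins?"

# Appending "Let's think step by step" steers decoding toward a plan
# (sub-steps) instead of jumping straight to a final answer.
prompt = f"{question}\nLet's think step by step."

response = client.text_generation(prompt, max_new_tokens=256)
print(response)
```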

Recently, there has been a lot of interest in reasoning strategies. This is what is behind models like DeepSeek-R1 or OpenAI’s o1, which have been fine-tuned to “think before answering”. These models have been trained to always include specific thinking sections, enclosed between `<think>` and `</think>` special tokens. This is not just a prompting technique like ReAct, but a training method where the model learns to generate these sections after analyzing thousands of examples that show what we expect it to do.
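
As a small sketch of what consuming such output can look like (assuming the raw generation exposes the `<think>` tokens, as DeepSeek-R1 does; the output string here is made up for illustration), the thinking section can be separated from the final answer:

```python
import re

# Assumption: `output` is raw text from a reasoning model that wraps its
# internal reasoning in <think>...</think> special tokens.
output = "<think>12 muffins per tray, 150 / 12 = 12.5, so round up.</think>You need 13 trays."

match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
thought = match.group(1).strip() if match else ""
answer = re.sub(r"<think>.*?</think>", "", output, flags=re.DOTALL).strip()

print("Thought:", thought)  # the model's internal reasoning
print("Answer:", answer)    # what the user actually sees
```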
Now that we better understand the Thought process, let’s dive deeper into the second part of the process: Act.