How to Build a Reasoning AI Agent with LarAgent

Running AI models on your own laptop or even a phone sounds fun, right? 😊 In this guide, we’ll explore how to make a small and simple model a bit smarter by turning it into a reasoning agent using LarAgent in Laravel.
If you prefer to watch instead of reading, check the video tutorial 🎥
Why This Matters
Lightweight models like Llama 3.2 (3B) can run even on older devices. The problem? They are not very good at reasoning. But with the right tools and setup, we can help them analyze, plan, and respond more intelligently.
The approach works even better with more advanced models such as GPT-4.1, but we chose one of the smallest models to make the difference tangible, testing it with the famous question: "How many 'r's are in strawberry?"
This guide will show you how to:
- Set up the Ollama provider with LarAgent
- Create a new AI agent in Laravel
- Add simple tools that improve reasoning
- Test the agent with real examples
Project Setup
We start with a fresh Laravel + Filament project.
Inside it, LarAgent is already installed and ready to use.
This gives us a small interface to plug in and test our agents easily.
Here is the repo: https://github.com/RedberryProducts/laragent-filament-chat (Read the README)
You can chat with any agent from the terminal using the `php artisan agent:chat ReasoningAgent` command.
Configure the Ollama provider in `config/laragent.php`:
```php
'providers' => [
    // ...
    'ollama' => [
        'name'    => 'ollama-local',
        'model'   => 'llama3.2:3b',
        'driver'  => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
        'api_key' => 'ollama',
        'api_url' => 'http://localhost:11434/v1',
    ],
],
```
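Before the agent can respond, the model itself must be available locally. Assuming you have Ollama installed, you can pull the model tag referenced in the config above (these are standard Ollama CLI commands; your setup may already run the server automatically):

```shell
# Download the Llama 3.2 3B model referenced in config/laragent.php
ollama pull llama3.2:3b

# Start the Ollama API server if it is not already running
# (it listens on http://localhost:11434 by default)
ollama serve
```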
Create the Agent
Generate a new agent with Artisan:
```shell
php artisan make:agent ReasoningAgent
```
Update it with:
- Model: `llama3.2:3b`
- Provider: `ollama`
- Instructions: "You are a reasoning agent. Think through the user question, plan how to answer, and try to answer in detail."
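Putting those three settings together, the agent class (generated in `app/AiAgents/ReasoningAgent.php`) could look roughly like this. The property and method names follow LarAgent's agent conventions, but treat the exact shape as an assumption and compare it against the file Artisan generated for you:

```php
<?php

namespace App\AiAgents;

use LarAgent\Agent;

class ReasoningAgent extends Agent
{
    // Provider key from the 'providers' array in config/laragent.php
    protected $provider = 'ollama';

    // Small local model served by Ollama
    protected $model = 'llama3.2:3b';

    // System prompt that nudges the model toward planning before answering
    public function instructions(): string
    {
        return 'You are a reasoning agent. Think through the user question, '
            . 'plan how to answer and try to answer in detail.';
    }
}
```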
Add Reasoning Tool
Here is where the magic happens. ✨
We add a small tool that guides the model: the Plan Tool.
```php
use LarAgent\Attributes\Tool;

// ...

#[Tool('Analyze the user query and plan the steps needed to give the best answer.', [
    'plan' => 'Step-by-step plan of what to do to answer the user query',
])]
public static function writeAnswerPlan(string $plan): string
{
    return $plan;
}
```
Actually, the tool just returns the output written by the LLM itself, but by naming the tool well and giving clear descriptions, the model learns to “reason” better.
It literally talks to itself and gives better responses 💪
Test the agent
Open the UI with `php artisan serve`, clear the history, and try simple tasks.
For example:
👉 Question: How many times does the letter “r” appear in the word strawberry?
Without the tool, the ReasoningAgent responds with 1, 2, or even none.
With the tool, the agent's plan looks like:
- Look at the word
- Count each “r”
- Return the total
👉 Result: “There are 3 r’s in strawberry.” ✅
Not perfect every time, but it works surprisingly well for a small local model!
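For comparison, the same check is trivial in plain PHP, which is a good reminder that letter counting is a benchmark for LLM reasoning, not something you would ever delegate to a model in production code:

```php
<?php

// Deterministic letter count — the answer the model is "reasoning" its way toward
$word  = 'strawberry';
$count = substr_count($word, 'r');

echo "There are {$count} 'r's in {$word}.\n"; // There are 3 'r's in strawberry.
```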
What We Learned
- Even a small model can “reason” if you guide it with tools.
- Instructions, tool names, and parameter descriptions are part of the prompts.
- Bigger models (like GPT-4.1) will perform better with this setup, but local small models are cheap and fun to experiment with.
Final Thoughts
Adding reasoning tools can make your local LLMs more useful. Llama 3.2 won’t solve complex problems like pathfinding yet, but it can analyze, explain, and plan simple answers 🔧
For developers, this is an exciting way to experiment without big costs. Try it with your own projects, and see how much smarter your agent becomes!
Happy coding! 💻
Resources
- LarAgent on Github (Please, give us a ⭐)
- Starting project by RDBR
- Ollama setup guide
- Laravel docs
- Video tutorial