Eliminating AI Hallucinations with Action-Based Systems

In the world of AI agent development, we face a persistent challenge: hallucination. When an AI confidently presents incorrect information or claims to have taken actions it hasn't, it undermines the reliability of the entire system.
This post explores a practical solution — action-based systems — that dramatically reduces hallucinations by separating what the AI decides from what the system executes.
The Problem: Why AI Agents Hallucinate Actions
AI language models excel at generating plausible-sounding text, but they have no inherent ability to interact with external systems. When we ask an AI agent to perform tasks requiring tool use, two common problems emerge:
- False claims of action: The AI reports "I've sent the email" or "I've saved the file" without actually performing these operations
- Skipped steps: The AI jumps to conclusions without gathering necessary information first
These issues occur because language models are prediction machines — they predict what a helpful assistant would say about performing an action, rather than performing the action itself.
Let's see a problematic example:
User: Please search for recent news about quantum computing and send me an email summary
AI: I've searched for the latest quantum computing news and sent you an email with the top 5 developments from this week. You should receive it shortly at your registered email address.
Despite the confident tone, the AI has neither searched for news nor sent an email. It has hallucinated both actions.
The Solution: Action-Based System Architecture
Action-based systems solve this problem through a clear separation of responsibilities:
- The AI identifies what action to take and provides necessary parameters
- The application code executes the action with those parameters
- The application provides feedback about the execution results
This architecture leverages the AI's strength (decision making) while constraining its weakness (claiming to execute actions).
How Action-Based Systems Work
At its core, an action-based system follows this pattern:
- Define explicit actions the AI can request
- Enforce structured output specifying the action and parameters
- Execute actions through code, not through the AI
- Provide execution feedback to the AI for next steps
Step 1: Define Explicit Actions
First, clearly define a set of actions the AI can request:
{
  "ACTIONS": [
    {
      "name": "SEARCH_WEB",
      "description": "Search the web for information",
      "required_parameters": ["query"],
      "optional_parameters": ["max_results"]
    },
    {
      "name": "SEND_EMAIL",
      "description": "Send an email to a recipient",
      "required_parameters": ["recipient", "subject", "body"]
    },
    {
      "name": "FETCH_WEATHER",
      "description": "Get current weather and forecast for a location",
      "required_parameters": ["location"],
      "optional_parameters": ["days"]
    }
  ]
}
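How this registry reaches the model depends on your stack. As a minimal sketch, assuming the definitions above are saved as actions.json, you can inject them into the system prompt:

import json

# Load the action registry defined above (assumed to be saved as actions.json)
with open("actions.json") as f:
    action_registry = json.load(f)

SYSTEM_PROMPT = (
    "You are an agent that acts only by requesting one of the actions below. "
    "Respond with a JSON object containing 'action', 'parameters', and 'reasoning'. "
    "Never claim an action was performed; the application executes it for you.\n\n"
    f"Available actions:\n{json.dumps(action_registry['ACTIONS'], indent=2)}"
)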
Step 2: Enforce Structured Output
Create a schema that forces the AI to commit to a specific action with required parameters:
{
  "action": "SEARCH_WEB",
  "parameters": {
    "query": "latest quantum computing breakthroughs 2025",
    "max_results": 5
  },
  "reasoning": "To provide up-to-date information on quantum computing advances, I need to search for recent news first."
}
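How you enforce this shape depends on your model provider; many APIs accept a JSON Schema as a response format. As a provider-agnostic sketch, you can also validate the raw output yourself with the third-party jsonschema library (the schema and function names here are illustrative):

import json

import jsonschema

# Envelope schema matching the example above; the enum comes from the Step 1 registry
ACTION_REQUEST_SCHEMA = {
    "type": "object",
    "properties": {
        "action": {"type": "string", "enum": ["SEARCH_WEB", "SEND_EMAIL", "FETCH_WEATHER"]},
        "parameters": {"type": "object"},
        "reasoning": {"type": "string"},
    },
    "required": ["action", "parameters"],
    "additionalProperties": False,
}

def parse_action_request(raw_model_output: str) -> dict:
    """Parse the model's raw text, rejecting anything that isn't a valid request."""
    request = json.loads(raw_model_output)  # raises on non-JSON output
    jsonschema.validate(request, ACTION_REQUEST_SCHEMA)  # raises on schema violations
    return request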
Step 3: Execute Actions Through Code
Implement each action as a function that your application calls:
def execute_action(action_request):
    """Dispatch a validated action request to the matching service call."""
    action_type = action_request["action"]
    parameters = action_request["parameters"]

    if action_type == "SEARCH_WEB":
        return search_engine.search(
            query=parameters["query"],
            max_results=parameters.get("max_results", 10)
        )
    elif action_type == "SEND_EMAIL":
        return email_service.send(
            to=parameters["recipient"],
            subject=parameters["subject"],
            body=parameters["body"]
        )
    # Handle other action types...
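Feeding the Step 2 request through this dispatcher looks like the following (search_engine and email_service above stand in for your real integrations):

request = {
    "action": "SEARCH_WEB",
    "parameters": {
        "query": "latest quantum computing breakthroughs 2025",
        "max_results": 5
    }
}
results = execute_action(request)  # real search results, not the AI's guess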
Step 4: Provide Execution Feedback
Return the real results of the action to the AI:
{
  "action_status": "success",
  "action_results": [
    {
      "title": "Quantum Error Correction Breakthrough at NIST",
      "url": "https://example.com/quantum-error-correction",
      "snippet": "Researchers demonstrate 99.8% error correction in a 32-qubit system…"
    },
    // Additional results…
  ],
  "next_steps": "Now you can analyze these results or request another action."
}
A Complete Example: Weather-Based Travel Assistant
Let's walk through a complete example to see how this works in practice.
User Request
User: I'm planning a trip to San Francisco next week. What should I pack?
AI Action Selection
The AI determines it needs weather information first:
{
  "action": "FETCH_WEATHER",
  "parameters": {
    "location": "San Francisco, CA",
    "days": 7
  },
  "reasoning": "To provide packing recommendations, I need to know the weather forecast for San Francisco next week."
}
System Execution
The application executes the weather API call and returns actual data:
{
  "action_status": "success",
  "action_results": {
    "location": "San Francisco, CA",
    "forecast": [
      {"day": "Monday", "condition": "Foggy", "high": 65, "low": 52},
      {"day": "Tuesday", "condition": "Partly Cloudy", "high": 68, "low": 54},
      // More days…
    ],
    "avg_high": 67,
    "avg_low": 53,
    "precipitation_chance": "20%"
  }
}
AI Response
Now the AI can respond with accurate information:
Based on next week's forecast for San Francisco, I recommend packing:
- Light jacket or sweater for the cool mornings (lows around 53°F)
- Layerable clothes as temperatures will reach mid to high 60s during the day
- Comfortable walking shoes
- Light rain jacket (20% chance of precipitation)
- Sunglasses for partly cloudy days
San Francisco is known for microclimates and fog, so layers are essential even in warmer months.
This response is grounded in actual data rather than hallucinated weather conditions.
Implementation Strategies
1. Use Custom Output Schemas
Force the AI to output in a specific format:
from typing import Any, Dict, Optional
from pydantic import BaseModel, Field

class ActionRequest(BaseModel):
    action: str = Field(..., description="The action to perform")
    parameters: Dict[str, Any] = Field(..., description="Parameters needed for the action")
    reasoning: Optional[str] = Field(None, description="Explanation for choosing this action")
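With Pydantic v2, parsing the model's raw output through this schema rejects malformed requests before anything executes:

raw_output = '{"action": "FETCH_WEATHER", "parameters": {"location": "San Francisco, CA"}}'

# model_validate_json raises pydantic.ValidationError on malformed output,
# so hallucinated shapes never reach the execution layer
request = ActionRequest.model_validate_json(raw_output)
print(request.action)      # FETCH_WEATHER
print(request.parameters)  # {'location': 'San Francisco, CA'}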
2. Implement a Validation Layer
Validate that actions and parameters meet requirements:
def validate_action_request(request):
    # ALLOWED_ACTIONS and ACTION_DEFINITIONS come from the Step 1 action registry
    # Check that the action exists
    if request.action not in ALLOWED_ACTIONS:
        return False, f"Unknown action: {request.action}"

    # Check required parameters
    required_params = ACTION_DEFINITIONS[request.action]["required_parameters"]
    for param in required_params:
        if param not in request.parameters:
            return False, f"Missing required parameter: {param}"

    return True, "Action request is valid"
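Assuming ALLOWED_ACTIONS and ACTION_DEFINITIONS are built from the Step 1 registry, an out-of-registry request is rejected before it reaches any handler:

bad_request = ActionRequest(action="DELETE_DATABASE", parameters={})
ok, message = validate_action_request(bad_request)
print(ok, message)  # False Unknown action: DELETE_DATABASE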
3. Create an Action Execution Engine
Build a component responsible for executing actions and handling errors:
class ActionEngine:
    """Executes validated action requests and normalizes success/error results."""

    def __init__(self, action_handlers):
        self.action_handlers = action_handlers

    def execute(self, action_request):
        action = action_request.action
        if action not in self.action_handlers:
            return {
                "action_status": "error",
                "error": f"No handler for action: {action}"
            }
        try:
            result = self.action_handlers[action](action_request.parameters)
            return {
                "action_status": "success",
                "action_results": result
            }
        except Exception as e:
            return {
                "action_status": "error",
                "error": str(e)
            }
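Wiring the engine up is a matter of mapping action names to callables; search_web, send_email, and fetch_weather below are assumed wrappers around your real services:

engine = ActionEngine(action_handlers={
    "SEARCH_WEB": lambda p: search_web(p["query"], p.get("max_results", 10)),
    "SEND_EMAIL": lambda p: send_email(p["recipient"], p["subject"], p["body"]),
    "FETCH_WEATHER": lambda p: fetch_weather(p["location"], p.get("days", 1)),
})

result = engine.execute(request)  # request is a validated ActionRequest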
4. Design a Multi-Turn Conversation Flow
Create a loop that allows for multiple action sequences:
import json

def agent_conversation_loop(initial_prompt):
    conversation_history = [{"role": "user", "content": initial_prompt}]
    while True:
        # Get next action from AI
        action_request = ai_service.get_next_action(conversation_history)

        # RESPOND_TO_USER ends the loop; it needs no external handler
        if action_request.action == "RESPOND_TO_USER":
            return action_request.parameters["response"]

        # Execute the action
        action_result = action_engine.execute(action_request)

        # Add the real results to the conversation
        conversation_history.append({
            "role": "system",
            "content": json.dumps(action_result)
        })
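One caveat: an unbounded while True loop can run indefinitely if the model never selects RESPOND_TO_USER. A simple safeguard, as a sketch (the max_steps budget is an illustrative addition, not part of the loop above), is to cap the number of iterations:

def agent_conversation_loop_bounded(initial_prompt, max_steps=10):
    conversation_history = [{"role": "user", "content": initial_prompt}]
    for _ in range(max_steps):
        action_request = ai_service.get_next_action(conversation_history)
        if action_request.action == "RESPOND_TO_USER":
            return action_request.parameters["response"]
        action_result = action_engine.execute(action_request)
        conversation_history.append({
            "role": "system",
            "content": json.dumps(action_result)
        })
    # Budget exhausted: fail safely instead of looping forever
    return "I couldn't complete that request within the allowed number of steps."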
Benefits of Action-Based Systems
This architecture offers significant benefits:
- Eliminated action hallucinations: The AI can no longer successfully claim to have taken actions the system never executed
- Clear separation of concerns: The AI decides what to do, code handles how to do it
- Controlled information flow: The AI only works with verified data from actions
- Transparency: Each action is logged and can be audited
- Progressive enhancement: New actions can be added without changing the core system
Common Challenges and Solutions
Challenge 1: Complex Action Sequences
For complex tasks requiring multiple steps, implement a planning phase:
{
  "action": "CREATE_PLAN",
  "parameters": {
    "goal": "Send a weekly sales report",
    "steps": [
      {"action": "QUERY_DATABASE", "description": "Get sales data for past week"},
      {"action": "GENERATE_CHART", "description": "Create visual representation"},
      {"action": "COMPOSE_EMAIL", "description": "Draft email with findings"},
      {"action": "SEND_EMAIL", "description": "Send to the sales team"}
    ]
  }
}
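A minimal plan executor then walks each step through the same action engine, stopping at the first failure; execute_plan and its request_builder callback are illustrative names, and stop-on-error is one of several reasonable policies:

def execute_plan(plan_parameters, engine, request_builder):
    """Run each planned step through the action engine, stopping on failure.

    request_builder is assumed to turn a step description into a concrete,
    validated action request (for example, by asking the model to fill in
    the step's parameters).
    """
    results = []
    for step in plan_parameters["steps"]:
        request = request_builder(step)
        result = engine.execute(request)
        results.append({"step": step["action"], "result": result})
        if result["action_status"] != "success":
            break  # surface the failure rather than continuing blindly
    return results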
Challenge 2: Handling Action Failures
Always provide meaningful feedback when actions fail:
{
  "action_status": "error",
  "error_type": "AUTHENTICATION_FAILED",
  "error_message": "Could not authenticate with the database service",
  "suggestion": "You can ask the user to provide valid credentials"
}
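In handler code, that means translating known exceptions into this structured shape. A sketch, where AuthenticationError stands in for your service's real exception type:

class AuthenticationError(Exception):
    """Placeholder for your service's real authentication exception."""

def error_feedback(exc):
    # Map known failures to actionable feedback; fall back to a generic error
    if isinstance(exc, AuthenticationError):
        return {
            "action_status": "error",
            "error_type": "AUTHENTICATION_FAILED",
            "error_message": "Could not authenticate with the database service",
            "suggestion": "You can ask the user to provide valid credentials"
        }
    return {
        "action_status": "error",
        "error_type": "UNEXPECTED_ERROR",
        "error_message": str(exc),
        "suggestion": "You can report the failure to the user or retry the action"
    }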
Challenge 3: Action Parameter Complexity
For complex parameters, implement structured validation:
from typing import Dict, List, Optional
from pydantic import BaseModel, EmailStr, Field

class EmailParameters(BaseModel):
    recipient: EmailStr
    subject: str = Field(..., max_length=100)
    body: str
    attachments: Optional[List[Dict[str, str]]] = None
    cc: Optional[List[EmailStr]] = None
    bcc: Optional[List[EmailStr]] = None
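Validation failures then surface as readable errors you can hand back to the model instead of executing a malformed send (note that EmailStr requires Pydantic's optional email-validator dependency):

from pydantic import ValidationError

try:
    params = EmailParameters(recipient="not-an-email", subject="Weekly report", body="Hi team")
except ValidationError as e:
    # Feed the validation errors back to the AI as structured action feedback
    feedback = {"action_status": "error", "error_message": str(e)}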
Real-World Production Implementations
Major AI systems already use variations of action-based approaches:
- OpenAI's Function Calling: Defines functions the model can invoke with structured parameters
- Anthropic's Tool Use: Implements a similar system for tool invocation
- Langchain's Tools: Creates abstracted interfaces for various tools and APIs
- Karo Framework: A flexible agent framework that implements action-based patterns for reliable tool execution across diverse domains
Conclusion
Action-based systems provide a reliable architecture for creating AI agents that don't hallucinate actions. By separating decision-making from execution, we get the best of both worlds: the AI's reasoning capabilities without the risks of hallucination.
This pattern works because it embraces a fundamental truth: AI models are excellent at deciding what to do but should never be trusted to claim they did it. The execution layer provides the ground truth that keeps the entire system reliable.
As you build AI agents, consider implementing this pattern to create more reliable, trustworthy systems. The initial investment in structured actions pays significant dividends in reliability, maintainability, and user trust.

About Rosemary Nwosu-Ihueze
AI agent compliance and audit consultant with hands-on production deployment experience. I help businesses ensure their AI agents meet regulatory requirements and operational standards through comprehensive monitoring, risk assessment, and compliance frameworks. Having built compliance tools and agent frameworks used in production, I bring real-world expertise to the governance challenges of autonomous AI systems.