
Building a Self-Hosted AI Assistant for IT Monitoring

Use Ollama, a local LLM, and a Python script to create an AI assistant that reads your server logs and alerts you in plain English when something goes wrong.


Monitoring dashboards are great - until you have 15 of them and no one is watching. What if your infrastructure could just tell you what’s wrong, in plain language?

That’s exactly what we’ll build: a lightweight AI assistant that reads log files, detects anomalies, and sends you a human-readable summary via webhook.

The stack

  • Ollama: run LLMs locally (we’ll use Mistral 7B)
  • Python: for the glue logic
  • Systemd journal / log files: the data source
  • Discord/Slack webhook: where alerts go

Total cost: $0. Runs on a machine with 8GB RAM.

Step 1: Install Ollama

curl -fsSL https://ollama.com/install.sh | sh
ollama pull mistral

Verify it works:

ollama run mistral "Summarize this in one sentence: Server CPU at 98% for 15 minutes, disk I/O wait at 40%, 3 OOM kills in the last hour."

Step 2: Log collector script

import subprocess
import requests

def get_recent_logs(minutes=30):
    """Pull recent syslog entries from the systemd journal."""
    result = subprocess.run(
        ["journalctl", "--since", f"{minutes} minutes ago", "--no-pager", "-q"],
        capture_output=True, text=True
    )
    return result.stdout[-3000:]  # Last 3000 chars to fit the context window

def ask_ai(log_text):
    """Send logs to the local Ollama instance for analysis."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral",
            "prompt": f"""You are an IT monitoring assistant. Analyze these server logs and:
1. Identify any errors, warnings, or anomalies
2. Rate severity (low/medium/high/critical)
3. Suggest a fix if applicable

Keep your response under 200 words.

LOGS:
{log_text}""",
            "stream": False,
        },
        timeout=120,  # Local inference can be slow on modest hardware
    )
    response.raise_for_status()
    return response.json()["response"]

def send_alert(message):
    """Send the analysis to a Discord webhook."""
    webhook_url = "YOUR_WEBHOOK_URL"
    requests.post(webhook_url, json={"content": f"**Server Report**\n{message}"}, timeout=10)

if __name__ == "__main__":
    logs = get_recent_logs()
    if len(logs.strip()) > 100:  # Only analyze if there's meaningful content
        analysis = ask_ai(logs)
        if any(word in analysis.lower() for word in ["high", "critical", "error", "failure"]):
            send_alert(analysis)
        print(analysis)
    else:
        print("No significant log activity.")

Step 3: Schedule it

Run every 30 minutes with a cron job:

*/30 * * * * /usr/bin/python3 /opt/monitoring/log_analyzer.py >> /var/log/ai-monitor.log 2>&1

Or use a systemd timer for more control over dependencies and logging.
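If you go the systemd route, a minimal pair of units could look like this (the unit names and paths are illustrative; adjust to your layout):

```ini
# /etc/systemd/system/ai-monitor.service
[Unit]
Description=AI log analyzer

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /opt/monitoring/log_analyzer.py

# /etc/systemd/system/ai-monitor.timer
[Unit]
Description=Run AI log analyzer every 30 minutes

[Timer]
OnCalendar=*:0/30
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now ai-monitor.timer`, and the script's output lands in the journal instead of a separate log file.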

What the output looks like

Here’s an actual response from Mistral analyzing a busy log file:

Severity: Medium. Detected 12 failed SSH login attempts from IP 203.0.113.42 in the last 30 minutes. This suggests a brute-force attempt. Additionally, the nginx service restarted twice due to configuration reload errors. Suggested action: add the offending IP to fail2ban’s jail or block it at the firewall level. Check nginx config syntax with nginx -t before reloading.

That’s genuinely useful, and it took zero manual effort.

Limitations and next steps

  • Context window: local models have limited context. Trim logs aggressively or pre-filter for errors.
  • False positives: LLMs can be overly cautious. Tune your prompt or add a severity threshold.
  • Multi-server: extend this with a central log collector (rsyslog, Loki) feeding into the same script.
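On the pre-filtering point: a simple keyword filter (a sketch; tune the keyword list for your environment) can drop routine noise before the logs reach the model, so the 3,000-character budget is spent on lines that actually matter:

```python
def prefilter_logs(raw_logs, keywords=("error", "fail", "warn", "critical", "denied", "oom")):
    """Keep only log lines containing likely-interesting keywords."""
    interesting = [
        line for line in raw_logs.splitlines()
        if any(kw in line.lower() for kw in keywords)
    ]
    return "\n".join(interesting)[-3000:]  # Still cap the total size for the context window

sample = (
    "Jan 01 sshd: Accepted publickey for admin\n"
    "Jan 01 kernel: Out of memory: oom-kill triggered\n"
    "Jan 01 nginx: reload complete"
)
print(prefilter_logs(sample))  # Only the OOM line survives
```

Drop this between `get_recent_logs()` and `ask_ai()` in the script above.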

The point isn’t to replace proper monitoring (Prometheus, Grafana). It’s to add a layer of interpretation that saves you from staring at dashboards all day.

Need help applying this?

Turn this guide into a working setup

Start with a free diagnostic or request a paid audit. We can help you move from article-level advice to a stable implementation plan.

Content -> Audit -> Implementation