Eight years of conversations, commits, and tickets, available in seconds, thanks to AI agents.

The problem was simple to state, yet difficult to solve.
We’re a dev agency with 60+ projects in our portfolio.
Our knowledge lives in Slack, Jira, GitHub, and Notion.
When someone asks “why did we decide on this authentication approach?” or “what did we decide about the onboarding flow?”, the answer exists – but it’s buried across four different tools.
Nobody has time to dig through years of Slack threads, old tickets, and Notion pages to find it.
TL;DR – 1-minute video presentation
So we built a Slack AI bot.
You mention THEYknowIT in Slack, ask a question in plain English or Polish, and it answers – with links to the sources. It searches all four tools simultaneously, understands what you’re really asking, and puts the pieces together.
It took 8 weeks (part-time).
The first working version was built in a single weekend in January 2026.
The core idea – ingest everything, search smart, answer with sources – was the starting point. But the architecture evolved significantly along the way. As a prototype, it almost worked: test runs burned through the budget, the team immediately found edge cases, and it never reached the oldest data.
As a weekend project, great. As a production-ready solution, not even close.
The biggest shift was from static to agentic.
Early on, we ingested everything upfront – Slack messages, Jira tickets, GitHub code, Notion pages – into a vector database and searched against it. That worked for Slack and Notion, but for GitHub and Jira, it was limiting. Code changes constantly. Jira tickets get updated in real time. Pre-indexed snapshots go stale fast.
So we stopped ingesting GitHub and Jira data entirely. Instead, we built AI agents equipped with tools that query these systems live – reading actual code from GitHub, pulling current ticket data from Jira – and compose answers on the fly.
The bot now explores the codebase and project management tool the way a developer would, just faster.
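The live-query approach can be sketched as a set of tools handed to the agent. This is a minimal illustration, not the project’s actual API – the tool names, interfaces, and stubbed fetchers here are assumptions standing in for real GitHub and Jira clients:

```typescript
// Hypothetical sketch: instead of searching a pre-indexed snapshot, the agent
// gets tools that query GitHub and Jira at answer time, so data is never stale.

interface AgentTool {
  name: string;
  description: string;
  run(input: string): Promise<string>;
}

// Stubbed fetchers stand in for real GitHub/Jira API clients.
const readGithubFile: AgentTool = {
  name: "read_github_file",
  description: "Read the current contents of a file from a repo",
  run: async (path) => `// live contents of ${path}`,
};

const getJiraTicket: AgentTool = {
  name: "get_jira_ticket",
  description: "Fetch the up-to-date state of a Jira ticket",
  run: async (key) => `${key}: status=In Progress`,
};

// The agent picks a tool per step and composes the answer from live results.
async function answerWithLiveData(
  tools: AgentTool[],
  toolName: string,
  input: string
): Promise<string> {
  const tool = tools.find((t) => t.name === toolName);
  if (!tool) throw new Error(`Unknown tool: ${toolName}`);
  return tool.run(input);
}
```

The key property is that nothing in this path touches an index: every answer reflects the repo and the ticket as they are right now.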
The routing evolved, too.

At first, we used programmatic filters to detect what the user was asking – pattern matching and rule-based intent classification. It was brittle.
Then we got inspired by oh-my-opencode’s multi-agent architecture and rebuilt the system around a supervisor agent that makes LLM-driven decisions about which sources to query. One supervisor fans out to domain-specific explorers (GitHub, Jira, Slack, Notion) in parallel and synthesises the results.
Simpler than hand-coded routing rules, and far more flexible.
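The supervisor pattern can be sketched in a few lines. The explorer names mirror the four sources; the supervisor’s LLM-driven routing decision is stubbed here as a keyword check – the real system delegates that choice to a model:

```typescript
// Minimal sketch of the supervisor/explorer architecture (illustrative only).

type Explorer = (question: string) => Promise<string>;

// One domain-specific explorer per source.
const explorers: Record<string, Explorer> = {
  github: async (q) => `github findings for "${q}"`,
  jira: async (q) => `jira findings for "${q}"`,
  slack: async (q) => `slack findings for "${q}"`,
  notion: async (q) => `notion findings for "${q}"`,
};

// Stand-in for the LLM routing call: decide which explorers are relevant.
function pickSources(question: string): string[] {
  const picked = Object.keys(explorers).filter((s) =>
    question.toLowerCase().includes(s)
  );
  return picked.length > 0 ? picked : Object.keys(explorers); // default: all
}

async function supervise(question: string): Promise<string> {
  const sources = pickSources(question);
  // Fan out to the chosen explorers in parallel, then synthesise the results.
  const results = await Promise.all(sources.map((s) => explorers[s](question)));
  return results.join("\n");
}
```

Replacing hand-written routing rules with one routing decision plus a parallel fan-out is what keeps the design both simpler and more flexible: adding a fifth source is just another entry in the explorer map.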
We cut AI costs by 93% by switching from premium models to smaller ones that perform just as well for our use case. The bot now costs under a cent per question.
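The cost lever is simply routing to a cheaper tier. A hedged sketch – the model names and per-token prices below are illustrative placeholders, not the project’s actual choices:

```typescript
// Illustrative model-tier config: routine questions go to a small model,
// a premium one stays available for hard cases. Prices are made up.

interface ModelTier {
  model: string;
  costPer1kTokensUsd: number;
}

const tiers: Record<"small" | "premium", ModelTier> = {
  small: { model: "small-model", costPer1kTokensUsd: 0.0002 },
  premium: { model: "premium-model", costPer1kTokensUsd: 0.01 },
};

// Estimate the cost of one question for a given tier and token budget.
function questionCost(tier: ModelTier, tokens: number): number {
  return (tokens / 1000) * tier.costPer1kTokensUsd;
}
```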
Where it stands today.
52,000+ searchable chunks of company knowledge.
30 projects configured with smart routing – ask about Project X and it only searches Project X’s channels, repos, and tickets. It reads live code from GitHub, pulls real-time data from Jira, and auto-generates Excel exports when the answer is a big dataset.
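The per-project routing boils down to a scope map: each project owns its channels, repos, and ticket projects, and a question tagged with a project only touches that scope. A sketch with assumed field names and made-up project entries:

```typescript
// Illustrative per-project scope config (names and fields are assumptions).

interface ProjectScope {
  slackChannels: string[];
  githubRepos: string[];
  jiraProjects: string[];
}

const projects: Record<string, ProjectScope> = {
  "project-x": {
    slackChannels: ["#project-x-dev"],
    githubRepos: ["org/project-x"],
    jiraProjects: ["PX"],
  },
  "project-y": {
    slackChannels: ["#project-y"],
    githubRepos: ["org/project-y-api", "org/project-y-web"],
    jiraProjects: ["PY"],
  },
};

// Resolve the search scope for a question about a given project.
function scopeFor(project: string): ProjectScope {
  const scope = projects[project];
  if (!scope) throw new Error(`Unknown project: ${project}`);
  return scope;
}
```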
The infrastructure runs on our own secure AWS account for about $72/month.
The whole thing is 37,000 lines of TypeScript, 571 tests, and 194 commits.
What has changed?
Instead of “I think someone discussed this in Slack last year”, people get an actual answer with a link to the exact message.
We don’t search in Jira manually anymore. We ask the bot and get a structured breakdown with a summary.
Instead of asking a developer to explain how something works in the codebase, the bot reads the code and explains it.