Today I want to talk about agentic AI and databases — specifically, why agents need strict guardrails, especially for memory.
The Knowledge Graph Trap
The first thing people usually do — and the internet is full of these guides — is build a knowledge graph with an LLM. It doesn’t matter which language you use: you feed in messages without any kind of ontology and just start collecting data. The results look quite good at the beginning. But after a couple of weeks of usage, you realize that your data is completely random and looks more and more like garbage.
This was our observation at King when we started building memory. We realized that we need an ontology, because without one it simply doesn’t work. But we need the ontology in the form of guardrails and constraints that define what is possible in a memory. And from the other side, we need it to express not just what’s possible, but what really matters.
The Limits of the Semantic Stack
Today we have OWL, SHACL, and everything else in the RDF world that addresses this problem. But the existing tooling still doesn’t give you all the capabilities to construct something truly complex while keeping it strict.
I’ve been using OWL for a long time. It has amazing features: rules, description logic, reasoning, and much more — things that property graphs unfortunately lack. But it’s still hard to express super strict guardrails. Sometimes you need to combine different tools, for example SHACL and OWL, to achieve what you need. I know this borders on religion: every time you tell the RDF folks that the semantic stack can’t do something, they will tell you that you’re using it wrong, and that with the newer version of RDF you can do everything. I completely agree — you can. The problem is that it gets clunky and complex. And if it’s clunky and complex for people, it will likely be clunky and complex for LLMs as well.
Types as the Foundation
How do we combine strict guardrails with the ability to express what’s important? In software — especially in more advanced languages — we have types. We design classes, hierarchies, inheritance, and other structures. We express our logic and constraints as types.
If you take this to its extreme, you end up in Haskell, Rust, or OCaml — languages with extremely powerful type systems. But you can go even further. There are languages that allow you to blend types with values, encoding logic directly at the type level. You can create a type that specifies a list of exactly three items, or describe logical constraints directly within the type itself. These languages support dependent types — types that can depend on values.
The mathematics behind dependent types, especially cubical type theory, forces you to think like a mathematician. But here’s the key insight: the types we use in programming actually came from a different branch of mathematics. They came from logic. Types are logical constructs, born from a revolution in mathematics: Russell’s paradox showed that naive sets are not enough to describe our world, and type theory was introduced to repair the foundations. Types are the form of logic that changed the fundamentals of mathematics. And if they can describe the world around us, they can describe what matters for agents’ memories and LLMs — because types are flexible enough.
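The types-as-logic point has a precise form in the Curry–Howard correspondence: a proposition is a type, and a proof is a program inhabiting that type. A minimal Lean 4 sketch (the theorem name is my own):

```lean
-- Curry–Howard in one line: the proposition "p → q → p" is a type,
-- and this function is a program inhabiting it, i.e. a proof.
theorem weaken (p q : Prop) : p → q → p :=
  fun hp _ => hp
```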
Stricter Constraints, Better Results
With better constraints, LLMs work significantly better. The stricter the constraints, the better the guardrails. We already see this in the vibe coding space: if you have a strict language with static types, and especially one that forces you to do things in one particular way, it consumes fewer tokens and produces results much faster. Rust is much better for vibe coding than JavaScript — or at least much cheaper — and it keeps getting better. There were initial problems with Rust due to limited training data, but modern coding agents handle it extremely well now.
Beyond the Object-Relational Impedance
If types are the answer, how do we bring them to our data storage? We could build a layer on top of our database in software, but then we face the classical headache of object-relational impedance mismatch. As developers, we use ORM tools to bridge this gap, encoding part of the constraints in the database and another part in code. Synchronizing all of this is extremely hard.
But what if we could use the same types — even functions and programmability — directly inside our databases? I’m not talking about document databases that attempted to cross this border by sacrificing schemas, making the integration friendlier for languages like JavaScript. I’m talking about a database that has held my attention for more than two years: TypeDB.
TypeDB: Types at the Core
TypeDB takes dependent type theory seriously and makes it the core of the database. Everything is typed. You have strict type definitions that define the schema. Your query is another form of type. Functions are typed. It’s a kind of dream: the concepts you’ve used in programming for decades now live inside your database.
More importantly, you get extremely strict, understandable, and well-defined constraints. You can finally say precisely what matters and how it should look.
Even with property graph databases like LadybugDB — which I genuinely like — this isn’t always possible. In LadybugDB and similar property graphs, you can describe the shape of a node with all its properties and define which connections between nodes are allowed. But you have no way to describe the shape of a subgraph. You can say that a parent and a child can have a relation, but you cannot say that a child should have exactly two parents. You would need a validation query, pushing that constraint to the application layer. In TypeDB, you can design a type with a special kind of relation that carries particular rules and enforces precisely this constraint.
PERA: A Developer-Friendly Paradigm
TypeDB was intimidating to me at first because it uses TypeQL — a completely different query language and an entirely new paradigm. But this paradigm is genuinely good. Beyond strictness, it doesn’t force you to learn advanced mathematics. It gives you a fairly simple concept called PERA: Polymorphic Entity-Relation-Attributes. It’s a visual and intuitive modeling framework that makes designing your data quite straightforward.
The real beauty is that you can feed this to an LLM. I’ve already tried it several times: teach the LLM TypeDB, use the TypeDB constraints as extraction rules, and you don’t even need a special evaluation framework — because the database itself validates the data you try to store. If something goes wrong, you get an error and can feed it back to the LLM. So alongside the construction rules, you also get an evaluation engine. For developers, this feels remarkably close to what you already know.
From LadybugDB to TypeDB
What I did was take my LadybugDB book and bring it to the next level. I took all my concepts — Semantic Spacetime, metagraphs, and everything else — and encoded them in TypeDB. It works. But there’s an important observation: TypeDB is a more closed-world database. It’s stricter and forces you to think in advance about what’s possible. Property graphs, with their weaker constraints, are more friendly to open-world ideas and more easily extendable. When you want to extend TypeDB, you need to reason about what your current types allow and extend them deliberately.
But at the same time, as I’ve said, stricter constraints and closed-world design provide better guardrails for your agents.
The Book
I wrote a book about TypeDB where I express all this work — a memory architecture that explains how to use these concepts with stricter guardrails for agentic AI. If you want to learn something new, and if you want better guardrails for your agents, give it a try.