I’m lucky enough to work on a subject I truly love, something so old in the IT world yet always surprising in how modern and relevant it still is: databases. At the end of the day, a database is nothing more than a way to read and write information efficiently. I feel like I’ve been dealing with databases forever, ever since I built my first commercial DBMS based on SQLite (https://en.wikipedia.org/wiki/REAL_Server), sometime around 2002 if I recall correctly. I still remember when a single instance managed to serve millions of monthly requests while using only a few dozen MB of memory, and it never crashed once.
Over the years, a clear trend has emerged, one that I still watch closely: for every new problem, a new database seems to be invented. The space is incredibly fragmented; there’s always a vendor claiming to have just the right database for each specific challenge. This fragmentation has only accelerated with the rise of AI, bringing new requirements like column-oriented processing and vector search. I have a strong opinion about this: I believe a single database could efficiently solve 90% of use cases. But maybe I’m a little too biased to say that out loud.
In any case, there’s one issue I think should already be behind us: choosing one database over another. Or rather, it would be behind us if we really wanted to push for a technological leap, a true database of the future. Today, the market is neatly split between cloud-based databases and embedded databases running on devices, on the Edge. It’s as if these were fundamentally different technologies, as if under the hood, they weren’t essentially the same thing. But they are.
Anything you can solve with a Cloud database, you can solve with an Edge database, except for one big difference: in the Cloud there will always be one unavoidable inefficiency, no matter how advanced the technology gets: the speed of light. If my request starts in Milan and my server is halfway across the world, the signal still takes a finite time to travel, and that latency, dictated by physics, can't be eliminated. It doesn't matter how efficient the database is or how powerful the server is: a round trip from Milan to London still costs on the order of 20–30 ms on real networks, or worse, 100–120 ms to New York, with the speed of light setting the floor underneath.
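To make the physics concrete, here's a quick back-of-the-envelope calculation. The distances are approximate great-circle figures and the fiber factor is a typical value, so treat the output as a rough lower bound; real networks add routing, queuing, and processing time on top of it:

```python
# Back-of-the-envelope lower bound on network latency imposed by physics.
# Distances are approximate great-circle figures; real fiber paths are longer,
# and light in optical fiber travels at roughly 2/3 of its speed in vacuum.

C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 0.67       # typical propagation speed in fiber, relative to vacuum

ROUTES_KM = {
    "Milan -> London": 1_150,
    "Milan -> New York": 6_450,
}

for route, km in ROUTES_KM.items():
    one_way_ms = km / (C_VACUUM_KM_S * FIBER_FACTOR) * 1_000
    round_trip_ms = 2 * one_way_ms
    print(f"{route}: ~{one_way_ms:.1f} ms one way, ~{round_trip_ms:.1f} ms round trip (floor)")
```

Even this idealized floor is milliseconds per round trip; the observed 20–30 ms and 100–120 ms figures are what's left after routing and server overhead pile onto it.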
Over the years, different strategies have been introduced to mitigate this problem. The most obvious one is replicating the same database in multiple regions, ensuring each request is served from the nearest node. But this raises other challenges: guaranteeing consistent reads across all nodes, and managing the complexity of keeping such a system running. My startup has solved many of these issues, but there’s still one challenge that no architecture like this can fully overcome.
What happens if there’s no network? Or if the connection is too slow? Or if my device, robot, or drone can’t afford to wait even 20 ms for an answer, because it needs to process something in real time? In those cases, the Cloud isn’t an option. There’s no way to send a request to some server and expect an instant response.
Soon, our homes will be full of smart devices and domestic robots. But what if the WiFi goes down? Do the robots fall down the stairs because the server didn’t respond? Or worse, does our smart alarm system fail because its camera couldn’t recognize that the face it saw belonged to a family member? That would be completely unacceptable.
The answer lies in using the technology we already have, combined with a shift in mindset: local-first should be the default standard for building apps, apps that aren't disrupted by network issues, airplane mode, or any other variable we can't control. If we want to deliver truly great user experiences, the database must be local. Requests should have zero latency, and the database should support the new features modern applications demand: AI, vector search, federated learning, offline sync, and embeddable logic.
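To give a concrete sense of what "local" means in practice, here's a minimal sketch using Python's built-in sqlite3 module. The table and data are purely illustrative; the point is that the query never leaves the device, so its cost is measured in microseconds rather than network round trips:

```python
# Minimal sketch: a purely local read path with SQLite (Python's built-in
# sqlite3 module). The schema and data are illustrative; the query runs
# entirely on the device, so there is no network latency to pay.

import sqlite3
import time

conn = sqlite3.connect("local_app.db")
conn.execute("CREATE TABLE IF NOT EXISTS sensor_readings (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO sensor_readings (value) VALUES (?)",
                 [(i * 0.1,) for i in range(1_000)])
conn.commit()

start = time.perf_counter()
row = conn.execute("SELECT AVG(value) FROM sensor_readings").fetchone()
elapsed_us = (time.perf_counter() - start) * 1_000_000
print(f"avg = {row[0]:.2f}, served locally in ~{elapsed_us:.0f} µs")
conn.close()
```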
All of this can be done today with SQLite, enhanced by a set of extensions built to tackle these new challenges. The database must work at zero latency, in microseconds, and at the same time synchronize intelligently with the Cloud, only when the network allows it, automatically resolving the conflicts caused by concurrent transactions.
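To illustrate the idea, and only the idea, here is a deliberately naive sketch of a local-first write path: local writes are journaled in a change-log table and pushed to the Cloud only when the network is available, with a simplistic last-write-wins rule. This is not the sync engine discussed in this article; every table, function, and callback name below is hypothetical:

```python
# Toy local-first sync sketch (NOT a real sync engine): writes land locally
# first, are journaled in pending_changes, and are pushed upstream only when
# the network allows it. Conflicts are resolved naively by last-write-wins
# on a timestamp; a production engine would do far better.

import sqlite3
import time

conn = sqlite3.connect("local_app.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS notes (
    id INTEGER PRIMARY KEY,
    body TEXT,
    updated_at REAL
);
CREATE TABLE IF NOT EXISTS pending_changes (
    note_id INTEGER,
    body TEXT,
    updated_at REAL
);
""")

def write_locally(note_id: int, body: str) -> None:
    """Apply the write locally (zero latency) and journal it for later sync."""
    now = time.time()
    conn.execute("INSERT OR REPLACE INTO notes VALUES (?, ?, ?)", (note_id, body, now))
    conn.execute("INSERT INTO pending_changes VALUES (?, ?, ?)", (note_id, body, now))
    conn.commit()

def sync(push_to_cloud, network_available: bool) -> None:
    """Push journaled changes when the network allows it; last write wins remotely."""
    if not network_available:
        return  # stay fully functional offline
    for note_id, body, updated_at in conn.execute(
            "SELECT note_id, body, updated_at FROM pending_changes ORDER BY updated_at"):
        push_to_cloud(note_id, body, updated_at)  # remote side keeps the newest timestamp
    conn.execute("DELETE FROM pending_changes")
    conn.commit()

# Example: write while offline, then sync once connectivity returns.
write_locally(1, "draft written on a plane")
sync(push_to_cloud=lambda *change: print("pushed:", change), network_available=True)
```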
SQLite is the most widely deployed database engine in the world, probably also the most rigorously tested, and one of the very few projects with guaranteed long-term support for the next 25 years, through 2050. I'd even add that it's one of the most underrated pieces of software: too often dismissed as "just" an embedded database, when in reality it's incredibly efficient as a server-side database too.
I may sound biased, since my startup is dedicated to these very problems and is all-in on SQLite, but I truly believe that, as of 2025, it makes no sense to keep choosing between the Cloud and the Edge. We don't need two separate solutions for the same problem. What we need is a unified solution that combines Edge and Cloud, delivered by a single vendor, with a single set of extensions and smart features that can handle every use case. At the end of the day, you need an efficient way to handle data; everything else, like the database engine, the transport layer, or the sync algorithm, is just an implementation detail.
After 25 years (SQLite was first released in 2000), it’s time to say “welcome back” to SQLite, because the next 25 years will be remembered as the era that opened up a new market, one where the speed of light is no longer a limitation but simply a measure of how fast our new solutions can be.
I’m the founder of SQLite AI, and our mission is to make local-first sync with SQLite and AI on the Edge straightforward and reliable. If you’d like to experiment with a production-ready sync engine, the same kind of technology discussed in this article, you can explore it at https://www.sqlite.ai/sqlite-sync
