This was basically how Apple's Aperture worked way back in the day (before it was discontinued). The database structure was represented as one monolithic file in Finder (the file manager). If you peeked behind the scenes (with a terminal), it was actually a directory full of renamed file blobs, but Aperture kept tight control over it and treated it like a sealed-off box. (Very Apple-like, right?) At least there was a way for people to extract images with a bit of know-how. (Lightroom later added import support to make the task easier.)
However, all the other raw editors (FOSS and proprietary) keep things on disk, either in the location where they already exist or, depending on the app, with the option to move or copy the files on import. Some have local databases to speed things up, while others just work directly on files or smaller directories.
As far as darktable is concerned, it has a local database that serves as a working cache so it doesn't have to read every XMP sidecar file all the time, as that just does not scale, especially on spinning disks (which are still necessary for larger photo collections). The database makes huge libraries workable and lets you search through all the metadata pretty quickly. Doing that by scanning the XMPs would not work for something like a 4+ TB collection (which is what I have, spanning back to 1999).
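To make the "working cache" idea concrete, here is a toy sketch (not darktable's actual schema, and `library-cache.db` is just a hypothetical file name) of why a local SQLite index beats re-reading sidecars: metadata parsed once from the XMPs sits in an indexed table, so a search touches the database instead of tens of thousands of small files.

```python
# Toy illustration of a sidecar metadata cache in SQLite.
# This is NOT darktable's real schema; it only shows the principle.
import sqlite3

con = sqlite3.connect("library-cache.db")  # hypothetical cache file
con.execute("""
    CREATE TABLE IF NOT EXISTS images (
        path   TEXT PRIMARY KEY,   -- raw file on disk
        rating INTEGER,            -- e.g. xmp:Rating from the sidecar
        tags   TEXT                -- flattened keyword list
    )
""")
con.execute("CREATE INDEX IF NOT EXISTS idx_rating ON images(rating)")

# Answering this over a 100k-row table is near-instant; opening 100k
# XMP files on a spinning disk to answer the same question is not.
rows = con.execute(
    "SELECT path FROM images WHERE rating >= ? AND tags LIKE ?",
    (4, "%landscape%"),
).fetchall()
```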
It’s possible to recreate darktable’s library database by re-importing the images and their XMPs. I’ve done this before (and it takes hours for me), but it means database corruption isn’t a disaster: we can always rebuild, as long as the edits and metadata are saved back out to the XMP sidecars, which is enabled by default. In other words, a corrupted database doesn’t cost us images or even metadata.
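As a rough sketch of that "rebuild from sidecars" idea (the `rebuild` function, paths, and database file here are all hypothetical, and darktable's own re-import does far more than this), the cache can be thrown away and repopulated by walking the photo tree and re-reading each `.xmp` file; the only XMP field pulled out below is the standard `xmp:Rating` attribute.

```python
# Hedged sketch: rebuild a metadata cache from XMP sidecar files.
# The database is disposable as long as the sidecars survive.
import os
import sqlite3
import xml.etree.ElementTree as ET

XMP_NS = "{http://ns.adobe.com/xap/1.0/}"  # standard XMP namespace

def rebuild(photo_root: str, db_path: str) -> None:
    con = sqlite3.connect(db_path)
    con.execute("DROP TABLE IF EXISTS images")
    con.execute("CREATE TABLE images (path TEXT PRIMARY KEY, rating INTEGER)")
    for dirpath, _dirs, files in os.walk(photo_root):
        for name in files:
            if not name.lower().endswith(".xmp"):
                continue
            sidecar = os.path.join(dirpath, name)
            try:
                root = ET.parse(sidecar).getroot()
            except ET.ParseError:
                continue  # skip unreadable sidecars rather than abort
            rating = None
            for elem in root.iter():
                if XMP_NS + "Rating" in elem.attrib:
                    rating = int(elem.attrib[XMP_NS + "Rating"])
                    break
            # sidecars are typically "IMG_1234.CR2.xmp"; drop the ".xmp"
            con.execute(
                "INSERT OR REPLACE INTO images VALUES (?, ?)",
                (sidecar[:-4], rating),
            )
    con.commit()
    con.close()

# rebuild("/photos", "library-cache.db")  # hypothetical paths
```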
I’m happy there’s a way to use darktable in both modes (transient file browser and library), so we can all use it the way we want.