MongoDB has a construct called Capped Collections: fixed-size tables that hold a maximum number of records, evicting the oldest records to make way for new ones.
It turns out that Postgres (or other SQL databases) can accomplish this as well, and more easily than you might think.
First, let’s create our table. We are going to have an id column and a data column, but you can have whatever columns you need for your particular situation.
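A minimal sketch of the table, assuming the names capped_collection, id, and data are illustrative:

```sql
-- A simple two-column table; add whatever columns your data needs.
CREATE TABLE capped_collection (
  id   integer PRIMARY KEY,
  data text
);
```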
Now, we create a sequence for our primary key, set the MAXVALUE to be the desired size of our capped collection, and we use the CYCLE option, so that the sequence will count up to MAXVALUE and then restart again at 1.
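A sketch of the sequence, assuming the example table above and an illustrative cap of 1000 rows:

```sql
-- MAXVALUE is the cap: the sequence counts 1..1000, then CYCLE
-- restarts it at MINVALUE (1), wrapping around the table.
CREATE SEQUENCE capped_collection_id_seq
  START 1
  INCREMENT 1
  MINVALUE 1
  MAXVALUE 1000
  CYCLE;
```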
In the next step (not shown here), you should pre-seed the table with MAXVALUE records. We do this so that every id value already has a row, and we never have to decide between an INSERT and an UPDATE on each write, which keeps things simple and performant.
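The pre-seeding step could look something like this, assuming the example table and a cap of 1000:

```sql
-- Create one empty row per id so every later write is an UPDATE.
INSERT INTO capped_collection (id, data)
SELECT seq, NULL
FROM generate_series(1, 1000) AS seq;
```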
Now that our table is set up, whenever we want to INSERT a new row, we instead do an UPDATE, like so:
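A sketch of the write, assuming the example table and sequence names from above:

```sql
-- nextval advances the cycling sequence, so this overwrites
-- the oldest slot once the cap has been reached.
UPDATE capped_collection
SET data = 'my new data'
WHERE id = nextval('capped_collection_id_seq');
```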
The nextval function gets us the next id, and because we specified CYCLE, it will wrap back around to 1 once it reaches MAXVALUE. The end result is that you will only ever have MAXVALUE rows in the table, with the oldest entries being aged out.
So go forth, and cap those collections!