Blank Notebook

Experiences, Thoughts & Reviews

10 November 2025

Aerospike: My 'Aha!' moments - A record which is too big

by ramesh ramalingam

In my experience with NoSQL databases, record size was rarely a topic of discussion. Most NoSQL databases allow storing reasonably large records; even though we store denormalized data and indexes, there wasn't much hassle (though there could be some performance impact as the size grows). But in Aerospike, the default maximum size of a record is 1 MB. Yes, it is. If you cross that line, you will get the “Error Code 13: Record Too Big” error.

Though you can change this limit by configuring the max-record-size parameter in the Aerospike configuration file, increasing it has cascading effects on performance. For example, larger records take more disk I/O during replication. So it is good to stick to the default as much as possible.
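For reference, the parameter sits in the namespace context of the configuration file. A minimal sketch (the namespace name and the value are illustrative, not a recommendation):

```
namespace test {
    # default is 1M; raising it trades bigger records for heavier disk I/O
    max-record-size 2M
    ...
}
```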

My experience

When we use NoSQL DBs as the primary data store, there will be scenarios where we need to store larger records. In our case, we needed to store indexes between multiple domain objects to support our querying needs. Let's take the example of a grocery store application that manages multiple outlets. At the end of the day, we need a report of the sales done in every store.

To do this we may need to create indexes (secondary indexes in some DBs) that keep the mapping between store and sales. But in most cases, especially when the volume of sales is huge, this will impact performance. So it is better to build our own indexing mechanism that stores day-wise sales for each store. Please find a sample index below.

{
  "PrimaryKey": "Store:[StoreID]_Date:[Date]",
  "sales": {
    "SaleID1": { "item": "Item1", "quantity": 2, "amount": 20 },
    "SaleID2": { "item": "Item2", "quantity": 1, "amount": 15 },
    ...
  }
}
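A minimal sketch of how such an index record could be assembled before writing it with a client put call (the key format follows the sample above; the helper names are my own, not from any Aerospike client):

```python
from datetime import date

def index_key(store_id: str, day: date) -> str:
    """Build the composite primary key for a store's daily sales index."""
    return f"Store:{store_id}_Date:{day.isoformat()}"

def add_sale(index: dict, sale_id: str, item: str, quantity: int, amount: int) -> None:
    """Record one sale entry under the 'sales' map of the index record."""
    index.setdefault("sales", {})[sale_id] = {
        "item": item,
        "quantity": quantity,
        "amount": amount,
    }

# Assemble one day's index for a hypothetical store "S42".
idx = {"PrimaryKey": index_key("S42", date(2025, 11, 10))}
add_sale(idx, "SaleID1", "Item1", 2, 20)
add_sale(idx, "SaleID2", "Item2", 1, 15)
```

Keeping all of a busy store's sales in this single map is exactly what eventually runs into the 1 MB limit.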

When I was trying to build such indexes, I faced the “Error Code 13: Record Too Big” error multiple times. To solve this problem, I had to rethink my indexing strategy. Instead of storing all sales for a store in a single record, I decided to store them in chunks. I could have split them using natural dividers like per hour or per category, but some chunks risk growing bigger than expected, especially during peak hours.

So I decided to split them based on a fixed size. Aerospike has a suggested approach called “Adaptive Map” to address this issue. You can find their sample implementation here. But we implemented a different flavour of it using “filter expressions” for some internal reasons.
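To illustrate fixed-size splitting (this is not Aerospike's Adaptive Map; the key scheme and capacity are assumptions of mine), each sale can be routed to a chunk record using a running per-store counter:

```python
# Max sales per chunk; tune so one chunk stays well under the record size limit.
CHUNK_CAPACITY = 1000

def chunk_key(store_id: str, day: str, sale_counter: int) -> str:
    """Route the n-th sale of the day to a fixed-size chunk record."""
    chunk_no = sale_counter // CHUNK_CAPACITY
    return f"Store:{store_id}_Date:{day}_Chunk:{chunk_no}"
```

To assemble the end-of-day report, the reader fetches chunk 0, 1, 2, … for the store until a key is missing, then merges the sales maps.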

Read here why Aerospike has the max size limit for record.

All articles in this series

  1. Which storage options to choose?
  2. A record which is too big
  3. Is your key too hot?
  4. The predictable Primary Index
  5. Resurrection of the record
tags: aerospike - storage - record - size - chunk