Googling about CouchDB and size limits results in everyone saying that it's virtually unlimited. This might be true of CouchDB itself, but not of ext3. I recently hit ext3's maximum file size of 2 TB with a CouchDB database (luckily just for an internal system). The result was that CouchDB crashed, with the following error (note the `efbig`, the errno for "File too large"), every time the 2 TB database was accessed in any way.
[error] [<0.84.0>] ** Generic server couch_server terminating
** Last message in was {'EXIT',<0.416.0>,
{{badmatch,{error,efbig}},
[{couch_db_updater,'-flush_trees/3-fun-0-',5},
{couch_key_tree,map_simple,3},
{couch_key_tree,map_simple,3},
{couch_key_tree,map,2},
{couch_db_updater,flush_trees,3},
{couch_db_updater,update_docs_int,5},
{couch_db_updater,handle_info,2},
{gen_server,handle_msg,5}]}}
** When Server state == {server,"/data/couchdb-data",
{re_pattern,0,0,
<<69,82,67,80,124,0,0,0,16,0,0,0,1,0,0,0,0,0,
0,0,0,0,0,0,48,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,93,0,72,25,77,0,0,0,0,0,0,
0,0,0,0,0,0,254,255,255,7,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,77,0,0,0,0,16,171,255,3,0,0,0,
128,254,255,255,7,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,69,26,84,0,72,0>>},
100,3,"Tue, 13 Sep 2011 13:14:02 GMT"}
** Reason for termination ==
** kill
For me the fix just meant moving the database from the ext3 partition on the server to a larger XFS partition instead. Had I thought about this up front, I would have chosen ext4 instead of ext3, which has a file size limit of up to 16 TB (depending on how you configure it). XFS, on the other hand, excels with a file size limit of 8 exabytes (that's about 8 million terabytes, for those who wonder).
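To put those limits in concrete numbers, here is a quick back-of-the-envelope sketch in shell arithmetic (the 2 TiB figure assumes ext3 with 4 KiB blocks; ext3's cap varies with block size):

```shell
# ext3 with 4 KiB blocks caps a single file at 2 TiB;
# XFS allows 8 EiB per file, i.e. about 8 million TiB.
ext3_limit_bytes=$(( 2 * 1024 * 1024 * 1024 * 1024 ))  # 2 TiB in bytes
xfs_limit_tib=$(( 8 * 1024 * 1024 ))                   # 8 EiB expressed in TiB
echo "ext3 max file size: ${ext3_limit_bytes} bytes"
echo "XFS  max file size: ${xfs_limit_tib} TiB"
```

Since CouchDB keeps each database (including attachments) in a single append-only file, that per-file limit is effectively the database size limit.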
Update: The database is still going strong, with 23.5 million documents in it, currently using 15.4 TB (mostly attachments).
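If you want a heads-up before a limit like this bites, one cheap option is a cron-able check that compares the database file's size against a threshold. This is just a sketch: `check_db_size` is a hypothetical helper of my own (not part of CouchDB), the path is taken from the server state in the log above, and `stat -c %s` is the GNU coreutils form.

```shell
# Sketch: warn when a database file approaches a size threshold.
# check_db_size is a made-up helper, not a CouchDB tool.
check_db_size() {
  local file=$1 threshold=$2
  local size
  size=$(stat -c %s "$file")   # GNU stat; on BSD/macOS this would be: stat -f %z
  if [ "$size" -ge "$threshold" ]; then
    echo "WARN: $file is $size bytes (limit $threshold)"
  else
    echo "OK: $file is $size bytes"
  fi
}

# Example: warn when a database passes the ext3 2 TiB ceiling
# (adjust the path to your own .couch file):
# check_db_size /data/couchdb-data/mydb.couch $(( 2 * 1024 * 1024 * 1024 * 1024 ))
```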