In the world of NoSQL


I increased the number of map task slots in Hadoop to 64 per TaskTracker, and the TaskTracker started to crash every time I launched a MapReduce job.

 

Errors were:

java.lang.OutOfMemoryError: unable to create new native thread

And:

org.apache.hadoop.mapred.DefaultTaskController: Unexpected error launching task JVM java.io.IOException: Cannot run program "bash" (in directory "/data/1/mapred/local/taskTracker/hdfs/jobcache/job_201110201642_0001/attempt_201110201642_0001_m_000031_0/work"): error=11, Resource temporarily unavailable.

 

Googling for this problem turned up the following suggested solutions:

  1. Increase the heap size for the TaskTracker. I did this by changing HADOOP_HEAPSIZE to 4096 in /etc/hadoop/conf/hadoop-env.sh as a test.  This did not solve it.
  2. Increase the heap size for the spawned child JVMs by adding -Xmx1024 to mapred.map.child.java.opts in mapred-site.xml.  This did not solve it.
  3. Make sure that the limit of open files is not reached. I had already done this by adding "mapred - nofile 65536" to /etc/security/limits.conf.  This did not solve it.

I decided to sudo to the mapred user and check the ulimits again. What stood out as off was:

max user processes              (-u) 1024
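For reference, the check itself is just something like this (assuming the mapred account allows a shell):

sudo su - mapred    # become the mapred user
ulimit -a           # list all limits; "max user processes" is the interesting one here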

 

Adding the following line to /etc/security/limits.conf and restarting the TaskTracker solved it:

mapred - nproc 8192

 

Apparently CentOS limits the number of processes for regular users to 1024 by default.
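If you are on CentOS 6, that default comes from a drop-in file rather than from limits.conf itself, which makes it easy to miss (contents roughly as below):

cat /etc/security/limits.d/90-nproc.conf
*          soft    nproc     1024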


§50 · October 20, 2011 · Hadoop · Comments closed on Hadoop TaskTracker java.lang.OutOfMemoryError



Googling about CouchDB and size limits, everyone seems to say that it's virtually unlimited.  This might be true of CouchDB itself, but it's not true of ext3.  I recently hit ext3's maximum file size of 2 TB with a CouchDB database (luckily just an internal system).  The result was that CouchDB crashed (with the following error) every time the 2 TB database was accessed in any way.

[error] [<0.84.0>] ** Generic server couch_server terminating
** Last message in was {'EXIT',<0.416.0>,
{{badmatch,{error,efbig}},
[{couch_db_updater,'-flush_trees/3-fun-0-',5},
{couch_key_tree,map_simple,3},
{couch_key_tree,map_simple,3},
{couch_key_tree,map,2},
{couch_db_updater,flush_trees,3},
{couch_db_updater,update_docs_int,5},
{couch_db_updater,handle_info,2},
{gen_server,handle_msg,5}]}}
** When Server state == {server,"/data/couchdb-data",
{re_pattern,0,0,
<<69,82,67,80,124,0,0,0,16,0,0,0,1,0,0,0,0,0,
0,0,0,0,0,0,48,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,93,0,72,25,77,0,0,0,0,0,0,
0,0,0,0,0,0,254,255,255,7,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,77,0,0,0,0,16,171,255,3,0,0,0,
128,254,255,255,7,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,69,26,84,0,72,0>>},
100,3,”Tue, 13 Sep 2011 13:14:02 GMT”}
** Reason for termination ==
** kill

For me this just meant moving the database from an ext3 partition on the server to a larger XFS partition instead.  If I had thought about this up front I would have chosen ext4 instead of ext3, which has a maximum file size (depending on how you configure it) of 16 TB.  XFS, on the other hand, excels with a size limit of 8 exabytes (an exabyte is a million terabytes, for those who wonder).
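A quick way to check what you are dealing with before the limit bites (the paths are placeholders for wherever your .couch files live):

df -T /data/couchdb-data                  # shows the filesystem type (ext3, ext4, xfs, ...)
ls -lh /data/couchdb-data/dbname.couch    # shows how close the file is to ext3's 2 TB ceiling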

Update: The database is still going strong, with 23.5 million documents in it, and is currently using 15.4 TB (mostly attachments).


§38 · September 15, 2011 · CouchDB, Filesystem · Comments closed on Couchdb maximum database size



Recently I have been playing around with HBase for a project that will need to store billions of rows (long scale), with a column count varying from 1 to 1 million per row.  The test data (13.3 million rows, 130.8 million columns) took 27 GB of storage without compression.  After activating compression it only took 6.6 GB.

I followed some guides on the net on how to activate LZO (which can't be bundled by default due to its license terms), but all the ones I tried had minor faults in them (probably due to version differences).

Anyhow, this is how I did it (assuming Debian or Ubuntu):

apt-get install liblzo2-dev sun-java6-jdk ant
svn checkout http://svn.codespot.com/a/apache-extras.org/hadoop-gpl-compression/trunk/ hadoop-gpl-compression
cd hadoop-gpl-compression
export CFLAGS="-m64"
export JAVA_HOME=/usr/lib/jvm/java6-sun/
export HBASE_HOME=/path/to/hbase/
ant compile-native
ant jar
cp build/hadoop-gpl-compression-*.jar $HBASE_HOME/lib/
cp build/native/Linux-amd64-64/lib/* /usr/local/lib/
echo "export HBASE_LIBRARY_PATH=/usr/local/lib/" >> $HBASE_HOME/conf/hbase-env.sh
mkdir -p $HBASE_HOME/build
cp -r build/native $HBASE_HOME/build/native

Then verify that it works with:

cd $HBASE_HOME
./bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/testfile lzo
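Once the test passes, LZO can be requested per column family when creating a table from the HBase shell; the table and family names below are just examples:

./bin/hbase shell
create 'testtable', {NAME => 'cf', COMPRESSION => 'LZO'}

For an existing table the same COMPRESSION setting can be applied to a column family with disable/alter/enable.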


§24 · September 12, 2011 · HBase · Comments closed on Activating LZO compression in HBase



I recently ran out of disk space on the partition where my CouchDB databases resided.  The disk had been filled by a CouchDB database that severely needed to be compacted (which in my case would reduce it from 270 GB to 40 GB).

The problem was that when CouchDB compacts a database, it basically writes a whole new database, so I needed 40 GB of free space to be able to perform the compaction.

 

By default a couchdb compaction is performed in three stages:

1. Start/resume the compaction, writing to dbname.couch.compact

2. Remove dbname.couch

3. Rename (not move) dbname.couch.compact to dbname.couch
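For completeness, the compaction in step 1 is kicked off through the HTTP API, something like this (dbname and the port are placeholders for your setup):

curl -X POST -H 'Content-Type: application/json' http://localhost:5984/dbname/_compact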

 

I couldn't just create a symlink called dbname.couch.compact pointing to an empty file on another partition, since CouchDB removed the file due to its invalid format.  It might have worked if I had started a compaction, killed CouchDB, moved dbname.couch.compact to another partition, created the symlink and then resumed the compaction, but since that's not how I did it I don't know whether it works.

 

My solution was to:

1. Copy dbname.couch to the other partition

2. Stop couch

3. Replace dbname.couch with a symlink to the copy on the other partition

4. Start CouchDB and kick off the compaction (sketched below).  It should complete the compaction and remove the symlink.
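Roughly, in shell terms (the paths and init script are placeholders, and the extra .bak rename is just a precaution that isn't strictly part of the steps above):

cp /data/couchdb/dbname.couch /bigdisk/couchdb/dbname.couch      # 1. copy to the larger partition
/etc/init.d/couchdb stop                                         # 2. stop couch
mv /data/couchdb/dbname.couch /data/couchdb/dbname.couch.bak     # keep the original until the compaction has succeeded
ln -s /bigdisk/couchdb/dbname.couch /data/couchdb/dbname.couch   # 3. symlink to the copy
/etc/init.d/couchdb start                                        # 4. start couch ...
curl -X POST -H 'Content-Type: application/json' http://localhost:5984/dbname/_compact   # ... and start the compaction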


§6 · July 4, 2011 · CouchDB · 1 comment