Moving to OmniOS Community Edition

Had a small snag when I tried to upgrade my old (r151018) OmniOS installation to OmniOS CE as described in the announcement “OmniOS Community Edition – OmniOSce r151022h”.

During the “pkg update” stage I got something similar to the following:

pkg update: The certificate which issued this certificate:/C=US/ST=Maryland/O=OmniTI/OU=OmniOS/CN=OmniOS r151018 Release
Signing Certificate/emailAddress=omnios-supp…@omniti.com could not be found.

Thankfully, the solution was a straightforward sequence of steps: upgrade to r151020, then to r151021 and finally to r151022.
From there I was able to successfully upgrade to OmniOS CE. Even the “-r” option in “pkg update -rv” worked like a charm (this option doesn’t exist in r151018). I probably could have skipped r151021 altogether, but it’s always better to be safe than sorry.

How to reuse dropped sharded collection’s name

It sometimes happens that you want to drop a sharded collection and be able to reuse its name again. However, it might not be as straightforward as one would expect:

mongos> sh.shardCollection("your_database.your_collection", { "sharded_key": 1 })

"code" : 13449,
"ok" : 0,
"errmsg" : "exception: collection your_database.your_collection already sharded"

The error message might be different, but you get the idea – you can’t shard a collection whose name matches one that has recently been dropped. Thankfully, there is a workaround described in SERVER-17397:

When dropping a collection:
use config
db.collections.remove( { _id: "DATABASE.COLLECTION" } )
db.chunks.remove( { ns: "DATABASE.COLLECTION" } )
db.locks.remove( { _id: "DATABASE.COLLECTION" } )
Connect to each mongos and run flushRouterConfig

Followed the steps in prod yesterday and it worked like a charm.
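For the record, here is roughly what those steps look like from Python; a minimal pymongo sketch, assuming a connection to one of the mongos instances and the placeholder namespace your_database.your_collection (the shell steps above remain the authoritative recipe):

# Cleanup of stale sharding metadata for a dropped collection (sketch).
from pymongo import MongoClient

NS = "your_database.your_collection"
client = MongoClient("mongodb://localhost:27017")  # a mongos

config = client["config"]
config.collections.delete_one({"_id": NS})   # collection metadata entry
config.chunks.delete_many({"ns": NS})        # its chunk ranges
config.locks.delete_one({"_id": NS})         # leftover distributed lock, if any

# Repeat against every mongos so each one drops its cached metadata.
client.admin.command("flushRouterConfig")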

TIL Remove a Znode from Zookeeper

Yep, you can easily achieve that (and much more) using zkCli.sh (the ZooKeeper command-line client):

$ /usr/share/zookeeper/bin/zkCli.sh 
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

[zk: localhost:2181(CONNECTED) 0] help
ZooKeeper -server host:port cmd args
	connect host:port
	get path [watch]
	ls path [watch]
	set path data [version]
	rmr path
	delquota [-n|-b] path
	quit 
	printwatches on|off
	create [-s] [-e] path data acl
	stat path [watch]
	close 
	ls2 path [watch]
	history 
	listquota path
	setAcl path acl
	getAcl path
	sync path
	redo cmdno
	addauth scheme auth
	delete path [version]
	setquota -n|-b val path

Issue “rmr” (to remove recursively) or “delete” to remove a znode.
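The same can be done programmatically; here is a minimal sketch using the third-party kazoo library (not part of ZooKeeper itself) and a hypothetical znode path:

# Remove a znode (and its children) programmatically.
from kazoo.client import KazooClient

zk = KazooClient(hosts="localhost:2181")
zk.start()

# recursive=True is the programmatic counterpart of zkCli's "rmr";
# drop it to mimic a plain "delete" of a leaf znode.
zk.delete("/some/znode/path", recursive=True)

zk.stop()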

TIL HSTS requires a secure transport

Otherwise (quoting RFC6797):

If an HTTP response is received over insecure transport, the UA MUST ignore any present STS header field(s).

That means the SSL certificate on your server must be valid, i.e. no errors or warnings when you open a page from a browser over HTTPS.
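A quick way to check whether a site actually sends the STS header over a valid HTTPS connection; a minimal sketch using only the Python standard library and a hypothetical hostname:

# Fetch the response headers over HTTPS and print the STS header, if any.
import http.client

conn = http.client.HTTPSConnection("www.example.com", timeout=10)
conn.request("HEAD", "/")    # certificate verification happens on connect;
resp = conn.getresponse()    # an invalid cert raises an SSL error instead

# Per RFC 6797 the header only counts when received over secure transport.
print(resp.getheader("Strict-Transport-Security"))
conn.close()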

Restart your Mongos after maxConsecutiveFailedChecks

Take it literally.
If you configured your MongoDB config servers as a replica set and, for some reason, say a network outage, the mongos lost connection to all of them and was not able to reconnect within maxConsecutiveFailedChecks attempts then, surprise, it becomes useless. Even if the network is up and running again, mongos will not reconnect to the config servers and you won’t be able to authenticate to your sharded cluster until the mongos is restarted.

From https://api.mongodb.com/cplusplus/current/classmongo_1_1_replica_set_monitor.html

static int maxConsecutiveFailedChecks = 30
If a ReplicaSetMonitor has been refreshed more than this many times in a row without finding any live nodes claiming to be in the set, the ReplicaSetMonitorWatcher will stop periodic background refreshes of this set.

And if you check the source code of the 3.2.x branch (3.2.12 as of this writing) you will see the following (./src/mongo/client/replica_set_monitor.cpp):

if (_scan->foundAnyUpNodes) {
    _set->consecutiveFailedScans = 0;
} else {
    _set->consecutiveFailedScans++;
    if (timeOutMonitoringReplicaSets) {
        warning() << "All nodes for set " << _set->name << " are down. "
                  << "This has happened for " << _set->consecutiveFailedScans
                  << " checks in a row. Polling will stop after "
                  << maxConsecutiveFailedChecks - _set->consecutiveFailedScans
                  << " more failed checks";
    }
}

So once you go past maxConsecutiveFailedChecks, the replica set becomes unusable:

bool SetState::isUsable() const {
    return consecutiveFailedScans < maxConsecutiveFailedChecks;
}

As far as I can tell, 3.4.x doesn’t have maxConsecutiveFailedChecks, so hopefully one will not have to intervene and restart the mongos manually.

Watch “Monitorama 2016: All of Your Networking Monitoring is (probably) wrong” talk

Just came across this talk being mentioned in the comments on Hacker News and, boy, it’s absolutely amazing!
Watch this hilarious talk here – Monitorama 2016: All of Your Networking Monitoring is (probably) wrong

Btw, the talk is presented, presumably, by the same guy who wrote Monitoring and Tuning the Linux Networking Stack: Receiving Data and Monitoring and Tuning the Linux Networking Stack: Sending Data, both of which are must-reads.

TIL gethostbyname*() and gethostbyaddr*() functions are obsolete

It wasn’t obvious to me until I tried to run the netcat utility (aka nc) on the Ubuntu 10.04 (lucid) release to check ZooKeeper’s status:

echo "stat" | nc zookeer_server_name 2181
zookeer_server_name: forward host lookup failed: No address associated with name

It wouldn’t have been a problem had the ZooKeeper server used an IPv4 address, but it was configured with IPv6. So tools that used gethostbyname2(), e.g. getent, were still OK, and only those with gethostbyname() were failing me. Luckily, netcat and other important tools and libraries had newer versions I could use. Once again, if you are on an old and rusty Linux distro, be aware that the gethostbyname*() and gethostbyaddr*() functions are obsolete.
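The difference is easy to demonstrate even from Python, which exposes both interfaces; a minimal sketch with a hypothetical IPv6-only hostname:

# Obsolete, IPv4-only lookup vs. the modern, protocol-agnostic one.
import socket

host = "ipv6-only.example.com"  # hypothetical host with only AAAA records

try:
    # Wraps the legacy gethostbyname() interface and knows nothing about IPv6.
    print(socket.gethostbyname(host))
except socket.gaierror as exc:
    print("gethostbyname failed:", exc)

# getaddrinfo() is the recommended replacement and returns IPv6 addresses too.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(host, 2181):
    print(family, sockaddr)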

Update
As Anton mentioned in his comment below, getaddrinfo() had its own gotchas, which, if I got it right, were caused by the AI_ADDRCONFIG flag. There is a good summary page that goes into more detail regarding AI_ADDRCONFIG and the peculiarities of its current implementation in glibc.

jbd2 is munching your disks? Use ftrace to find why.

Have you ever wondered why jbd2 (or jbd, if you are still using ext3) is sitting at the top of iotop and consuming most of the IO bandwidth? Well, it’s certainly not doing that just to drive you nuts; there is a reason. And the reason most probably is an app that is doing a lot of sys_fsync(), sys_fdatasync() or sys_msync() calls.
In case you are not on the latest and greatest kernel and BPF is not available, there is an easy way to confirm that using ftrace.

Just enable tracing of ext4_sync_file_enter events:

# echo 1 > /sys/kernel/debug/tracing/events/ext4/ext4_sync_file_enter/enable

And print out the output from trace or trace_pipe (refer to the ftrace documentation for more information):

# cat /sys/kernel/debug/tracing/trace
# tracer: nop
#
# entries-in-buffer/entries-written: 16/16   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
          mongod-2299  [001] ...1 661508.531446: ext4_sync_file_enter: dev 252,1 ino 267191 parent 267956 datasync 1 
          mongod-2299  [003] ...1 661508.543931: ext4_sync_file_enter: dev 252,1 ino 267191 parent 267956 datasync 1 
          mongod-2299  [003] ...1 661508.566134: ext4_sync_file_enter: dev 252,1 ino 267191 parent 267956 datasync 1 
          mongod-2299  [003] ...1 661511.255926: ext4_sync_file_enter: dev 252,1 ino 267191 parent 267956 datasync 1 
          mongod-2299  [000] ...1 661511.703643: ext4_sync_file_enter: dev 252,1 ino 267191 parent 267956 datasync 1 

Once you are done, just stop tracing the events and clear ftrace’s buffer:

# echo 0 > /sys/kernel/debug/tracing/events/ext4/ext4_sync_file_enter/enable
# echo > /sys/kernel/debug/tracing/trace

So in my case the excessive jbd2 activity was caused by MongoDB’s journaling, which syncs data every 50ms (starting from version 3.2).
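If the trace is too busy to eyeball, a small script can aggregate the culprits; a minimal sketch (run as root, assuming the ext4_sync_file_enter event above is still enabled) that tails trace_pipe and counts events per task:

# Count ext4_sync_file_enter events per TASK-PID until interrupted with Ctrl-C.
from collections import Counter

counts = Counter()
with open("/sys/kernel/debug/tracing/trace_pipe") as pipe:
    try:
        for line in pipe:
            if "ext4_sync_file_enter" in line:
                counts[line.split()[0]] += 1   # first field is e.g. "mongod-2299"
    except KeyboardInterrupt:
        pass

for task, n in counts.most_common():
    print(f"{n:8d}  {task}")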

MongoDB – Defeating RangeDeleter

Not closing a cursor in MongoDB could hurt you big time, so it’s generally not recommended to use no_cursor_timeout=True (pymongo3) or timeout=False (pymongo2), especially when you run a sharded MongoDB installation:

PyMongo does “close” cursors when they are garbage collected, but they aren’t closed immediately and closing a cursor in all current versions of MongoDB is asynchronous. Depending on the python implementation, relying on garbage collection to close the cursor is not a great idea. Discarded, not fully iterated cursors can live for some time when using Jython or PyPy which do not do reference counting garbage collection. That’s why the Cursor object has an explicit close() method.
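In pymongo terms the advice boils down to something like this minimal sketch (hypothetical database and collection names, assuming pymongo 3.x):

# Iterate a no-timeout cursor and close it explicitly instead of relying on GC.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["your_db_name"]["your_collection_name"]

cursor = coll.find({}, no_cursor_timeout=True)
try:
    for doc in cursor:
        pass  # process the document
finally:
    cursor.close()  # release the server-side cursor right away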

Not using the close() method could potentially be the reason behind the following lines in MongoDB’s log file:

SHARDING [RangeDeleter] waiting for open cursors before removing range

And even if it looks innocuous, it’s actually not, since what it means is that the source shard can’t delete its copy of the documents – step 7 in the chunk migration procedure.

At the time of this writing there is still no way (SERVER-3090) to glean more information pertaining to a cursor’s id, so the only way out I was able to come up with was to kill those dangling cursors using the undocumented (as of this writing) killCursors command:

> use your_db_name
> db.runCommand({killCursors: 'your_collection_name', cursors: [NumberLong(51518759968), NumberLong(51484189302), NumberLong(51451409949), NumberLong(51434938438), NumberLong(51429435438), NumberLong(51383912702)]})
{
	"cursorsKilled" : [
		NumberLong("51518759968"),
		NumberLong("51484189302"),
		NumberLong("51451409949"),
		NumberLong("51434938438"),
		NumberLong("51429435438"),
		NumberLong("51383912702")
	],
	"cursorsNotFound" : [ ],
	"cursorsAlive" : [ ],
	"cursorsUnknown" : [ ],
	"ok" : 1
}

And as a reward, look for the following message in the log file:

SHARDING [RangeDeleter] Deleter starting delete for