MongoDB, Atlas, and verifying our deployments

I’ve had a little bit of time to re-acquaint myself with what’s been generating 503s on the fl-maps side.

The core issue surfaces at application startup:

MongoError: (Unauthorized) not authorized on admin to execute command { listIndexes: "users", cursor: {  } }

Builds are failing with this error; here is the most recent container output:

[167.71.2.39] x Verifying Deployment: FAILED
	
	      ------------------------------------STDERR------------------------------------
	      _modules/mongodb-core/lib/cursor.js:212:36)
	    at /built_app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:469:18
	    at _combinedTickCallback (internal/process/next_tick.js:131:7)
	    at process._tickDomainCallback (internal/process/next_tick.js:218:9)
	=> Starting meteor app on port:3000
	
	/built_app/programs/server/node_modules/fibers/future.js:313
							throw(ex);
							^
	MongoError: (Unauthorized) not authorized on admin to execute command { listIndexes: "users", cursor: {  } }

The build status itself is beside the point, though; the real failure is that the app never starts.

I’ve altered the permission set on the database users used to connect to Atlas, with no change in behavior. Atlas does impose official per-tier limitations (see https://docs.atlas.mongodb.com/reference/free-shared-limitations/), but listIndexes() is not listed among them.
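To take Meteor out of the picture, the failing command could be issued directly with the Node MongoDB driver. This is only a sketch: it assumes the `mongodb` npm package is installed and that the real connection string is supplied via an `ATLAS_URI` environment variable (both names are mine, not part of the deployment).

```javascript
// The exact server command from the error message:
// { listIndexes: "users", cursor: {} }
function listIndexesCommand(collection) {
  return { listIndexes: collection, cursor: {} };
}

async function tryListIndexes(uri, collection = "users") {
  // Lazy require so the helper above works without the driver installed.
  const { MongoClient } = require("mongodb");
  const client = await MongoClient.connect(uri);
  try {
    // Runs against the db named in the URI; should reproduce the
    // "(Unauthorized) not authorized on admin" error if the theory holds.
    return await client.db().command(listIndexesCommand(collection));
  } finally {
    await client.close();
  }
}

// Usage, against a live cluster:
//   tryListIndexes(process.env.ATLAS_URI)
//     .then((r) => console.log("ok:", r.cursor.firstBatch))
//     .catch((e) => console.error(e.message));
```

If this standalone script fails with the same Unauthorized error, the problem is on the Atlas/user side rather than anything Meteor does at startup.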

A testing strategy to confirm this would be to regenerate the Atlas clusters on a different tier, though budget is a constraint. Posters in this thread report that the issue goes away when they switch to M10 clusters: Meteor with MongoDB Atlas - Index Creation fails - #5 by kellertobi - help - Meteor forums

@AndyatFocallocal you previously mentioned it may be possible for Mongo to donate something to us - could it be an entitlement on this account that would allow reduced M10 billing? The alternatives are to host our own Mongo instances alongside all our other services, or to move to another provider.


Thanks for doing that research, Tom. I’ve passed it along; this issue seems to be a really tricky one.

Here’s Ronan at Mongo’s reply:

Hey Andy

Thanks for getting back in touch with the details. Yes, that was what I was looking for - but unfortunately they don’t align with the theory I had… It’s not clear to me exactly what the root cause of the problem is at this point, but I’m also conscious you’ve been offline for quite some time now. Without knowing the root cause it’s hard to know what the correct solution is - but at least we could rule out a cluster tier problem if you were in a position to connect your application to a (temporary) larger tier cluster.

It would be best to view this as a test, and therefore to create a separate cluster which can potentially be torn down later if it doesn’t help (while it’s possible to upgrade from an M2 to, say, an M10, it is not possible to downgrade, so building a separate cluster is best here). To that end, you could build an M10 cluster in the same org as your existing M2 and reconfigure your app to (temporarily) point to it instead. All the user credentials should transfer over, etc. If that works, we at least have a solid data point.

And to avoid any cost implications, you can use the XXXXX code (which you can apply on the Billing page of your org), which I believe will give you $100 of free credit.

If that sounds like a good approach, I’d suggest you apply the code, build the new M10 cluster, and try to connect your app to the new database before reporting back. If that works, you should be able to leverage the new cluster to stand up the application again and get moving, while we look at options on our side to support your continued use of an M10.

If the test fails, however, we may need to continue investigating the root cause - which may involve getting one of the local team here on a call with your technical team to see if we can drive a solution in real time. In that case I would suggest terminating the M10 cluster to maintain the credits for the ongoing test we may need to carry out.

I appreciate time is moving on but hopefully that approach makes sense. Please let me know your thoughts!

Regards

Ronan

What version of MongoDB were you using on mLab, and which version are you trying to use today on Atlas? Are you positive that the users in the db are configured with the correct permissions? Have you tried upgrading some Meteor packages, such as mongo (your package version is mongo@1.4.2; current is 1.11)? Can you try adding ?authSource=admin to your connection string? Perhaps try the old-style connection string as well (mongodb:// vs. mongodb+srv://; see the MongoDB docs on connection strings).
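The ?authSource=admin suggestion above can be scripted so the URI doesn’t have to be hand-edited for each test. This is a hypothetical helper (the function name and example hosts are mine); it only does string manipulation on the connection string, so it works for both mongodb:// and mongodb+srv:// forms.

```javascript
// Append authSource=admin to a MongoDB connection string, preserving any
// existing query parameters and leaving the URI alone if it is already set.
function withAuthSource(uri, source = "admin") {
  if (/[?&]authSource=/.test(uri)) return uri;     // already present
  const sep = uri.includes("?") ? "&" : "?";
  return uri + sep + "authSource=" + source;
}

console.log(withAuthSource("mongodb+srv://user:pass@cluster0.example.net/fl-maps"));
// -> mongodb+srv://user:pass@cluster0.example.net/fl-maps?authSource=admin
```

Pointing MONGO_URL at the rewritten URI would tell the driver to authenticate against the admin database, which is sometimes required for users migrated from mLab.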

I work with Meteor and MongoDB at my current workplace, so I am somewhat familiar with this environment. I handled our data migration from mLab to Atlas before mLab shut down.


Suggestions from Russell on FB:

  • within MongoDB Atlas, prove the data is there by looking at documents in each collection to verify the data is okay. Then also share the error from the application.

  • check the IP of the server the application is running on, and check whether that IP is whitelisted in Atlas

  • I would also suggest you set up a development environment or test server that has all system components, like Mongo and the web server, in one space, so you can see whether the application runs without Mongo issues
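The last suggestion could look something like this as a minimal docker-compose file. Everything here is an assumption for illustration: the service names, the app image name, and the MongoDB version would all need adjusting to match our actual stack.

```yaml
version: "3"
services:
  mongo:
    # Pin the version to match what mLab was running, if known.
    image: mongo:3.6
    ports:
      - "27017:27017"
  app:
    # Hypothetical image name for the built Meteor bundle.
    image: fl-maps:latest
    environment:
      MONGO_URL: mongodb://mongo:27017/fl-maps
      ROOT_URL: http://localhost:3000
      PORT: "3000"
    ports:
      - "3000:3000"
    depends_on:
      - mongo
```

If the app starts cleanly against this local, unauthenticated Mongo, that would confirm the problem is specific to Atlas (auth, tier limits, or network access) rather than the application code.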