Memory leak? bitcored eats up 64G of memory


(Modong) #1

I was using the REST API to run queries across blocks and fetch quite a lot of transactions.
However, after a while the bitcored process eats up all my memory and gets OOM-killed. Any idea what could be going wrong here?
Is there a potential memory leak in bitcored (or one of the included services)?

I am using Ubuntu 14.04, node v0.12.9, and bitcore version 3.0.0.

Thanks in advance…


(Moeadham) #2

Yeah, we see the same behavior. It seems to occur after someone requests an address with a lot of history (e.g. a popular dice address).

I looked into fixing it, but given how the DB is set up, it's a pretty huge change. A blacklist plus regular reboots and a load balancer keep it mostly under control.
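
For the reboot part, a minimal watchdog along these lines can work: exit once resident memory passes a limit and let a supervisor (systemd, forever, pm2) restart the process. The threshold and interval here are illustrative assumptions, not what we actually run:

```javascript
// Watchdog sketch: exit when RSS exceeds a limit so an external
// supervisor restarts the process. 8 GB / 60 s are assumptions.
var MAX_RSS_BYTES = 8 * 1024 * 1024 * 1024;
var CHECK_INTERVAL_MS = 60 * 1000;

setInterval(function() {
  var rss = process.memoryUsage().rss;
  if (rss > MAX_RSS_BYTES) {
    console.error('RSS ' + rss + ' bytes over limit, exiting for restart');
    process.exit(1); // supervisor is expected to bring us back up
  }
}, CHECK_INTERVAL_MS);
```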


(Braydon Fuller) #3

@moeadham which changes were you looking to make?


(Moeadham) #4

Well, here is a good example:
https://insight.bitpay.com/api/addr/1dice8EMZmqKvrGE4Qc9bUFf9PX3xaYDp?noTxList=1

Even with noTxList=1, if you check the code:
https://github.com/bitpay/insight-api/blob/master/lib/addresses.js#L56
(sorry, for some reason this forum was blocking links to GitHub)

It is calling
this.node.getAddressSummary(address, options, function(err, summary) {...})
which is:
https://github.com/bitpay/bitcore-node/blob/462e4e3cdd15e5d59812e089eb88f6ce8e45066b/lib/services/address/index.js#L1357

It doesn't look like any of the functions in that async waterfall actually have any kind of pagination capability.
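
To make the failure mode concrete, here is a simplified, self-contained sketch of that pattern (the helper and field names are illustrative, not the real bitcore-node internals): the entire history for the address has to exist as one in-memory array before the summary step can run, so a busy dice address blows up the heap.

```javascript
var async = require('async');

// Stand-in for the DB read: materializes the ENTIRE history for the
// address as one array, with no height range or count limit.
function getAllInputsAndOutputs(address, callback) {
  var history = [];
  for (var i = 0; i < 5e6; i++) { // a popular dice address is this big
    history.push({ satoshis: 1000, height: i });
  }
  setImmediate(function() { callback(null, history); });
}

function getAddressSummary(address, options, callback) {
  async.waterfall([
    function(next) {
      // Full history must be in memory before the next step runs.
      getAllInputsAndOutputs(address, next);
    },
    function(history, next) {
      // The reduce itself is cheap; the array it walks is the problem.
      var summary = { appearances: 0, totalReceived: 0 };
      history.forEach(function(item) {
        summary.appearances += 1;
        summary.totalReceived += item.satoshis;
      });
      next(null, summary);
    }
  ], callback);
}

getAddressSummary('1dice8EMZmqKvrGE4Qc9bUFf9PX3xaYDp', {}, function(err, summary) {
  console.log(err || summary);
});
```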

I'm not too familiar with how the DB streaming works, so to be honest I gave up. What we are doing is just adding a layer that blacklists most of these addresses, to keep the API from getting taken down too easily:
https://blockchain.info/popular-addresses
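
Roughly, the layer is just middleware in front of the address routes. This is a hedged sketch of the idea, not the exact patch in our fork; the 429 response and the hard-coded list are assumptions:

```javascript
var express = require('express');
var app = express();

// Seeded from a list like blockchain.info/popular-addresses.
var blacklist = {
  '1dice8EMZmqKvrGE4Qc9bUFf9PX3xaYDp': true
};

// Reject address-history requests for known-huge addresses before
// they ever reach the insight-api handlers.
app.use(function(req, res, next) {
  var match = req.path.match(/^\/api\/addr\/([13][a-km-zA-HJ-NP-Z1-9]{25,34})/);
  if (match && blacklist[match[1]]) {
    return res.status(429).json({
      error: 'Address history too large to serve'
    });
  }
  next();
});

// ... mount the insight-api routes after this middleware ...
app.listen(3001);
```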

I can push that patch up to our fork if you are interested, but it's an ugly, shameful work-around.


(Braydon Fuller) #5

Indeed, there isn't any pagination at that level. Building the summary is a necessary step for later pagination of the more detailed info. If you have the branch handy someplace, I wouldn't mind taking a look.

I'm currently working on improving these queries to have better pagination. Keeping a summary index/cache, or blocking such addresses, both sound like good options.
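
For the cache option, even a naive in-memory map keyed by address would take the repeated-query pressure off, as long as it is invalidated whenever the tip changes. A rough sketch; the invalidation hook is a placeholder, since the exact new-block event API isn't shown here:

```javascript
// Naive per-address summary cache. node.getAddressSummary is the call
// discussed above; everything else here is a sketch.
var summaryCache = {};

function getAddressSummaryCached(node, address, options, callback) {
  var cached = summaryCache[address];
  if (cached) {
    return setImmediate(function() { callback(null, cached); });
  }
  node.getAddressSummary(address, options, function(err, summary) {
    if (err) {
      return callback(err);
    }
    summaryCache[address] = summary; // valid until the next block
    callback(null, summary);
  });
}

// Any new block can change confirmed history, so the simplest safe
// invalidation is to clear the whole cache on each tip change; wire
// this to whatever block event the node exposes.
function invalidateSummaryCache() {
  summaryCache = {};
}
```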