Memory leak caused by "PG::Result#each"

Issue #243 closed
Masashi Miyazaki
created an issue

Hi,

I'm struggling with a memory leak issue caused by the ruby-pg gem. Our application simply fetches records from PostgreSQL, converts them into our format, and transfers them to another endpoint. The problem is that the memory usage of the process increases with each iteration.

According to our debug log, the memory allocated while converting the PG::Result does not seem to be released. Here is an example of the code we are running.

# Leak
converted_result = []
conn = PGconn.open(PG_DB_CONFIG)
result = conn.exec("SELECT * FROM <table-name> LIMIT 50000")
result.each do |record|
  converted_result << convert_result(record)
end
return converted_result

Also, I could reproduce the same memory leak issue with the following code.

# Leak
converted_result = []
conn = PGconn.open(PG_DB_CONFIG)
result = conn.exec("SELECT * FROM <table-name> LIMIT 50000")
result.each do |record|
  converted_result << {}
end
return nil    # converted_result should be collected by GC

On the other hand, I could not reproduce the memory leak with the following code, which does the same amount of work without using "PG::Result#each".

# No Leak
converted_result = []
conn = PGconn.open(PG_DB_CONFIG)
result = conn.exec("SELECT * FROM <table-name> LIMIT 50000")
1.upto(50000) do
  converted_result << {}
end
return nil

From these observations, I suspect that the memory leak has something to do with the ruby-pg implementation. I made a simple project that reproduces it, which may help you see the issue.

pg_mem_leak: https://github.com/mmasashi/pg_mem_leak
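
One thing I have not measured yet: as far as I understand, a PG::Result keeps the whole result set in memory allocated by libpq until PG::Result#clear is called or the object is garbage collected, so clearing it explicitly (or using the block form of exec, which clears the result automatically when the block returns) should at least release that memory earlier. A minimal sketch, with the same placeholder table name as above:

# Variant A: clear the result explicitly once it has been consumed
converted_result = []
conn = PGconn.open(PG_DB_CONFIG)
result = conn.exec("SELECT * FROM <table-name> LIMIT 50000")
result.each do |record|
  converted_result << convert_result(record)
end
result.clear    # frees the libpq-allocated memory right away instead of waiting for GC

# Variant B: the block form of exec clears the result automatically after the block
conn.exec("SELECT * FROM <table-name> LIMIT 50000") do |res|
  res.each { |record| converted_result << convert_result(record) }
end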

My environment:
- ruby 2.0.0p647 (2015-08-18 revision 51631) [x86_64-darwin15.0.0]
- pg (0.18.4)
- PostgreSQL 9.4.4 on x86_64-apple-darwin15.0.0, compiled by Apple LLVM version 7.0.0 (clang-700.0.72), 64-bit
- Mac OSX 10.11.5 (15F34)

Masashi

Comments (6)

  1. Masashi Miyazaki reporter

    It might be related to Ruby GC. The memory usage started decreasing after waiting for a long time, like hours. Also, we decided to use a cursor instead of fetching a large batch of records at once.

    I will close this and reopen if needed.

    Thanks for the advice!

  2. Masashi Miyazaki reporter

    As I commented, it might be related to Ruby GC. The memory usage decreased little by little when the process kept running for a long time without doing anything. Also, we decided to use a cursor to avoid this situation, which seems to work for us.
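
    For reference, this is roughly what we mean by using a cursor: rows are fetched in fixed-size batches inside a transaction, so only one small PG::Result is alive at a time. A minimal sketch, with placeholder table and cursor names, an arbitrary batch size, and our own convert_result helper from the original report:

    converted_result = []
    conn = PGconn.open(PG_DB_CONFIG)
    conn.transaction do
      conn.exec("DECLARE record_cursor CURSOR FOR SELECT * FROM <table-name>")
      loop do
        batch = conn.exec("FETCH 1000 FROM record_cursor")
        break if batch.ntuples.zero?
        batch.each { |record| converted_result << convert_result(record) }
        batch.clear    # release each batch's memory right away
      end
      conn.exec("CLOSE record_cursor")
    end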
