Frequent crashes with multithreading

Issue #260 invalid
Bráulio Bhavamitra
created an issue

Backtrace at

The Rails app is open source and is hosted

Happens with 0.18, 0.19 and 0.20

Doesn't happen at all with Unicorn (no multithreading)

Comments (10)

  1. Lars Kanis

The pg gem is often used in multi-threaded environments, and you're using plain PG::Connection#exec calls, so I doubt this is a pg issue. Still, the heap memory is getting corrupted somehow and libpq is stumbling over it.

    What is your operating system and what does PG.threadsafe? return?

  2. Chris Bandy

You have two workers, which are forks. You also need to follow the guides for forking servers (and the Puma docs). Try disconnecting in before_fork:

    before_fork do
      ActiveRecord::Base.connection_pool.disconnect!
      # or maybe
      # ActiveRecord::Base.clear_all_connections!
    end

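For context, the full Puma config usually pairs the disconnect with a reconnect in each worker. A minimal sketch, assuming Rails/ActiveRecord (the hook names are from Puma's config DSL; the worker count is illustrative):

```ruby
# config/puma.rb — sketch, assuming Rails with ActiveRecord
workers 2
preload_app!

before_fork do
  # Close the master's connections so forked workers don't inherit
  # and share the same PostgreSQL socket descriptors.
  ActiveRecord::Base.connection_pool.disconnect!
end

on_worker_boot do
  # Each worker opens its own fresh connections after the fork.
  ActiveRecord::Base.establish_connection
end
```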
  3. Sasindran NA

@brauliobo does this issue occur only while using puma with --preload and multiple workers?

    We have a single worker process and we are using PG 0.20.0, and this issue is not showing up.

  4. Eduard Bondarenko

We have the same issue with many delayed_job workers. The error is PG::ConnectionBad: PQconsumeInput() could not receive data from server: Bad file descriptor, and after this the worker can't stop; it hangs.

I recently updated pg from 0.19 to 0.20 and still have the issue. PG.threadsafe? # => true

  5. Michael Granger repo owner

    @Eduard Bondarenko : I don't know much about delayed_job, but I assume that it forks its workers. If so, are you closing database connections and re-establishing them after the fork? Forked processes inherit the file descriptors of their parents, so as @Chris Bandy suggested above, you might have to ensure your workers get their own database connections by forcing a disconnect before they fork.

Most systems that fork have some kind of lifecycle management, like a before_fork hook, so I'd start there.
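The descriptor-inheritance point can be demonstrated without a database at all. A minimal sketch in plain Ruby: after fork, parent and child hold the same underlying file descriptors, which is exactly why a shared PostgreSQL socket gets corrupted when both processes use it.

```ruby
# Plain-Ruby demonstration: a forked child inherits the parent's
# open file descriptors and can use them directly.
reader, writer = IO.pipe

pid = fork do
  # The child writes through the descriptor it inherited from the parent.
  writer.write("from child")
  writer.close
  exit!(0)
end

writer.close          # parent closes its own copy of the write end
Process.wait(pid)
puts reader.read      # => prints "from child"
```

With a libpq connection the sharing is worse than a pipe: both processes would interleave protocol traffic on one socket, which is why the fix is to disconnect before forking and reconnect in each worker.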

    @brauliobo : Did the before_fork work in production?
