Experimental Driver - Memory Fault with Large Result Set

Issue #31 resolved
Brian Jerome created an issue

Using the same PGM and input from Issue #30 (Experimental Driver - Segmentation Fault with Large Result Set), I am receiving a Memory fault - core dumped message.

Comments (5)

  1. Former user Account Deleted

    I am receiving a Memory fault - core dumped message.

    Ok, even bigger jsonout buffer in dbconn.h. Maybe a fatter white whale will please ye, Captain Ahab.

    • Super Driver - bottom of the page - node db2ia - bumped to Danny's 30MB size (up from the 15MB test version)
    $ grep DB2_I5_DB2SOCK_OUT_SIZE dbconn.h
    #define DB2_I5_DB2SOCK_OUT_SIZE 30000000 /* @adc (Danny) */
    

    BTW -- warning: I may have put the old 5MB version up last time. This time it is 30MB; I checked that the size of the new db2ia.node in the zip file matches. Good luck. Thar she blows! (Hopefully not this time.)

  2. Brian Jerome reporter

    Maybe the limit Danny proposed originally was all that was needed -- or the 15MB and the correct version ;) Will there be any performance impact with the buffer being that large? I've noticed using the xml service a while back when the buffer size was 15MB vs 512KB there was a big speed difference.

  3. Former user Account Deleted

    Maybe the limit Danny proposed originally was all that was needed -- or the 15MB and the correct version ;)

    First: my vote is that the correct production node toolkit API answer is an additional parameter specifying the working output memory size on the API (the user says 'how big'). In fact, there is no better speed answer than precise per-script user control, which limits the memory resources consumed in a web server like node. Not to mention, the web server may not run out of memory on demand when users treat memory as important (... cough ... cough ... wheeze ... just sayin').
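    A minimal sketch of that idea (the names here are invented for illustration, not db2sock's actual API): the caller passes the desired output buffer size per call, instead of every call paying for a compile-time DB2_I5_DB2SOCK_OUT_SIZE:

    ```c
    /* Hypothetical sketch: per-call output buffer sizing.
     * alloc_jsonout() is an invented helper, not a db2sock function. */
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    static char *alloc_jsonout(size_t out_size) {
        /* caller-chosen size replaces a fixed 30MB #define */
        return (char *)malloc(out_size);
    }

    int main(void) {
        size_t out_size = 512 * 1024;   /* a small script asks for only 512KB */
        char *jsonout = alloc_jsonout(out_size);
        assert(jsonout != NULL);
        snprintf(jsonout, out_size, "{\"ok\":true}");
        printf("%s\n", jsonout);
        free(jsonout);
        return 0;
    }
    ```

    With a shape like this, a script returning a few KB of JSON never reserves 30MB, and a script that really needs 30MB can ask for it explicitly.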

    ... performance impact with the buffer being that large?

    However ... walking dead ... most folks want everything in the toolkit done for them. Therefore, pick a size -- 15MB, 30MB, pick a card, any card -- they all come with costs.

    Performance impact specifically from 15MB vs 30MB? I have no idea of its 'significance'. To wit, performance conversations are in the eye of the beholder, wherein 'reasonable' is often measured as milliseconds between friends.

    I've noticed using the xml service a while back when the buffer size was 15MB vs 512KB there was a big speed difference.

    Anyway, db2sock has not been through full performance testing (beyond Halmela testing). To wit, I suspect performance runs still need to be done. That is, we are writing best-guess performance-sensitive code and testing casually every day. However, we are still changing code for decimals and zones ... blank array record removal (dou and dob) ... aggregate async APIs ... and so on.

    Help yourself ...

    Best, db2sock is a community project; everything is important. Feel free to read the code, experiment, and recommend performance changes.

    As for memory ... well ... with big fat memory, comes great responsibility -- (Spiderman parody).

  4. Former user Account Deleted

    Oh yeah, did the change to the node experimental driver work??? Aka, are we done with this issue?

  5. Brian Jerome reporter

    As far as I can tell, I haven't hit any more memory faults. I've been stress testing my code and now I'm just running into JavaScript heap out of memory. We can save performance testing for later.
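    For the JavaScript heap errors specifically: Node's V8 old-space heap has a fairly small default (roughly 1.5-2GB depending on version), and it can be raised with the documented `--max-old-space-size` flag. A sketch (`app.js` is just a placeholder for the stress-test script):

    ```shell
    # Raise V8's old-space heap limit to 4GB for this run only;
    # this defers "JavaScript heap out of memory" during stress tests.
    node --max-old-space-size=4096 app.js
    ```

    This only moves the ceiling; if the driver really does buffer a 30MB result per call, the per-script size control discussed above is the longer-term fix.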
