DB2Sock Toolkit

Issue #10 new
Steven Scott created an issue

Hey guys,

I've been toying around with the start of a replacement toolkit that utilizes db2sock's REST interface. What I have so far is basic CMD, QSH and PGM calling.

I've written it in TypeScript, as a pure-JS module with no native code compilation required. I'm currently aiming for a more "functional" approach, compared to the existing toolkit's "array"-based approach to passing parameters around.

At the moment, I'm working on getting authorization to post it online under the MIT license, and I'm hoping to have it up within the next few weeks.

I wanted to open this task so that there can be public discourse and knowledge that it's something being worked on.

Comments (53)

  1. Brian Jerome

    What I'm currently looking for is to make a simple PGM call with JSON input and receive JSON output.

    Example Program Call:

    ...
    let program;
    
    ...
    // Node receives program name/lib/input
    
    // Create Program object
    program = new Program({  // Program class defined elsewhere
        name: programName, // These property values are assigned elsewhere
        lib: programLib, 
        data: jsonInput
    });
    
    ...
    
    db2sock.callProgram(program, (response) => {
        // Handle json output response
        ...
    });
    

    Essentially, Node will take in a program name and JSON input; running the async callProgram will then execute the program and return the response in the callback function.

    Buffer Size?

    How is the I/O buffer size handled? I've seen a few tests that default to 512K. In some cases I would need 4K or 15MB, depending on output size.

    Connection Pooling?

    If db2sock is using a socket connection, is database pooling necessary? It'd be nice not to make a new connection every time a program is called. If there is no pooling done, I can write it on the Node side.

    Other Note

    It'd be great, once Node v8 is supported on IBM i, to utilize async functions and write Promise-based code.
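
    For what it's worth, here is a rough sketch of how the callback-style call above could be wrapped in a Promise once async functions are usable. `db2sock` and `Program` are the placeholder names from the pseudocode above, not a published API:

    declare class Program { constructor(opts: { name: string; lib: string; data: any }); }
    declare const db2sock: { callProgram(p: Program, cb: (response: any) => void): void };

    // Hypothetical sketch only: promisify the callback-style callProgram
    function callProgramAsync(program: Program): Promise<any> {
        return new Promise((resolve, reject) => {
            try {
                db2sock.callProgram(program, (response: any) => resolve(response));
            } catch (err) {
                reject(err);
            }
        });
    }

    // Usage, once async/await is available:
    //   const response = await callProgramAsync(program);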

  2. Former user Account Deleted

    Connection Pooling?

    Connection pooling is built into the new db2sock database driver (libdb400.a). Aka, you can 'choose' to let libdb400.a take care of connection pooling (also known as 'persistent' connections).

    json toolkit uses persistent connections

    From a json interface perspective, you only need to surround your request with a 'qualified' connection. Every json packet/request you send should have the same 'qualified' key.

    see source/toolkit/parser-json/README.md

    Here is a 'persistent' db2 json request. Each request re-uses the connection "qual":"fred".

    {"connect":[
      {"qual":"fred"},
      {"query":[{"stmt":"select * from QIWS/QCUSTCDT"}, 
                {"fetch":[{"rec":2}]},
                {"fetch":[{"rec":2}]},
               ]}
    ]}
    Note: A statement handle is returned in output of a "query".
    "{"script":[{"query":[{"handle":3},
    
    fetch "rec":"all" will close statement last record.
    {"connect":[
      {"qual":"fred"},
      {"query":[{"handle":'.$handle.'},
                {"fetch":[{"rec":"all"}]}
               ]}
    ]}
    
    -- or close early --
    
    close "handle":3 will close statement.
    {"connect":[
      {"qual":"fred"},
      {"close":[{"handle":'.$handle.'}]}
    ]}
    

    Here is a sample 'persistent' pgm json request. Each request re-uses the connection "qual":"fred".

    {"connect":[
      {"qual":"fred"},
      {"pgm":[{"name":"HELLO",  "lib":"DB2JSON"},
            {"s":{"name":"char", "type":"128a", "value":"Hi there"}}
           ]}
    ]}
    
    next request to same open connection ...
    
    {"connect":[
      {"qual":"fred"},
      {"pgm":[{"name":"HELLO",  "lib":"DB2JSON"},
            {"s":{"name":"char", "type":"128a", "value":"Hi there"}}
           ]}
    ]}
    

    The samples above use a default persistent connection (current profile). You may 'choose' to provide a fully qualified connection to use alternative profiles (see PaseJson.c; a sketch follows the snippet below).

    /* {"connect":{"db":"DB","uid":"UID","pwd":"PWD","qual":"QUAL","iso":"nc|uc|cs|rr|rs","libl":"mylib yourlib","curlib":"mylib"}}
     * Note: connection is provided by toolkit services (if missing). Aka, connect(null, null, null)
     */
    char * json_conn_attr [] = {"db","uid","pwd","qual","iso","libl","curlib", NULL};
    int json_conn_tool [] = {TOOL400_CONN_DB,TOOL400_CONN_UID,TOOL400_CONN_PWD,TOOL400_CONN_QUAL,TOOL400_CONN_ISOLATION,TOOL400_CONN_LIBL, TOOL400_CONN_CURLIB};
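
    For illustration, a minimal TypeScript sketch of composing a fully qualified connect request from the attributes above (profile values are placeholders; check PaseJson.c for the exact accepted layout):

    // Sketch only: fully qualified connect element using db/uid/pwd/qual/libl
    const request = {
        connect: [
            { qual: "fred", db: "*LOCAL", uid: "MYUSER", pwd: "MYPWD", libl: "MYLIB QGPL" },
            { pgm: [
                { name: "HELLO", lib: "DB2JSON" },
                { s: { name: "char", type: "128a", value: "Hi there" } }
            ]}
        ]
    };
    const json = JSON.stringify(request); // hand off to your transport of choice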
    
  3. Former user Account Deleted

    Buffer Size?

    This topic is toolkit json (the db2 driver is another topic). Buffer size depends on the send/receive transport you choose to use with the json toolkit (rest, db2, etc.).

    db2sock/toolkit/proc/crtsql.cmd

    0) The base in/out size of a single program/cmd/qsh/etc. call is 15M. However, multiple calls per json request are possible, so this topic is more about output by transport.

    CREATE PROCEDURE DB2JSON.DB2PROC(
    INOUT PGMTMPL BLOB(15M)) 
    LANGUAGE C NOT DETERMINISTIC 
    MODIFIES SQL DATA 
    EXTERNAL NAME 'DB2JSON/DB2PROC(iCall400)' 
    PARAMETER STYLE GENERAL;
    

    db2sock/toolkit/cgi and db2sock/toolkit/fastcgi (both Apache and nginx)

    1) REST -- Output buffer size is essentially infinite using the REST interface (Apache+fastcgi+db2sock, or nginx+db2sock). That is, there is no limit on REST return bytes beyond the administrator settings of the web server (also time limits).

    db2sock/db2/PaseCliAsync.h (the CLI API to json toolkit)

    2) PASE db2 (libdb400.a) -- Output buffer size is essentially infinite using SQL400Json. It is completely up to the language toolkit provider to establish "how big" or "infinite".

    == traditional cli style interface (synchronous) ==
    
    SQLRETURN SQL400Json( SQLHDBC hdbc,
                          SQLCHAR * injson,  SQLINTEGER inlen,
                          SQLCHAR * outjson, SQLINTEGER outlen );
    
    == NEW non-traditional async interfaces (for the likes of node) ==
    
    typedef struct SQL400JsonStruct {
      SQLRETURN  sqlrc;
      SQLHDBC    hdbc;
      SQLCHAR  * injson;  SQLINTEGER inlen;
      SQLCHAR  * outjson; SQLINTEGER outlen;
      void     * callback;
    } SQL400JsonStruct;
    
    == join async (async for languages like php, python, etc.) ==
    
    SQL400JsonStruct * SQL400JsonJoin( pthread_t tid, SQLINTEGER flag );
    
    == callback async (async for languages like node) ==
    
    pthread_t SQL400JsonAsync( SQLHDBC hdbc,
                               SQLCHAR * injson,  SQLINTEGER inlen,
                               SQLCHAR * outjson, SQLINTEGER outlen,
                               void * callback );
    

    Note (hdbc): A default connection/hdbc is provided whenever a toolkit interface does not pass an hdbc (hdbc==0). This allows connection pooling to occur 'anywhere'. Aka, use the built-in connection pooling via json only; most everyone is likely to use this option. Or, you may build your own connection pooling, passing the hdbc connection from your own c programs. (This is all about flexibility, limited only by your imagination building your toolkit.)

    Stored procedure convenience for local/remote Pase or ILE (db2sock/toolkit/procj)

    2.1) DB2 result set (local/remote) -- Output buffer size is essentially infinite for db2 result set returns (DB2PROCJR - normal and DB2PROCJH - hex).

    2.2) DB2 i/o parm (local/remote) -- Output buffer size is 15M for db2 i/o parm returns (DB2PROCJ).

    CREATE PROCEDURE DB2JSON.DB2PROCJ(
    INOUT JSONBUF CLOB(15M)) 
    LANGUAGE C NOT DETERMINISTIC 
    MODIFIES SQL DATA 
    EXTERNAL NAME 'DB2JSON/DB2PROCJ(iJson400)' 
    PARAMETER STYLE GENERAL;
    
    CREATE PROCEDURE DB2JSON.DB2PROCJR(
    IN JSONBUF CLOB(15M)) 
    LANGUAGE RPGLE NOT DETERMINISTIC 
    MODIFIES SQL DATA 
    Result Sets 1 
    EXTERNAL NAME 'DB2JSON/DB2PROCJR(iJson400R)' 
    PARAMETER STYLE GENERAL;
    
    CREATE PROCEDURE DB2JSON.DB2PROCJH(
    IN JSONBUF CLOB(15M)) 
    LANGUAGE RPGLE NOT DETERMINISTIC 
    MODIFIES SQL DATA 
    Result Sets 1 
    EXTERNAL NAME 'DB2JSON/DB2PROCJR(iJson400H)' 
    PARAMETER STYLE GENERAL;
    

    Anything you like ... get creative (you)

    There really is no limit to buffer size. In fact, there is no limit to the transports of the json requests. You could choose to write an ssl socket of your own. Maybe make up a queue interface using IBM i. On and on and on ... you are only limited by your own imagination.

  4. Brian Jerome

    @rangercairns Thanks -- I think your comments helped. I've run a couple preliminary tests with Option (2) and the buffer size will be a non-issue.

  5. Steven Scott reporter

    I'm hoping to have something published by the end of next week. The interface I've come up with initially is along these lines (from one of my tests, written in TypeScript):

    const tk = new Toolkit('http://x.x.x.x/db2json.db2');
    
    let pgm = tk.program('HELLO', 'DB2JSON');
    pgm.addParam('char', '128a', 'Hi there');
    
    tk.add(pgm);
    tk.run((result: any) => {
        let data = result['script'][0]['pgm'][2];
        expect(JSON.stringify(data)).to.equal('{"char":"Hello World"}');
        done();
    });
    

    This is just the initial implementation; once posted, feedback and contributions will definitely be welcome. My initial thought was to avoid passing in plain objects and aim for more of a functional approach to adding parameters. At the moment I don't have any response processing being done, which is why the result value is accessed as a plain object.

  6. Steven Scott reporter

    Hey guys,

    Just wanted to give a heads up that I've published the current work in progress code for the toolkit I've been working on. You can find it at this repo: https://bitbucket.org/freschesolutions/db2sock-itoolkit/overview

    Feedback and pull requests are welcome. Development may be a bit on the slow side due to limited time I can commit to it, but I will be working on improving it and supporting more things.

  7. Brian Jerome

    @fresche_sscott I took a look at your implementation, and it looks like a good start. Though, my back-end isn't using a REST API, so it would add an unnecessary HTTP call if I used it. I'm not too familiar with DB2, unfortunately -- what exactly is the db2json.db2 file to which the remote URL is pointing?

  8. Former user Account Deleted

    Abstract json "transport" away from the "toolkit" call (different classes)

    Guys, you may already be doing this (I have not read the code), but an unsolicited suggestion ...

    Speaking with Alan Seiden today (php toolkit): one best suggestion/practice is to abstract the json 'transport' (http, async db2 SQL400JsonAsync, stored proc DB2PROCJ, ssl, web socket, etc.) away from the 'toolkit' proper (call pgm, cmd, qsh, etc.). That is, maintain very loose class coupling between the "new toolkit class" and the "new transport class". Both Alan and I regret that the php toolkit was DB2-centric ('connection'), which led to unnatural "connection" interfaces to support new transports like REST (http).
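
    A minimal TypeScript sketch of that loose coupling (illustrative names only, not an existing API):

    // The toolkit only composes and parses json; a transport is anything
    // that can move a json string (REST, SQL400JsonAsync, DB2PROCJ, ...).
    interface Transport {
        send(json: string): Promise<string>; // json request in, json response out
    }

    class Toolkit {
        constructor(private transport: Transport) {}
        run(request: object): Promise<any> {
            return this.transport.send(JSON.stringify(request)).then(JSON.parse);
        }
    }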

    BTW -- For a node toolkit, you really need a change to the db2a driver to call SQL400JsonAsync(callback) to fix many async problems in the db2 driver/toolkit. This may be a total re-write of node db2 (c code). As a 'temporary' measure, you can use a non-async API like db2json.db2procj (or db2procjr) for a 'test' toolkit. I believe the db2sock toolkit json proper should stay the same, but new things added per demand (... i hope).

  9. Former user Account Deleted

    nodejs-idb-connector / src / db2ia /

    Yes.

    today db2a (hack async) ...

    Today async is done using uv_queue_work for 'fine grain' ODBC/CLI APIs like PrepareAsync, ExecuteAsync, FetchAllAsync. You use a worker thread, where uv_queue_work takes a while to 'acquire' (relatively long admin set-up/teardown), and you end up back in the node interpreter many times to do a 'query'. Essentially, each uv_queue_work request is "emulating" CLI async APIs, because the old PASE libdb400.a has no async APIs (unlike db2sock).

    db2sock better db2a (maybe rewrite) ...

    A better way: db2sock supports real async CLI/ODBC APIs. In fact, we are building better 'wide grain' APIs that will connect, prepare, and fetch all 'async' in one single API call (SQL400DoALLQueryFetchAsync -- not written yet). Thereby, the db2a code can simply throw out all the 'fine grain' PrepareAsync, ExecuteAsync, FetchAllAsync code (monkey business) and run one call to SQL400DoALLQueryFetchAsync.

    For example (new toolkit CLI API) ...

    In the case of the db2sock example toolkit, the new CLI API SQL400JsonAsync runs the entire toolkit call with json in/out. There is no need to mess with a bunch of uv_queue_work requests. Aka, rewrite db2a to enable a simple API: ToolkitJsonAsync(jsonin, callback).

    Does db2sock 'async' work?? ... yes ... but not all tested (or written) ...

    I implemented a db2sock 'async' example in the php ibm_db2 driver. Not callback (node will use callback), but join/reap 'async', as PHP is not really threaded (db2_toolkit_json_async / db2_toolkit_json_reap). It all works with one call to SQL400JsonAsync and SQL400JsonJoin.

    • Yips ibm_db2 download - trial php7 ibm_db2 version that works with this db2sock project. Full ibm_db2 source is included; just copy it to modify node db2a, but use callback instead of 'join'.
    -- sync (normal) --
    <?php
    require_once('connection.inc');
    $jsonin = '{"pgm":[{"name":"HELLOSRV", "lib":"DB2JSON", "func":"HELLO"},
            {"s":{"name":"char", "type":"128a", "value":"Hi there"}}
           ]}';
    $conn1 = db2_connect($database, $user, $password);
    $jsonout = db2_toolkit_json( $conn1, $jsonin);
    var_dump($jsonout);
    ?>
    
    -- or async (new) --
    
    <?php
    require_once('connection.inc');
    $jsonin = '{"pgm":[{"name":"HELLOSRV", "lib":"DB2JSON", "func":"HELLO"},
            {"s":{"name":"char", "type":"128a", "value":"Hi there"}}
           ]}';
    $conn1 = db2_connect($database, $user, $password);
    echo getmypid() . " about to call db2_toolkit_json_async ...\n";
    $stmt = db2_toolkit_json_async( $conn1, $jsonin );
    do {
      echo getmypid() . " read paper, walk dog, get rss feeds, play stock market\n";
    } while (!db2_toolkit_ready( $stmt ));
    $jsonout = db2_toolkit_json_reap( $stmt );
    var_dump($jsonout);
    ?>
    

    the talk ...

    You see, I suggest we need to stop thinking like previous-century ODBCosaurus/CLIosaurus and let the new db2sock driver do all the 'async' (thread) and callback work. Aka, work with this db2sock Open Source project and save huge time and effort (and run securely with locks).

    Remote? No. The LUW driver will not implement SQL400JsonAsync. However, db2json.db2procj/r is shipped with db2sock, so we will still allow/enable the alternative remote-call ODBCosaurus/CLIosaurus (was that a meteor i just saw?). Aka, flexible alternative remote is exactly the reason we recommend design separation of toolkit classes (call pgm, cmd, qsh, etc.) and transport classes (db2sock, db2procj, REST, etc.).

  10. Steven Scott reporter

    @bjerome The db2json.db2 "file" is actually just a FastCGI interface that db2sock implements, which allows RESTful access to the async DB2 driver. It's implemented and documented in the "fastcgi" folder: https://bitbucket.org/litmis/db2sock/src/master/toolkit/fastcgi/?at=master

    I initially implemented the REST interface because it made development easier (and I don't believe we currently have an async DB2 module that uses db2sock). I implemented the "provider" as an interface, though, so adding a secondary provider that uses a direct Node DB2 module (when a proper async one that uses db2sock is implemented) should be pretty easy (so long as the expected JSON in and returned JSON out match what the REST interface provides).
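
    For the curious, a rough sketch of what such a REST provider boils down to, using Node's http module (the exact request format db2json.db2 expects should be checked against the fastcgi README):

    import * as http from 'http';

    // Sketch only: POST toolkit json to the db2json.db2 endpoint, collect the reply.
    // Uses the Node 10.9+ http.request(url, options, cb) signature.
    function restProvider(url: string, json: string): Promise<string> {
        return new Promise((resolve, reject) => {
            const req = http.request(url, { method: 'POST' }, (res) => {
                let body = '';
                res.on('data', (chunk) => (body += chunk));
                res.on('end', () => resolve(body));
            });
            req.on('error', reject);
            req.end(json);
        });
    }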

  11. Former user Account Deleted

    ... should be pretty easy (so long as the expected JSON in and returned JSON out match what the REST interface provides)

    I agree. Ultimately, there is no json difference between SQL400JsonAsync vs. REST vs. remote db2procj/db2procjr or any other json 'transport' (assuming ILE ccsid does not bite us too hard).

    In fact, feel free to use the stored procs db2procj (i/o parm) and db2procjr (result set). These are designed for remote access, but they are not 'async'. However, you could build your whole node toolkit db2 transport from a laptop. Later, when db2a is ready (re-write), add the local IBM i call to SQL400JsonAsync. However, warning, be careful: the evolution of db2a to 'async' has been bumpy. That is, if your toolkit becomes dependent on 'sync' db2procj/db2procjr behaviour ... well ... you may get to do it all again (re-write).

  12. Steven Scott reporter

    That is, if your toolkit becomes dependent on 'sync' db2procj/db2procjr behaviour ... well ... you may get to do it all again (re-write).

    That's another reason I'm starting with the REST interface: I can use it in an asynchronous manner from the start, and (hopefully) won't accidentally implement things that end up breaking when used with the asynchronous DB2 node module. All of my work so far has actually been developed and tested on a Windows desktop, with only db2sock and the REST interface being set up on the IBM i.

  13. Former user Account Deleted

    BTW -- Jesse G., Rochester IBM i Open Source Architect, opened an issue on litmis/nodejs-idb-connector about conformance to the N-API interface (node db2a driver). This 're-write' may be a good place to start adding other db2sock 'real async' fast calls like toolkit SQL400JsonAsync. We may not have to wait too long until we can try out the whole idea, including the new toolkit, to tune up the performance.

  14. Brian Jerome

    @fresche_sscott Thanks for the explanation.

    If we have Node v6.12.2 running on our IBM i, then the N-API interface wouldn't be supported, correct? According to this community request, Node 8.x isn't released yet but is planned for some future release.

  15. Former user Account Deleted

    node experiment: db2 driver radically different ...

    First, this is all open source, so there is no reason to hide these lab experiments ...

    So, folks, inside the lab we plan on experimenting with completely replacing the db2a driver with the toolkit's built-in db2 calls. We even have a guy signed up to give it a try (our new co-op). That is, the theory: in a high percentage of node script use cases, any db2 operation in node is simply converted to json (JSON.stringify). Why do we have a db2a middleman???

    stmt.execSync(sql, function(result) {
        console.log("Result: %s", JSON.stringify(result));
    });
    

    The experiment proposes that nearly 90% of node db2 operations follow the same pattern of connect, prepare, execute (w/parms), fetch, followed by JSON.stringify on each row. Therefore, in theory, we could simply use the toolkit's built-in json in/out interface with one 'async' call to handle the vast majority of use cases (SQL400JsonAsync in db2sock); see the sketch after the json below.

    $ cat j0601_query_qcustcdt.json
    {"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
            {"parm":[{"value":"Jones"},{"value":"Vine"}]},
            {"fetch":[{"rec":"all"}]}
           ]}
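
    Sketched in TypeScript terms, using the experimental dbconn.toolkit call that shows up later in this thread (treat the API name as illustrative):

    // One async call replaces hand-coded connect/prepare/execute/fetch.
    declare const dbconn: { toolkit(json: string, cb: (result: string) => void): void };

    const request = {
        query: [
            { stmt: 'select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?' },
            { parm: [{ value: 'Jones' }, { value: 'Vine' }] },
            { fetch: [{ rec: 'all' }] }
        ]
    };
    dbconn.toolkit(JSON.stringify(request), (result) => {
        console.log(result); // rows come back already json-encoded
    });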
    

    Of course, we have to test that performance is up to par. We think it may be ok because all the same ODBC/CLI operations you do by hand in your script are handled in one 'async' call (no mess, no fuss).

    fetch pagination (fetch all, fetch 1, fetch 2, fetch 7) ...

    Of course, it would be a disaster if the json result set was thousands of records. So, yes, we will handle fetch block pagination. That is, you can fetch a little, a lot, or all. The 'statement' handle is returned on any json prepare/execute. The php sample below (a running sample) shows how the handle can be used to fetch-paginate (fetch multiple json blocks). The result set statement will close on the last record fetched. Or ... you may close the result set early by handle.

    ALL connection pooling and locking to avoid 'async' overrun of QSQSRVR jobs is built into the db2sock driver. In the sample below the default user profile is used with "qual":"fred"; therefore, the connection pool element/handle associated with "fred" will handle all the json requests. Important: it will also serialize the requests (mutex lock), to avoid overrunning the QSQSRVR job handling your result set.

    BTW -- People don't understand that db2 operations are not all thread safe, so many times the current db2a 'async' interfaces appear to be broken via 'missing records', 'bad data', etc. (... nope, it's your node script and its lack of connection/statement pooling). Anyway, the db2sock 'async' interface takes care of the whole mess for you. You just write toolkit(ish) code.

    function myjson1() {
      $clob =
      '{"connect":[
         {"qual":"fred"},
         {"query":[{"stmt":"select * from QIWS/QCUSTCDT"}, 
                   {"fetch":[{"rec":2}]},
                   {"fetch":[{"rec":2}]}
                  ]}
       ]}';
      return $clob;
    }
    
    function myjson2($handle) {
      $clob =
      '{"connect":[
         {"qual":"fred"},
         {"query":[{"handle":'.$handle.'},
                   {"fetch":[{"rec":"all"}]}
                  ]}
       ]}';
      return $clob;
    }
    
    function myjson3($handle) {
      $clob =
      '{"connect":[
         {"qual":"fred"},
         {"close":[{"handle":'.$handle.'}]}
       ]}';
      return $clob;
    }
    

    BUG -- I need to add RECORD NOT FOUND to the back of every last record, so people do not have to worry about closing on the simple "fetch":"all" case, and other end-of-records use cases.

    Just want you to know what we are thinking here in the lab.

  16. Former user Account Deleted

    Remote SQL400JsonAsync ... not following the pack

    All is not lost to those that see the glass half empty ...

    So, we all know that Linux, Unix, Windows (LUW) will never implement a non-standard API like SQL400JsonAsync. Of course! You can never lead, only follow.

    Option 1 (LUW npm, whatever):

    However, we can build a simple binary wrapper and call all the same ODBC/CLI functions that make up SQL400JsonAsync. In fact, the source code is right here in the db2sock project. Aka, we just have to add new makefile(s) for LUW and take the db2sock chunks we need to get the whole thing running on Linux, Unix, Windows.

    For you technical geeks, we simply need to 'async'/thread-wrapper a normal ODBC stored procedure call to the existing db2json.db2procj (parm i/o), or db2json.db2procjr (result set), and ... bingo ... instant SQL400JsonAsync running on Linux, Unix, Windows.

    Option 2 (pure node , whatever):

    I suspect there are 'async' work APIs built right into node (and other languages). So, you could create your own SQL400JsonAsync. That is, simply use the script language's 'async work' framework, then call the normal DB2 ODBC driver APIs (client access ODBC, DB2 Connect, etc.) the same way SQL400JsonAsync does (see the db2sock code).

    Note: The technical mechanics are exactly the same as the c code version ... we simply need to 'async'/thread-wrapper/framework a normal ODBC stored procedure call to the existing db2json.db2procj (parm i/o), or db2json.db2procjr (result set), and ... bingo ... instant SQL400JsonAsync running on Linux, Unix, Windows.

    Aka, for those not clear on the simple task: you can already make a pure node (or other language) SQL400JsonAsync by using any 'async work' framework as described above, calling ODBC. Right now (today); see the sketch below.
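
    As a concrete illustration of Option 2, a sketch using the LUW `odbc` npm package to call the shipped DB2JSON.DB2PROCJ stored procedure. The connection string is a placeholder, and the exact shape of the returned output parm should be verified against the odbc package docs:

    import * as odbc from 'odbc';

    // Sketch only: a 'pure node' stand-in for SQL400JsonAsync. DB2PROCJ has
    // one INOUT CLOB parm: json request in, json response out.
    async function jsonCall(jsonIn: string): Promise<unknown> {
        const conn = await odbc.connect('DSN=MYIBMI;UID=MYUSER;PWD=MYPWD');
        try {
            const result = await conn.callProcedure(null, 'DB2JSON', 'DB2PROCJ', [jsonIn]);
            return result.parameters; // updated INOUT parm(s) -- verify against odbc docs
        } finally {
            await conn.close();
        }
    }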

    We are programmers. Our world ends only at imagination.

  17. Former user Account Deleted

    Ok, I built a new node6 db2ia.node binary for db2sock SQL400Json(Async). See the new db2a functions toolkitSync(json) and toolkit(json, callback) below. You can find the full c code source and binary on Yips below (if you want to try it).

    • Yips db2sock and db2a - test driver -- replace db2ia.node and use with db2sock (full c code source is included for geeks like me who want to build their own node6 db2 driver)

    Install.

    ** Install
    1) Must use with db2sock (follow install links)
    ... then ...
    2) unzip everything db2sock_toolkit(n).zip
    3) save the original node6 db2ia.node
    > cp /QOpenSys/QIBM/ProdData/OPS/Node6/./os400/db2i/bin/db2ia.node /QOpenSys/QIBM/ProdData/OPS/Node6/./os400/db2i/bin/db2ia.node-save
    4) copy the new db2ia.node
    > cp db2ia.node /QOpenSys/QIBM/ProdData/OPS/Node6/./os400/db2i/bin/db2ia.node
    

    Example run.

    bash-4.3$ node --version
    v6.9.1
    
    == sync json toolkit call ===
    
    bash-4.3$ cat toolkitsync.js 
    var db = require('/QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/lib/db2a')
    var dbconn = new db.dbconn();  // Create a connection object.
    dbconn.conn("*LOCAL");  // Connect to a database.
    json = '{"pgm":[{"name":"HELLOSRV", "lib":"DB2JSON", "func":"HELLO"},{"s":{"name":"char", "type":"128a", "value":"Hi there"}}]}';
    console.log(dbconn.toolkitSync(json));
    bash-4.3$ node toolkitsync.js 
    {"script":[{"pgm":["HELLOSRV","DB2JSON","HELLO",{"char":"Hello World"}]}]}
    
    == async json toolkit call ==
    
    bash-4.3$ cat toolkitasync.js 
    var db = require('/QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/lib/db2a')
    var dbconn = new db.dbconn();  // Create a connection object.
    dbconn.conn("*LOCAL");  // Connect to a database.
    json = '{"pgm":[{"name":"HELLOSRV", "lib":"DB2JSON", "func":"HELLO"},{"s":{"name":"char", "type":"128a", "value":"Hi there"}}]}';
    dbconn.toolkit(json,
      function (result) {
        console.log(result);
      }
    );
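    // keep the node event loop alive long enough for the async callback to fire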
    setTimeout(function() {
    }, 800);
    
    
    bash-4.3$ node toolkitasync.js 
    {"script":[{"pgm":["HELLOSRV","DB2JSON","HELLO",{"char":"Hello World"}]}]}
    bash-4.3$
    

    BTW -- I just used basic technology already in node db2a. Someday we will need to replace it with something better and faster (i think). Maybe that will happen with the Jesse G. initiative (it also gives our co-op something to copy for the toolkit call to SQL400Json(Async) ... just to help out).

  18. Danny Roessner

    @rangercairns This looks great! Very clean and simple. Just wanted to comment and say that I appreciate all the work you've been doing with this. Will hopefully get a chance to thoroughly test this over the next couple weeks, but this already looks very promising.

  19. Brian Jerome

    @rangercairns It looks like our OS 7.1 (AIX6.1) is missing some modules (like libstdc++.a & libc.a) when I try using that new db2ia.node. Also using Node v6.12.2. What versions are you using?


  20. Jesse G

    @bjerome, I'm not sure, but can you try doing a 'yum install libstdcplusplus6'? That object naming notation implies it was built with our new (beta) set of shared libraries. Again, not sure, but it's worth a try.

  21. Brian Jerome

    @ThePrez Thanks for looking into this and getting back so quickly! We're trying on another OS version at the moment so I will try this out later. :-)

  22. Former user Account Deleted

    Trial db2sock, db2a and libstdc++ ... toolkit

    I compile my test db2a with gcc, so you need the gcc runtime. I posted a zip to Yips for download.

    Yips Super Driver - test w/db2sock - db2ia.node, (libgcc_s.a libstdc++.a).

    db2sock_toolkit1.zip - yips db2 module for db2sock (w/ toolkit):
    > cd /QOpenSys/QIBM/ProdData/OPS/Node6/./os400/db2i/bin/
    > cp db2ia.node db2ia.node-save
    > cp /path/new/db2ia.node db2ia.node
    
    yips libstdc++.zip - additional gcc binaries
    /opt/freeware/lib/libgcc_s.a
    /opt/freeware/lib/libstdc++.a
    
  23. Brian Jerome

    @rangercairns We have reinstalled all the open source tools. The gcc binaries are now in /opt/freeware/lib but it's saying there is a format error.

    -bash-4.3$ node toolkitasync.js
    module.js:597
      return process.dlopen(module, path._makeLong(filename));
                     ^
    
    Error: Could not load module /QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/bin/db2ia.node.
            Dependent module /opt/freeware/lib/libstdc++.a(libstdc++.so.6) could not be loaded.
            File /opt/freeware/lib/libstdc++.a is not an
              archive or the file could not be read properly.
    System error: Exec format error
    Could not load module /QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/bin/db2ia.node.
            Dependent module /QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/bin/db2ia.node could not be loaded.
        at Error (native)
        at Object.Module._extensions..node (module.js:597:18)
        at Module.load (module.js:487:32)
        at tryModuleLoad (module.js:446:12)
        at Function.Module._load (module.js:438:3)
        at Module.require (module.js:497:17)
        at require (internal/module.js:20:19)
        at Object.<anonymous> (/QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/lib/db2a.js:5:18)
        at Module._compile (module.js:570:32)
        at Object.Module._extensions..js (module.js:579:10)
    -bash-4.3$ 
    
  24. Former user Account Deleted

    File /opt/freeware/lib/libstdc++.a is not an archive or the file could not be read properly

    This nearly always means you did not FTP the files in binary mode. Aka, libstdc++.a was not FTP'd in binary to your IBM i.

  25. Brian Jerome

    @rangercairns Thanks Tony. Something must be corrupt on that partition running AIX 6.1. We installed the tools on a partition running AIX 7.1 instead and it's working now.

  26. Brian Jerome

    @rangercairns Do you have a reference for the correct way to load libstdc++ onto the IBM i for OS v7.2 or 7.3? It seems it was not included with the licensed program or PTFs.

  27. Jesse G

    @bjerome, 'yum install libstdcplusplus' will be the "correct way." (of course in beta currently) You may need to install per Tony's instructions from YiPs, depending on how the binary was built.

  28. Former user Account Deleted

    ... /opt/freeware/lib/libstdc++.a is not an archive or the file could not be read properly.

    Repeat, before losing the forest for the trees (below): this indicates your libstdc++.a is corrupted. Aka, most often, it was not FTP'd in binary to IBM i.

    ... licensed program or PTFs ... yum install libstdcplusplus ...

    To be clear, only 'test drivers' are placed on Yips. They are not PTFs or officially released OPS yum-installed software via RPMs (beta).

    In the case of node db2a, following node development convention for 'extensions', the wizard developers used C++ (g++). Nothing wrong per se with C++. However, C++ creates a strong (unbreakably strong) relationship between the matching compiler (g++) and runtime (libstdc++.a). Basically, you get OO with C++ (name-mangled method names, etc.), but you are locked into the matching runtime, and maybe even version dependent (made an offer you can't refuse).

    You may need to install per Tony's instructions from YiPs, depending on how the binary was built.

    I am using g++/gcc 4.8.3 and related version binaries like libstdc++.a. You may see your version of gcc, etc. like below.

    bash-4.3$ ls -l /opt/freeware/lib/*std*
    lrwxrwxrwx    1 db2sock  0                44 May 23 2017  /opt/freeware/lib/libstdc++.a -> gcc/powerpc-ibm-aix6.1.0.0/4.8.3/libstdc++.a
    
    
    bash-4.3$ g++ --version
    g++ (GCC) 4.8.3
    Copyright (C) 2013 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    bash-4.3$ gcc --version
    gcc (GCC) 4.8.3
    Copyright (C) 2013 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    

    Someday, when yum is top-shelf PASE on IBM i, all this will be in the past. Go Kevin go (IBM)!!!

    Thanks for testing in today's world of PASE.

  29. Brian Jerome

    @rangercairns I was able to test the toolkitasync.js and it looks pretty good. Are there more plans with this? Would updates just consist of replacing the db2ia.node?

    The /QOpenSys/usr/lib/libdb400.a in this project is modified to work with this, so litmis/nodejs-idb-connector will no longer work (it also uses libdb400.a). I have to keep switching between the two libdb400.a files for my dev projects (still testing both, in development of course). I'd like to keep using this one instead.

  30. Former user Account Deleted

    I was able to test the toolkitasync.js and it looks pretty good.

    First, clarity for other readers: you are using my personally compiled 'experimental' async node db2ia test driver from the Yips new Super Driver page (below). Yes, at present, this new Yips experimental node db2 driver only works with the new db2sock (new libdb400.a).

    • Super Driver - test driver - experimental async node db2ia at the bottom of the page.

    Are there more plans with this?

    I am not in charge of litmis/nodejs-idb-connector decisions. Maybe ask Jesse G., IBM Rochester Open Source architect, about 'plans'.

    Would updates just consist of replacing the db2ia.node?

    Maybe not 'replace' per se. That is, it would be easy enough to modify my personal db2ia source copy to run on either the old libdb400.a (PASE shipped) and/or the new db2sock libdb400.a (yips download). Simply use the PASE dynamic load functions to check whether 'true' async is available (aka, the SQL400JsonAsync function is present).

    "If you were waiting for the opportune moment, that was it" (Jack Sparrow, Pirates of Caribbean).

  31. Aaron Bartell

    I am not in charge of litmis/nodejs-idb-connector decisions. Maybe ask Jesse G., IBM Rochester Open Source architect, about 'plans'.

    The efforts of db2sock are being watched closely (with excitement) by a variety of the repo core-committers. We need to have it out of beta mode before we can start replacing things (I have a number of customers with Node.js in production).

    Might need to consider using OO interface concepts to allow for implementation selection, like we did for ruby-itoolkit; a rough sketch follows below. We have three Node.js DB2 implementations at play: the current nodejs-idb-connector, the nodejs-idb-pconnector (n1), and eventually nodejs-idb-db2sock.

    n1 - same as nodejs-idb-connector except it uses Promises and async/await.

    I've been waiting for the API interfaces to settle down before I dove into this db2sock project to see if doing OO interfaces is even a reality, based on whether each interface is sufficiently similar.
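
    A tiny sketch of the idea (hypothetical names, stub bodies):

    // One common interface; a factory picks the backing implementation.
    interface Connector {
        execute(sql: string, params?: any[]): Promise<any[]>;
    }

    // Stubs standing in for wrappers over the three real packages:
    class IdbConnector implements Connector { async execute() { return []; } }
    class PConnector implements Connector { async execute() { return []; } }
    class Db2sockConnector implements Connector { async execute() { return []; } }

    function getConnector(impl: 'idb' | 'pconnector' | 'db2sock'): Connector {
        switch (impl) {
            case 'idb':        return new IdbConnector();
            case 'pconnector': return new PConnector();
            case 'db2sock':    return new Db2sockConnector();
        }
    }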

    Thoughts?

  32. Former user Account Deleted

    Thoughts?

    None helpful from me. In fact, you froze me to inaction. “Nobody move! I dropped me brain!” – Jack Sparrow (Pirates of Caribbean)

  33. Aaron Bartell

    In fact, you froze me to inaction.

    Ooops. Didn't mean to do that. Carry on. I will put something together (eventually) to test the idea.

  34. Danny Roessner

    I will put something together (eventually) to test the idea.

    Yes please. We are very excited about these tools and are looking to get something into production in the very near future. Development tests are looking better and better, just need to iron out these connector decisions.

  35. Danny Roessner

    @aaronbartell Any updates on this? We're currently using nodejs-idb-pconnector with db2sock remote to make RPG calls. Would be nice to have the option for native for better performance.

  36. Aaron Bartell

    @droessner

    Any updates on this? We're currently using nodejs-idb-pconnector with db2sock remote to make RPG calls. Would be nice to have the option for native for better performance.

    I haven't gotten to it yet. Hoping a customer requests it so I can move it up in the priority list.

  37. Brian Jerome

    @aaronbartell Curious as to what it would take for just @droessner and I to request a higher priority :-)

  38. Aaron Bartell

    At PowerUp18 I had the honor of doing one of @ThePrez's presentations, and in it was foreshadowing of things to come; namely, an ODBC driver that runs on IBM i that all PASE langs would be able to make use of. ODBC has been around a long time and is very stable/performant. This would essentially allow us to use the LUW odbc npm on IBM i and could potentially negate the need for db2sock (n1). And then a Node.js iToolkit interface would eventually be written over the ODBC driver APIs (it could really just be a separate data provider like @fresche_sscott authored in his project).

    n1 - I hesitate to say this because there are things Tony accomplished with db2sock that most likely still hold relevance to certain projects.

  39. Danny Roessner

    @ThePrez Thank you for clarifying. @aaronbartell Our main interest in db2sock is being able to make RPG calls using JSON (via node.js). It would be nice to have a native implementation for this (rather than a remote procedure call).

  40. Jesse G

    @droessner, an ODBC transport would get you relatively close to what you seek. An in-process call (I assume that's what you mean by "native" in this context) has numerous risks spanning security and stability. I would recommend an ODBC transport instead, to give tasks isolation. I would expect performance to remain reasonable. If you would like to perform some performance testing with ODBC, I'd like to work with you.

  41. Danny Roessner

    @ThePrez The remote implementations we have tried so far have not been great in terms of performance (especially for RPG programs with a large footprint), so we are looking for an option that gets us better performance. (We are using IWS in production now and using that as a comparison.) I'm not too familiar with an ODBC transport. Is it something that is production ready on IBM i, or at least close to it?

  42. Aaron Bartell

    @droessner Have you considered putting a stored procedure directly over the RPG vs. going through XMLSERVICE? Both calls end up going through the database, but a direct call to the stored procedure will have significantly less overhead (no XML, no dynamic call composition, etc).

  43. Danny Roessner

    @aaronbartell I had not considered that. This is mostly calling legacy RPG that has multiple inputs (including array data structures) as well as multiple outputs (also with array data structures). Is a stored procedure an option for that?

  44. Brian Jerome

    @aaronbartell It seems using a data structure as a host variable is not possible (according to this page). Would this even be the right approach for what we need?

    We'd like to avoid mapping the program params (as CLOB xml or CHAR json), since it could get complicated for the legacy stuff we need to support (all the issues db2sock has had with it too). I'm unsure of the best approach for passing array DS as IN/OUT/INOUT like @droessner mentioned earlier. Have you given it more thought?
