Get Involved

Issue #2 resolved
Chris Hird created an issue

Would like to contribute

Comments (108)

  1. Former user Account Deleted

    Thanks Chris. We have many ambitious goals for the new PASE super driver. You will be an excellent candidate to help out. I promise we will not take up all your valuable time. We really could use another pair of eyes on the toolkit JSON functions, performance, etc.

  2. Former user Account Deleted

    Practical matters ...

    I have only minimal JSON code in this driver at the moment. I will be returning to this driver next week to add much more on the 'toolkit' JSON side (you volunteered). I will post an append to this issue to notify you when the new JSON is ready for a look (*).

    (*) Of course, you are welcome to try things out any time. I do not want to waste your time when much is not yet available in the driver.

    Thanks again.

  3. Former user Account Deleted

    Well, observationally speaking, you are most comfortable with PHP. I believe we want your most comfortable expertise language. I think any version of PHP on IBM i will be just fine. Also, any web server(s) you choose are fine (the more the better). However, I need you to take the following action with your IBM i version of PHP.

    Action: I need to know which version of PHP you are running on IBM i so that I may build a DB2 PECL driver to match (> php -v).

    Goal: I believe much of our experiment interaction will be about performance. Great! To this end, I suggest we are best served by running the highest possible speed transport. This implies a DB2 driver transport like ibm_db2 using PHP-syntax-rendered JSON over the new 'semi-architecture' SQL400Json API (new in this driver).

    Personally speaking: Should be interesting. I have been working on multiple ideas for the SQL400Json API, including a 'run here' inside the PHP job (screaming fast), as well as a traditional QSQSRVR job (not too bad, in Norwegian speak), REST, and, of course, async/callback (PHP poll).

    When: I will work on the JSON stuff next week. I will post an append when it is ready for testing. Thanks for your help.

    Summary: Looking forward to hearing your observations. Who knows, maybe we can bring your PHP work back onto IBM i instead of Linux.

    Optional read:

    Note 1 - The 'slow REST' version of the db2 super driver libdb400.a will work with any language 'as is' (see the php tests git source). Also any web server with a small wrapper (see RPG CGI - yes, ILE, but the same JSON wrapper works for any web server). In fact, you may want to try the REST interface from your Linux machine to see how the REST interface performs. Here again, your observations may prove fruitful in bettering the super driver.

    Note 2 - Unfortunately, the node db2 driver has some serious issues 'evolution' has not yet fixed. The only way it runs well today is by stalling the node interpreter via the 'Sync' interfaces. Not really a good candidate for this experiment. Please don't ask for details; most people barely understand node, and this driver's 'shortcoming' list is very long (but we may fix 'a lot' with this super driver).

  4. Former user Account Deleted

    Chris, I have to ask that you please refrain from bringing vendor products and vendor names into this discussion. Discussion along this line will result in an immediate end to our experiment.

  5. Chris Hird reporter

    Tony

    OK, I was just giving you my position; no offence meant. I will delete the entries with any mention of vendors. Sorry this turned out this way; it was not my intention to cause a problem or swing favor with any vendor.

  6. Former user Account Deleted

    Cool. We must protect the reputation of all good vendors. Thanks for your help.

    I will post an entry here when i have more of 'semi-architecture' SQL400JSON API working (JSON in/out DB2 and toolkit).

    Optional read ...

    You will find I am not fettered by the past. This applies to project design.

    Radical simple design ...

    I think the new SQL400Json interface should have a built-in memory cache for performance.

    The argument case for cache (on-line sales) ...

    The vast majority of 'high volume' web pages are actually cached data. That is, 'high volume' on-line sales sites encourage you to scroll, click, and buy crazy fast, then at check-out you get a message 'item #3 - Pink Blender is no longer available'. Technically, we IBM i database people know this inexcusable 'last moment' offence against 'database truth' is due to caching. I would ask you to take a moment of unfettered thought about 'how to make money on the web'. I suspect you will come to the conclusion that an 'unlikely' cache miss about Pink Blenders is worth the risk of possible customer satisfaction 'fallout'. Mathematically speaking, a warehouse full of 2000 pink blenders offers a web page design where probability theory favours cached human scroll, click, buy performance over 'data truth'. Even more 'funny', when an unlikely cache miss event occurs, web users have been instinctively trained to re-select the order to include the 'red blender' instead (Pavlov's dogs, if you will allow).

    Most DB2 IBM i people have a great deal of difficulty accepting this 'web' truth. However, if you accept the argument, many, many types of cached data responses are perfectly fine.

    Specific ...

    1) SQL400Json will be used in the context of web workloads 95% of the time (cache is good).

    2) About 80-90% of DB2 queries are read only (cache is good).

    3) Various 'cache invalidate' mechanisms can range from a timestamp file, a number-of-seconds cache, a use-count cache, etc.

    Implementation ...

    Truly simple mechanism: save JSON output in memory, 'keyed', and send matching query data back to the client based on the 'cache invalidate' scheme.

    The above example is 'database'. The 'cache' mechanism very likely can also be used for toolkit calls. Again, the user acceptance trick will be controls for the 'invalidate cache scheme', 'mark not candidate', and so on.
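
    A rough sketch of the keyed-cache idea (PHP pseudo-illustration only; the function names here are hypothetical, and the real cache would live in the C driver, not in PHP):

    <?php
    // Hypothetical sketch: keyed memory cache with a 'number of seconds'
    // invalidate scheme. run_db2_json() is a stand-in for the real driver call.
    $cache = array(); // key => array('json' => output, 'time' => stored-at)

    function run_db2_json($request) {
      return '{"ok":true}'; // stand-in: the real call goes to the driver
    }

    function cached_json($request, $ttl, &$cache) {
      $key = md5($request); // JSON output saved in memory, 'keyed' on request text
      if (isset($cache[$key]) && (time() - $cache[$key]['time']) < $ttl) {
        return $cache[$key]['json']; // cache hit -- no trip to DB2
      }
      $json = run_db2_json($request); // cache miss -- real call
      $cache[$key] = array('json' => $json, 'time' => time());
      return $json;
    }

    // 30-second invalidate scheme; repeated identical requests ride the cache.
    echo cached_json('{"query":"select * from bobdata","fetch":"*ALL"}', 30, $cache);
    ?>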

    Ok ... just one 'radical thought among hundreds' ... welcome to my world, Chris. I look forward to your input.

  7. Aaron Bartell

    The above example is 'database'. The 'cache' mechanism very likely can also be used for toolkit calls. Again, the user acceptance trick will be controls for the 'invalidate cache scheme', 'mark not candidate', and so on.

    Just wanted to add my $0.02. I lived through a few years of the Hibernate (Java) ORM. It attempted to "help" in areas like lazy loading and caching, and it worked sometimes. For the times it didn't work, it was difficult to get it to not load from cache (no simple back doors). My hope is we'd make it very clear and easy how to not use the cache.

  8. Former user Account Deleted

    If I may, 'your vote' is exactly why I am doing the db2sock super driver as an Open Source project. Your vote is worth much more than 2 cents.

    3) Various 'cache invalidate' mechanisms can range from a timestamp file, a number-of-seconds cache, a use-count cache, etc. The above example is 'database'. The 'cache' mechanism very likely can also be used for toolkit calls. Again, the user acceptance trick will be controls for the 'invalidate cache scheme', 'mark not candidate', and so on ...

    Yes, SQL400Json 'cache controls' are key. I am with you 100%. The 'cache' is a feature to assist people with common web tasks, not to force people to accept cache for all things. Therein, #3 (above).

    Optional read ... Tony "Ranger" Cairns, who?

    If you look at the db2sock C code with honesty, you will see the C code is good (check out the fast trace -- near-art C code). Obviously, I do not need any help writing C code.

    However, most importantly, I abhor intellectual bullies; most precisely, any design like you mention that has 'no back door' (no controls for SQL400Json, in my terminology). Again, your vote, chat, etc., should help us avoid errors caused by 'no choice' APIs.

    BTW -- My humour can be singular. 'Grasshopper', etc. I mean no offence. Also, I have tried very hard to write more text to explain what I am thinking (longer), but I may slip back to terse in a spot where you need clarification. Please be assured, questions are not stupid. In fact, questions result in the best of projects.

  9. Former user Account Deleted

    Mmm ... Aaron ... I don't recall if any company considers ORM to be the finest product/architecture ever (cat's meow, bee's knees, 52 skidoo?). Anyway, please take care not to publicly call a company's baby 'ugly'. Aka, no vendor names or vendor products please. I really want us Open Source jockeys to stay far clear of libel monkey business. Thanks for your support.

  10. Former user Account Deleted

    Chris, per our JSON task, we need to find a common PHP version. I wish to provide a modified ibm_db2 driver. To that end, I believe you mentioned your IBM i PHP was 5.4.0? I don't have anything that old to match PHP versions.

    Can you find a more modern PHP? Perhaps PHP 7?

    Again, I do not care which vendor. I suspect 'performance' will be a key topic in this JSON activity. PHP 7 is the better performing version of the language (**).

    (**) If only PHP 5.4 is available on IBM i (yikes), I can try to get ibm_db2 to work on that expired version.

  11. Chris Hird reporter

    Tony

    I will work on getting a later version of PHP. I will need time, as I am unable to install the currently available Apache server for IBM i with a PHP version greater than mentioned. I need to think about how to do this (Dave Dressller did get PHP 7 working on IBM i, so I may work out how to build my own env using it).

    Chris...

  12. Former user Account Deleted

    PHP 7 ... great. Can I impose upon you to match my PHP version, 7.1.2?

    bash-4.3$ php -v
    PHP 7.1.2
    

    (*) The only reason I ask for 7.1.2 ... PHP added 'version defines in headers' to prevent mismatched compiled PECL extensions (ibm_db2, etc.). Yes. Argh! A true pain for 'programmers', but understandable for production administrators. Case in point: talk about a 'no back door' policy, the PECL version thing qualifies (Aaron's comment).

  13. Aaron Bartell

    Anyway, please take care not to publicly call a company's baby 'ugly'. Aka, no vendor names or vendor products please. I really want us Open Source jockeys to stay far clear of libel monkey business.

    The Hibernate ORM is an open source project with implementation flavors, not a vendor. I will steer clear of naming vendors.

    FWIW, I like it when you expound on your views and document the "why". I've learned a lot in the process.

  14. Former user Account Deleted

    Sneak peek (some already works, but not all of it is up in the git project) ...

    The following JSON gives the general idea. Basically, as with all JSON, the idea is to minimize text (JSON factoring).

    Standard ...

    The samples below follow a minimalist idea of positional parameters. Specifically, drop a parameter off the end and a default is assumed. This is a standard optional-parameter scheme: "parm":[p1,p2,...].

    Non-standard (hopefully) ...

    However, when a parameter's 'context' can be determined, missing parameters that precede it may be 'understood'. This is a non-standard idea. Example: "pconnect":["id"] is missing db, uid, pwd; therefore we know a single parameter means persistent connection "id". Of course, "id" alone opens a security hole, so we may only allow such an option under a pre-start env var (set CONNECT=ALLOW_ID). The basic meaning ... 'I am running under web convention authentication (kerberos, basic auth, etc.), so I am able to "connect" with an "id" token to do database work. Aka, essentially trust in web snake oil, wherein authentication mother Apache basic auth knows best, so you do not need a db2 db, uid, pwd.'

    /* json
     * request {
     * -- toolkit database --
     * "query":"select * from bobdata",
     *   "fetch":"*ALL",
     * "query":"call proc(?,?,?)",
     *   "parm":[1,2,"bob"],
     * "connect":["*LOCAL","UID","PWD"],
     *   "query":"call proc(?,?,?)",
     *   "parm":[1,2,"bob"],
     *   "fetch":"*ALL",
     * "pconnect":["id"],
     *   "query":"select * from davedata where name=? and level=? and reports=?",
     *   "parm":["bob",1,2],
     *   "fetch":"*ALL",
     * -- toolkit command --
     * "cmd":"ADDLIBLE LIB(DB2JSON)",
     * -- toolkit program --
     * "pgm":["NAME","LIB","procedure"],
     *   "dcl-ds":["name",dimension, "in|out|both|value|return", "dou-name"],
     *   "dcl-s":["name","type", value, dimension, "in|out|both|value|return"],
     * -- complex parm (example) --
     * "pgm":["CLIMATE","MYLIB","RegionTemps"],
     *   "dcl-ds":["regions_t",0,"in"],
     *     "dcl-s":["region","5a","TX"],
     *     "dcl-s":["region","5a","MN"],
     *     "dcl-s":["region","5a","", 20],
     *   "end-ds":"regions_t",
     * -- single parm --
     *   "dcl-s":["countout","10i0",0,"both"],
     * -- complex return value --
     *   "dcl-ds":["temp_t",999, "return","countout"],
     *     "dcl-s":["region","5a"],
     *     "dcl-s":["min","12p2"],
     *     "dcl-s":["max","12p2"],
     *   "end-ds":"temp_t",
     * }
     */
    
  15. Aaron Bartell

    The basic meaning ... 'I am running under web convention authentication (kerberos, basic auth, etc.), so I am able to "connect" with an "id" token to do database work. Aka, essentially trust in web snake oil, wherein authentication mother Apache basic auth knows best, so you do not need a db2 db, uid, pwd.'

    First, I am assuming we are always talking 1-tier with db2sock, correct? Second, in the above scenario is it using the IBM i user profile who started the web server to do the authentication?

  16. Former user Account Deleted

    Oh, man, short answer 'there are no limits'.

    always talking 1-tier with db2sock, correct?

    No. Short version: let's say we pre-start a set of connections with the full db, uid, pwd, id (4 items, the last being "id"). Now, later, by web convention, we allow attaching by only "id" (after start). Therein, say db="wisconsin" is non-*LOCAL; we have a multiple-tier application where DB2 json/toolkit/data/sql is served per normal DRDA on another machine.

    Mmm ... you have to let your imagination run a bit to see all the possibilities, but there are many 'password-less' notions including for DRDA in IBM i (I forget the pre-id/pwd set-up CL CMD that goes with WRKRDBDIRE). Again, whatever ILE can do, we can do with db2sock, because ILE can call the APIs like SQL400Json.
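
    For illustration only (my reading of the scheme above; the db, credentials, and "id" token are all hypothetical values):

    <?php
    // Hypothetical sketch: pre-start carries the full 4 items (db, uid, pwd, id);
    // later requests attach by "id" alone. db="wisconsin" stands in for a
    // non-*LOCAL RDB entry, so the work is served over DRDA on another machine.
    $prestart = '{"pconnect":["wisconsin","UID","PWD","id-1234"]}';
    $later    = '{"pconnect":["id-1234"],
                  "query":"select * from davedata",
                  "fetch":"*ALL"}';
    ?>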

    IBM i user profile who started the web server to do the authentication

    Also no. There are IBM i-specific notions like Apache %%client%%, EIM, LDAP, etc., that may switch profile in an ILE CGI based on an "id". The db2sock project already supports an Apache RPG CGI 'json call' (binary ASCII RPG), therefore any option available to Apache is also available to db2sock (right now). Again, basically anything you can do with an ILE program can be 'inherited' by db2sock (again, see the included RPG CGI).

    I realize this is an unsatisfactory answer. Aka, people from Missouri are reputed to require "see it to believe it". With due respect to fast NGINX: ILE Apache, ILE DRDA, ILE EIM, etc., have the capabilities to do most anything.

  17. Aaron Bartell

    you have to let your imagination run a bit to see all the possibilities, but there are many 'password-less' notions including for DRDA in IBM i

    I actually did a DRDA presentation of this exact scenario.

    You make mention of RPG-CGI and that's more pointedly what I am curious about; 2-tier with LUW. Today I have customers using things like the jdbc npm for DB2 for i access from CentOS/Ubuntu. The former XMLSERVICE would have been too slow for frequent database access. Would be great if this db2sock could do WebSockets for remote DB2 access from LUW. Will normal HTTP cut it? What type of throughput are you expecting when compared to JDBC?

  18. Former user Account Deleted

    I have probably made your head hurt. Sorry. For fun, relative to the 'no limits' narrative, let's pile on another wild 'heretic' idea to think about ...

    The argument ...

    DB2 performance may not be the most important deciding factor in a web application. I know. Just saying such a thing may lead to hysteria (IBM i people fainting in the aisle, etc.).

    Consider ...

    The relative 'code path' performance 'distance' of a request from browser to target server is 'nearly infinite'. Replace 'browser' with 'REST API' (scripting language request), and the 'distance' remains the same, 'nearly infinite'. All in all, a 'web request' takes three days in a covered wagon to travel across the internet to ask for a bit of data from a database (relative computer time imagined).

    So?

    Well, consider a traditional scripting-language 'database connection' attempting to run as fast as inhumanly possible to feed up three rows of JSON data. But, but, but, the data still returns by browser/REST covered wagon, roughly three days' travel back to the requester (computer-relative time).

    What if?

    What is the true perceived performance impact on a web application that uses a 'pure JSON' database driver (see the PHP sample tests)? That is, db2sock under an Apache RPG CGI with JSON REST requests, instead of a faster-than-light traditional language database driver (like ibm_db2). Yes, of course, the first answer is 'way too long'!!!! However, in fact, most rich-client browser web pages are actually multiple REST requests via JavaScript (AJAX, jQuery, etc.). Aka, argument embodied: our internet already works with REST database requests all the time (80-90% of the time).

    And ...

    We already have a db2sock Apache RPG CGI module with persistent connections (right now). Not finished, but available.

    The sharing ...

    This original thought contemplated what it means to be a 'database request' on the web. Perhaps sharing such a 'heretical' thought could lead to a new generation of thinking about what a script database driver should actually become. Aka, 80-90% of web database requests should actually be JSON over db2sock (Apache), not a traditional driver (ibm_db2). Therein, all the web conventions simply work out-of-the-box (kerberos, basic auth, LDAP, on and on).

  19. Former user Account Deleted

    Our replies appeared out of order.

    The former XMLSERVICE would have been too slow for frequent database access. Would be great if this db2sock could do WebSockets for remote DB2 access from LUW. Will normal HTTP cut it? What type of throughput are you expecting when compared to JDBC?

    Short answer: I see no restrictions using db2sock as a REST JSON interface. Web sockets are simply another 'wrapper' (same as sockets, etc.).

    Correct. XMLSERVICE is not a speed maniac. In fact, XMLSERVICE was designed 100% around flexibility. A secondary goal of XMLSERVICE was extreme simplicity, usable in any language toolkit (proven). Essentially, we can almost run an entire IBM i machine using XML to XMLSERVICE (shell, cmd, pgm, srvpgm, db2, etc.). Aka, we accomplished the goal with XMLSERVICE, which never was 'supreme performance' (*).

    Today, with db2sock, we are talking about attempting to keep the 'extreme flexibility' of XMLSERVICE, switching to JSON (arbitrary with respect to performance), but really speeding things up with some trade-offs. In fact, this is what we are all about with the new SQL400Json semi-architecture API.

    To the point: you guys are involved to judge the outcome of this experiment. Aka, look upon the result and judge whether it is 'good enough for all the things I want to do'. This is the true meaning of Open Source in my view.

    Optional read ....

    (*) A bit of personal whining from the XMLSERVICE author. In many ways, I was disappointed hearing demands of ridiculous performance expectations out of 'web workloads' (sarcasm). I recall somebody downloading 30 thousand records to a spreadsheet (XML), looped thousands of times; 1-CPU machines with 1000-hits-a-second expectations; and so on. XMLSERVICE was never intended to handle that sort of abuse. In fact, I have always made clear a custom DB2 stored procedure should be used when ultra-high-speed performance is required. Perfect knowledge between client and server in a stored procedure is always best. Anyway ... learn from a mistake. In this db2sock case, we will try to get the expectations correct together (you guys are the team).

  20. Former user Account Deleted

    Would be great if this db2sock could do WebSockets for remote DB2 access from LUW. Will normal HTTP cut it?

    Again, we are doing Open Source, so you can decide what 'cuts it'. However, here is Tuesday-morning speculation before work, with no data, assuming we are still talking about JSON over REST.

    Yes - HTTP should be fine for a normal few records (normal AJAX-style web pages).

    No - Real-time gaming (web socket style). What crazy application are you thinking of?

    So, to put a fine point on it: if customer application interaction is expected to be crazy fast, in the realm of gaming speeds, massive data record transfers, 'crazy stuff' for lack of a better term, then JSON REST will never 'cut it'.

    What type of throughput are you expecting when compared to JDBC?

    JDBC does not have anything like the db2sock JSON REST idea. I do not understand the comparison in this question. Can you qualify?

  21. Aaron Bartell

    @rangercairns, I agree on us living in a REST world. If there's one thing Amazon/Bezos has taught us, it is that web APIs are the new norm for decoupled success.

    That being said, there's a reason Google pushed hard for HTML5's WebSockets and is entertaining things like protocol buffers. The number of AJAX requests from browser to server has probably increased ten-fold in the past few years. My point is we need to make sure we're designing for tomorrow's workloads so we're not immediately irrelevant.

    With all that said, I am fine with "good enough" in the evolutionary sense of having something "better than" XMLSERVICE. My comments aren't meant to impede current work but instead to flavor approaches/features for the long term.

  22. Former user Account Deleted

    topic ... future, all database requests decoupled (possible)

    I wonder if web sockets allow binary data transfer, aka bson (binary JSON). I know many new databases use binary protocols to avoid unwanted translations, etc. (where JSON does not 'cut it'). While web sockets with bson would be limiting in flexibility, assuming binary on a web socket is ok, it may be a very good high-speed 'database request' decoupler for scripting languages. Aka, 'cut it'.

    Do you know how to code up web sockets in ILE or PASE C?

  23. Former user Account Deleted

    "better than" XMLSERVICE.

    Yes. This is the goal of this experiment. Even if we fall short of 'wildly fast', we can do better, I am sure of it. Again, we are trying to reach the 80-90% cases, not necessarily the crazy end of demands. However, if we figure out 'crazy', that would be fine (eye of the beholder).

  24. Chris Hird reporter

    Tony, I am lost? Web sockets in ILE? Does that not require an ILE-based web server? Am I missing the point here? I can create socket-based programs in ILE C, but my understanding (could be totally out of the park) is that web sockets allow browser-to-web-server connectivity??? Having the ability to call an ILE socket is not a problem (except the EBCDIC to ASCII translations, obviously), so having a browser-based socket call should be possible and I think I could do that? But is that a web socket??

    Sorry if I am missing something here?

    Chris...

  25. Aaron Bartell

    No - Real-time gaming (web socket style). What crazy application are you thinking of?

    Websockets afford more than just speed. They also offer full-duplex stateful communication. Google's Ian Hickson had this to say:

    "Reducing kilobytes of data to 2 bytes…and reducing latency from 150ms to 50ms is far more than marginal. In fact, these two factors alone are enough to make Web Sockets seriously interesting to Google."

    I hesitate to mention the above because, as you stated, I can create my own Websocket front-end over db2sock and gain the Websocket advantages. What I can't do is go to bson if the result is already in json. This is where your XMLSERVICE plugins made good sense (if I am understanding them correctly). I am thinking we start out with json output and add bson next.

    JDBC does not have anything like the db2sock JSON REST idea. I do not understand the comparison in this question. Can you qualify?

    I brought up JDBC because of the shared end goal of delivering datasets. Dismiss the comment for now.

    Do you know how to code up web sockets in ILE or PASE C?

    Websockets are a function of the web server, so I don't believe we'd need to implement anything in ILE/PASE; instead we can rely on Apache/Nginx. So at the end of the day we kind of get Websockets for "free" and we just need to teach people how to configure them.
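
    For reference, the generic nginx incantation for proxying a WebSocket upgrade looks something like this (backend address illustrative; I have not tested this on the LPP OPS build):

    # Generic nginx WebSocket proxy pattern (illustrative backend address).
    location /ws/ {
        proxy_pass http://127.0.0.1:9002;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }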

  26. Former user Account Deleted

    Ah, clearer. I will also answer back to Chris.

    I am thinking we start out with json output and add bson next.

    Completely sensible.

    Websockets are a function of the web server, so I don't believe we'd need to implement anything in ILE/PASE; instead we can rely on Apache/Nginx. So at the end of the day we kind of get Websockets for "free" and we just need to teach people how to configure them.

    Great. I have not used the support. So, I will accept your explanation. That is, if we can do JSON under an ILE CGI, we get 'web sockets' for free. Therein, Chris, we do not need to do anything for the JSON experiment, beyond perhaps a test using web sockets.

    This is where your XMLSERVICE plugins made good sense (if I am understanding them correctly).

    To be clear, XMLSERVICE is completely replaced. That is, the JSON API is directly coded into the new super driver (SQL400Json, a new CLI-like API). If we decide, we perhaps add yet another 'API', SQL400Bson, and so on. Aka, Open Source project: we allow ourselves the freedom to expand the limited architectures of ODBC/CLI to 'do what we need'.

    The only 'ILE code' is for natural interaction with other ILE components.

    Example 1: The Apache RPG CGI that can call db2sock SQL400Json API (included)

    Example 2: On the QSQSRVR side of the SQL400Json API, a new stored procedure stub to load, activate, and directly call ILE programs, etc. (not in the project yet). A similar thing to what XMLSERVICE does today, but in a very limited role, for only the critical ILE-need functions.

    Expanding on why we need an ILE stored procedure stub (example 2) ...

    DB2 does not provide PASE stored procedure support, so we need to drop to ILE stored procedures to complete the calls to other ILE objects. (A slight 'lie': DB2 does support PASE Java stored procedures ... but starting Java ... mmm ... we end up back at XMLSERVICE speeds.)

  27. Former user Account Deleted

    Chris, I am assuming the answers above explain your inquiries about web sockets. I found Aaron's reply very helpful (assuming I understood). Again, I have not used the support (not an expert). If I am missing your line of questioning, please feel free to redirect toward the pertinent point.

  28. Former user Account Deleted

    Oh, one last explanation. This one is especially for Chris. All of the planned ILE parts will be RPG Free.

    Why?

    Basically, people have RPG skills and compilers (common). Counterpoint: ILE C skills are limited in the community. We want people to be able to understand at least the ILE-side RPG interaction with the Open Source PASE db2sock driver.

    The performance difference between ILE C and ILE RPG for the limited scope of activity needed will be a complete wash. Simply, not worth confusing the non-C people. Instead, hopefully, we engage others' skills for unimagined uses with RPG free.

  29. Chris Hird reporter

    Certainly does, phew..

    As for ILE program calls, I always wonder why we need the stored procedures? Is there no way to have them invoked directly (with obvious checks and balances, which would require some kind of prepare request prior to the actual request)?

    I obviously know little of the underlying technology and requirements, but it seems like it should be possible?

    Chris...

  30. Chris Hird reporter

    Tony

    How much code has been added by the community to the XMLSERVICE using RPG (/Free or other)?

    The fact is, building it in RPG will limit the community activity just because you are going to require RPG programmers to do it. If it's in C you 'MAY' attract programmers who are not IBM i-specific programmers? We struggle to get anyone to do ILE submissions for some reason and those that do are not going to be working at the level you are in terms of OS depth and PASE interaction.

    I am not able to comment on the performance difference between RPG and C. Maybe you are right and the speed is not going to be an issue.

    Chris...

  31. Former user Account Deleted

    How much code has been added by the community to the XMLSERVICE using RPG (/Free or other)?

    Not much. XMLSERVICE code from Luca for idle/timeout functions (timeout signal). Everything else was more requests, usually by email, not git issues (sigh).

    If it's in C you 'MAY' attract programmers who are not IBM i-specific programmers?

    Yes. My heart is with the loyal guys of RPG. These folks made IBM i into a great machine.

    We struggle to get anyone to do ILE submissions for some reason and those that do are not going to be working at the level you are in terms of OS depth and PASE interaction.

    I have always found RPG code is considered a company asset. Aka, nobody can give away the 'RPG goods' without getting fired (crude). However, secretly, I feel great pride that RPG code on IBM i is considered so valuable. A company asset. The RPG person, a company asset.

    Yes. Ironic. My PASE side loves to give away everything allowed to the community (approved by my leaders), keeping step with the unavoidable march to 'Open'.

    So, bottom line, I am more interested in making sure that RPG folks have something to copy/pattern when using PASE new cool stuff (hopefully the db2sock project).

    I am not able to comment on the performance difference between RPG and C. Maybe you are right and the speed is not going to be an issue

    Yes. I sympathize. You may have to take my word. Ironically, the APIs we use are mostly 'C-ish' APIs. Here again, an opportunity to demonstrate by example using RPG (free) against 'system' and 'PASE' APIs.

    project (slight politics)

    I like RPG. The community likes RPG. Cool!

  32. Aaron Bartell

    Just a personal side comment... I know RPG fairly well (did it for 12 yrs) but don't know C/C++ as well. I would welcome someone writing C for a project like this who knows the RPG perspective, so that RPG folks could use this project as a learning experience.

  33. Chris Hird reporter

    FWIW...

    I am not particularly interested in RPG development (I struggle with keeping up with C skills and all the other languages I dabble in; "getting too old to keep it all in my small brain"). Maybe I could take the RPG code and see if I could mimic the process in C, though? Maybe that helps with your question, Aaron (see, that is how we could collaborate :-)).

  34. Former user Account Deleted

    As for ILE program calls I always wonder why we need the stored procedures? Is there no way to have them invoked directly (with obvious checks and balances which would required some kind of prepare request prior to actual request)?

    This is a fantastic observation. However, DB2 does not support dynamic load, activate, call by 'string' (aka, look up 'NAME', 'LIB'). Second, DB2 supports only primitive-type call parameters (integer, packed, numeric/zoned, double, clob, blob, etc.). That is, DB2 does not support any data structures as call parameter(s), no support for *SRVPGM return structures, nothing complex (really). These primitive database types fall far shy of the sophisticated nested 'toolkit' data structures we find in RPG and COBOL. We are left wanting in the 'pure' DB2 space. I see no horizon to merge the two versions of reality, 'simple type database' and 'complex type RPG'.

    Hence toolkits. In our case, the db2sock Open Source 'experiment' into extended APIs like SQL400Json and beyond. Understood?
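
    To make the 'primitive types only' ceiling concrete, here is all a plain DB2 call can carry from PHP (sketch; connection values illustrative):

    <?php
    // A plain DB2 stored procedure call binds only simple values -- integer,
    // packed, char, etc. A nested RPG data structure cannot be bound here.
    $conn = db2_connect("*LOCAL", "UID", "PWD");
    if (!$conn) die(db2_conn_errormsg());
    $stmt = db2_prepare($conn, "call proc(?,?,?)");
    db2_execute($stmt, array(1, 2, "bob")); // primitives only -- hence toolkits
    ?>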

  35. Former user Account Deleted

    Mmm ... most of this project is C programming. You can do worse than patterning on this PASE db2sock C code. Again, you may have to take my word for the quality of the PASE C code, but it really is good.

  36. Chris Hird reporter

    Tony, I am sure your C code is top notch! I will still consider taking the ILE RPG /free stuff and creating a C version (just for kicks, if nothing else).

    Chris

  37. Former user Account Deleted

    Optional read ...

    Forgive my observation, but I write in dozens of languages (better than most). Trained for untold years by IBM in C/C++. However, write one RPG program like XMLSERVICE, and the IBM i community latches onto the idea that RPG is your primary language. At times, a really strange community.

  38. Former user Account Deleted

    Sorry, we crossed reply paths. Cool ... please have at the C migration. The more the merrier.

  39. Chris Hird reporter

    Tony,

    optional read..

    I consider you a developer in the truest sense of the word! Not an RPG programmer :-) I hope that is clear to you? As a developer, you take the best tool for the job and use it to its best advantage and move to the next one when it is a better fit.

    Chris..

  40. Former user Account Deleted

    Thanks. However, I don't always use the best tool for the job. In fact, now I am wondering about 'community'. This project community, to be precise. We could convert all of the db2sock project to C code (ILE and PASE). In truth, C programming will be easier for me. Should I do the rest of the project in C code (including ILE)?

    This is an Open Source project, emphasis on 'source', wherein customers/vendors have to compile the code. The big question in my mind ... cost???

    1) PASE side -- the gcc compiler is free (yeah, hard to set up, but free).

    2) ILE side -- I thought folks had to give an arm and a leg to buy the C compiler. Yes? Would switching to C code be a violation of trust? Open Source at a huge cost to the regular Joe that has only the RPG compiler?

    Comment?

  41. Former user Account Deleted

    I am going to lunch. Please, if possible, respond before I return. I need to get the actual code written for the experiment.

  42. Chris Hird reporter

    Tony

    I have all the compilers; not sure if anyone has RPG without C, as they were always shipped together? (I get a license key for WDS, not individual compilers?)

    As for actual cost, if they are into Open Source they are generally NOT compiling on a production system, and probably prefer to use Litmis Spaces or another free IBM i dev environment for any Open Source activity. (Goes back to your previous comment about who owns the intellectual property when developed on your employer's system.)

    I am just about to engage with a user group setting up a sandbox dev environment (kindly donated by an MSP). All of the compilers are available, even though most of the users will use RPG? I expect that they will prefer to engage in projects that use RPG in the beginning and, with hope, will migrate to multi-language ILE with time.

    Chris...

  43. Aaron Bartell

    Should I do the rest of the project in C code (including ILE)?

    Opinion: Yes. The willing-to-contribute C community is significantly larger than the willing-to-contribute RPG community.

    2) ILE side -- I thought folks had to give an arm and a leg to buy the C compiler. Yes? Would switching to C code be a violation of trust? Open Source at a huge cost to the regular Joe that has only the RPG compiler?

    Are binaries being distributed?

    There are a handful of IBM i hosting providers that can be used to compile ILE C at little or zero cost.

  44. Chris Hird reporter

    Tony,

    The question appears to be which language should you write this in?

    For me 'C' without any question, for others they may want it in RPG?

    Does writing it in RPG mean you will get more community input??? It would reduce my input, but others???

    If this were me (being selfish), I would write this in the language of my choice. Most if not all of the people who have an interest in this will JUST download the code and compile it (that makes the language question irrelevant, because if it don't compile it ain't getting used).

    As they will need the C part for PASE why provide ILE in RPG? Now I need 2 skills to manage/contribute if that is my intention.

    Chris...

  45. Former user Account Deleted

    As they will need the C part for PASE why provide ILE in RPG? Now I need 2 skills to manage/contribute if that is my intention.

    So, I talked to my business manager (Jesse). He says most customers that have WDS will have all compilers (including C). Only a few are expected to take measures to purchase RPG à la carte only.

    Decision ...

    You guys are the primary crew. I will write in C all the way, both sides (ILE and PASE).

    Are binaries being distributed?

    So, personally, I think we wait to see how things work before we talk distribution. Anyway, I am not authorized to speak publicly about IBM i future plans. You may ask Jesse if you would like.

    Disclaimer:

    Reminder: while I am acting/contributing with the approval of my leadership, I do not speak for IBM in a product capacity. The opinions in these issues are my own. Aka, you got a geek with a dream and a keyboard. Not without influence, to be sure, but definitely not the boss.

  46. Aaron Bartell

    Aka, you got a geek with a dream and a keyboard.

    That's t-shirt worthy. COMMONs18! (see below pic)

    geek-dream-keyboard.png

  47. Former user Account Deleted

    Good shirt.

    Now ...

    On the project side, I have adjusted to use ILE C. I removed the RPG references and code. I only have one ILE part converted so far. See ILE-CGI/db2json.c (note: a lot less code than RPG, so I am happy to switch on popular demand.)

    Next ...

    It will take me a few days to a week to 'tool' my other toolkit work to ILE C. No big deal, just finishing designs/code (geek with a keyboard).

    I will post a message here when ready for some help testing. Thanks.

  48. Aaron Bartell

    See ILE-CGI/db2json.c (note: a lot less code than RPG, so I am happy to switch on popular demand.)

    No kidding! Love it.

  49. Former user Account Deleted

    Ok, I added a PASE fastcgi wrapper to libdb400.a for JSON REST.

    Now we can configure either ILE CGI or PASE fastcgi. Both appear to work by PHP tests in this project (db2sock/tests_php).

    It should be noted that I am 'trying out' persistent ILE CGI. That is, keeping PASE running in the Apache CGI job. Technically, the secret sauce for persistent ILE CGI is using a named activation group (DB2JSON activation). I am not 100% sure of the side effects of 'fast' ILE CGI 'persistence', but it appears to work.

    Both CGI and fastcgi will accept a db2sock persistent connection in the JSON REST for performance (see tests_php/cgi_querypconnect.php and tests/fastcgi_querypconnect.php respectively). Of course, more 'web performance tuning' is needed, but this gives an idea of how things may work.

  50. Former user Account Deleted

    Next week. The current design of JSON input is "fairly" minimalist. Of course, it will require a manual, toolkit wrapper, and so on, but here is the general idea.

    /* json
     * request {
     * -- toolkit database --
     * "query":"select * from bobdata",
     *   "fetch":"*ALL",
     * "query":"call proc(?,?,?)",
     *   "parm":[1,2,"bob"],
     * "connect":["*LOCAL","UID","PWD"],
     *   "query":"call proc(?,?,?)",
     *   "parm":[1,2,"bob"],
     *   "fetch":"*ALL",
     * "pconnect":["id"],
     *   "query":"select * from davedata where name=? and level=? and reports=?",
     *   "parm":[1,2,"bob"],
     *   "fetch":"*ALL",
     * -- toolkit command --
     * "cmd":"ADDLIBLE LIB(DB2JSON)",
     * -- toolkit program --
     * "pgm":["NAME","LIB","procedure"],
     *   "dcl-ds":["name",dimension, "in|out|both|value|const|return", "dou-name"],
     *   "end-ds":"name",
     *   "dcl-s":["name","type", value, dimension, "in|out|both|value|const|return",ccsid],
     * "end-pgm":"NAME",
     * -- complex parm (example)               -- temp_t[] RegionTemps(regions_t,int,int)
     * "pgm":["CLIMATE","MYLIB","RegionTemps"],-- *SRVPGM MYLIB/CLIMATE
     *   "dcl-ds":["regions_t"],               -- ds parm assumed "both" --
     *     "dcl-s":["region","5a","TX"],       -- region[0] = "TX"
     *     "dcl-s":["region","5a","MN"],       -- region[1] = "MN"
     *     "dcl-s":["region","5a","", 20],     -- region[2-21] = "" --
     *     "dcl-ds":["people_t",20],           -- ds[20] nested --
     *       "dcl-s":["first","32a"],
     *       "dcl-s":["last","32a"],
     *     "end-ds":"people_t",
     *   "end-ds":"regions_t",
     * -- single parm --
     *   "dcl-s":["countout","10i0",0,"both"],
     *   "dcl-s":["available","10i0"],         -- assumed "both" (not inside ds) --
     * -- complex return value --
     *   "dcl-ds":["temp_t",999, "return","countout"],
     *     "dcl-s":["region","5a"],
     *     "dcl-s":["min","12p2"],
     *     "dcl-s":["max","12p2"],
     *   "end-ds":"temp_t",
     * "end-pgm":"CLIMATE",
     * }
     * -- types --
     * "5a"    char(5)         char a[5]
     * "5av2"  varchar(5:2)    struct varchar{short,a[5]}
     * "5av4"  varchar(5:4)    struct varchar{int,a[5]}
     * "5b"    binary(5)       char a[5]
     * "5bv2"  varbinary(5:2)  struct varbinary{short,a[5]}
     * "5bv4"  varbinary(5:4)  struct varbinary{int,a[5]}
     * "3i0"   int(3)          int8, char
     * "5i0"   int(5)          int16, short
     * "10i0"  int(10)         int32, int, long
     * "20i0"  int(20)         int64, long long
     * "3u0"   uns(3)          uint8, uchar, char
     * "5u0"   uns(5)          uint16, ushort, unsigned short
     * "10u0"  uns(10)         uint32, uint, unsigned long
     * "20u0"  uns(20)         uint64, ulonglong, unsigned long long
     * "4f"    float           float
     * "8f"    double          double
     * "12p2"  packed(12:2)    (no c equiv)
     * "12s2"  zoned(12:2)     (no c equiv)
     * "8h"    hole            hole
     */
    
  51. Chris Hird reporter

    Tony

    Still trying to get a suitable PHP-based solution for testing; do you think it would be better to test with Node.js? I may just bite the bullet and install Zend PHP for now if not.

  52. Former user Account Deleted

    Well, it does not really matter which language. However, some other guys joining the 'toolkit' party are PHP people. Wherein, PHP is a common language.

    Also, there is no 'public' access to the Node.js DB2 driver and Python DB2 driver (not open source), so I hate to involve 'testing stuff' that is locked behind IBM. Aka, I want everyone to understand exactly how the PASE C code works and interacts with the ILE side.

    So, PHP is really best; everything is in the open (open source).

  53. Chris Hird reporter

    OK, I have some issues. I installed the Zend stack and have a number of issues which are making it difficult to work with (CPU utilization through the roof and no admin access to change the settings). I have logged a couple of requests on the forums, so I hope to get some responses. I am still waiting for some info on how to build a stack from source, so even that's not where I would like it to be. Hope to get something we can work with soon :-)

  54. Former user Account Deleted

    Ok, I did not want this to become a Zend Server discussion. Assuming PHP 7 (you). I install as production to turn off developer helper features. The most particular fix for CPU stress was turning off development Z-Ray.

    Z Ray on/off

  55. Former user Account Deleted

    Also, I just noticed I am using the same chroot /QOpenSys/zend7 as PHP 7. I will change my project to chroot /QOpenSys/db2sock and update any references in this git project to 'db2sock'. An oversight on my part.

  56. Former user Account Deleted

    Ok, changed everything to chroot /QOpenSys/db2sock - see source README(s)

    BTW -- This is not ready for a big test yet. It needs a bit more completed in the JSON 'toolkit' area to get a good feel for how it works. However, you may try anything you like.

  57. Former user Account Deleted

    Hi guys. We are having JSON format in-and-out discussions here. At the moment we can't do PGM calls. I am going to try to get something running end-to-end soon, just so you can see the major technology differences from XMLSERVICE (2 weeks). However, I would caution the JSON may change to better suit toolkit builders as we test (play).

  58. Former user Account Deleted

    Update -- I called my first hello world PGM and hello world SRVPGM with the toolkit (libtkit400.a) on top of the db2 driver (libdb400.a). You can see how PASE and ILE work together (toolkit-base/PaseTool.c and ILE-PROC/db2proc.c). This simple pattern replaces all of XMLSERVICE (someday). There is much work left, but it works faster than XMLSERVICE, even without tuning (I suggest waiting until late summer for a perf test).

    BTW -- The sample JSON is only for testing (libjson400.a). The JSON format is completely wrong, clunky, not pretty, etc., but I needed to check the design parts worked together. I am targeting fall to complete.

  59. Former user Account Deleted

    The JSON/XML parsers are completely user-replaceable. To wit, there is no pre-defined format for JSON or XML; it is all up to your parser. The only parser requirement from the toolkit is parser input of key/value pairs, and parser callbacks for output of IBM i calls (cmd, pgm, etc.). Take a look at toolkit-base/README.md for details (not complete).

    A default parser will be provided (not complete yet), but any toolkit parser(s) can be dynamically loaded based on env vars (see README, toolkit-parser-json, toolkit-parser-xml). The dynamic load occurs on first use of an SQL400Json or SQL400Xml call, also 'async' capable already, like all DB2 interfaces in the new driver (libdb400.a).

    Again, much to do: remove the parser junk in PaseTool.c (key/val only), formalise a real JSON default, duplicate the XMLSERVICE format for the default XML interface, etc.

    Happy Fourth of July.

  60. Former user Account Deleted

    For those wondering ... when will db2sock SQL400Json be available?? Well, the plan is during September (this September, 2017). I actually hope to have at least the correct framework up in the repository sometime next week (litmis/db2sock).

    Again, the one up in the repository is not correct, simply a toy for me to work out general ideas. Aka, do not rely on the odd JSON like 'dcl-s' and 'dcl-ds' ... I was only working through some of the conversion code (string to packed/zoned, etc.).

  61. Chris Hird reporter

    Tony

    Looking at NGINX on IBM i and running php7.0-fpm (not available today). Would your solution tie into this cleanly?

  62. Former user Account Deleted

    Looking at NGINX on IBM i and running php7.0-fpm (not available today). Would your solution tie into this cleanly?

    Yes. The web server makes no difference. The language makes no difference, as db2sock supports any DB2 caller or REST caller (PHP, Node, Python, Go, etc.).

    nginx fastcgi php (fpm)
    
     # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
     #
     location ~ \.php$ {
     include snippets/fastcgi-php.conf;
    
     # With php7.0-cgi alone:
     # fastcgi_pass 127.0.0.1:9000;
     # With php7.0-fpm:
     fastcgi_pass unix:/run/php/php7.0-fpm.sock;
     }
    

    Better question ... 'how many ways db2sock??'

    1) db2: nginx->php(fpm) ... where the PHP drivers ibm_db2, pdo_ibm, and odbc go to db2sock libdb400.a. Therefore IBM db2sock DB2 (CLI) and db2sock toolkit, including the JSON interface, etc.

    2) rest: nginx->PGM-db2jsonfcgi (PASE fastcgi) ... should be able to set up db2sock fastcgi without PHP (PASE program source db2sock/fastcgi/db2jsonfcgi).

  63. Chris Hird reporter

    Tony

    Interesting…I think option 1 is what I need, as I will need PHP anyhow for other purposes. We have NGINX on IBM i but no php-fpm as far as I know, so that is where I need to start looking.

    Thanks

  64. Former user Account Deleted

    db2 ... nginx -> any language (php fpm) -> db2 driver (ibm_db2, etc.) -> db2sock (libdb400.a, libtk400.a, libjson400.a, etc.)

    Ok, the db2 side should be trivial. That is, after you get php(fpm) running, it is just a normal db2_connect, etc. (or pdo_ibm, odbc, etc.).
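
    For example, a minimal ibm_db2 sketch (credentials illustrative; ibm_db2 here compiled against the db2sock libdb400.a):

    <?php
    // Option 1: normal PHP DB2 calls; db2sock sits underneath as libdb400.a.
    $conn = db2_connect("*LOCAL", "UID", "PWD");
    if (!$conn) die(db2_conn_errormsg());
    $stmt = db2_exec($conn, "select * from bobdata");
    while ($row = db2_fetch_assoc($stmt)) {
      var_dump($row); // each row as column => value
    }
    db2_close($conn);
    ?>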

    rest ... nginx fastcgi db2sock (needs a main)

    Mmm ... it almost works out of the can (db2jsonfcgi). Unfortunately, we need a slightly different main than zfcgi to start db2jsonngix. Aka, we need to be able to start 'standalone' on a given port (like 9000 below). Not hard to do, I think, but it depends on interest (yours). Do you have any interest in a JSON interface over REST to nginx/db2sock (toolkit)???

    location ~ \.db2$ {
        fastcgi_pass  127.0.0.1:9000;
    }
    
  65. Chris Hird reporter

    As usual, Tony, you are 10 steps ahead of me! Not sure of the implications and how to implement at this point? But it looks interesting enough to give it a run-through? I assume I could have NGINX running on one server and the fastcgi_pass linked to any IP/port? (Still delving into the NGINX capabilities.)

    Chris..

  66. Former user Account Deleted

    Mmmm ... I did a quick read-up on fastcgi with nginx ...

    nginx fastcgi proxy

    Warning ... a few unanswered questions about the LPP OPS version of nginx with respect to supporting fastcgi (at all). Please feel free to try, but I need to chat with the builder of LPP OPS nginx (not around now).

  67. Chris Hird reporter

    Good catch, I would have thought it would be standard but maybe not for the LPP OPS version?

  68. Chris Hird reporter

    Here is the build on my system

    nginx version: nginx/1.10.3

    built by gcc 4.8.2 (GCC)

    built with OpenSSL 1.0.2j 26 Sep 2016 (running with OpenSSL 1.0.2h 3 May 2016)

    TLS SNI support enabled

    configure arguments: --prefix=/QOpenSys/QIBM/ProdData/OPS/tools --sbin-path=/QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx --with-poll_module --with-threads --with-ipv6 --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_ssl_module --with-http_auth_request_module --with-http_slice_module --with-http_secure_link_module --with-http_degradation_module --with-http_random_index_module --with-http_flv_module --with-http_mp4_module --with-debug --with-cpu-opt=ppc --with-cc-opt=-Wno-sign-compare --with-ld-opt='-Wl,-brtl -Wl,-blibpath:/QOpenSys/QIBM/ProdData/OPS/tools/lib/libgcc-4.8.2:/QOpenSys/QIBM/ProdData/OPS/tools/lib:/QOpenSys/usr/lib -L/QOpenSys/QIBM/ProdData/OPS/tools/lib/libgcc-4.8.2'

    That seems to imply NO FastCGI capabilities.

    Chris…

  69. Former user Account Deleted

    Ok, I ran a quick test to check the LPP OPS nginx fastcgi is working ... yes (small Java test).

    This Java test is known as an 'external' fastcgi server test (standalone fastcgi).

    I did NOT run a unix domain socket fastcgi test (like Zend Server Apache fastcgi PHP).

    http://ut28p63/fred.jt
    
    FastCGI-HelloJava stdio
    
    request number 1 running on host localhost
    request number 1 running on host /fred.jt 
    
    FastCGI-HelloJava stdio
    
    request number 2 running on host localhost
    request number 2 running on host /fred.jt 
    
            location ~ \.jt$ {
                root           htdocs;
                fastcgi_pass   127.0.0.1:9000;
                fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
                include        fastcgi_params;
            }
    
    bash-4.3$ /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx -v                                 
    nginx version: nginx/1.10.3
    bash-4.3$ /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx -c /home/ADC/nginx/conf/nginx.conf
    
    bash-4.3$ ps -ef | grep nginx
         adc 7153496       1   0 12:08:19      -  0:00 /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx -c /home/ADC/nginx/conf/nginx.conf 
         adc 7153497 7153496   0 12:08:19      -  0:00 /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx -c /home/ADC/nginx/conf/nginx.conf 
    

    htdocs/external/tests/TinyFCGI.java

    // Import from the FastCGI Java devkit (provides the FCGIInterface class
    // used below); the devkit jar must be on the classpath to compile/run.
    import com.fastcgi.FCGIInterface;

    class TinyFCGI {
      public static void main(String args[]) {
        int count = 0;
        // FCGIaccept() blocks until the web server passes the next request;
        // >= 0 means a request arrived.
        while (new FCGIInterface().FCGIaccept() >= 0) {
          count++;
          System.out.println("Content-type: text/html\n\n");
          System.out.println("<html>");
          System.out.println("<head><TITLE>FastCGI-Hello Java stdio</TITLE></head>");
          System.out.println("<body>");
          System.out.println("<H3>FastCGI-HelloJava stdio</H3>");
          // Request params surface as system properties in this devkit.
          System.out.println("<br>request number " + count + " running on host " + System.getProperty("SERVER_NAME"));
          System.out.println("<br>request number " + count + " running on host " + System.getProperty("SCRIPT_NAME"));
          System.out.println("</body>");
          System.out.println("</html>");
        }
      }
    }
    
    bash-4.3$ java -DFCGI_PORT=9000 TinyFCGI &
    
  70. Former user Account Deleted

    nginx add config

    1.1.0-sg2 - Added nginx configuration for REST JSON to the new driver and toolkit. Very simple to use; see source/fastcgi README.md.

    http (ut28p63 == your machine)
    http://ut28p63/db2json.db2
    
    ===
    start db2jsonfcgi (using db2jsonngix)
    ===
    bash-4.3$ cd /QOpenSys/usr/lib
    bash-4.3$ ./db2jsonngix -start -connect 127.0.0.1:9002 ./db2jsonfcgi
    
    ===
    start nginx
    ===
    /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx -c /home/ADC/nginx/conf/nginx.conf
    
    ===
    config (nginx.conf)
    ===
    bash-4.3$ pwd
    /home/ADC/nginx/conf
    bash-4.3$ ls
    fastcgi_params - /QOpenSys/QIBM/ProdData/OPS/tools/conf
    mime.types - /QOpenSys/QIBM/ProdData/OPS/tools/conf
    nginx.conf - (see below)
    
    
    bash-4.3$ cat nginx.conf 
    
    :
    # pass db2 json to FastCGI server listening on 127.0.0.1:9002
    # ./db2jsonngix -start -connect 127.0.0.1:9002 ./db2jsonfcgi
    #
     location ~ \.db2$ {
         fastcgi_pass   127.0.0.1:9002;
         fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
         include        fastcgi_params;
    }
    # pass db3 json to FastCGI server listening on /home/ADC/nginx/logs/db3.soc
    # ./db2jsonngix -start -connect /tmp/db3.sock ./db2jsonfcgi
    # Note: Bug -- db2jsonngix creates db3.sock, but nginx must match db3.soc (name len-1).
    #       Also, using chroot /QOpenSys/db2sock for development db2sock,
    #       but running nginx as root /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx.
     location ~ \.db3$ {
         fastcgi_pass   unix:/QOpenSys/db2sock/tmp/db3.soc;
         fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
         include        fastcgi_params;
    }
    :
    

    The same PHP test works for ILE CGI, Apache fastcgi, or nginx fastcgi (see source/tests_php). Note: basic auth is ignored by nginx; I did not bother to read up on this option for nginx.

    <?php
    // export PHP_URL=http://ut28p63/db2/db2json.pgm  (ILE-CGI - works partial)
    // export PHP_URL=http://ut28p63/db2json.db2  (fastcgi-PASE -- unix socket)
    // export PHP_URL=http://ut28p63/db2json.db2  (nginx -- 127.0.0.1:9002)
    // export PHP_URL=http://ut28p63/db2json.db3  (nginx -- unix socket)
    $url        = getenv("PHP_URL"); 
    // nginx ignores basic auth
    $user       = getenv("PHP_UID"); // export PHP_UID=MYID
    $password   = getenv("PHP_PWD"); // export PHP_PWD=MYPWD
    
    $clob = myjson();
    print("Input:\n");
    var_dump($clob);
    print("Output:\n");
    
    
    $context  = stream_context_create(
      array('http' =>
        array(
          'method'  => 'POST',
          'header'  => "Content-type: application/x-www-form-urlencoded\r\n".
                       "Authorization: Basic " . base64_encode("$user:$password"),
          'content' => $clob
        )
      )
    );
    $ret = file_get_contents($url, false, $context);
    var_dump($ret);
    
    function myjson() {
    $clob =
    '{"pgm":[{"name":"HELLO",  "lib":"DB2JSON"},
            {"s":{"name":"char", "type":"128a", "value":"Hi there"}}
           ]}';
    return $clob;
    }
    
    
    ?>
    
  71. Chris Hird reporter

    Tony

    I am in the UK at the moment so I will take a look at this when I get back.

    Chris..

  72. Former user Account Deleted

    -> nginx fix namelen-1 unix domain socket

    • Yips db2sock - 1.1.0-sg3 toolkit db2jsonngix fix unix domain sock name-1 len (nginx)
    nginx.conf
    
    # pass db2 json to FastCGI server listening on 127.0.0.1:9002
    # ./db2jsonngix -start -connect 127.0.0.1:9002 ./db2jsonfcgi
    #
    location ~ \.db2$ {
        fastcgi_pass   127.0.0.1:9002;
        fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        include        fastcgi_params;
    }
    # pass db3 json to FastCGI server listening on db3.sock
    # ./db2jsonngix -start -connect /tmp/db3.sock ./db2jsonfcgi
    # Note: Using chroot /QOpenSys/db2sock for development db2sock,
    #       but running nginx as root /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx.
    location ~ \.db3$ {
        fastcgi_pass   unix:/QOpenSys/db2sock/tmp/db3.sock;
        fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        include        fastcgi_params;
    }
    
  73. Former user Account Deleted

    summary

    The information above shows how to set up nginx with either localhost:port or a unix domain socket. I believe php-fpm supports both options, so we have proved nginx<>fastcgi is working.

    Secondly, we have demonstrated toolkit json REST against multiple configurations: ILE-CGI (httpd.conf), Apache fastcgi (fastcgi.conf), and nginx (nginx.conf). I ran all versions from my Linux laptop to IBM i REST using the downloaded tests_php. All worked great, except ILE-CGI (httpd.conf) had a few CCSID(ish) issues. Therefore i recommend either Apache<>fastcgi or nginx<>fastcgi, as both performed reasonably with json REST.

  74. Chris Hird reporter

    Hi Tony

    OK I have set up the environment on the IBM i with nginx running against port 80. I have reconfigured so that .db2 requests run to the db2jsonngix service listening on 127.0.0.1:9002 as per your instructions above. I am now confused as to the tests you ran, as you have a PHP script which should tie up with the test scripts? Can you explain a little more about how I should set up a test for the db2 extension?

  75. Chris Hird reporter

    Just to add to the above, I simply ran a test for http://sas4:1984/test.db2 and the result back was {"ok":false,"reason":"empty"}. So it looks like it's working, but I obviously need to understand a little more to progress the testing :-) Yes I did change the port for NGINX, and yes it seems to be bloody fast!

  76. Former user Account Deleted

    You can run the entire php suite of tests over REST to nginx+db2jsonngix. Aka, REST nginx+db2jsonngix replaces the ibm_db2/odbc driver for the toolkit tests.

    > export PHP_URL=http://myibmi/db2/db2json.pgm  (ILE-CGI - works partial)
    > export PHP_URL=http://myibmi/db2json.db2  (fastcgi- apache or nginx - works good)
    --- optional ---
    > export PHP_DB=MYIBMI (*LOCAL)
    > export PHP_UID=MYUID
    > export PHP_PWD=MYPWD
    
    Run all tests_json ...
    > php run.php
    
    Run driver tests_json ...
    > cd db2sock/tests/php
    > php run_ibm_db2_set.php
    > php run_ibm_db2_io.php
    > php run_odbc.php
    > php run_cgi_basic_auth.php <-- run this one
    
    Note: Basic auth is ignored by nginx (do not have to change).
    
    
    One at a time ...
    > php test0000_do_thing32.php
    
  77. Chris Hird reporter

    Nice! I configured the Linux NGINX server to call the IBM i db2jsonngix server and it works exactly the same :-) I did add a new service to listen on a different port (9003), as the other only listened on 127.0.0.1:9002, so now it has 2 services running, one for internal and one for external. I would like to know what services are available? Are they exactly the same as the old db2400? Do we have any documentation available I can run through? Performance so far seems pretty good. I am impressed...

    Chris...

  78. Former user Account Deleted

    I would like to know what services are available? Are they exactly the same as the old db2400? Do we have any documentation available I can run through?

    So, clarity.

    toolkit json -- yes -- We only have toolkit json (so far). That is, it is ok to run all json tests via db2sock/tests/php/run_cgi_basic_auth.php using nginx+db2jsonngix.

    db2 json -- no -- A full db2 json driver is not available (yet). The next post will discuss the 'full db2 json' driver. There is one test using the start of the db2 json driver, j0601_query_qcustcdt.json. It only gives a rough idea of db2 json.

    cat tests/json/j0601_query_qcustcdt.json 
    {"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
            {"parm":[{"value":"Jones"},{"value":"Vine"}]},
            {"fetch":[{"rec":"all"}]}
           ]}
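
    For a rough feel of that early db2 json path, the same stream-context POST pattern from the test script above can push this test file through the REST endpoint. A minimal sketch, assuming PHP_URL is exported and the script runs from the db2sock directory:

    <?php
    // hedged sketch -- POST the j0601 db2 json test file to the REST
    // endpoint (PHP_URL), same pattern as the toolkit test script above
    $clob = file_get_contents('tests/json/j0601_query_qcustcdt.json');
    $context = stream_context_create(
      array('http' =>
        array(
          'method'  => 'POST',
          'header'  => "Content-type: application/x-www-form-urlencoded\r\n",
          'content' => $clob
        )
      )
    );
    var_dump(file_get_contents(getenv("PHP_URL"), false, $context));
    ?>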
    
  79. Chris Hird reporter

    Tony

    I think I have missed something? The export request seems to be pointing to the http root and then a file called db2json.db2? I don't even have the db2sock directory anywhere? I just downloaded the latest zip file you mentioned, copied the .a files into the /QOpenSys/usr/lib directory and went from there? I can see within the zip file the tests directory under the libdb400-1.1.0-sg5 directory. Can you clarify what I am missing?

    Chris...

  80. Former user Account Deleted

    I don't even have the db2sock directory anywhere? I just downloaded the latest zip file you mentioned, copied the .a files into the /QOpenSys/usr/lib directory and went from there?

    Yes. This is the minimum set of files. Aka, you only need the 'driver binaries' to run on any IBM i (/QOpenSys/usr/lib).

    I can see within the zip file the tests directory under the libdb400-1.1.0-sg5 directory. Can you clarify what I am missing?

    The optional tests can be loaded anywhere. Load them on your Linux laptop (if you want). You are already talking to IBM i nginx+db2jsonngix from Linux, so just run the php tests on the laptop (you will need to maintain the tests directory structure).

  81. Chris Hird reporter

    OK I see your point. I will copy the test structure to the Linux Server and run directly from there.

    I am interested in the db2 query stuff as much as anything, as it is going to help with some customer work I am doing. Eventually I would like to look at other IBM i objects, program calls, etc.

    Chris...

  82. Former user Account Deleted

    For example:

    On my laptop, I run the tests:

    my laptop:
    > env | grep PHP
    PHP_DB=UT28P63
    PHP_URL=http://ut28p63/frog.db2
    PHP_PWD=NICE2DB2
    PHP_UID=DB2
    
    > cd db2sock/tests/php
    == hello test ==
    > php test0004_hello_pgm_cgi_basic_auth.php
    == run all json tests (requires IBM i ILE LIB DB2JSON for RPG tests) ==
    > php run_cgi_basic_auth.php
    
    my ibm i:
    nginx.conf
    location ~ \.db2$ {
        fastcgi_pass   127.0.0.1:9002;
        fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        include        fastcgi_params;
    }
    
  83. Former user Account Deleted

    I am interested in the db2 query stuff as much as anything, as it is going to help with some customer work I am doing. Eventually I would like to look at other IBM i objects, program calls, etc.

    We have the horse before the cart in your case. That is, we have the toolkit json interface, but do not have the full db2 json interface (only the one small test j0601_query_qcustcdt.json mentioned previously).

  84. Chris Hird reporter

    Tony

    OK I ran the tests on the Linux server and all failed (db2_connect() not found). This is probably because running php as a CLI on the Linux server expects the local server to run the requests? I need to force the tests to run on the Linux server while knowing that the request is to be passed to the IBM i service? From the browser that is easy, as NGINX is configured to send all requests to the IBM i; php CLI, on the other hand, knows nothing of NGINX or IBM i. If I put the test subdirectory into the NGINX root and run it as a browser request through the NGINX port, that should work? Am I on the right path?

  85. Chris Hird reporter

    Tony

    I have run the run_cgi_basic_auth.php script and all looks good. :-) Performance was very good as far as I can tell, but I won't really know until I can get the query stuff running (I need the cart). Sorry, I am just hacking my way around this until I know a bit more ... I would like to help if possible, so let me know where I can assist and I will get something in my plans ...

  86. Former user Account Deleted

    db2 json interface ... not your father's CLI ... probably ... i think

    So, bluntly, DB2 CLI over rest json is like running a jet plane inside a football stadium. The distance around the stadium is too small for the jet to work well. Same is true for the fine-grained DB2 CLI APIs. That is, in a json DB2 API, we really want bigger 'aggregate' actions, like connect, query, fetch in one json call (below). With the 'better API' below, our REST DB2 json jet plane flies to Europe and back with records in tow.

    {"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
            {"parm":[{"value":"Jones"},{"value":"Vine"}]},
            {"fetch":[{"rec":"all"}]}
           ]}
    

    In fact, even in binary call CLI API mode (ibm_db or ibm_db2), async interfaces absolutely stink because of fine-grained DB2 calls. Witness the unnatural acts taken in node ibm_db to try to force DB2 CLI to co-operate with async DB2 requests. BTW -- MySql is not much better if you check out php async; aka, they gave up the ODBC ghost and simply async'd the query API (close eyes, go home, hide at work).

    Anyway, picking a better 'aggregate' DB2 CLI API is really what is needed to have good-performing rest DB2 json requests. In fact, this is exactly what my Chief OS architect wants me to do, so we can build a reasonable nodejs json-based driver (Jesse G.).

    cheap ... mmm ... cheap rest ibm_db2 driver you say??

    Yes. I understand the desire to have an off-box/remote simple solution: a full db2 CLI rest json driver (nginx+db2jsonngix). Forget a laptop driver at all; just use json over rest to IBM i to run your ibm_db2 scripts. Yep. I know how to do this. In fact, talked to Jesse G. about exactly this. The ultimate cheap solution, with no installation of anything beyond some php (or node, or python, etc.).

    However, the aggregate rest DB2 cli++ driver was priority one.

  87. Former user Account Deleted

    You can vote (of course) ...

    Do you understand the jet-plane-inside-a-stadium problem for DB2 CLI/ODBC json? Aka, understand why DB2 CLI/ODBC APIs are too small in mission, and never will be right for async requests and/or json rest requests??

    So, we feel 'right sizing' DB2 aggregate APIs is needed to do a good job with DB2 json and/or async json 'drivers'. However, grief is allowed. That is, giving way on a simple REST DB2 CLI driver that runs 'exactly' like my ibm_db2 script today.

    Yep. It is a big deal changing DB2 APIs to fit the real needs of json. Exactly why I put this off for a while to test the simpler mechanics of the json toolkit (much less controversial).

  88. Chris Hird reporter

    Hmmmm, lots to think about here... I am definitely on the json over rest to ibm_db2 scripts path. But that is only half the battle, as I would like to have additional command/program call capabilities etc. once the db2 stuff is bedded down. The work you have done with db2sock should be able to be extended to deliver some of that functionality.

  89. Chris Hird reporter

    Tony, yes I understand the issues and your analogy of a jet plane in a stadium. I also understand the grief which will inevitably come from updating the DB2 API set to encompass JSON support, async, etc., but they are important for any integrated solution in today's IoT world. Will that change? I have no clue, but for me it's something I am facing today and I would like to get a solution.

  90. Former user Account Deleted

    I am definitely on the json over rest to ibm_db2 scripts path

    Allow me to play unfair with the so-called rest full db2 cli ambitions. Be assured, nothing personal, simply full db2 cli 'used car' truth in lending.

    Ok, let's talk about the hypothetical performance of a rest full db2 cli interface.

    tracing db2 cli ... “How much wood could a woodchuck chuck ... ” By Mother Goose.

    The following program is typical using ibm_db2. Simple query, fetch results.

    zzsimple.php:
    
    <?php
    
    require_once('connection.inc');
    
    $conn = db2_connect($db,$username,$password);
    
    $result = db2_exec($conn, "select * from staff");
    
    while ($row = db2_fetch_assoc($result)) {
        printf("%5d  ", $row['ID']);
        printf("%-10s ", $row['NAME']);
        printf("%5d ", $row['DEPT']);
        printf("%-7s ", $row['JOB']);
        printf("%5d ", $row['YEARS']);
        printf("%15s ", $row['SALARY']);
        printf("%10s ", $row['COMM']);
        print "\n";
    }
    
    ?>
    
    output:
    bash-4.3$ php zzsimple.php 
       10  Sanders       20 Mgr         7        18357.50            
       20  Pernal        20 Sales       8        18171.25     612.45 
    :
      340  Edwards       84 Sales       7        17844.00    1285.00 
      350  Gafney        84 Clerk       5        13030.50     188.00 
    (35 records returned)
    

    Using the TRACE=on ability of the new libdb400, we can count the calls a REST full db2 CLI would make by counting the SQLxxx APIs traced. Aka, transition calls over the full DB2 CLI API.

    bash-4.3$ export TRACE=on
    bash-4.3$ php zzsimple.php                          
       10  Sanders       20 Mgr         7        18357.50            
          :
      350  Gafney        84 Clerk       5        13030.50     188.00 
    bash-4.3$ unset TRACE      
    bash-4.3$ grep tbeg /tmp/libdb400_trace_9080643    
    SQLAllocHandle.9080643.1513093563.1.tbeg +++success+++
    :                        
    bash-4.3$ grep -c tbeg /tmp/libdb400_trace_9080643 
    75
    
    (75 transitions script to db2 using SQLxxx APIs)
    

    We see (grep -c tbeg above) 75 CLI transitions/calls from script to DB2 using fine-grained DB2 CLI APIs (SQLxxx tbeg). This means 75 REST calls across our wild, wild west internet to reach our server for only 35 records. Uf Da!!!

    Now, unfairly, extrapolate this simple task over full REST DB2 CLI at typical web usage of 10-1000 hits/second. Well, 'astronomical traffic' over full DB2 CLI is perhaps an understatement. Performance of full DB2 CLI REST may simply break the backs of engineers attempting to build faster CPUs and wider-bandwidth nets (bloody programmers).

    Yes. I told you. I consider the results to be eye opening (if unfair).

    better using DB2 aggregate APIs ... especially json REST DB2

    The following simple aggregate json API reduces 75 'traditional' calls to 1 call across the wire (new db2sock API). I repeat: '1 call', not 75 calls, across the 'new' REST DB2 API.

    {"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
            {"parm":[{"value":"Jones"},{"value":"Vine"}]},
            {"fetch":[{"rec":"all"}]}
           ]}
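
    For contrast with the 75 transitions traced above, here is a hedged sketch of the single round trip from the client side, reusing the stream-context POST pattern from the earlier test script (PHP_URL/PHP_UID/PHP_PWD placeholders as before; the reply handling is illustrative only):

    <?php
    // hedged sketch -- the whole query/parm/fetch travels as one json
    // POST, so the 75 fine-grained CLI transitions become 1 REST call
    $clob =
    '{"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
            {"parm":[{"value":"Jones"},{"value":"Vine"}]},
            {"fetch":[{"rec":"all"}]}
           ]}';
    $context = stream_context_create(
      array('http' =>
        array(
          'method'  => 'POST',
          'header'  => "Content-type: application/x-www-form-urlencoded\r\n".
                       "Authorization: Basic " .
                       base64_encode(getenv("PHP_UID").":".getenv("PHP_PWD")),
          'content' => $clob
        )
      )
    );
    $ret = file_get_contents(getenv("PHP_URL"), false, $context);
    var_dump(json_decode($ret, true)); // all records back, one transition
    ?>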
    

    CLIosaurus rex speaks ...

    Of course, the ramifications of moving beloved existing interfaces like ibm_db2 to aggregate json APIs are unsettling (see CLIosaurus rex below). Worse, any change at the driver level (libdb400.a) profoundly impacts wonderful abstractions like Ruby Rails, ZF2 Database, etc. My view: ALL abstractions like Rails have been medieval quests to find the best wallpaper to put on top of fine-grained ODBC APIs. Aka, surprising to me, nobody is willing to stand up and say the ODBC/CLI emperor is not wearing clothes.

    However, any movement in evolution comes with ramifications. Example: the whole idea of 'check SQL error' becomes more a game of 'trust it is good 90% of the time', hunting for the error in a pile of return codes should it ever fail. The fundamental DB2 CLI premise of coding to 'failure' is challenged. Aka, instead, design accepting that real-world production code rarely fails.

    More heresy: only during development is intensive error-code checking of every SQL statement of any value. You may take some comfort: the idea of 'aggregating' errors is not actually new. In fact, the relatively 'low level' ibm_db2 driver already makes many SQL calls behind the curtain, only offering up enough errors to keep a developer happy (art before science).

    Yes, yes, what of our children (baby rails, baby ZF2 database)?

    BTW -- Fairly, I am not exactly sure what happens to baby rails or ZF2.

    ODBCosaurus rex (CLIosaurus rex) ... a parting thought

    We mark 17 years into the 21st century, and yet ODBCosaurus rex is still driving our thinking toward full REST DB2 CLI. ODBC/CLIosaurus rex is not evolving, missing the overdue great meteor impact in the minds of script writers. The one simple truth: 'CLI architecture does not fit the scripting mission'.

    the big question ...

    Am i changing your thinking on full db2 cli rest (ibm_db2 'as is')??? Do you want to go aggregate for 1 call (better than ibm_db2), instead of bowing to CLIosaurus rex demanding 75 calls (ibm_db2 'as is')?

    BTW -- Again, I apologise. In my mind our REST DB2 CLI chat is analogous to the movie 'The Matrix', where I am asking you to choose the red pill (better than ibm_db2) or the blue pill (ibm_db2 'as is'). I am indeed being unfair, but only for a kind purpose.

  91. Chris Hird reporter

    Tony, I have a great deal of fun reading your responses... My view is we need to look at what modern programmers need. I fully support the aggregate route. One of the issues we need to address is the performance of web-based interactions with IBM i; they are only going to become more important as we move forward with the modernization of IBM i applications. Changing from 75 requests to 1 must be a good thing, even if it does upset the old one-step-at-a-time CLI process.

    If the question is 'should we offer ODBCosaurus rex for the sake of continuity?', my response is the dinosaurs became extinct for a very good reason... I am with you on the need to move on.

  92. Chris Hird reporter

    Tony, another thought as I ponder the points you raised.

    With the code you posted it needs to fetch each record in isolation, so a call to the server is made each time db2_fetch_assoc() is called (I can only imagine the code required to manage the cursor etc. in the file). This immediately brings up the question of how the data would be handled should an aggregate call be made? Do we get all of the data in a single burst, or is your "fetch" statement going to be the key to the aggregation? I have done some performance balancing previously by only fetching a set number of records and then filling in additional ones as the page down key is pressed (a bit like UIM Manager).

    Chris...

  93. Aaron Bartell

    The following simple aggregate json API reduces 75 'traditional' calls to 1 call across the wire (new db2sock API). I repeat: '1 call', not 75 calls, across the 'new' REST DB2 API.

    {"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"}, {"parm":[{"value":"Jones"},{"value":"Vine"}]}, {"fetch":[{"rec":"all"}]} ]}

    Do you have the latest syntax documented anywhere? I am able to glean a lot from the unit tests, though one thing I was particularly curious about was SQL stored procedures.

    Also, in theory, the community, if they so chose, could create a Rails, Node.js, PHP, Python, etc. ORM over this json interface. It would take some ODBCosaurus-abstraction-layer massaging, but I believe it could be done.

    I am very curious what performance will look like. Specifically, what if it were good enough that this same interface could be used for both 1-tier and 2-tier apps? As long as this is developed in layers, we will be able to swap out transport implementations as technology progresses (i.e. today WebSockets, tomorrow http/2, Thurs??). Said another way, both tier approaches should be able to work with a single interface.

    Tony, I have a great deal of fun reading your responses...

    Same. :-)

  94. Former user Account Deleted

    If the question is 'should we offer ODBCosaurus rex for the sake of continuity?' My response is the dinosaurs became extinct for very good reason... I am with you on the need to move on.

    Great. 1 + 2 = 3 vs. ODBCosaurus rex. I like our odds as 'meteor men'.

    "Jt is ywrite that euery thing Hymself sheweth in the tastyng" (14th century proverb).

    I suggest we finish a few aggregate extensions to the DB2 CLI architecture. In this case json based; test with a modified ibm_db2 and see if we actually like the result (proof of the pudding). I assume you will test with a critical eye (it is needed).

    Thanks.

  95. Former user Account Deleted

    Do you have the latest syntax documented anywhere? (db2sock json manual)

    No. I suffer from geek 'think, code, test, then document (i guess)'. Many json tests (db2sock/tests/json). Ultimately, working json tests decide manual content. Mmm ... maybe i could generate a user manual (says a geek).

  96. Chris Hird reporter

    Yes, I will be on board to test as required. A critical eye is a given... I will also need to go through the libdb400 source and get familiar, so the critical eye has some insight into the process.

    I am now looking at the php build requirements for PASE so I can add php-fpm to nginx on the IBM i. Exciting times and lots to learn. I will probably know a lot more about the PASE environment and how it all fits together before I am done :-)

  97. Aaron Bartell

    No. I suffer from geek 'think, code, test, then document (i guess)'.

    I don't want to slow down the good progress happening by asking for docs. I was mostly wanting to gain perspective as to the latest. I will stick to the commit history for now.

  98. Former user Account Deleted

    This immediately brings up the question of how the data would be handled should an aggregate call be made? Do we get all of the data in a single burst or is your "fetch" statement going to be the key to the aggregation? I have done some performance balancing previously by only fetching a set number of records and then filling in additional ones as the page down key is pressed (bit like UIM Manager).

    Yes. Agree. I too have pondered 'pagination' of records by fetch. A balancing act between 'stateless' json requests vs. 'stateful' json requests. Aka, the idea of collecting the next 80 records of a waiting result set.

    I have some ideas for db2sock, mostly around a 'persistent private connection'. Aka, have 'key', will find your connection and waiting result set. However, I do not have the needed 'IPC' code rendered into db2sock (yet). Of course, there are security implications of a 'key' picking up where it left off ... mmm ... maybe a job for Kerberos (web server admin, not db2sock???). Not fully formed yet.

    If you give it some thought ... I would like to understand if a 'web interface' (socket) leaves the authentication/security job to the web server (Basic Auth, Kerberos, etc.). Therein, db2sock can simply provide a 'keyed' means to return to the waiting result set for the next 80 records. Aka, pagination, not security.

    I am not sure. What do you think?

  99. Aaron Bartell

    Therein, db2sock can simply provide a 'keyed' means to return to the waiting result set for the next 80 records. Aka, pagination, not security.

    I am not sure. What do you think?

    Is there meant to be a single db2sock 'web interface' instance running that all clients communicate with, or will each app have its own 'web interface' that listens on a predefined, specific-to-application port? I ask because I am curious how authorization will work (vs authentication). Am I correct to assume that the user that starts the db2sock 'web interface' is the user that will run the SQL statements, and that user will obviously need authority to the DB2 tables in the SQL statement?

    My question might be a moot point in that the situation would cause one approach over the other (i.e. if you want to run db2sock under different profiles then you need to start multiple db2sock 'web interfaces').

    In general I am in favor of not including authentication in the tool (db2sock) and instead letting the layer above (Nginx, Apache, Node.js, etc.) accomplish authentication. With that said, it would be good to document sample authentication mechanisms (e.g. Basic Auth) so people can be up and running quickly. The community can take care of creating that documentation. A sample of the nginx side follows.
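
    For instance, a hedged sample of the 'front door' approach in nginx, extending the location block shown earlier (auth_basic/auth_basic_user_file are standard nginx directives; the .htpasswd path is a placeholder):

    location ~ \.db2$ {
        auth_basic            "db2sock";
        auth_basic_user_file  /home/ADC/nginx/conf/.htpasswd;
        fastcgi_pass          127.0.0.1:9002;
        fastcgi_param         SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        include               fastcgi_params;
    }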

  100. Chris Hird reporter

    I agree, there are lots of questions to be answered as to the best way to handle the aggregation.

    Having a persistent connection should allow some management, as we will know the connection id, and maybe we have a key process that allows the data to be pre-fetched and delivered to the user on key presentation. Once delivered, it has a new key attached for the next pre-fetch result set? The key and data have to be tied to the connection id, so if the connection is dropped all of the garbage is collected and cleaned up. The key will also be connection relevant, so a request for the same data over a different connection should be impossible?? The connection process should handle the authority etc., all of which can be stored in the connection information? I am with leaving the authorization to the web server side of the fence; if it decides you have authority, why should a lower process be required to re-interpret the authority?

  101. Former user Account Deleted

    In general I am in favor of not including authentication in the tool (db2sock) and instead let the layer above (Nginx, Apache, Node.js, etc) accomplish authentication ... documentation

    Ok.

    The connection process should handle the authority etc. all of which can be stored in the connection information?

    Yes. In fact, currently, for db2sock SQL400Connect (new aggregate), the persistent connection key is key="db2"+db+uid+pwd+qual. Therefore, to return to the same 'connection', the key must match (with the same stmt handle ... of course). The theory is that you know your uid/password, so the other information is just which database and which qualified connection.

    Simple. So it would seem.

    However, for json, dear json, especially REST json, a default connect(null, null, null) seems practical over passing any password around (+/- ssl). Witness: all db2sock/tests/json tests simply use the implicit default connect(null, null, null). The implication, of course, is that any 'right to use' the REST interface must be stopped/allowed at the web server front door, aka Nginx, Apache (Basic Auth, etc.).

    To the point: the db2sock ramification of connect(null,null,null) + qual is simply key="qual". Therefore, web server 'authentication' had best be working to hold back the hordes of hackers running db2 queries. Mmm, well, i suppose you either trust web server authentication or not.

    ... new key attached for the next pre-fetch result set?

    I suspect not. That is, key="db2"+db+uid+pwd+qual along with the stmt handle would return you to the correct waiting result set (next 80 records). There may be an additional + daemon_name in the key depending on the implementation of 'persistent private connection' ... but ... too early to understand.

    single db2sock 'web interface' instance running that all clients communicate with, or will each app have its own 'web interface' that listens on ...

    The previous 'key' discussion is the root technology of the related 'how many daemons' question (above). There are factors no matter which direction.

    http request json -> (nginx/Apache fastcgi) ->(*)
    
    == fastcgi one db2sock all db2 web work (easy memory based) ==
    (1)->key="_db2_"+db+uid+pwd+qual->stmt(s) (one  db2sock)
    
    == fastcgi db2sock manager(s) route work many daemons (IPC based) ==
    (n)->key="_db2_"+daemon+db+uid+pwd+qual->stmt(s)
    -> daemon(1) db2sock->"_db2_"+db+uid+pwd+qual->stmt(s)
    -> daemon(2) db2sock->"_db2_"+db+uid+pwd+qual->stmt(s)
    :
    ->daemon(n) db2sock->"_db2_"+db+uid+pwd+qual->stmt(s)
    
    Note: 
    32,000 conn + stmts 'resources' allowed any given daemon
    (db2 limit 32K 'handles' per job/process ... not due to db2sock)
    

    Answer: Frankly, at great peril to the 'all answers will be revealed' narrative, I just don't know yet. I suspect your own thoughts are as good as mine.
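
    To make the key composition concrete, a hedged PHP sketch (the function name is hypothetical; only the concatenation rule and the connect(null,null,null) collapse come from the discussion above):

    <?php
    // hypothetical helper -- compose the persistent connection key
    function db2sock_key($db, $uid, $pwd, $qual) {
        // default connect(null, null, null) + qual collapses to key="qual"
        if ($db === null && $uid === null && $pwd === null) {
            return $qual;
        }
        return "db2" . $db . $uid . $pwd . $qual;
    }
    var_dump(db2sock_key(null, null, null, 'fred'));             // "fred"
    var_dump(db2sock_key('UT28P63', 'DB2', 'NICE2DB2', 'fred')); // full key
    ?>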

  102. Former user Account Deleted

    ... sleep better with daemons

    This is all theory ... so ... (deleted, see Mulligan below).

    FYI -- If you create a UNIX socket of type SOCK_STREAM (listenSock = socket(AF_UNIX, SOCK_STREAM, 0)), and accept connections on it, then each time you accept a connection, you get a new file descriptor (as the return value of the accept system call). This file descriptor reads data from and writes data to a file descriptor in the client process. Thus it works just like a TCP/IP connection.
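
    A minimal PHP illustration of that accept-per-client pattern, assuming the php sockets extension and a hypothetical socket path (/tmp/demo.sock); error handling is omitted for brevity:

    <?php
    // unix domain SOCK_STREAM server -- each accept() returns a new
    // descriptor wired to one client, just like a TCP/IP connection
    $path = '/tmp/demo.sock';
    @unlink($path);
    $listen = socket_create(AF_UNIX, SOCK_STREAM, 0);
    socket_bind($listen, $path);
    socket_listen($listen, 5);
    while ($client = socket_accept($listen)) {
        $in = socket_read($client, 4096);      // read from this client only
        socket_write($client, "echo: " . $in); // write back on the same fd
        socket_close($client);
    }
    socket_close($listen);
    ?>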

  103. Former user Account Deleted

    "Mulligan" -- (in informal golf) an extra stroke allowed after a poor shot, not counted on the scorecard.

    Anyway ...

    "Mulligan" ... pagination (next 80 records)

    Most obvious answer. Pagination (next 80 records). Simply use the natural 'path search' of web servers.

    ===
    ./db2jsonngix -start -connect /tmp/alice.sock ./db2jsonfcgi
    url=http://myibmi/goask.alice
    (db2 REST traffic to /tmp/alice.sock)
    ===
    location ~ \.alice$ {
        fastcgi_pass   unix:/tmp/alice.sock;
        fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        include        fastcgi_params;
    }
    
    ===
    ./db2jsonngix -start -connect /tmp/bert.sock ./db2jsonfcgi
    url=http://myibmi/goask.bert
    (db2 REST traffic to /tmp/bert.sock)
    ===
    location ~ \.bert$ {
        fastcgi_pass   unix:/tmp/bert.sock;
        fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        include        fastcgi_params;
    }
    
    ===
    ./db2jsonngix -start -connect /tmp/ernie.sock ./db2jsonfcgi
    url=http://myibmi/goask.ernie
    (db2 REST traffic to /tmp/ernie.sock)
    ===
    location ~ \.ernie$ {
        fastcgi_pass   unix:/tmp/ernie.sock;
        fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        include        fastcgi_params;
    }
    

    In the nginx.conf above we have three different db2 'daemons'. The url request suffix (/goask.alice, /goask.bert, /goask.ernie) directs db2 rest json connect(null, null, null, 'qual') + 'stmt' to the correct waiting result set.

    Missing today ... the 'stmt' handle "will be" in the return json, so the client has all the information for the hash connect('db','uid','pwd','qual') + 'stmt'. Of course, it is likely to be connect(null, null, null, 'qual') + 'stmt' to the correct waiting result set (next 80 records). A sketch of the intended flow follows.
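
    A hedged sketch of that intended flow (the stmt handle in the reply is "missing today" per the note above, so the request shapes here are speculative, not a documented API):

    <?php
    // speculative shapes only -- the url suffix picks the daemon
    $url  = 'http://myibmi/goask.alice';
    // 1) qual'd connect + query; the reply would carry a 'stmt' handle
    $open = '{"connect":[{"qual":"fred"}],
              "query":[{"stmt":"select * from QIWS/QCUSTCDT"},
                       {"fetch":[{"rec":80}]}]}';
    // 2) follow-up: same qual + returned stmt reaches the waiting
    //    result set held by the alice daemon (next 80 records)
    $next = '{"connect":[{"qual":"fred"}],
              "fetch":[{"stmt":1,"rec":80}]}';
    ?>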

    Mulligan. Works. No mess.

  104. Former user Account Deleted

    status

    It is becoming apparent that not 'everything json' will be done by Holiday's end. So, you are welcome to use db2sock support "as is", post issues, etc. However, look to next year to 'see' more key functions. In fact, the request list grows as people in and out of the lab test the potential of db2sock Open Source.

    Happy Holidays!

  105. Former user Account Deleted

    Thanks for your help supporting this project.

    I am closing this issue because the topics have become so varied that we will not be able to find information searching issues. That is, I suspect people may want to look through issue titles and have a look at resolutions and chat.

    Please open new topical issues going forward. Thanks for all the help to date.
