db2sock and node db2 impacts

Issue #5 new
Former user created an issue

I suspect db2sock (libdb400.a replacement) could have significant impacts on design thinking for node db2 connector and toolkit. I am opening this 'enhancement' issue to discuss a few ideas. In fact, some are radical in thought, so probably best to discuss openly with people who are interested in the long term health of node on IBM i.

Comments (33)

  1. Former user Account Deleted reporter

    My first topic is a 'radical' one.

    Redesign the node db2 driver (this driver) to be completely json based, talking to the new db2sock (libdb400.a replacement). Practically, all DB2 calls will be more like the 'toolkit', using json instead of the current SQL CLI calls written in 'c code'. Essentially, this driver would only call a new API SQLJson400(json_in, json_out).
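    To make the idea concrete, here is a sketch of what a single SQLJson400 round trip might look like from the node side. Every json key used here (connect, query, parms, fetch, records, sqlcode) is a hypothetical shape, not the finalized db2sock format, which the later comments say is still under construction.

```javascript
// Hypothetical request/response shapes for SQLJson400(json_in, json_out).
// All key names are assumptions -- the real json layout is still being
// designed in litmis/db2sock.
const jsonIn = JSON.stringify({
  connect: { db: "*LOCAL" },
  query: "SELECT LSTNAM, CITY FROM QIWS.QCUSTCDT WHERE STATE = ?",
  parms: ["TX"],
  fetch: "all"
});

// The driver would hand jsonIn to the native API and get json back:
//   SQLJson400(jsonIn, jsonOut, callback)  -- one call, no CLI steps
// A reply might look like:
const jsonOut = JSON.stringify({
  records: [{ LSTNAM: "Doe", CITY: "Dallas" }],
  sqlcode: 0
});

console.log(JSON.parse(jsonOut).records.length); // 1
```

    The whole connect/prepare/bind/execute/fetch sequence collapses into one string in, one string out.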

    Rationale?

    If you look at customer applications, in nearly all 'use cases' of node with db2, 'everything' reduces to json. In fact, this current driver goes 'out of its way' to return json. I am merely suggesting this driver's 'c code' functions are better served inside the db2sock low level 'engine' (libdb400.a, libtkit400.a, libjson400.a).

    More?

    Specifically, I am talking about nearly 'gutting' this current driver, moving all the 'real hard work' c code into the lower level functions. Thereby, all languages can take advantage of the exact same 'aggregate APIs' to deal with the problems of 'connection pool', to/from json, fetchAll, etc. Also, when performance issues arise (they always do), we have only one litmis/db2sock area to fix.

    Radical? Yes. What do you think?

  2. Former user Account Deleted reporter

    My second topic is an even more 'radical' addition to the above.

    Assuming we conclude the above 'redesign' to a 'pure json' based node db2 driver is a good idea, aka, we do it. As a follow-on, we can consider having the new 'json' db2 driver work both locally (SQLJson400 API above), and remotely from anywhere over standard REST. The litmis/db2sock project has already started to support both an ILE CGI interface and fastcgi (PASE). Sending 'json' through to the SQLJson400 API in libdb400.a is, well, trivial.

    Rationale?

    This enables multiple tier node applications running on any platform to simply send REST json the same way they already run locally. That is, no change to run node+db2 from your PC to IBM i over REST. The only difference will be the performance delay as the json transports across REST (HTTP / Apache).

    More?

    This additional REST libdb400.a idea also allows ANY of the Apache supported authentication schemes to simply work right out of the box. That is, Apache supports Kerberos, basic auth, LDAP, https, etc. Simply stated, we get all that advanced function for free with litmis/db2sock over REST via Apache today.

  3. Former user Account Deleted reporter

    My next topic is some technical details on how 'pure json' may render for this new db2 driver.

    The node driver (this driver) may only be a few lines of c code. In fact, it may only be one c code call to the new SQL400(json_in, json_out, callback) API in litmis/db2sock (a new aggregate API in libdb400.a). Specifically, all the SQL driver code dealing with connect, prepare, bind, execute, fetch, etc. is removed, as SQL400 takes care of everything.

    What about the db2 interface?

    The short of it: the db2 interface becomes 'pure JavaScript/node'. Specifically, if we choose to keep the current 'user interface' (db.prepare, db.exec, db.fetchAll, etc.), everything will simply be a JavaScript method/function that creates json to send/receive through SQLJson400.
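    A minimal sketch of that 'pure JavaScript' layer follows. The class name, method chaining, and json op layout are all assumptions for illustration; the transport is stubbed, where a real one would call the native SQLJson400 binding or make a REST request.

```javascript
// Sketch of a 'pure JavaScript' db2 interface: each user-facing method just
// appends a json operation, and one aggregate call replaces the whole CLI
// sequence. Names and json layout are illustrative assumptions.
class DbConn {
  constructor(transport) {
    this.transport = transport; // local SQLJson400 binding or REST sender
    this.ops = [];
  }
  prepare(sql) { this.ops.push({ prepare: sql }); return this; }
  bind(parms)  { this.ops.push({ bind: parms }); return this; }
  execute()    { this.ops.push({ execute: true }); return this; }
  fetchAll(cb) {
    this.ops.push({ fetchAll: true });
    // Single aggregate call: everything ships as one json request.
    this.transport(JSON.stringify({ ops: this.ops }), cb);
  }
}

// Stub transport that echoes the request back (a real one calls SQLJson400):
const stub = (jsonIn, cb) =>
  cb(null, JSON.stringify({ records: [], echo: JSON.parse(jsonIn) }));

new DbConn(stub)
  .prepare("SELECT * FROM QIWS.QCUSTCDT WHERE STATE = ?")
  .bind(["TX"])
  .execute()
  .fetchAll((err, jsonOut) => console.log(JSON.parse(jsonOut).echo.ops.length)); // 4
```

    Note there is no c code anywhere in this layer; only the transport differs between local and remote.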

    Side note. Now, if you are a radical thinker (I am), you may start to wonder why there is a db2 interface at all. That is, json is already natural in JavaScript, so simply skip db.exec, db.prepare, etc., and just send the json to SQL400(json_in, json_out, callback). But ... that may be too radical ...

  4. Former user Account Deleted reporter

    My next topic is for skeptics with performance-will-stink stamped on their foreheads (we all have a little, now, don't we?).

    I suggest the performance of using pure json will be a wash with the current c code calling individual SQL APIs. In fact, performance may even be significantly better when using the new aggregate SQLThing400 APIs in libdb400.a, because more actual work is accomplished in one call (aka, connect, prepare, execute, fetch, etc.).

    What????

    Ok. So hear my argument. This current old node driver spends an inordinate amount of time calling individual SQL APIs like SQLConnect, SQLPrepare, SQLBind, SQLExecute, SQLFetch. To wit, in each case, 'internal node variable structures' representing things like numbers, objects, and json are converted by the interpreter from string forms to actual binary in the structures and 'bound' to fit the SQL interface monkey business (1).

    We propose to replace all that 'monkey business' with one simple call: SQLJson400(). The new pure json interface simply uses the JavaScript 'db2 interface' to convert JavaScript variables to/from 'json'. JavaScript is very high performing at 'json' conversion (a speciality of JavaScript). In fact (radical), I would offer that the current node db2 driver is simply getting in the way (too radical?).

    Proof in the testing ...

    So, my theory is we could use a 'pure json' interface to the new aggregate SQLJson400 API and it may even be faster than this current 'traditional' driver. However, we will not have to just guess; we can run the drivers side by side to check the theory (science 101).

    Additional radical thinking database ...

    (1) Monkey Business

    The granular APIs of ODBC (the IBM i CLI being a superset) are designed for compiled languages like c. That is, a c program has tight control over SQLConnect, SQLPrepare, SQLExecute, SQLFetch (tight loop). Basically, your c program can control everything from binding to errors every step of the way along the process.

    I suggest that scripting languages never really needed that level of tight control. In fact (radical), the constant feedback of every error on each SQL call gets in the way of beautiful scripts. I mean really, about 80% of the time we just want to send a SQL statement with some parameters and get back the result set answer. More importantly, we just want to know which of the steps failed during development/debug to correct the problem BEFORE production. Aka, we write the same connect, prepare, execute, fetch loop zillions of times ... because we have the wrong API granularity in the database driver (ODBC, perhaps where the 'O' means 'Old').

    Enough says i!

    Let's add an aggregate API like SQLJson400(json_in, json_out, callback) to the 'super duper set' of CLI functions in litmis/db2sock. Make the driver fit the scripting environment, instead of forcing every script to look like a 'c program'.

    I did say radical ...

  5. Former user Account Deleted reporter

    OK, that is more than enough for understanding and feedback of this radical idea.

    Summary:

    1) The DB2 interface is 'pure JavaScript'. Every user API (db.prepare, db.execute, db.fetchAll, etc.) simply maps to a json request.

    2) The DB2 node c code 'driver' (this driver) only calls the new SQLJson400(json_in, json_out, callback) API. (Plus/minus the node extension in/out of interpreter boilerplate c code.)

    3) I suggest performance will be better than current.

    4) I suggest the additional REST db2 json capability will be revolutionary. If nothing else, very cool development on the laptop with calls to IBM i REST libdb400.a (Apache).

    Your turn ... feedback ...

  6. Former user Account Deleted reporter

    Oh ... one last thing ... litmis/db2sock is undergoing a complete rewrite of the SQLJson400 interface. So, the current 'junk' json interface used for tests is being completely replaced. Aka, you will just have to wait for the real json interface.

    Yes, bummer, litmis/db2sock is 'under construction'.

    Anyway, you will have to watch the litmis/db2sock page to see the new 'replaceable' driver elements. That is, with the redesigned interface, if you do not like the json 'format', you can simply replace it with your own 'parser'. This was a response to requests from guys on my lab team (Kevin), and it is a very good idea, being 'open source' in the true sense.

  7. Aaron Bartell

    Some thoughts...

    • I really like the overall flavor of this: make DB2 dead simple for Node.js. Related, I also like the concept of resolving these issues for all PASE languages.

    • This project would have a dependency on db2sock, correct? I would eventually like to see this repo as an NPM on npmjs.com so I am just trying to think of how everything would be packaged because there are client-side and server-side portions that we'd have to think through. Maybe we create an NPM for the Node.js flavor of db2sock?

    • I really like the idea of being able to develop on the laptop and make seamless communication to DB2 for i. I can't say how much I like this. Very very cool and much needed. I also think there's much more we could do on this REST front in the future (i.e. Websockets, maybe Protocol Buffers). More in my "Opinion..." below.

    • Commitment control hasn't been brought up yet. How would that be addressed should someone ~need~ to control it?

    • Related to the previous bullet, could this be an added interface to this project vs. a replacement, in case somebody still wanted to do the manual connect, prepare, bind, exec? This is a radical enough switch that it might take more time to marinate to make sure we're not neglecting certain business scenarios.

    • Opinion based on me doing a lot of Node.js the past two years... Setting up Apache is extra infrastructure I don't really need** because Node.js has an excellent built-in web server. If I were authoring this stack for Node.js infrastructure I'd have it be as follows:

    DB24i npm (laptop) -> DB24i npm (server) -> db2sock npm -> Actual DB24i

    ** I realize PHP needs it, but Apache is merely a forced layer for the other PASE langs (Node.js, Python, Ruby).

    Further, and more importantly, Apache doesn't work in containers, though nginx does (assuming my Node.js-as-the-webserver won't fly for Ruby/Python scenarios). Time to switch ponies from Apache to Nginx?

    Also, I am still noodling this thread so please don't consider this response comprehensive.

  8. Former user Account Deleted reporter

    Also, I am still noodling this thread so please don't consider this response comprehensive.

    Well, first, please trust that your opinion is always considered with the highest regard. Thank you. Open communication is the only way to end up on the best path for all.

    Time to switch ponies from Apache to Nginx?

    First, nginx vs. Apache - No web server fight for dominance is intended.

    The nginx web server is wonderful. Please use it to your heart's content.

    The Apache additions like litmis/mama are intended for those people wishing to integrate into current web configurations. Aka, not fighting city hall, for those invested in Apache admin training and tools (web gui).

    Further, and more importantly, Apache doesn't work in containers, though nginx does (assuming my Node.js-as-the-webserver won't fly for Ruby/Python scenarios).

    On the point of 'chroot', we could enable fastcgi litmis/mama to start chroot applications by simply creating a setuid *SECOFR version that enters the chroot while starting the application. Therein, the value of chroot isolation is maintained, but the IBM i administrator only has to issue one command (STRTCPSVR).

    The secondary topic refers to the built-in python, ruby, and node web servers used in many 'stand-alone' applications. Frankly, I just don't know if these built-in web server applications run in production. In any event, I believe nginx can be the target of an Apache reverse proxy, so in theory, Apache could forward to it like it does for any other 'directory' filtered request.

    Uff da! I don't want to sound like the Apache guy, but I also want people to understand 'web configuration' has nearly infinite possibilities. Mix/match is possible to fit nearly any desire. (I am not a consultant, so having lots of ways to do things is not going to result in calls to me at 3 am ... like you ... perhaps.)

    Commitment control hasn't been brought up yet. How would that be addressed should someone ~need~ to control it?

    The SQLJson400 API has full access to all ODBC/CLI functions; therefore, control of 'commit' can simply be added to the json request(s). That is to say, a 'json request' has full access to both simple SQL APIs (like ODBC/CLI today) and super aggregate APIs (do everything from connect to fetchAll). Meaning, we can send multiple packets of json to the same 'connection/statement', including {autocommit:"off"}, when the user really wants to control the flow. Perhaps the best way to understand is to think like XMLSERVICE (only json): you can keep adding operations into a single 'json' request, and/or you can 'route' more 'json' requests to the 'last' statement/connection (a private connection until released by the script).
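    As a sketch of that idea, a commitment-control attribute could ride along in the same json stream as the SQL operations. The op names below ('attr', 'commit', etc.) are assumptions, not the finalized db2sock format; the point is only that a CLI attribute becomes one more json operation.

```javascript
// Hypothetical multi-operation json request: commitment control is just
// another operation in the stream. All key names are illustrative.
function buildTxnRequest(sql, parms) {
  return JSON.stringify({
    ops: [
      { attr: { autocommit: "off" } }, // analogous to setting a CLI connect attribute
      { prepare: sql },
      { bind: parms },
      { execute: true },
      { commit: true }                 // explicit commit when the user wants control
    ]
  });
}

const req = JSON.parse(
  buildTxnRequest("UPDATE QIWS.QCUSTCDT SET CITY = ? WHERE CUSNUM = ?", ["Dallas", 938472])
);
console.log(req.ops[0].attr.autocommit); // "off"
```

    Follow-on requests routed to the same connection/statement would let a script keep manual control for as long as it holds the connection.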

  9. Former user Account Deleted reporter

    Related to the previous bullet, could this be an added interface to this project vs. a replacement in case somebody still wanted to do the manual connect, prepare, bind, exec? This is a radical enough switch that might take more time to marinade to make sure we're not neglecting certain business scenarios.

    100% agreement. As with all radical changes, the outcome could be really great or fall flat on its face. One simple truth about Open Source ... it is crowd funded. No matter how wild the idea, stakeholder input, help, and confirmation (testing) are the real 'ace in the hole'.

  10. Former user Account Deleted reporter

    This project would have a dependency on db2sock, correct?

    Yes.

    I would eventually like to see this repo as an NPM on npmjs.com so I am just trying to think of how everything would be packaged because there are client-side and server-side portions that we'd have to think through. Maybe we create an NPM for the Node.js flavor of db2sock?

    So ... Jesse G. has already announced RPM support intent. I would consider a 'plumbing' function like db2sock to be a perfect RPM candidate. Best thing: you and I already know that RPMs work well in a container (chroot). So, versioning of db2sock could be a 'snap' with RPM.

    db2 current driver or new driver (npm) ... and toolkit too ...

    I agree, npm package would be great. Especially great if npm could load the 'remote' version on your PC.

  11. Former user Account Deleted reporter

    This project would have a dependency on db2sock, correct?

    Part 2 ... Yes and no.

    Yes. On the IBM i. The 'pure JavaScript' db2 interface (db.prepare, db.execute, etc.) runs directly against the slim db2 npm extension calling SQLJson400(json_in, json_out, callback), exported out of db2sock libdb400.a (the new PASE CLI driver).

    No. On the PC (aka, any remote node). The 'pure JavaScript' db2 interface (db.prepare, db.execute, etc.) sends data over REST to Apache (or nginx) running on IBM i. Apache/nginx uses a simple CGI/fastcgi interface to forward the 'json' through SQLJson400. That is, any PASE/ILE program can call SQLJson400, so any CGI built for ILE, PASE, nginx, Apache, or a man-in-the-moon web server can export the libdb400 interface.

    Is this understood???

  12. Former user Account Deleted reporter

    db2 current driver or new driver (npm) ... and toolkit too ...

    I agree, an npm package would be great. Especially great if npm could load the 'remote' version on your PC. Specifically, 'pure JavaScript' npm code that makes a REST call to Apache/nginx. That is, no npm package 'c code' is required on the PC (remote), because REST is already a function of node on the client. Aka, we only need to package the 'pure JavaScript' db2 interface for npm download (no c driver at all).

    Also, perhaps understood, but I will mention it. The 'remote/PC' db2 REST interface npm will NOT require DB2 Connect LUW ... no money, no driver, no nothing, only the REST support already built into node and many languages (php, node, ruby, python).

  13. Former user Account Deleted reporter

    Ok. I think I provided answers/suggestions for your current questions.

    Summary:

    1) ... overall flavor of this: make DB2 dead simple for Node.js (all languages)

    A: Yes. The idea is to move the db2 driver code repeated across many languages into the lower level db2sock. One stop shopping for consistent function and performance.

    2) This project would have a dependency on db2sock, correct?

    No. The db2 user interface will be 'pure JavaScript' (db.prepare, db.execute, db.fetch, etc.). The resulting map to 'json' occurs in JavaScript, then is passed either directly to SQLJson400 (slim driver on IBM i), or via a REST call to IBM i remotely.

    Yes. On IBM i only. A 'very small' db2 extension for each language to call SQLJson400 exported from litmis/db2sock (under construction ... patience please).

    No. Remote (laptop, etc.). Only the 'pure JavaScript/node' db2 interface would need to be downloaded via npm. The db2 interface would be configured to talk REST to IBM i. Wherein, IBM i Apache, nginx, or any IBM i web server (node, python, php, ruby, etc.) can export the call to SQLJson400 in db2sock.

    3) ... cool ... develop on laptop and make seamless communication to DB2 for i.

    Yep. I thought people would really like this part (revolutionary simplicity).

    4) Commitment control hasn't been brought up yet.

    No problem (i think). Simply stated, SQLJson400 can call any ODBC/CLI API added to json, therefore {"autocommit":"off"} is just another 'json' operation.

    5) ... somebody still wanted to do the manual connect, prepare, bind, exec?

    Again, see the previous answer. SQLJson400 can call any ODBC/CLI API added to json; therefore 'manual' control of these operations could be routed to the 'same' conn/stmt with multiple requests. (Aka, I don't know why someone would do this seemingly wasteful, multiple-call 'c program' style thinking ... but ... ok.)

    6) Apache is extra infrastructure ... better nginx ... also, Node.js has an excellent built-in web server.

    Web server configuration possibilities are endless. I simply want people to understand that knowledge is the only real barrier to doing something they desire (I think).

    Again, no web server war initiated by me. All are welcome: nginx is fine, the node web server is fine, any language web server, and so on. The specific Apache options enabled by litmis/mama allow standalone web servers like a node application to be started along with an Apache instance for single command operator actions (STRTCPSVR). Also, as mentioned, we could enhance litmis/mama to add a setuid 'helper' (*SECOFR) to chroot start any 'container' application or standalone server (including nginx in a chroot).

    Ok, back to db2sock coding ...

  14. Former user Account Deleted reporter

    For those wondering ... when will db2sock SQLJson400 be available?? Well, the plan is during September (this September 2017). I actually hope to have at least the correct framework up in the repository sometime next week (litmis/db2sock).

    Again, the one up in the repository is not correct, simply a toy for me to work out general ideas. Aka, do not rely on the odd json like 'dcl-s' and 'dcl-ds' ... I was only working through some of the conversion code (string to packed/zoned, etc.).

  15. Aaron Bartell

    No. On the PC (aka, any remote node). The 'pure JavaScript' db2 interface (db.prepare, db.execute, etc.) sends data over REST to Apache (or nginx) running on IBM i. Apache/nginx uses a simple CGI/fastcgi interface to forward the 'json' through SQLJson400. That is, any PASE/ILE program can call SQLJson400, so any CGI built for ILE, PASE, nginx, Apache, or a man-in-the-moon web server can export the libdb400 interface. Is this understood???

    Yes, I understand. One thing I am interested in testing is Apache's approach to processing requests ~efficiently~ vs. Node.js (processes vs. event loop/threads). I don't like the idea of requiring Node.js for Ruby/Python/PHP stacks, but when I remove the technology names (Node.js, Apache, Nginx) and only focus on the features that will make it most efficient, well, I keep coming back to Node.js (I haven't researched Nginx enough on this front, though I understand it operates on a similar event model). Especially because it will not only be available as an RPM from IBM, but also because anyone (ISV, hobbyist, business) can ~distribute~ the Node.js RPM. This means a person (ISV, hobbyist, business) could provide a single .zip file that contains everything their app needs to run; no need to SNDPTFORD or other silliness. Simply run the script and an entire chroot is ready and fully configured in short order (automation/simplicity).

    I hear you when you say you don't want a web server war. Neither do I; I just don't want to stick with something (Apache) that doesn't meet our current or future needs, nor has RFEs to address the issues (i.e. Apache would need SSL* directives reintroduced, for example). The reason Apache needs to stay (today) is I don't believe we have FastCGI for Nginx (yet).

    I agree, npm package would be great. Especially great if npm could load the 'remote' version on your PC.

    I have been trying to imagine how packaging would work. I don't have this figured out in my head yet, because when we do npm install on the laptop we don't want to load the C-based components, which would be documented in package.json. Further, it's not a 'development vs. production' thing either, because I have customers that will use this for Node.js on production CentOS (on-premise) to talk to their DB2 for i.

    This is one of the reasons I keep going back to using Node.js instead of Apache. We already know HTTP requests to Apache will most likely be too slow for 2-tier production apps. We could instead develop a "DB2 REST Server" that effectively had the below flow and then keep it running with PM2 (which is to Node.js what mama is to FastCGI).

    HTML5 Websocket(n1) request->Node.js->SQLJson400->DB24i
    

    n1 - Or normal HTTP GET for those clients that can't upgrade the connection to Websocket.

    Also, perhaps understood, but i will mention. The 'remote/PC' db2 REST interface npm will NOT require db2 connect LUW ... no money, no driver, no nothing, only node REST already built into node and many languages (php, node, ruby, python).

    I was curious if that could be stated out loud :-) Yes, I am glad for this. It will resolve problems for a number of my customers.

    With all the above said, I don't want my commentary to hold up any of your db2sock or SQLJson400 progress. An Apache implementation can exist initially, and then we/I can subsequently implement the Node.js one (which should actually be fairly simple, less than a day's work).

  16. Former user Account Deleted reporter

    An Apache implementation can exist initially, and then we/I can subsequently implement the Node.js one (which should actually be fairly simple, less than a day's work).

    Correct. Correctly stated, this whole json idea is 'web server' agnostic. In fact, it is very simple to implement in any 'web server' ...

    'Any' language web server (stand-alone app)  ...
    browser->json->nodejs->slim-npm-calls-SQLJson400
    browser->json->python->slim-wheel/egg-calls-SQLJson400
    browser->json->php->slim-pecl-c-calls-SQLJson400
    
    Fastcgi languages  ...
    browser->json->Apache->fastcgi->php->slim-pecl-c-calls-SQLJson400
    
    Apache (included db2sock project) ...
    browser->json->Apache->fastcgi->db2jsonfcgi-calls-SQLJson400
    browser->json->Apache->ILECGI->db2json-calls-SQLJson400
    
    same for nginx ...
    browser->json->nginx->'any language'-calls-SQLJson400
    
    ... on ... and on ... and on ... infinite
    

    We already know HTTP requests to Apache will most likely be too slow for 2-tier production apps

    I don't agree with your Apache 'too slow' argument. In fact, Apache has almost nothing to do with 'too slow', any more than any other 'web server' implementing http protocols. Basically, http protocol speed is completely normal and expected for any 'application' written with REST API intent (most json based). In fact, there are millions of REST API http protocol applications working just fine (web server agnostic). If I may be blunt (and a bit unkind), most 'performance issues' with REST API http protocol applications are 'rookie web programmer' unrealistic expectations for the types of applications that can thrive in the environment.

  17. Former user Account Deleted reporter

    Ok, I gave you the Apache stick (bad Aaron) ...

    Now the carrot (good Aaron) ...

    HTML5 Websocket(n1) request->Node.js->SQLJson400->DB24i

    When 'outrageous' web performance expectations arise, 'web sockets' may be needed ... gaming apps ... military ordnance guidance ... a web server that is really an ftp of 30,000 records per call (tee hee) ... looping benchmark performance tests ... trying to form a black hole with a speed-of-light particle accelerator ...

  18. Former user Account Deleted reporter

    Speaking in 'whole internet' ecosystem terms ... 'save the json API' (save the whales).

    Dude! I am not trying to be unkind, but advocating to all the 'unwashed' to develop applications dependent on 'web socket' speed should be a rare exception. In fact, should 'web sockets' become the 'standard' solution to ALL internet programming ... well ... it would absolutely doom the internet.

    Do you agree?

  19. Aaron Bartell

    I fear you may be missing the point. Correctly stated, this whole json idea is 'web server' agnostic. In fact, it is very simple to implement in any 'web server' ...

    I get it that it is server agnostic, and I may be using the wrong forum for furthering what I believe should be a long-term objective. In short, I am trying to keep from having many implementations because our community is finite on resources. We will have the Apache implementation out of the gates. More in my next answer below...

    I don't agree with your Apache 'too slow' argument. In fact, Apache almost has nothing to do with 'too slow', any more than any other 'web server' implementing http protocols.

    First, my 'too slow' comment stems from my experiences with the HTTP->XMLSERVICE implementation. I do realize this is a completely different beast with significantly less code/layers; so maybe I am all wet.

    Second, it wasn't good for me to apply "Apache" to my argument and instead I should stick with implementation methodologies. Let me try again...

    HTTP 1.1 is great for many things, but for some things where significant performance matters, like DB interactions, they (the HTML5 Working Group) introduced Websockets. I do realize HTML<>HTTP.

    "Upgrading" (as they say) a connection to Websockets is in effect downgrading it to the efficiencies of stateful, full duplex, raw sockets. All done in a widely accepted and standardized specification (Websockets). Apache supports Websockets but doesn't support calling directly into the new SQLJson400 without developing an Apache mod or getting another player (PHP) involved. Node.js gives us both, with incredible ease.

    Websockets is just the transport argument. The next is how things are sorted out within IBM i, namely jobs. Apache uses new IBM i jobs/processes to accomplish concurrency/parallelism. Node.js uses an event loop, threads, ~and~ processes (Cluster). My ~guess~ is the event loop and threads will be "less expensive" on the CPU. Node.js was, in part, created to do web communication better than any other existing approach. That's why it is paining me to not highly consider it as we do the implementation of the web server portion.

    Again, this doesn't affect the SQLJson400 portion of this discussion, so I've considered stopping my pursuit of debating it, but it ~does~ highly affect the 2-tier Node.js db connector npm, because the implementation will or won't support Websockets based on our decisions in this thread.

    Given the Node.js db connector API calls will be the same regardless of whether Websockets exists under the covers, we can continue with the Apache approach and later I can test my theories that Node.js is the better server-side DB2 REST web server.

    Dude! I am not trying to be unkind, but advocating to all the 'unwashed' to be developing applications dependent on 'web socket' speed should be a rare exception. In fact, should 'web sockets' became the 'standard' solution to ALL internet programming ... well ... it would absolute doom the internet. Do you agree?

    FWIW, my Myers-Briggs letters are ENTP, which equates to the "Pioneer" personality, which has a personality attribute of enjoying debates, so please don't feel like you need to tiptoe around my ideas. Call them bad if you believe they are, and if I think you're wrong, or underinformed, I will further engage you :-)

    "...rare exception..." is exactly what's being advocated here. We're talking about throughput efficiencies within our application, namely talking to the DB.

  20. Former user Account Deleted reporter

    Yes, the actual SQLJson400 API is 'agnostic' to transport ... more an implementation concern covering the last 'web server' part of this conversation.

    1) On IBM i ... direct CLI call ... traditional 'fast' call to QSQSRVR (like any SQL)

    2) Remote ... many options including REST, web sockets, regular custom sockets, ssl sockets, on and on (meaning agnostic).

    Yes, XMLSERVICE is slow(ish), too slow for some applications. Faster is one reason we are looking at the new db2sock family replacement, including the toolkit (and json too). However, some of the 'RPG applications' XMLSERVICE was asked to move to the web were dubious notions to be sure (they needed a redesign).

    However, also not letting go ... the deadly allure of web socket speed can cause a 'porting-my-application' backlash mistake (big mistake). Simply put, IBM i folks have to recognize that Windows<socket>IBM i applications should NOT simply be ported 'as is' to the web (using web sockets for speed). Web applications, especially 'mobile', must be redesigned to live in a 'stateless' HTTP request world (a json http api being one type). Frankly, we do not have to look very far to find an unnamed 'web admin gui' that did not heed this "it ain't Windows" lesson ... understood?

  21. Former user Account Deleted reporter

    So, said my piece ...

    You may have your 'web socket' cake and eat it too.

    However, for clarity (my own perhaps), I don't think any of this 'web server' discussion actually affects the driver SQLJson400 API (the new hybrid CLI API). To wit, SQLJson400 is below, "in the db2 driver", and ALL lofty discussion of transport protocols like web sockets, http 1/2, ssl, etc. is above, in implementation details (servers export the SQLJson400 stuff in node js code, php, python, a fastcgi c app, an ile cgi app, etc.).

    Do you agree?

  22. Aaron Bartell

    I feel like we're talking in very broad usage/scope scenarios. We are strictly talking database connections: Node.js (client/laptop) to Node.js (server), and not Browser (client) to Node.js (server). The design is completely in our hands to be written correctly.

    At this point I think the correct implementation (for the Node.js pure JavaScript client) is to allow for multiple implementations, again, like the ruby-itoolkit adapters, so we don't paint ourselves into a corner**. The initial implementation can be plain HTTP with Apache. If someone is so inclined, they can write additional implementations.

    ** we are both standing in a corner, each with our own brush and a 5-gallon bucket of paint, looking longingly at the door on the other side. Multiple implementations will allow us to paint a door in our corner.
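    The adapter idea can be sketched in a few lines: the db2 interface talks to anything implementing one call(jsonIn, cb) method, so local SQLJson400, plain HTTP, or a future Websocket transport become swappable by configuration alone. All class and method names below are illustrative assumptions.

```javascript
// Sketch of swappable transport adapters (ruby-itoolkit style). The db2
// interface only depends on the call(jsonIn, cb) shape; adapter internals
// are stubbed here -- a real RestAdapter would POST, a real LocalAdapter
// would invoke the native SQLJson400 binding.
class RestAdapter {
  constructor(host) { this.host = host; }
  call(jsonIn, cb) { cb(null, '{"via":"rest"}'); }   // stub: would POST jsonIn
}

class LocalAdapter {
  call(jsonIn, cb) { cb(null, '{"via":"local"}'); }  // stub: would call SQLJson400
}

function makeAdapter(config) {
  return config.remote ? new RestAdapter(config.host) : new LocalAdapter();
}

// Application code never changes; only the config does:
makeAdapter({ remote: true, host: 'ibmi.example.com' })
  .call('{"query":"SELECT 1"}', (err, out) => console.log(JSON.parse(out).via)); // "rest"
```

    A new transport (say, Websockets) is then just one more adapter class, with no change to application code beyond configuration.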

  23. Aaron Bartell

    To wit, SQLJson400 is below "in the db2 driver", and ALL lofty discussion of protocols of transport like web socket, http 1/2, ssl, etc., are above implementation details (server export SQLJson400 stuff in node js code, php, python, fastcgi c app, ile cgi app, etc.). Do you agree?

    Yes, I agree 100%.

  24. Former user Account Deleted reporter

    To this point though, this is where I liked the way ruby-itoolkit implemented the adapters/interfaces. I believe we should follow a similar approach with this repo's implementation so we aren't locked into a single technology.

    An abstract adapter is the theme of the aforementioned litmis/ruby-itoolkit. Add a new transport without affecting the application at all, beyond configuration.

    Yes. I agree. However, each language would have to make a choice of 'style' for the 'pure language' driver that calls SQLJson400 (local or remote). Which begs another, bigger question, "who is doing all the languages" ... ah ... unanswered.

    Side discussion ... I may be missing one of your points not overtly stated with a big club over head (soft hint). Are you saying that, because I include Apache-based ILE-CGI and fastcgi 'samples' in the litmis/db2sock project, I am unfairly tilting the 'web server' election votes by example (fake news)??? Specific action: litmis/db2sock should also have other 'web server' technology demonstrations, like node web sockets, etc.???

  25. Former user Account Deleted reporter

    Oops ... we were both typing at the same time (I love issues).

    I think you already answered one of the language questions ... this project ... you were talking about a node language-specific implementation.

    Aka, we probably should move the detailed 'every language' discussion of litmis/db2sock to that project.

  26. Aaron Bartell

    Side discussion ... I may be missing one of your points not overtly stated with a big club over head (soft hint). Are you saying that, because I include Apache-based ILE-CGI and fastcgi 'samples' in the litmis/db2sock project, I am unfairly tilting the 'web server' election votes by example (fake news)??? Specific action: litmis/db2sock should also have other 'web server' technology demonstrations, like node web sockets, etc.???

    Opinion: The IBM i Apache server is dead on arrival for future needs. An Apache server delivered as an RPM, not requiring entry in QUSRSYS/QATMHINSTC, and not stripped of SSL directives could be a perfect fit. But that's not the Apache we have. So I've started migrating to Nginx and Node.js.

    I can understand not embracing Node.js as the web server, but I can't understand why we don't adopt either Nginx or a non-molested Apache.

  27. Aaron Bartell

    I think you already answered one of the language questions ... this project ... you were talking about a node language-specific implementation.

    Correct, I am inserting more Node.js-speak because of the repo we're collaborating on.

    Aka, we probably should move the detailed 'every language' discussion of litmis/db2sock to that project.

    Good call.

  28. Former user Account Deleted reporter

    One last thing ...

    We had a splendid design conversation about db2sock and the art of possible. Of course, we will continue to have many conversations to achieve the best outcome.

    After our extended chat, I am convinced we have something worthy of pursuit in litmis/db2sock SQLJson400 and, subsequently, new 'pure language' db2 drivers (node 'pure JavaScript').

    Well, it all sounds good on paper (this issue).

    However, no rational developer could offer airtight assurance of success in such a radically different approach. So, while it may sound like I have all the answers, I assure you it only 'sounds' like I have all the answers. I will try to get something up to litmis/db2sock next week (hopefully).

    Again, I thank you for the honest, very good design considerations. Feel free to chat any time about this issue or db2sock.

  29. Former user Account Deleted reporter

    I can understand not embracing Node.js as the web server, but I can't understand why we don't adopt either Nginx or a non-molested Apache.

    This topic is better served in the IBM i RFE plan process. I think we already ship Nginx (lost track). To be clear, I am a geek. All things are possible under geek. However, I am only a geek with a keyboard and a dream, not to be mistaken for an IBM Vice President.

  30. Aaron Bartell

    This topic is better served in the IBM i RFE plan process. I think we already ship Nginx (lost track).

    Yes, Nginx is available via 5733OPS, and eventually via RPM (guessing). I don't plan on issuing RFEs for addressing IBM i Apache because I have a formal port of Nginx.

    Another thing I forgot to mention is that Nginx can be controlled (start/stop/restart) from a shell vs. having to go into CL (STRTCPSVR), which is an Nginx advantage in my eyes.

  31. Former user Account Deleted reporter

    We are going in circles ...

    Technically, there is no difference between making a PASE version of Apache vs. a PASE version of Nginx. As geek(s), this is only political.

    Apache is deployed on IBM i as an ILE implementation. Specifically, ILE programs are not deployable in a container (ILE cannot do chroot). Also, the 'integrated' Apache (ILE) has bells and whistles, such as the STRTCPSVR command, web gui admin, and so on. These 'normal IBM i operations' commands also do not really help the 'container/chroot' mission.

    There are always multiple solutions for any problem. In this case we are specifically talking about remote access with the new 'pure node/JavaScript' db2 idea. I think we can make sure all 'common choices' will be represented (abstract adapter). However, I, geek, prefer not to fight city hall (existing web sites), aka, simply enable the ILE Apache to participate 'easily' with this next-level technology in litmis/db2sock.

    If you ask for Apache chroot in an IBM i RFE, somebody will answer (with authority). However, when IBM OPS goes to RPMs (Jesse G announced), someone can make their own PASE Apache RPM if they prefer it over another web server (problem solved).
