Submitted by Anonymous (not verified) on Wed, 02/05/2014 - 00:00

Hi,

I have a couple of questions:

1. Can you explain the mechanics of using data queues? Is there one input and one output queue per executing job, or is the output queue created per request?

2. I noted that jobs are started and ended per tab used. Is this correct, and if yes, can a job be maintained for the ‘next’ time to reduce overhead? Have you thought of using group jobs?

3. When using URLs (data) as part of the output, where is the actual ‘call’ executed? Within IBM i (any server software) or locally?

4. What is the throughput of any of your customers during peak? How many jobs during peak? And the response times? What is your share (overhead) of the average processing?

5. Are threads maintained (in pools) between interactions? 

6. In RPG source code, such as the several variants of “useremul” shown in the demonstration (I hope that was the name of the RPG source), there are statements referring to resources on the Internet, for instance Google search, Google Maps, etc. From where are these Internet resources then fetched: from the IceBreak/IceCap server on IBM i, or from the web browser of the user connected to the IceBreak/IceCap server?

Niels Liisberg

Wed, 02/05/2014 - 00:00

Hi,

Let me answer your questions one by one:

1. Can you explain the mechanics of using data queues? Is there one input and one output queue per executing job, or is the output queue created per request?

Actually – the data queue contains only a handle to shared memory for the client request and the client response: we are NOT sending data back and forth over the data queues, but rather using the data queue as an event mechanism to connect a request to an available job – so there is only one keyed event data queue in the IceBreak system.

The shared memory is what we call Internal Large Objects – or ILOBs. They are the foundation of the high performance you will find in IceBreak.
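As a rough sketch of that mechanism in Python (purely illustrative – the names, and the dict standing in for shared memory, are not IceBreak's actual API):

```python
import queue

# Payloads live in "shared memory" (modeled here as a dict of ILOBs);
# only a small handle travels on the single keyed event data queue.
ilob_store = {}                 # handle -> request/response payloads
event_queue = queue.Queue()     # stands in for the keyed data queue

def client_request(handle, payload):
    ilob_store[handle] = {"request": payload, "response": None}
    event_queue.put(handle)     # event only: no data is copied here

def worker():
    handle = event_queue.get()  # an available job picks up the event
    ilob = ilob_store[handle]
    ilob["response"] = ilob["request"].upper()  # work happens in place

client_request("H1", "hello")
worker()
print(ilob_store["H1"]["response"])  # -> HELLO
```

The point of the sketch: the queue is a cheap rendezvous between a request and an idle job, while the (potentially large) request and response never leave shared memory.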

2. I noted that jobs are started and ended per tab used. Is this correct, and if yes, can a job be maintained for the ‘next’ time to reduce overhead? Have you thought of using group jobs?

Excellent question – and you are correct: when invoking an IceCap application we start a new session, and for each tab the user opens a new job is spawned. This is also the way users normally work: opening a new Client Access session when needed. The big difference is that it is "easy" for the user to just close the tab and thereby end the session.

For that reason you can configure whether tabs will reuse a 5250 session for a request, and you can define how many sessions you will allow per user. However, if performance is not an issue, just leave it with no limits, since then there will be no restrictions on the user.

We have also used group jobs; however, some user applications already utilize group jobs, so we cannot rely on that feature being available. If your applications already use group jobs, then you can get the job done with only one 5250 session.
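Whether a tab gets a fresh job or reuses a parked one can be sketched roughly like this (Python; class and method names are illustrative only, not IceBreak's configuration API):

```python
# Sketch: sessions are parked when a tab closes and handed back out on
# the next request, up to a configurable per-user limit.
class SessionPool:
    def __init__(self, max_per_user=2):
        self.max_per_user = max_per_user
        self.idle = {}    # user -> parked 5250 sessions
        self.count = {}   # user -> total sessions ever started

    def acquire(self, user):
        parked = self.idle.get(user, [])
        if parked:
            return parked.pop()         # reuse: no new job is started
        if self.count.get(user, 0) >= self.max_per_user:
            raise RuntimeError("session limit reached for " + user)
        self.count[user] = self.count.get(user, 0) + 1
        return f"session-{user}-{self.count[user]}"  # spawn a new job

    def release(self, user, session):
        self.idle.setdefault(user, []).append(session)

pool = SessionPool(max_per_user=2)
s1 = pool.acquire("alice")   # first tab: a new job is spawned
pool.release("alice", s1)    # tab closed: session is parked
s2 = pool.acquire("alice")   # next tab: parked session is reused
print(s1 == s2)              # -> True
```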

3. When using URLs (data) as part of the output, where is the actual ‘call’ executed? Within IBM i (any server software) or locally?

Everything is controlled and executed under the covers of the IBM i. From the client's (the browser's) perspective, it just makes one or more AJAX requests (REST-based web-service requests).

One thing is true, however: when you configure the menu, we expose the command line to run within the selected menu option. This is kept safely on the server side when you use the applications afterwards.

4. What is the throughput of any of your customers during peak? How many jobs during peak? And the response times? What is your share (overhead) of the average processing?

Compared to plain 5250, the IceCap JSON responses are about 3 times bigger. To produce that, we have one job per 250 client requests, running one thread per client connection. For each user session there is one IBM i job serving the HTTP data from IceBreak applications and the connections to each 5250 job. Everything runs within the IceBreak subsystem.

So IceBreak and IceCap are rather IBM i centric. Also, they run completely in the ILE environment, so they are extremely lightweight: an IceBreak job (a user session) requires only 4096K of memory and is purged to disk when not used. We don't use any memory- or processor-hungry environment like Java.

Compared to a basic plain 5250 configuration, we calculate an overhead of only 2.5% to 5% in processor power. This requirement will change over time as you add new features to your application, since you will exploit SQL drop-down lists and free-text SQL search queries.
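To make those figures concrete, here is a back-of-the-envelope calculation using the numbers above; the client count of 1000 is an assumed example, not customer data:

```python
# Assumed example: 1000 concurrent client requests.
clients = 1000
requests_per_job = 250                        # one serving job per ~250 requests
serving_jobs = -(-clients // requests_per_job)  # ceiling division
print(serving_jobs)                           # -> 4 serving jobs

# Per-session memory: 4096K per IceBreak job (user session).
job_memory_kb = 4096
total_mb = clients * job_memory_kb // 1024
print(total_mb, "MB")                         # -> 4000 MB (pageable to disk)

# CPU: upper end of the stated 2.5%-5% overhead over a 5250 baseline.
base_cpu_units = 100.0                        # arbitrary baseline
overhead_units = base_cpu_units * 0.05
print(overhead_units)                         # -> 5.0
```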

5. Are threads maintained (in pools) between interactions?

Yes, we are certainly using a pool of threads. Each browser can open from one to five connections, or even more, depending on the browser type. Each TCP/IP connection is served and maintained by a tiny client thread within the IceBreak core. If the request is static (that is, a request for a file: CSS, HTML, script, image, PDF, Word document, etc.), the client thread checks whether the client supports compression and, if so, serves a gzipped version of the resource from the IceBreak cache directly within the core IceBreak server. This thread remains alive as long as the client keeps the connection open.

For dynamic data – like program calls or SQL data – this thread sends an event to the event manager, which connects the client to the corresponding IBM i session with a request ILOB and a response ILOB. As soon as the response is ready, the response ILOB is detached from the session (making the session available for the next request), and the thread serves the response, compressing it if the client supports gzip.
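The client thread's static-versus-dynamic decision can be sketched like this (Python; the cache, dispatch function, and names are illustrative, not IceBreak internals):

```python
import gzip

# Static resources sit in an in-core cache; dynamic requests are handed
# to the event manager to be paired with an IBM i session.
static_cache = {"/app.css": b"body { color: #333; }"}

def handle(path, accepts_gzip, dispatch_dynamic):
    if path in static_cache:                    # static: serve from cache
        body = static_cache[path]
        if accepts_gzip:
            return "gzip", gzip.compress(body)  # pre-compressed in practice
        return "identity", body
    return "identity", dispatch_dynamic(path)   # dynamic: event manager

def fake_event_manager(path):
    # Stand-in for the request-ILOB / response-ILOB round trip.
    return b'{"result": "from IBM i session"}'

enc, body = handle("/app.css", True, fake_event_manager)
print(enc)                                      # -> gzip
print(gzip.decompress(body) == static_cache["/app.css"])  # -> True
```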

6. In RPG source code, such as the several variants of “useremul” shown in the demonstration (I hope that was the name of the RPG source), there are statements referring to resources on the Internet, for instance Google search, Google Maps, etc. From where are these Internet resources then fetched: from the IceBreak/IceCap server on IBM i, or from the web browser of the user connected to the IceBreak/IceCap server?

In my demonstration I did both: I used external resources from the Internet, like Google Maps. This is what we call "integration on the glass", where the client browser glues together components from anywhere on the Internet or intranet.

But I also showed you how to integrate a report made in RPG, which was a component residing on the same IBM i. We have a "webContainer" component that can use both internal and external resources. External components can also be intranet components, like internal calendars, PDFs, Word documents, Excel sheets, etc. from an internal file share on another server within the organization. If internal resources are not available to the client directly, IceBreak has an "include" feature to fetch data, e.g. from a network file share (like QNTC), and relay the data within the web content.
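The include/relay idea is simply a server-side fetch stitched into the server's own response. A minimal sketch in Python, assuming a local directory standing in for a QNTC-style file share (paths and names are hypothetical):

```python
from pathlib import Path
import tempfile

# Stand-in for a mounted network file share (e.g. /QNTC/...).
share = Path(tempfile.mkdtemp())
(share / "calendar.html").write_text("<ul><li>Team meeting</li></ul>")

def include(resource):
    # The server fetches the resource itself, so the browser never
    # needs direct access to the internal share.
    return (share / resource).read_text()

page = "<html><body>" + include("calendar.html") + "</body></html>"
print("Team meeting" in page)  # -> True
```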

(The answer to this question will highlight on which node Internet access needs to be available, i.e. on IBM i or on the users' computers.)

Correct – we can do both, so it is a question of strawberry or vanilla taste: how is your infrastructure designed? How much is already exposed today?


I hope my answers make sense. If not, please do not hesitate to contact me again.


Best regards,

Niels Liisberg