
  • Speed issues between Omnis and Postgres

    Posted by Uwe Smidt on January 18, 2025 at 11:27 pm

    Dear $all,

    When my Omnis app and Postgres are installed locally, they are lightning fast.

    When my Omnis app and Postgres are installed on the VPS (in the cloud), they are lightning fast.

    But when my Omnis app is installed locally, and Postgres on the VPS, each operation takes seconds… i.e. very slow!

    Any ideas?

  • Mats Olsson

    Member
    January 19, 2025 at 7:09 am

    But when my Omnis app is installed locally, and Postgres on the VPS, each operation takes seconds… i.e. very slow!

    If I understand your question correctly: when you use PostgreSQL via 127.0.0.1, it is on the same computer (localhost, same network). That is normally very fast (milliseconds).

    When app and data are on different networks (connected to the VPS over the internet), latency increases because the data is transferred through the internet. The speed depends on your connection's delay, bandwidth, etc. And a DNS lookup (uwe.com) may add some extra time compared to connecting directly via ip:port.

    Hence localhost (127.0.0.1) and internal IPs (10.x.x.x or 192.168.x.x) are faster than external IPs (123.4.5.6).
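
    To put rough, purely illustrative numbers on it: at a 40 ms round trip to the VPS, ten sequential queries spend 10 × 40 ms = 0.4 s in network latency alone, before any query execution or data transfer; on localhost the same ten round trips cost a fraction of a millisecond in total.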

  • Paul Mulroney

    Member
    January 20, 2025 at 2:40 am

    That’s to be expected. If Postgres and Omnis are on different machines, then you’re transferring the fetched data across that link. LANs are very quick, so you don’t notice the transfer delay. WANs, VPNs, etc. have slower links and higher latency, which results in much longer delays.

    If you are intending to deploy a setup with Postgres on a VPS, you need to consider how to minimise the amount of data being sent in the $fetch(). For example, in our system we have an “Organisational Details” record that contains, among other things, a company logo. In the old DML days we would single-file find that record whenever we opened the window. Now that we’re in SQL, if the logo is large (e.g. a 6 MB image), it takes 3 minutes to transfer that binary… every time we open the window.

    Data optimisation means thinking about things like this:

    * caching details like this so they’re only loaded once per workstation/session,

    * returning only the absolute minimum of data in a query rather than all fields of the record,

    * fetching using an SQL worker so it can run in parallel to other operations and not block the UI controls, and

    * fetching a limited number of records at a time (see the SQL sketch below).
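
    To illustrate the minimal-data and limited-fetch points, a rough sketch (the table and column names are hypothetical, not from Paul's system):

    SELECT org_id, org_name       -- only the columns the window needs, not the 6 MB logo blob
    FROM organisational_details
    ORDER BY org_id
    LIMIT 50                      -- fetch one screenful of records at a time
    OFFSET 0                      -- increase the offset to page through the rest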

    Hope this helps!

  • Uwe Smidt

    Member
    January 20, 2025 at 11:34 am

    Dear Mats & Paul,

    Well, I suppose you are right!

    I am aware that local actions are always faster than remote ones, but the scenario ‘Omnis AND Postgres on the VPS’ is – in a way – a remote BUT still lightning-fast scenario, since I see the results in my browser – which is remote to both Omnis and Postgres. But I assume that delivering ‘just’ a finished page is far less data than the back and forth of ten little queries between Omnis & Postgres that eventually result in one page being sent.

    When I query the remote (VPS) Postgres with my local pgAdmin, a query executes in 0.2 sec on average – not lightning fast, but fast. Waiting for my local Omnis app (powered by Omnis Studio 11, i.e. the developer app) connected to the remote VPS Postgres to update the screen takes 5-10 seconds, so I thought that something beyond ‘local vs remote networks’ was wrong.

    You are right, Paul, to recommend optimising my design & code – so far, the amount of data was small, and the connection was local.

    But as we are moving on, it helps to have some helping minds like yours!

    Thank you!

    Bye

    Uwe

    • Mats Olsson

      Member
      January 23, 2025 at 8:38 am

      Another way to send only one single query is to use a CTE (common table expression). Seldom used, but very powerful.

      Basically, you create a list or result from different tables using WITH and then gather them afterwards.

      Sometimes UNION (ALL) works best; sometimes you need a WITH statement.

      WITH 
      query1 AS (SELECT count(1) as count1 FROM table1),
      query2 AS (SELECT count(1) as count2 FROM table2)
      SELECT count1, count2 FROM query1, query2;
      • Diego Mendoza Muñoz

        Member
        January 27, 2025 at 4:35 am

        What would be the difference between using a CTE and subqueries?

        In your example it gives the same result.

        SELECT
          (SELECT count(1) FROM table1) AS count1,
          (SELECT count(1) FROM table2) AS count2;
        • Mats Olsson

          Member
          January 27, 2025 at 6:22 am

          IMO, there are no big differences.

          Though many subqueries can be harder to read, and with a CTE it is simpler to reuse the value in many places instead of writing multiple extra subqueries.

          At the end of the day, it boils down to a matter of preference.

          I prefer CTE, but I also use sub queries for simpler queries.
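
          A small illustration of the reuse point (the name and amount columns are made up; table1 and table2 are as in the examples above):

          WITH totals AS (SELECT count(1) AS n FROM table1)
          SELECT t2.name,
                 totals.n,                      -- the CTE value, reused...
                 t2.amount / totals.n AS share  -- ...without repeating the subquery
          FROM table2 t2, totals;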

  • Paul Mulroney

    Member
    January 21, 2025 at 1:34 am

    Indeed! You’d be surprised at how much of a difference latency makes!

    Instead of a number of small queries, if you can put them into one query and return the results together, there is only one round-trip time lag to worry about.

    For example, you can return JSON objects from Postgres, so you could have something like:

    select <some query returning json in one col>
    union
    select <some query returning json in one col>
    union
    etc.

    Then you receive a list with each line being the data that you’re requesting.
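
    A concrete (hypothetical) version might look like this – customers and orders are made-up tables, and UNION ALL is used because Postgres’s json type has no equality operator, so a plain UNION cannot deduplicate it:

    SELECT json_build_object('source', 'customers', 'rows', json_agg(c)) AS payload
    FROM customers c
    UNION ALL
    SELECT json_build_object('source', 'orders', 'rows', json_agg(o))
    FROM orders o;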

    Regards

    Paul.

  • Alan Davey

    Member
    January 23, 2025 at 8:57 pm

    Something else to avoid latency and additional network round trips is to use:
    vStatementObj.$execdirect()

    instead of:

    vStatementObj.$prepare()
    vStatementObj.$execute()

    The latter makes two calls to your database, with the $prepare in many cases taking just as long as the $execute. So by using $execdirect you can cut this time in half.
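
    With illustrative numbers: at a 100 ms round trip to the server, $prepare() followed by $execute() costs roughly 200 ms of latency per statement, while a single $execdirect() costs roughly 100 ms, regardless of how quickly the query itself runs.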

    Regards,

    Alan

  • Diego Mendoza Muñoz

    Member
    January 27, 2025 at 4:27 am

    The best way to avoid latency in a web application is to make REST calls: create a RESTful remote task. If you can, build a service API; otherwise create a POST /restenpoint/exectsql/{apikey} method and POST the SQL query to it. It works, but you have to be careful with security.

    This way all calls are executed locally on the server and reach your web app as soon as they finish executing.

    In my case I use pure JavaScript XHR: I respond to the form and it works well. I make several calls, and while the server sends the data I capture and validate it.

    • Diego Mendoza Muñoz

      Member
      January 28, 2025 at 3:08 am

      A simple example of the implementation:
      😀

      ; Update the UI before firing the request (Omnis client-executed method)
      $sendall($ref.itWorks.$assign(kTrue),$ref.hasInteres=kTrue)
      Calculate formInst as $cinst
      JavaScript:var request = new XMLHttpRequest()
      JavaScript:request.open("POST", url, true)  // POST to the REST endpoint; "url" holds its address
      JavaScript:request.timeout = timeOut
      JavaScript:request.onload = function() {
      JavaScript:  console.log("queue status response REST " + this.status)
      JavaScript:  if (this.status == 200) {
      JavaScript:    // success: hand the response back to the Omnis form
      JavaScript:    formInst.callMethod("myPublicOrPrivateMethod", this.response)
      JavaScript:  } else {
      JavaScript:    // error: the same callback is used here; you could call an error handler instead
      JavaScript:    formInst.callMethod("myPublicOrPrivateMethod", this.response)
      JavaScript:  }
      JavaScript:}
      JavaScript:request.send(postData)
