What is the difference between updates per second and transactions per second?

@yvonneanne, this is the reason why I ask this question.

I ran some back-filling of data onto the main network this past week in preparation for Supernova.

I was able to sustain 1.2-1.3k transactions per second into my demo application for Supernova (here it is if you're interested) on a single subnet for > 8 hrs, roughly 2x the average described here: Internet Computer performance - Internet Computer Wiki


You can also see that this was 1/3rd of the overall transactions on the IC at the time

I went back this past evening and took a screenshot of the 7-day view of my subnet. Each of the peaks you see over the past week was triggered by my testing of the backfill job (I know this because they started peaking at the exact minute my job started and declined back to 30-100 at the exact minute it finished). After the initial backfill testing, I ran the final backfill in two batches, one on 6-19 and the other on 6-20 (SF Bay Area, PDT). As you can see, the dashboard peaked at 1,341.13 update transactions per second as a result of my backfill job.

However, I know for a fact that I was not making anywhere near 1,341 update calls to the back-end at this time. Instead, I was making 4 update calls at a time and awaiting each call's result before uploading the next chunk, with each update call containing a 30-80 KB chunk and taking roughly 5 seconds to process, depending on chunk size and network bandwidth.
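For concreteness, here is a sketch of the upload pattern I'm describing: four sequential workers running in parallel, each awaiting one update call at a time. The `uploadChunk` function below is a mock standing in for a generated @dfinity/agent actor method (the real call and its ~5 s latency are replaced with a stub for illustration):

```typescript
type Chunk = Uint8Array;

// Mock update call: in the real code this is an ingress update call made
// through the generated actor class, taking ~5 seconds per chunk.
async function uploadChunk(chunk: Chunk): Promise<void> {
  await new Promise<void>((resolve) => setTimeout(resolve, 1));
}

// One worker: upload its share of the chunks strictly sequentially,
// awaiting each call's result before sending the next chunk.
async function runWorker(chunks: Chunk[]): Promise<number> {
  let uploaded = 0;
  for (const chunk of chunks) {
    await uploadChunk(chunk); // one in-flight update call per worker
    uploaded += 1;
  }
  return uploaded;
}

// Four workers in parallel, each owning a disjoint slice of the chunks,
// so at most 4 update calls are in flight at any moment.
async function backfill(chunks: Chunk[], workers = 4): Promise<number> {
  const slices: Chunk[][] = Array.from({ length: workers }, () => []);
  chunks.forEach((chunk, i) => slices[i % workers].push(chunk));
  const counts = await Promise.all(slices.map((slice) => runWorker(slice)));
  return counts.reduce((a, b) => a + b, 0);
}
```

With 5-second calls, this structure caps the ingress rate at 4 calls in flight / 5 s per call, regardless of how many chunks there are.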

Each chunk contained ~500 records that were inserted into a red-black tree on the backend, with each record inserted into 3 different database indices to support flexible queries. So each single update call produced 1,500 new index insertions. Running this in parallel across 4 processes results in

4 parallel processes * 1 job, where

1 job = 500 records/message * 3 indices/record * (1 message / 5 sec)

We get 4 * 500 * 3 * (1/5) = 1,200 insertions/sec → oddly close to the update transaction count shown on the dashboard.

In terms of actual ingress messages, though, I was only making (4 update calls / 5 sec) = 0.8 update calls/sec.
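The arithmetic above can be spelled out in a few lines. The counts come straight from my setup; the two quantities are the two candidate interpretations of what the dashboard might be counting (per-index insertions vs. actual ingress update calls):

```typescript
const parallelProcesses = 4;   // concurrent upload workers
const recordsPerMessage = 500; // records in each chunk
const indicesPerRecord = 3;    // each record lands in 3 database indices
const secondsPerMessage = 5;   // approximate time per update call

// Interpretation 1: the dashboard counts per-index insertions.
const insertionsPerSec =
  (parallelProcesses * recordsPerMessage * indicesPerRecord) / secondsPerMessage;
// 4 * 500 * 3 / 5 = 1200, close to the ~1341 shown on the dashboard

// Interpretation 2: the dashboard counts ingress update calls.
const updateCallsPerSec = parallelProcesses / secondsPerMessage;
// 4 / 5 = 0.8
```

The three-orders-of-magnitude gap between the two interpretations is exactly what I'm asking about.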

So unless I'm totally off in how I wrote my code to call the IC, or in my understanding of how the @dfinity/agent library (JavaScript/TypeScript) makes individual ingress update calls to the IC through the generated actor classes, something doesn't seem accurate in how the update transaction count is displayed on the dashboard.

One other data point that raises my suspicion is the measurement shown in this wiki (Internet Computer performance - Internet Computer Wiki), which states that their testing found:

“The Internet Computer sustained more than 20’841 updates/second calls to application canisters for a period of four minutes (averaging 672 updates/second per subnet). The update calls measured here are triggered from ingress messages sent from outside the IC.”

I ran my backfill job for at least 8 hours each day (something like 10+ hours on 6-19), and the dashboard chart shows that my subnet sustained > 1,000 updates/sec for that entire period.

Can you provide an explanation for what might be happening here and why these interactions showed such a high update transaction count?
