Any geographically replicated system will take a few seconds to complete a transaction. If you have a single server in front of a single, lightly loaded database, it is of course possible to complete a transaction in significantly under a second. But in a large distributed system (think Amazon, when you click the “Buy” button; or any credit card processor when you make a payment) it takes seconds to ensure that your transaction has been replicated widely enough that, if the specific server you interacted with directly falls over immediately after your interaction, your transaction (purchase, payment) doesn’t just disappear in a puff of smoke.
The difference between the IC and the average small web2 application is that:
- The IC always does replication, else it wouldn’t be tamper-proof or censorship-resistant. So in that respect it’s more like Amazon or Visa than a lightweight web2 app.
- You get all of this replication, tamper resistance, and censorship resistance without having to do a whole lot yourself. Building something like Amazon or Visa from scratch and making it reliable is A LOT of effort.
That being said, you can have fast IC transactions. All you need is a subnet made up of replicas in the same data center or on the same continent, or even a single replica, making it more like the average web2 app. This just hasn’t been a priority so far.
Also, you can use frontend tricks to hide the latency. E.g. Amazon used to (and maybe still does) immediately react to you clicking the “Buy” button, only to show an error message a few seconds later saying “sorry, your purchase has failed”. Last summer I had to purchase a plane ticket 3 or 4 times, and got charged every single time, only to get an email a few minutes afterwards telling me that the purchase had actually failed and that I would (eventually) be refunded. I.e. it wasn’t only a frontend trick: the backend I was talking to accepted my purchase, only to later fail to complete it with the airline.
So whenever you see a large, replicated application respond within milliseconds, it’s often just UI trickery (in your case, incrementing the counter in the UI upon clicking rather than waiting for the query to return the incremented value). And oftentimes it’s also backend trickery, where only a record of your request is persisted; when your request is actually executed, it can fail, and you’ll only find out minutes later. If you happen to check your email.
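The UI trick above can be sketched as an optimistic update with rollback. This is a minimal illustration with hypothetical names (a real frontend would do this in JavaScript against the actual backend call, but the shape is the same):

```python
# Minimal sketch of an optimistic UI update: the displayed counter is bumped
# immediately, and rolled back only if the backend call later fails.
# `send_increment` is a hypothetical stand-in for the slow, replicated
# backend call; it raises on failure.

def optimistic_increment(displayed_value, send_increment):
    shown_value = displayed_value + 1  # what the UI shows immediately
    try:
        send_increment()               # slow, replicated backend call
        return shown_value, "ok"
    except Exception:
        # The backend failed: roll the UI back to the old value.
        return displayed_value, "failed"

# The user sees 42 -> 43 instantly; only seconds later does the UI learn
# whether the backend actually accepted the increment.
value, status = optimistic_increment(42, lambda: None)
```

The user-visible latency is near zero either way; what changes is only how (and how late) a failure surfaces.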
So yeah, the IC needs a couple of seconds to actually execute a transaction, because it runs in “rounds”:

- Every second a block is created, containing all requests made in the past second.
- This block needs to travel around the world a couple of times before all subnet replicas agree on it (which takes about a second).
- Then all subnet replicas execute your message (which may take up to a second).
- Then they need another second to agree on the state of the subnet after the execution (which again requires messages to be sent around the world 2 or 3 times).

Plus, in your case, because you don’t have inc() return the incremented value and instead rely on querying (much too often, BTW), it may take an additional second before the query actually gets to a replica instead of being served from the cache (which also has a time-to-live of 1 second).
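As a rough back-of-the-envelope check (the per-step durations above are approximate, not exact protocol constants), the worst-case end-to-end latency adds up like this:

```python
# Rough worst-case latency budget for an IC update call, using the
# approximate per-step durations from the round description above.
# These figures are illustrative; real numbers vary by subnet and load.
steps = {
    "request waits for next block": 1.0,   # seconds
    "consensus on the block":       1.0,
    "execution of the message":     1.0,
    "certifying the resulting state": 1.0,
}
update_latency = sum(steps.values())
print(f"update call, worst case: ~{update_latency:.0f}s")

# Polling with a query afterwards can add up to one more second,
# because the query cache has a ~1s time-to-live.
query_cache_ttl = 1.0
print(f"update + cached query, worst case: ~{update_latency + query_cache_ttl:.0f}s")
```

Which is why having inc() return the new value directly, rather than polling with queries, shaves off that last second.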