Yes, this is currently the case (and probably will be to some extent for the foreseeable future), but I don’t think anyone really wants unpredictable execution latency and throughput as the norm. We certainly won’t get to a predictable environment within a few months, but we can improve gradually.
IIRC, with the sync call endpoint, an ingress call takes around 1.2 seconds to get a response, and the median cross-subnet call is around 6 seconds (though this is largely due to the NNS having a slower finalization rate - otherwise it is indeed probably around 3 seconds). But do note that this gets worse if multiple calls are made in sequence. Are people OK with a 3x increase? Famously, Amazon found that they’d lose 1% of revenue for every 100ms of latency added, but I can accept that maybe it’s not so bad for everyone.
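To make the "gets worse with multiple calls" point concrete, here is a minimal back-of-the-envelope sketch. The per-call figures are the rough numbers from this thread; the same-subnet figure is my own assumption for comparison, and `total_latency` is a hypothetical helper, not anything from the actual system.

```python
# Rough figures from the discussion above (seconds), not measured constants.
INGRESS_S = 1.2       # ingress call round-trip
XNET_CALL_S = 6.0     # median cross-subnet call (NNS-influenced)
LOCAL_CALL_S = 2.0    # assumed same-subnet call, for comparison only

def total_latency(n_calls: int, per_call: float, ingress: float = INGRESS_S) -> float:
    """End-to-end latency of an ingress message whose handler awaits
    n_calls sequential downstream calls: each call must complete
    before the next one starts, so latencies add up."""
    return ingress + n_calls * per_call

# Three sequential cross-subnet calls vs. three same-subnet ones:
print(total_latency(3, XNET_CALL_S))   # about 1.2 + 3 * 6.0 ≈ 19.2 s
print(total_latency(3, LOCAL_CALL_S))  # about 1.2 + 3 * 2.0 ≈ 7.2 s
```

The point is just that the 3x per-call gap compounds linearly with every sequential await, so a workflow that chains several calls feels the difference much more than a single call does.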
This was the original philosophy behind the system. You couldn’t choose the subnet that you deploy canisters on. In principle, creating a canister from a canister could’ve placed that canister anywhere, not just on the same subnet - but the implementation didn’t do that. People have nevertheless noticed that there is a performance difference between local and cross-subnet calls and have come to rely on it, and I think that’d be quite difficult to change now. One could of course make local calls slower on purpose so that people don’t come to rely on their performance, but I think that would leave a lot of people unhappy. The same goes for composite queries.
This is already the case today - every inter-canister call costs the same.