When I check out the ic repo at commit 0ef2aebde4ff735a1a93efa342dcf966b6df5061 and build the code with the build commands in the README, the build succeeds, but the hashes are different from the ones in the proposal.
The canister module GZip compression that I implemented recently is not related to this issue.
When I try to build the IC OS image from the same commit, I get yet another hash:
$ git status
HEAD detached at 0ef2aebd
$ ./gitlab-ci/tools/docker-run ./gitlab-ci/tools/build-ic
IC-OS Image
05cd757019e276af68a7e2e178dec73ae095131af841d0b44a9c03c947c2d399 update-img.tar.gz
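For completeness, here is roughly how I compare the result against the proposal; EXPECTED below is just a placeholder for the sha256 published in the proposal payload, and the path is wherever build-ic left the image:
$ EXPECTED="<sha256 from the proposal>"   # placeholder, paste the hash from the proposal payload
$ sha256sum update-img.tar.gz
$ test "$(sha256sum update-img.tar.gz | awk '{print $1}')" = "$EXPECTED" && echo match || echo MISMATCH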
There might be an issue with the build reproducibility; I asked our release engineers for clarification.
@levi reproducibility is hard…
Our IDX team made some changes recently, so that might be why the last few releases worked better.
Would be awesome if you could continue checking in the future and ping me (DM is great) or respond to the announcement message if you get a mismatch.
For each release our team does multiple (say 5 or so) fully independent builds to verify reproducibility. But it's still not proof that there won't be a mismatch in some case or on some system.
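If you want to approximate that locally, a rough sketch could be to repeat the build from a clean tree and compare the resulting hashes (the image path may differ on your setup):
$ for i in 1 2 3; do
>   git clean -xfd
>   ./gitlab-ci/tools/docker-run ./gitlab-ci/tools/build-ic
>   sha256sum update-img.tar.gz >> /tmp/icos-hashes.txt
> done
$ sort -u /tmp/icos-hashes.txt   # a single distinct line means every build produced the same image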
Here is the most recent forum post about replica release e4843a1. I ran into the same issue you described, but in my case the cause was that the script's wget command fails because update-img.tar.gz already exists in my ic folder from last week's build. After I deleted the file from my ic directory and reran the script from the proposal, both sha256 sums matched the payload.
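So if the script fails on the wget step, clearing the stale file before rerunning it is enough, e.g.:
$ cd ic
$ rm -f update-img.tar.gz   # stale copy left over from a previous verification run
$ # then rerun the verification script from the proposal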
Your verification prerequisites are correct… Ubuntu 22.04, WSL2, podman, and git.
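A quick way to sanity-check those prerequisites from inside the WSL2 shell:
$ lsb_release -ds    # should report Ubuntu 22.04
$ uname -r           # a -microsoft-standard-WSL2 suffix confirms WSL2
$ podman --version
$ git --version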
BTW, if you like performing these verifications, you should check out the CodeGov and Voting Challenge portals on DSCVR. You might find some interesting information there.
I built on a different PC with Ubuntu 22.04 and podman, and used the NNS validation script posted with the proposal. It built cleanly and the hash matched. I will now try to find what is off from the baseline.
I think I am observing a similar pattern. I have two builders, one in an Ubuntu 22.04 VirtualBox VM and one in an Ubuntu 22.04 KVM. Both use exactly the same setup: same version of podman, docker.io, etc., everything. Yet the build succeeds on one and fails on the other.
For some time I thought that one VM had more resources than the other, but first, changing that doesn't seem to make a difference, and second, I can watch disk and memory use and it never even reaches 50% during the build.
@northman what's the size of the image file with the mismatching hash? I get some 22 MB or so.
So it depends… currently it is only about 2 MB. I think the issue pops up when it is not compiled on a platform meeting the published spec. WSL2 makes only half of the physical RAM available by default, and the swap is too low. I am uncertain why we need 8 cores, but I can confirm that it works fine on the baseline defined by DFINITY (8 cores, 16 GB RAM, 100 GB swap).
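For reference, on Windows those limits can be raised via a .wslconfig file in the user profile (%UserProfile%\.wslconfig); a minimal sketch matching that baseline, applied by running wsl --shutdown afterwards from a Windows prompt:
[wsl2]
processors=8
memory=16GB
swap=100GB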
I ran the IC-OS build verification script on the same computer, once on bare metal and once inside WSL2.
The computer is only an i5 quad-core with 8 GB of RAM, vs. the 8 cores / 16 GB in the specs.
It compiled flawlessly and the hashes matched.
Under WSL2 it failed.
I think I will try bumping the memory so it will run on the WSL2 side.
I do not think it is an issue with the IC-OS build script; rather, it is a resource issue on the PC.
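Before retrying I will check what the WSL2 VM actually exposes, something like:
$ nproc      # CPU cores visible inside WSL2
$ free -h    # total RAM and swap
$ df -h .    # free disk space in the build directory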
I don't think the number of CPU cores matters. I ran it in a VM with a single CPU core. The build takes hours, but completes successfully. Memory and disk space may matter, but I couldn't see a clear pattern so far.