
I've got an interesting, weird redo problem :) I'd appreciate it if you could take a look. Here's an example repo with steps to reproduce: github.com/burdiyan/redo-… TL;DR: a virtual target becomes dirty twice when new sources appear or change, but not when old ones do.
burdiyan/redo-virtual-targets-issue: Steps to reproduce a virtual target issue with the redo build system (https://github.com/apenwarr/redo)
13 replies and sub-replies as of Oct 17 2020

It would be great to understand at least whether it's a redo problem or I'm doing something wrong :)
This is related to depending on generated non-targets. `redo -d` shows: it checks 1.txt, then fake-output, then 2.txt. If 1.txt is dirty, it skips checking (and caching) fake-output early, so it checks it at the end. If 2.txt is dirty, it caches fake-output's mtime early.
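For concreteness, the failing setup might look roughly like this (a sketch only; the actual repro repo may differ, and `./generate` is a stand-in for whatever writes the side-effect file):

```sh
# virtual.do -- a sketch of a virtual target that depends on a
# generated non-target (names guessed from the thread, not the repo)
redo-ifchange 1.txt                    # real source, checked first
./generate dist/fake-output            # hypothetical: side effect, written
                                       # outside redo, never a real target
redo-ifchange dist/fake-output 2.txt   # recorded after 1.txt
# no $3 output is written: this target is "virtual"
```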
This is generally a limitation of redo's "snapshot based" concept of time: if "input" files change *during* a run, outside of redo's control, it tries to fail safe by building extra stuff instead of accidentally missing something.
One workaround is to make fake-output an actual target. Then you `redo-ifchange bar/fake-output.build` instead of `redo bar.build`. This allows redo's cache to be updated at the exact moment the file is changed. Maybe I should update the latex cookbook to show that.
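A sketch of that workaround, assuming the sources live one directory up from `bar/` (the build step here is a placeholder):

```sh
# bar/fake-output.build.do -- make the build step a real target, so
# redo's cache is updated when the output is written, not re-checked later
redo-ifchange ../1.txt ../2.txt   # declare the real inputs
../generate >&2                   # hypothetical build step; its side
                                  # effects land in bar/
echo done > "$3"                  # $3 becomes bar/fake-output.build,
                                  # a real redo-tracked output
```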
It might be nice if redo could warn about this kind of inconsistency, but unfortunately it can't due to caching: you can only detect the inconsistency if you check the mtime of a file twice, which we intentionally never do, for both snapshotting and for syscall efficiency.
Thanks a lot! I think that clarified some things for me! Did I understand correctly that the workaround you propose is to write an additional dummy file, one that actually is a correct $3 output of the rule, into the same `dist` dir along with all the side-effect outputs?
But damn, I do this "depending on generated non-targets" all the time :) For protobuf, for example: I'd find all the proto files, generate code, then `redo-ifchange` the generated files so the target redoes if they disappear. So all the .proto files, the .do file, and the generated code are actually in the same dir.
I think I've just found another workaround. Create a default.exists.do that runs redo-always, then redo-stamps some stable string if $2 exists, otherwise some random string. Then inside the virtual target, depend on dist/fake-output.exists instead of the file itself.
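A sketch of that default.exists.do, following the description above:

```sh
# default.exists.do -- stamp a stable value while $2 exists, so the
# .exists target only "changes" when the file appears or disappears
redo-always
if [ -e "$2" ]; then
    echo exists | redo-stamp
else
    # random stamp: forces anything depending on us to rebuild
    head -c 16 /dev/urandom | redo-stamp
fi
```

Because redo-stamp checksums its stdin, dependents of dist/fake-output.exists rebuild only when the stamp changes, even though redo-always re-runs the script every time.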
It still runs twice if the whole `dist` dir gets deleted, though not with every new source file. For my protobuf example, where sources, outputs, and the .do file are in the same dir, I think it should do the trick.
Nah, it just moves the problem to another layer :) It still gets rebuilt twice if a side-effect output disappears when there's more than one of them. But that's still better than building twice with every new source file.
So the trick is just to make sure the "run a bunch of stuff" .do step produces actual output that will be deleted if you delete the subdir. That is, default.build.do can trigger on a request for outdir/anything.build, then create outdir, do the build, and create $3.
If outdir/anything.build is deleted, we know we need to rebuild it. But because that file was created as $3 from a .do file, it doesn't have to rely on the filesystem caching/snapshotting logic.
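Putting that together, the pattern could look like this (a sketch; `protoc` and the file layout are assumptions based on the protobuf example above):

```sh
# default.build.do -- handles requests like "redo-ifchange out/protos.build";
# $1 is the requested target, $3 the temp file redo renames over $1
exec >&2                          # keep tool chatter out of the output
redo-ifchange *.proto             # depend on the real sources
mkdir -p "$(dirname "$1")"        # (re)create the output dir
protoc --go_out="$(dirname "$1")" *.proto   # side-effect outputs
date > "$3"                       # real output lands *inside* the out
                                  # dir: deleting the dir deletes it,
                                  # so redo knows to rebuild
```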
Oh, hmm, that makes me think of a way we could have a warning after all: if an mtime is *after* the current build run started, we can warn that someone modified a source file during the run. This would catch the first time you rely on a "source" file that's a side effect.
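In shell terms, the proposed check is something like this (a sketch only; `stat -c %Y` is GNU coreutils, on BSD it would be `stat -f %m`, and where redo would actually record the run-start timestamp is an open question):

```sh
# Sketch of the proposed warning: a "source" whose mtime is newer than
# the moment the run started was probably modified *during* the run.
run_start=$(date +%s)    # recorded once, when the redo run begins

check_source() {         # called the first time a source's mtime is read
    mtime=$(stat -c %Y "$1")
    if [ "$mtime" -gt "$run_start" ]; then
        echo "redo: warning: $1 was modified during this run" >&2
    fi
}
```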