One way to handle that would be to extend 9p to include a duplicate command of some sort.  Then the utility would simply note that the file is being copied to the same file server it already lives on and ask that file server to duplicate it.  If the file server does not support the command, it reports an error back and cp falls back on its existing behavior.

That avoids cp having to be familiar with individual file servers and minimizes the required changes, while still leaving room to take advantage of the more advanced functionality being proposed.
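Roughly, the fallback would look like this. This is only a sketch in C, not real cp or 9p code; dupfile here is a made-up stand-in for the proposed request, and it always reports "not supported", as a server without the extension would:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical server-side duplicate request.  A server that
 * implements the extension would return 0; this stub behaves
 * like one that does not. */
static int
dupfile(int srcfd, int dstfd)
{
	(void)srcfd;
	(void)dstfd;
	errno = EOPNOTSUPP;
	return -1;
}

/* Existing cp behavior: plain byte-for-byte copy. */
static int
copybytes(int srcfd, int dstfd)
{
	char buf[8192];
	ssize_t n;

	while((n = read(srcfd, buf, sizeof buf)) > 0)
		if(write(dstfd, buf, n) != n)
			return -1;
	return n < 0 ? -1 : 0;
}

/* Try the server-side duplicate first; on error, fall back. */
static int
copyfile(int srcfd, int dstfd)
{
	if(dupfile(srcfd, dstfd) == 0)
		return 0;
	return copybytes(srcfd, dstfd);
}
```

The point is that the fallback path is exactly what cp does today, so servers that never grow the extension lose nothing.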


On 8/26/24 03:43, Steve Simon wrote:

sorry to be negative, but i think this is not a good idea.

copying directories does happen but it is rare.

cp is small, neat, and clean. adding special optimisations for some backends/filesystems would be a mistake. don’t forget cp is often used with filesystems other than the main backend file store, whatever that is.

an example of how badly this can go can be seen in the qnx source where every utility knows how to handle every fileserver:


i would suggest keep it simple.

-Steve



On 26 Aug 2024, at 5:47 am, James Cook <falsifian@falsifian.org> wrote:


There's nothing in 9p that allows copy to be smart about COW,
so it isn't.

It may be worth noting cp could nonetheless be made (mostly) COW by being clever about noticing recently-read data being written back. Hammer2 does this. At [0]: "... This means that typical situations such as when copying files or whole directory hierarchies will naturally de-duplicate. Simply reading filesystem data in makes it available for deduplication later."

I have no idea whether that would be appropriate for gefs.

--
James

[0] http://apollo.backplane.com/DFlyMisc/hammer2.txt