Hello Christian,

Please could you clarify the circumstances a little bit?

1. Are you looking for a data structure that you set
up for a fixed R once and then query many times for
different X? Or are you looking for a dynamic data
structure in which you keep changing R? Or are you
looking for a 'functional data structure' where the
older versions of R are preserved? Or for a functional
data structure where R is fixed, but the queries X
are constructed incrementally?

2. Is R sparse, i.e. is |R| << |\X|*|\Y|?

If not, bitvectors might not be so bad after all.
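
To make the bitvector option concrete, here is a rough
Python sketch (all names are my own, and I am guessing
that R is available as a set of pairs): each {x}' becomes
an integer bitmask over \Y, and X' is the bitwise AND of
the masks for x in X.

def build_bitmasks(R, Y):
    # R: iterable of (x, y) pairs; Y: list fixing bit positions.
    pos = {y: i for i, y in enumerate(Y)}
    masks = {}
    for x, y in R:
        masks[x] = masks.get(x, 0) | (1 << pos[y])
    return masks

def derive_bits(X, masks, Y):
    # X' = {y : (x, y) in R for all x in X} via bitwise AND.
    acc = (1 << len(Y)) - 1        # start with all of \Y
    for x in X:
        acc &= masks.get(x, 0)     # an x with empty row kills everything
    return [y for i, y in enumerate(Y) if acc >> i & 1]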

If yes, you might want to look for a data structure
storing the sets {x}' = {y : (x, y) in R} in such a
way that they can be intersected efficiently. If R
is fixed, {x}' can be represented as a sorted array
of the elements. These arrays can be intersected
quickly (see Knuth/TAOCP), but the asymptotically optimal
algorithms are rather tricky (if I recall correctly,
Knuth doesn't give them directly but only cites them).
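
For two fixed sorted arrays even the plain linear merge is
already fine (this is the textbook version, not one of the
tricky asymptotically optimal ones); a small sketch:

def intersect_sorted(a, b):
    # Walk both sorted lists in parallel: O(|a| + |b|).
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:
            out.append(a[i])
            i += 1
            j += 1
    return out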

A straightforward algorithm is to take the shorter array
and look up its elements in the longer one by binary
search. This nicely generalizes to your case of computing
X' for a given X: get all {x}' for x in X and sort them
by increasing |{x}'|; this takes O(|X| log |X|).
Then, with x chosen so that |{x}'| is minimal, look up
each candidate y in {x}' in the sets {u}' by binary
search for all u in X\{x}, stopping as soon as y is not
found; this takes O(Sum[log |{u}'| : u in X\{x}]) per
candidate y. Simplifying the estimate (ahem...) you end
up with
O( min { |{x}'| : x in X } * |X| log max { |{x}'| : x in X } ),
which is at least independent of |\Y|, if that is your
main concern.
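
In code, the whole procedure might look roughly like this
(Python; the mapping 'prime' from x to the sorted array
{x}' is my assumption about how you would store R):

from bisect import bisect_left

def contains(arr, y):
    # Binary search in a sorted list: O(log |arr|).
    i = bisect_left(arr, y)
    return i < len(arr) and arr[i] == y

def derive_sorted(X, prime):
    # Sort the rows by size; candidates come from the smallest.
    # (Assumes X is non-empty.)
    rows = sorted((prime[x] for x in X), key=len)
    smallest, rest = rows[0], rows[1:]
    result = []
    for y in smallest:
        # all() stops at the first {u}' not containing y,
        # i.e. "until it is not found anymore".
        if all(contains(r, y) for r in rest):
            result.append(y)
    return result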

Sebastian.