caml-list - the Caml user's mailing list
* [Caml-list] [ANN] ppx_protobuf
@ 2014-05-02 14:29 Peter Zotov
  2014-05-03 16:08 ` Malcolm Matalka
  2014-05-06  4:29 ` Alain Frisch
  0 siblings, 2 replies; 15+ messages in thread
From: Peter Zotov @ 2014-05-02 14:29 UTC (permalink / raw)
  To: caml-list

Greetings.

I have just released the first version of ppx_protobuf, a complete
Protocol Buffers implementation. Unlike Google's implementation,
ppx_protobuf derives the message structure directly from OCaml type
definitions, which allows a much more seamless integration with
OCaml's types. In particular, ppx_protobuf natively supports
sum types, while maintaining full backwards compatibility with
protoc.
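
(As a rough illustration only: the type and field names below are invented,
but the attribute style follows the README excerpts quoted later in this
thread, so treat it as a sketch rather than an excerpt from the package.)

   type search_request = {
     query   : string [@key 1];
     page    : int    [@key 2];
     results : int    [@key 3];
   } [@@protobuf]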

ppx_protobuf uses the extension points API, and thus requires
a recent (>= 2014-04-29) 4.02 (trunk) compiler. It also requires
an unreleased version of ppx_tools. It is probably easiest
to install both from the source repositories[1][2].

The API is extensively documented at [3].

[1]: https://github.com/whitequark/ocaml-ppx_protobuf.git
[2]: https://github.com/alainfrisch/ppx_tools.git
[3]: 
https://github.com/whitequark/ocaml-ppx_protobuf/blob/master/README.md

-- 
   WBR, Peter Zotov.


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-02 14:29 [Caml-list] [ANN] ppx_protobuf Peter Zotov
@ 2014-05-03 16:08 ` Malcolm Matalka
  2014-05-03 16:24   ` Peter Zotov
  2014-05-06  4:29 ` Alain Frisch
  1 sibling, 1 reply; 15+ messages in thread
From: Malcolm Matalka @ 2014-05-03 16:08 UTC (permalink / raw)
  To: Peter Zotov; +Cc: caml-list

Nice, great work!

I'm not actually a huge fan of mixing type definitions with the protocols
they can be encoded to and decoded from.  How hard would it be to take a
module definition with accessors on a type and produce a new module with
encode/decode functions?  That way I could create JSON, XML, Protobuf,
etc. modules from one module.

Just an idea!

Peter Zotov <whitequark@whitequark.org> writes:

> Greetings.
>
> I have just released the first version of ppx_protobuf, a complete
> Protocol Buffers implementation. Unlike Google's implementation,
> ppx_protobuf derives the message structure directly from OCaml type
> definitions, which allows a much more seamless integration with
> OCaml's types. In particular, ppx_protobuf natively supports
> sum types, while maintaining full backwards compatibility with
> protoc.
>
> ppx_protobuf uses the extension points API, and thus requires
> a recent (>= 2014-04-29) 4.02 (trunk) compiler. It also requires
> an unreleased version of ppx_tools. It is probably easiest
> to install both from the source repositories[1][2].
>
> The API is extensively documented at [3].
>
> [1]: https://github.com/whitequark/ocaml-ppx_protobuf.git
> [2]: https://github.com/alainfrisch/ppx_tools.git
> [3]: https://github.com/whitequark/ocaml-ppx_protobuf/blob/master/README.md
>
> -- 
>   WBR, Peter Zotov.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-03 16:08 ` Malcolm Matalka
@ 2014-05-03 16:24   ` Peter Zotov
  2014-05-03 18:46     ` Malcolm Matalka
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zotov @ 2014-05-03 16:24 UTC (permalink / raw)
  To: Malcolm Matalka; +Cc: caml-list

On 2014-05-03 20:08, Malcolm Matalka wrote:
> Nice, great work!
> 
> I'm not actually a huge fan of mixing type definitions and the 
> protocols
> they can be encoded/decoded from.  How hard would it be to take a 
> module
> definition accessors on a type and produce a new module with
> encode/decode functions?  That way I could create JSON, XML, Protobufs,
> etc modules from one module.

Do you suggest generating the following signature instead of the current
one?

type t = ... [@@protobuf]
module Protobuf_t : sig
   val decode : Protobuf.Decoder.t -> t
   val encode : Protobuf.Encoder.t -> t -> unit
end

This would be similar to what deriving currently does.

In principle, this is not a complex change. It would add just a few lines
to ppx_protobuf.

However, I don't like it conceptually. I think the flat signature is
more natural: it mimics what one would usually write by hand, without
introducing too much nesting of modules. You may notice that
ppx_protobuf doesn't generate the signature items for you; this is
because ppx_protobuf is a mere implementation detail, a convenient
way to generate the serializer/deserializer.
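
(For contrast, a sketch of the flat shape being described here; the
generated value names below are illustrative guesses rather than
ppx_protobuf's documented output:)

   type t = ... [@@protobuf]

   (* flat: the values live next to the type, with no wrapper module *)
   val t_from_protobuf : Protobuf.Decoder.t -> t
   val t_to_protobuf   : Protobuf.Encoder.t -> t -> unit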

I'm not going to oppose the addition of such a mode, for two reasons:
   * I don't like fighting over minute details.
   * More importantly, deriving, when rewritten with ppx in mind,
     will surely contain this mode for compatibility. ppx_protobuf
     will be (ideally) rewritten over deriving some day.

I will happily merge a PR adding such a mode to ppx_protobuf.

> 
> Just an idea!
> 
> Peter Zotov <whitequark@whitequark.org> writes:
> 
>> Greetings.
>> 
>> I have just released the first version of ppx_protobuf, a complete
>> Protocol Buffers implementation. Unlike Google's implementation,
>> ppx_protobuf derives the message structure directly from OCaml type
>> definitions, which allows a much more seamless integration with
>> OCaml's types. In particular, ppx_protobuf natively supports
>> sum types, while maintaining full backwards compatibility with
>> protoc.
>> 
>> ppx_protobuf uses the extension points API, and thus requires
>> a recent (>= 2014-04-29) 4.02 (trunk) compiler. It also requires
>> an unreleased version of ppx_tools. It is probably easiest
>> to install both from the source repositories[1][2].
>> 
>> The API is extensively documented at [3].
>> 
>> [1]: https://github.com/whitequark/ocaml-ppx_protobuf.git
>> [2]: https://github.com/alainfrisch/ppx_tools.git
>> [3]: 
>> https://github.com/whitequark/ocaml-ppx_protobuf/blob/master/README.md
>> 
>> --
>>   WBR, Peter Zotov.

-- 
Peter Zotov
sip:whitequark@sipnet.ru


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-03 16:24   ` Peter Zotov
@ 2014-05-03 18:46     ` Malcolm Matalka
  2014-05-03 18:52       ` Peter Zotov
  0 siblings, 1 reply; 15+ messages in thread
From: Malcolm Matalka @ 2014-05-03 18:46 UTC (permalink / raw)
  To: Peter Zotov; +Cc: caml-list

The idea I mean is more to do this at the module level than the type
level, like a functor.  So rather than defining protobuf for a type
definition, define it for a module, and have some convention for how to
pick out setter/getter functions.  Then create a new module from that.

For example, off the top of my head:

module Foo : sig
   type t
   val set_x : t -> int -> t
   val get_x : t -> int
end

Then I can do:

module Foo_protobuf = Protobuf.Make(Foo)

In this case I borrowed the way most people use functors, to make it clear
that the translation is actually module to module.

The reason I prefer this is because I can also do:

module Foo_xml = Xml.Make(Foo)
module Foo_json = Json.Make(Foo)

By separating the mechanism for creating the decoders from the type
definition, I can add decoders for any type I want without disturbing
the original definition.  This feels more right to me.  But I have no
idea how to do it.


Peter Zotov <whitequark@whitequark.org> writes:

> On 2014-05-03 20:08, Malcolm Matalka wrote:
>> Nice, great work!
>>
>> I'm not actually a huge fan of mixing type definitions and the protocols
>> they can be encoded/decoded from.  How hard would it be to take a module
>> definition accessors on a type and produce a new module with
>> encode/decode functions?  That way I could create JSON, XML, Protobufs,
>> etc modules from one module.
>
> Do you suggest generating the following signature instead of the current
> one?
>
> type t = ... [@@protobuf]
> module Protobuf_t : sig
>   val decode : Protobuf.Decoder.t -> t
>   val encode : Protobuf.Encoder.t -> t -> unit
> end
>
> This would be similar to what deriving currently does.
>
> In principle, this is not a complex change. It would add just a few lines
> to ppx_protobuf.
>
> However, I don't like it conceptually. I think the flat signature is
> more natural, it mimics what one would usually write by hand without
> introducing too much deep nesting of modules. You may notice how
> ppx_protobuf doesn't generate the signature items for you; this is
> because ppx_protobuf is a mere implementation detail, a convenient
> way to generate the serializer/deserializer.
>
> I'm not going to oppose addition of such a mode for two reasons:
>   * I don't like fighting over minute details.
>   * More importantly, deriving, when rewritten with ppx in mind,
>     will surely contain this mode for compatibility. ppx_protobuf
>     will be (ideally) rewritten over deriving some day.
>
> I will happily merge a PR adding such a mode to ppx_protobuf.
>
>>
>> Just an idea!
>>
>> Peter Zotov <whitequark@whitequark.org> writes:
>>
>>> Greetings.
>>>
>>> I have just released the first version of ppx_protobuf, a complete
>>> Protocol Buffers implementation. Unlike Google's implementation,
>>> ppx_protobuf derives the message structure directly from OCaml type
>>> definitions, which allows a much more seamless integration with
>>> OCaml's types. In particular, ppx_protobuf natively supports
>>> sum types, while maintaining full backwards compatibility with
>>> protoc.
>>>
>>> ppx_protobuf uses the extension points API, and thus requires
>>> a recent (>= 2014-04-29) 4.02 (trunk) compiler. It also requires
>>> an unreleased version of ppx_tools. It is probably easiest
>>> to install both from the source repositories[1][2].
>>>
>>> The API is extensively documented at [3].
>>>
>>> [1]: https://github.com/whitequark/ocaml-ppx_protobuf.git
>>> [2]: https://github.com/alainfrisch/ppx_tools.git
>>> [3]: https://github.com/whitequark/ocaml-ppx_protobuf/blob/master/README.md
>>>
>>> --
>>>   WBR, Peter Zotov.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-03 18:46     ` Malcolm Matalka
@ 2014-05-03 18:52       ` Peter Zotov
  2014-05-04  4:49         ` Malcolm Matalka
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zotov @ 2014-05-03 18:52 UTC (permalink / raw)
  To: Malcolm Matalka; +Cc: caml-list

On 2014-05-03 22:46, Malcolm Matalka wrote:
> The idea I mean is more to do this at the module level than the type
> level, like a functor.  So rather than defining protobuf for a type
> definition, define it for a module, and have some convention for how to
> pick out setter/getter functions.  Then create a new module from that.

Oh! You want a functor which would be able to examine the structure
of the module that was passed to it.

It's probably technically feasible (you need a syntactic extension
which would essentially serialize the module that will be passed), but
it is a really horrible solution:

   * You won't be able to report some interesting errors (such as an
     incorrect annotation like [@key -1]) until runtime.
   * It will be really slow, because the implementation of the functor
     will have to traverse the list of fields dynamically and invoke
     accessors one by one (see the sketch below). My current
     implementation directly pattern-matches the input.
   * It is just really complicated and does too much at runtime.
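
(A rough sketch of the dynamic style such a functor would be forced into;
every module and value name below is invented for illustration, and only
int-valued fields are handled, just to keep it short:)

   module type DESCRIBED = sig
     type t
     val empty      : t
     (* runtime description of the record: (protobuf key, getter, setter) *)
     val int_fields : (int * (t -> int) * (t -> int -> t)) list
   end

   module Make (M : DESCRIBED) = struct
     (* encoding has to walk the descriptor list and call each accessor
        indirectly, instead of pattern-matching the record directly *)
     let encode write_field v =
       List.iter (fun (key, get, _set) -> write_field key (get v)) M.int_fields

     (* decoding searches the descriptor list for every (key, value) pair
        read from the wire *)
     let decode fields_on_wire =
       List.fold_left
         (fun acc (key, value) ->
           try
             let (_, _, set) =
               List.find (fun (k, _, _) -> k = key) M.int_fields
             in
             set acc value
           with Not_found -> acc)
         M.empty fields_on_wire
   end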

> 
> For example of the top of my head:
> 
> module Foo = sig
>    type t
>    val set_x : t -> int -> t
>    val get_x : t -> int
> end
> 
> Then I can do:
> 
> module Foo_protobuf = Protobuf.Make(Foo)
> 
> In this case I stole how most people to functors to make it clear the
> translation is actually module to module.
> 
> The reason I prefer this is because I can also do:
> 
> module Foo_xml = Xml.Make(Foo)
> module Foo_json = Json.Make(Foo)
> 
> By separating the mechanism for creating the decoders from the type
> definition, I can add decoders for any type I want without disturbing
> the original definition.  This feels more right to me.  But I have no
> idea how to do it.
> 
> 
> Peter Zotov <whitequark@whitequark.org> writes:
> 
>> On 2014-05-03 20:08, Malcolm Matalka wrote:
>>> Nice, great work!
>>> 
>>> I'm not actually a huge fan of mixing type definitions and the 
>>> protocols
>>> they can be encoded/decoded from.  How hard would it be to take a 
>>> module
>>> definition accessors on a type and produce a new module with
>>> encode/decode functions?  That way I could create JSON, XML, 
>>> Protobufs,
>>> etc modules from one module.
>> 
>> Do you suggest generating the following signature instead of the 
>> current
>> one?
>> 
>> type t = ... [@@protobuf]
>> module Protobuf_t : sig
>>   val decode : Protobuf.Decoder.t -> t
>>   val encode : Protobuf.Encoder.t -> t -> unit
>> end
>> 
>> This would be similar to what deriving currently does.
>> 
>> In principle, this is not a complex change. It would add just a few 
>> lines
>> to ppx_protobuf.
>> 
>> However, I don't like it conceptually. I think the flat signature is
>> more natural, it mimics what one would usually write by hand without
>> introducing too much deep nesting of modules. You may notice how
>> ppx_protobuf doesn't generate the signature items for you; this is
>> because ppx_protobuf is a mere implementation detail, a convenient
>> way to generate the serializer/deserializer.
>> 
>> I'm not going to oppose addition of such a mode for two reasons:
>>   * I don't like fighting over minute details.
>>   * More importantly, deriving, when rewritten with ppx in mind,
>>     will surely contain this mode for compatibility. ppx_protobuf
>>     will be (ideally) rewritten over deriving some day.
>> 
>> I will happily merge a PR adding such a mode to ppx_protobuf.
>> 
>>> 
>>> Just an idea!
>>> 
>>> Peter Zotov <whitequark@whitequark.org> writes:
>>> 
>>>> Greetings.
>>>> 
>>>> I have just released the first version of ppx_protobuf, a complete
>>>> Protocol Buffers implementation. Unlike Google's implementation,
>>>> ppx_protobuf derives the message structure directly from OCaml type
>>>> definitions, which allows a much more seamless integration with
>>>> OCaml's types. In particular, ppx_protobuf natively supports
>>>> sum types, while maintaining full backwards compatibility with
>>>> protoc.
>>>> 
>>>> ppx_protobuf uses the extension points API, and thus requires
>>>> a recent (>= 2014-04-29) 4.02 (trunk) compiler. It also requires
>>>> an unreleased version of ppx_tools. It is probably easiest
>>>> to install both from the source repositories[1][2].
>>>> 
>>>> The API is extensively documented at [3].
>>>> 
>>>> [1]: https://github.com/whitequark/ocaml-ppx_protobuf.git
>>>> [2]: https://github.com/alainfrisch/ppx_tools.git
>>>> [3]: 
>>>> https://github.com/whitequark/ocaml-ppx_protobuf/blob/master/README.md
>>>> 
>>>> --
>>>>   WBR, Peter Zotov.

-- 
Peter Zotov
sip:whitequark@sipnet.ru

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-03 18:52       ` Peter Zotov
@ 2014-05-04  4:49         ` Malcolm Matalka
  2014-05-04  8:55           ` Peter Zotov
  0 siblings, 1 reply; 15+ messages in thread
From: Malcolm Matalka @ 2014-05-04  4:49 UTC (permalink / raw)
  To: Peter Zotov; +Cc: caml-list

Not exactly. I don't mean I want a functor; I just used that style to
express that I think it would be best if these sorts of things worked on
a module-to-module level rather than at the type level.  That way I can
separate out the data type and its business logic from its
encoding/decoding logic.  I want to decouple a type definition from all
of the transformations that can be done on the type.  Everything can
still happen at a preprocessing step, but I just want it to happen at
the module level.


Peter Zotov <whitequark@whitequark.org> writes:

> On 2014-05-03 22:46, Malcolm Matalka wrote:
>> The idea I mean is more to do this at the module level than the type
>> level, like a functor.  So rather than defining protobuf for a type
>> definition, define it for a module, and have some convention for how to
>> pick out setter/getter functions.  Then create a new module from that.
>
> Oh! You want a functor which would be able to examine the structure
> of the module that was passed to it.
>
> It's probably technically feasible (you need a syntactic extension
> which would essentially serialize the module that will be passed), but
> it is a really horrible solution:
>
>   * You won't be able to report some interesting errors (such as
>     incorrect annotations... [@key -1] until runtime.
>   * It will be really slow, because the implementation of the functor
>     will have to traverse the lists of fields dynamically and invoke
>     accessors one by one. My current implementation directly pattern
>     matches the input.
>   * It is just really complicated and does too much at runtime.
>
>>
>> For example of the top of my head:
>>
>> module Foo = sig
>>    type t
>>    val set_x : t -> int -> t
>>    val get_x : t -> int
>> end
>>
>> Then I can do:
>>
>> module Foo_protobuf = Protobuf.Make(Foo)
>>
>> In this case I stole how most people to functors to make it clear the
>> translation is actually module to module.
>>
>> The reason I prefer this is because I can also do:
>>
>> module Foo_xml = Xml.Make(Foo)
>> module Foo_json = Json.Make(Foo)
>>
>> By separating the mechanism for creating the decoders from the type
>> definition, I can add decoders for any type I want without disturbing
>> the original definition.  This feels more right to me.  But I have no
>> idea how to do it.
>>
>>
>> Peter Zotov <whitequark@whitequark.org> writes:
>>
>>> On 2014-05-03 20:08, Malcolm Matalka wrote:
>>>> Nice, great work!
>>>>
>>>> I'm not actually a huge fan of mixing type definitions and the protocols
>>>> they can be encoded/decoded from.  How hard would it be to take a module
>>>> definition accessors on a type and produce a new module with
>>>> encode/decode functions?  That way I could create JSON, XML, Protobufs,
>>>> etc modules from one module.
>>>
>>> Do you suggest generating the following signature instead of the current
>>> one?
>>>
>>> type t = ... [@@protobuf]
>>> module Protobuf_t : sig
>>>   val decode : Protobuf.Decoder.t -> t
>>>   val encode : Protobuf.Encoder.t -> t -> unit
>>> end
>>>
>>> This would be similar to what deriving currently does.
>>>
>>> In principle, this is not a complex change. It would add just a few lines
>>> to ppx_protobuf.
>>>
>>> However, I don't like it conceptually. I think the flat signature is
>>> more natural, it mimics what one would usually write by hand without
>>> introducing too much deep nesting of modules. You may notice how
>>> ppx_protobuf doesn't generate the signature items for you; this is
>>> because ppx_protobuf is a mere implementation detail, a convenient
>>> way to generate the serializer/deserializer.
>>>
>>> I'm not going to oppose addition of such a mode for two reasons:
>>>   * I don't like fighting over minute details.
>>>   * More importantly, deriving, when rewritten with ppx in mind,
>>>     will surely contain this mode for compatibility. ppx_protobuf
>>>     will be (ideally) rewritten over deriving some day.
>>>
>>> I will happily merge a PR adding such a mode to ppx_protobuf.
>>>
>>>>
>>>> Just an idea!
>>>>
>>>> Peter Zotov <whitequark@whitequark.org> writes:
>>>>
>>>>> Greetings.
>>>>>
>>>>> I have just released the first version of ppx_protobuf, a complete
>>>>> Protocol Buffers implementation. Unlike Google's implementation,
>>>>> ppx_protobuf derives the message structure directly from OCaml type
>>>>> definitions, which allows a much more seamless integration with
>>>>> OCaml's types. In particular, ppx_protobuf natively supports
>>>>> sum types, while maintaining full backwards compatibility with
>>>>> protoc.
>>>>>
>>>>> ppx_protobuf uses the extension points API, and thus requires
>>>>> a recent (>= 2014-04-29) 4.02 (trunk) compiler. It also requires
>>>>> an unreleased version of ppx_tools. It is probably easiest
>>>>> to install both from the source repositories[1][2].
>>>>>
>>>>> The API is extensively documented at [3].
>>>>>
>>>>> [1]: https://github.com/whitequark/ocaml-ppx_protobuf.git
>>>>> [2]: https://github.com/alainfrisch/ppx_tools.git
>>>>> [3]: https://github.com/whitequark/ocaml-ppx_protobuf/blob/master/README.md
>>>>>
>>>>> --
>>>>>   WBR, Peter Zotov.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-04  4:49         ` Malcolm Matalka
@ 2014-05-04  8:55           ` Peter Zotov
  2014-05-04 15:18             ` Malcolm Matalka
  2014-05-04 20:34             ` Gerd Stolpmann
  0 siblings, 2 replies; 15+ messages in thread
From: Peter Zotov @ 2014-05-04  8:55 UTC (permalink / raw)
  To: Malcolm Matalka; +Cc: caml-list

On 2014-05-04 08:49, Malcolm Matalka wrote:
> Not exactly. I don't mean I want a functor, I just used that style to
> express that I think it would be best if these sort of things worked on
> a module-to-module level rather than type.  That way I can separate out
> the data type and it's business logic from its encoding/decoding logic.
> I want to decouple a type definition from all of the transformations
> that can be done on the type.  Everything an still happen at a
> preprocessor point, but I just want it to happen on a module level.

Still not a good idea. Consider annotations like @key and @encoding:
where would you specify them? If right on the type signature, then what
is the point of the separation?

> 
> 
> Peter Zotov <whitequark@whitequark.org> writes:
> 
>> On 2014-05-03 22:46, Malcolm Matalka wrote:
>>> The idea I mean is more to do this at the module level than the type
>>> level, like a functor.  So rather than defining protobuf for a type
>>> definition, define it for a module, and have some convention for how 
>>> to
>>> pick out setter/getter functions.  Then create a new module from 
>>> that.
>> 
>> Oh! You want a functor which would be able to examine the structure
>> of the module that was passed to it.
>> 
>> It's probably technically feasible (you need a syntactic extension
>> which would essentially serialize the module that will be passed), but
>> it is a really horrible solution:
>> 
>>   * You won't be able to report some interesting errors (such as
>>     incorrect annotations... [@key -1] until runtime.
>>   * It will be really slow, because the implementation of the functor
>>     will have to traverse the lists of fields dynamically and invoke
>>     accessors one by one. My current implementation directly pattern
>>     matches the input.
>>   * It is just really complicated and does too much at runtime.
>> 
>>> 
>>> For example of the top of my head:
>>> 
>>> module Foo = sig
>>>    type t
>>>    val set_x : t -> int -> t
>>>    val get_x : t -> int
>>> end
>>> 
>>> Then I can do:
>>> 
>>> module Foo_protobuf = Protobuf.Make(Foo)
>>> 
>>> In this case I stole how most people to functors to make it clear the
>>> translation is actually module to module.
>>> 
>>> The reason I prefer this is because I can also do:
>>> 
>>> module Foo_xml = Xml.Make(Foo)
>>> module Foo_json = Json.Make(Foo)
>>> 
>>> By separating the mechanism for creating the decoders from the type
>>> definition, I can add decoders for any type I want without disturbing
>>> the original definition.  This feels more right to me.  But I have no
>>> idea how to do it.
>>> 
>>> 
>>> Peter Zotov <whitequark@whitequark.org> writes:
>>> 
>>>> On 2014-05-03 20:08, Malcolm Matalka wrote:
>>>>> Nice, great work!
>>>>> 
>>>>> I'm not actually a huge fan of mixing type definitions and the 
>>>>> protocols
>>>>> they can be encoded/decoded from.  How hard would it be to take a 
>>>>> module
>>>>> definition accessors on a type and produce a new module with
>>>>> encode/decode functions?  That way I could create JSON, XML, 
>>>>> Protobufs,
>>>>> etc modules from one module.
>>>> 
>>>> Do you suggest generating the following signature instead of the 
>>>> current
>>>> one?
>>>> 
>>>> type t = ... [@@protobuf]
>>>> module Protobuf_t : sig
>>>>   val decode : Protobuf.Decoder.t -> t
>>>>   val encode : Protobuf.Encoder.t -> t -> unit
>>>> end
>>>> 
>>>> This would be similar to what deriving currently does.
>>>> 
>>>> In principle, this is not a complex change. It would add just a few 
>>>> lines
>>>> to ppx_protobuf.
>>>> 
>>>> However, I don't like it conceptually. I think the flat signature is
>>>> more natural, it mimics what one would usually write by hand without
>>>> introducing too much deep nesting of modules. You may notice how
>>>> ppx_protobuf doesn't generate the signature items for you; this is
>>>> because ppx_protobuf is a mere implementation detail, a convenient
>>>> way to generate the serializer/deserializer.
>>>> 
>>>> I'm not going to oppose addition of such a mode for two reasons:
>>>>   * I don't like fighting over minute details.
>>>>   * More importantly, deriving, when rewritten with ppx in mind,
>>>>     will surely contain this mode for compatibility. ppx_protobuf
>>>>     will be (ideally) rewritten over deriving some day.
>>>> 
>>>> I will happily merge a PR adding such a mode to ppx_protobuf.
>>>> 
>>>>> 
>>>>> Just an idea!
>>>>> 
>>>>> Peter Zotov <whitequark@whitequark.org> writes:
>>>>> 
>>>>>> Greetings.
>>>>>> 
>>>>>> I have just released the first version of ppx_protobuf, a complete
>>>>>> Protocol Buffers implementation. Unlike Google's implementation,
>>>>>> ppx_protobuf derives the message structure directly from OCaml 
>>>>>> type
>>>>>> definitions, which allows a much more seamless integration with
>>>>>> OCaml's types. In particular, ppx_protobuf natively supports
>>>>>> sum types, while maintaining full backwards compatibility with
>>>>>> protoc.
>>>>>> 
>>>>>> ppx_protobuf uses the extension points API, and thus requires
>>>>>> a recent (>= 2014-04-29) 4.02 (trunk) compiler. It also requires
>>>>>> an unreleased version of ppx_tools. It is probably easiest
>>>>>> to install both from the source repositories[1][2].
>>>>>> 
>>>>>> The API is extensively documented at [3].
>>>>>> 
>>>>>> [1]: https://github.com/whitequark/ocaml-ppx_protobuf.git
>>>>>> [2]: https://github.com/alainfrisch/ppx_tools.git
>>>>>> [3]: 
>>>>>> https://github.com/whitequark/ocaml-ppx_protobuf/blob/master/README.md
>>>>>> 
>>>>>> --
>>>>>>   WBR, Peter Zotov.

-- 
Peter Zotov
sip:whitequark@sipnet.ru

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-04  8:55           ` Peter Zotov
@ 2014-05-04 15:18             ` Malcolm Matalka
  2014-05-04 22:21               ` Peter Zotov
  2014-05-04 20:34             ` Gerd Stolpmann
  1 sibling, 1 reply; 15+ messages in thread
From: Malcolm Matalka @ 2014-05-04 15:18 UTC (permalink / raw)
  To: Peter Zotov; +Cc: caml-list

In my fantasy scenario you could annotate the accessor functions in a
module.

Peter Zotov <whitequark@whitequark.org> writes:

> On 2014-05-04 08:49, Malcolm Matalka wrote:
>> Not exactly. I don't mean I want a functor, I just used that style to
>> express that I think it would be best if these sort of things worked on
>> a module-to-module level rather than type.  That way I can separate out
>> the data type and it's business logic from its encoding/decoding logic.
>> I want to decouple a type definition from all of the transformations
>> that can be done on the type.  Everything an still happen at a
>> preprocessor point, but I just want it to happen on a module level.
>
> Still not a good idea. Consider the annotations like @key and @encoding:
> where would you specify them? If right on the type signature, then what
> is the point of separation?
>
>>
>>
>> Peter Zotov <whitequark@whitequark.org> writes:
>>
>>> On 2014-05-03 22:46, Malcolm Matalka wrote:
>>>> The idea I mean is more to do this at the module level than the type
>>>> level, like a functor.  So rather than defining protobuf for a type
>>>> definition, define it for a module, and have some convention for how to
>>>> pick out setter/getter functions.  Then create a new module from that.
>>>
>>> Oh! You want a functor which would be able to examine the structure
>>> of the module that was passed to it.
>>>
>>> It's probably technically feasible (you need a syntactic extension
>>> which would essentially serialize the module that will be passed), but
>>> it is a really horrible solution:
>>>
>>>   * You won't be able to report some interesting errors (such as
>>>     incorrect annotations... [@key -1] until runtime.
>>>   * It will be really slow, because the implementation of the functor
>>>     will have to traverse the lists of fields dynamically and invoke
>>>     accessors one by one. My current implementation directly pattern
>>>     matches the input.
>>>   * It is just really complicated and does too much at runtime.
>>>
>>>>
>>>> For example of the top of my head:
>>>>
>>>> module Foo = sig
>>>>    type t
>>>>    val set_x : t -> int -> t
>>>>    val get_x : t -> int
>>>> end
>>>>
>>>> Then I can do:
>>>>
>>>> module Foo_protobuf = Protobuf.Make(Foo)
>>>>
>>>> In this case I stole how most people to functors to make it clear the
>>>> translation is actually module to module.
>>>>
>>>> The reason I prefer this is because I can also do:
>>>>
>>>> module Foo_xml = Xml.Make(Foo)
>>>> module Foo_json = Json.Make(Foo)
>>>>
>>>> By separating the mechanism for creating the decoders from the type
>>>> definition, I can add decoders for any type I want without disturbing
>>>> the original definition.  This feels more right to me.  But I have no
>>>> idea how to do it.
>>>>
>>>>
>>>> Peter Zotov <whitequark@whitequark.org> writes:
>>>>
>>>>> On 2014-05-03 20:08, Malcolm Matalka wrote:
>>>>>> Nice, great work!
>>>>>>
>>>>>> I'm not actually a huge fan of mixing type definitions and the protocols
>>>>>> they can be encoded/decoded from.  How hard would it be to take a module
>>>>>> definition accessors on a type and produce a new module with
>>>>>> encode/decode functions?  That way I could create JSON, XML, Protobufs,
>>>>>> etc modules from one module.
>>>>>
>>>>> Do you suggest generating the following signature instead of the current
>>>>> one?
>>>>>
>>>>> type t = ... [@@protobuf]
>>>>> module Protobuf_t : sig
>>>>>   val decode : Protobuf.Decoder.t -> t
>>>>>   val encode : Protobuf.Encoder.t -> t -> unit
>>>>> end
>>>>>
>>>>> This would be similar to what deriving currently does.
>>>>>
>>>>> In principle, this is not a complex change. It would add just a few lines
>>>>> to ppx_protobuf.
>>>>>
>>>>> However, I don't like it conceptually. I think the flat signature is
>>>>> more natural, it mimics what one would usually write by hand without
>>>>> introducing too much deep nesting of modules. You may notice how
>>>>> ppx_protobuf doesn't generate the signature items for you; this is
>>>>> because ppx_protobuf is a mere implementation detail, a convenient
>>>>> way to generate the serializer/deserializer.
>>>>>
>>>>> I'm not going to oppose addition of such a mode for two reasons:
>>>>>   * I don't like fighting over minute details.
>>>>>   * More importantly, deriving, when rewritten with ppx in mind,
>>>>>     will surely contain this mode for compatibility. ppx_protobuf
>>>>>     will be (ideally) rewritten over deriving some day.
>>>>>
>>>>> I will happily merge a PR adding such a mode to ppx_protobuf.
>>>>>
>>>>>>
>>>>>> Just an idea!
>>>>>>
>>>>>> Peter Zotov <whitequark@whitequark.org> writes:
>>>>>>
>>>>>>> Greetings.
>>>>>>>
>>>>>>> I have just released the first version of ppx_protobuf, a complete
>>>>>>> Protocol Buffers implementation. Unlike Google's implementation,
>>>>>>> ppx_protobuf derives the message structure directly from OCaml type
>>>>>>> definitions, which allows a much more seamless integration with
>>>>>>> OCaml's types. In particular, ppx_protobuf natively supports
>>>>>>> sum types, while maintaining full backwards compatibility with
>>>>>>> protoc.
>>>>>>>
>>>>>>> ppx_protobuf uses the extension points API, and thus requires
>>>>>>> a recent (>= 2014-04-29) 4.02 (trunk) compiler. It also requires
>>>>>>> an unreleased version of ppx_tools. It is probably easiest
>>>>>>> to install both from the source repositories[1][2].
>>>>>>>
>>>>>>> The API is extensively documented at [3].
>>>>>>>
>>>>>>> [1]: https://github.com/whitequark/ocaml-ppx_protobuf.git
>>>>>>> [2]: https://github.com/alainfrisch/ppx_tools.git
>>>>>>> [3]: https://github.com/whitequark/ocaml-ppx_protobuf/blob/master/README.md
>>>>>>>
>>>>>>> --
>>>>>>>   WBR, Peter Zotov.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-04  8:55           ` Peter Zotov
  2014-05-04 15:18             ` Malcolm Matalka
@ 2014-05-04 20:34             ` Gerd Stolpmann
  1 sibling, 0 replies; 15+ messages in thread
From: Gerd Stolpmann @ 2014-05-04 20:34 UTC (permalink / raw)
  To: Peter Zotov; +Cc: Malcolm Matalka, caml-list

On Sunday, 4 May 2014, at 12:55 +0400, Peter Zotov wrote:
> On 2014-05-04 08:49, Malcolm Matalka wrote:
> > Not exactly. I don't mean I want a functor, I just used that style to
> > express that I think it would be best if these sort of things worked on
> > a module-to-module level rather than type.  That way I can separate out
> > the data type and it's business logic from its encoding/decoding logic.
> > I want to decouple a type definition from all of the transformations
> > that can be done on the type.  Everything an still happen at a
> > preprocessor point, but I just want it to happen on a module level.
> 
> Still not a good idea. Consider the annotations like @key and @encoding:
> where would you specify them? If right on the type signature, then what
> is the point of separation?

Which just leads to the question of why annotations are not real module
entities. We would need something like

module Foo = sig
  type t
  val set_x : t -> int -> t
  val get_x : t -> int
end

module Foo_protobuf_ann = sig
  import Foo
  annotate t [@@ whatever]
  ...
end

module Foo_protobuf = Protobuf.Make(Foo_protobuf_ann)

The "functor application" would still be fake, of course.

Gerd



> 
> > 
> > 
> > Peter Zotov <whitequark@whitequark.org> writes:
> > 
> >> On 2014-05-03 22:46, Malcolm Matalka wrote:
> >>> The idea I mean is more to do this at the module level than the type
> >>> level, like a functor.  So rather than defining protobuf for a type
> >>> definition, define it for a module, and have some convention for how 
> >>> to
> >>> pick out setter/getter functions.  Then create a new module from 
> >>> that.
> >> 
> >> Oh! You want a functor which would be able to examine the structure
> >> of the module that was passed to it.
> >> 
> >> It's probably technically feasible (you need a syntactic extension
> >> which would essentially serialize the module that will be passed), but
> >> it is a really horrible solution:
> >> 
> >>   * You won't be able to report some interesting errors (such as
> >>     incorrect annotations... [@key -1] until runtime.
> >>   * It will be really slow, because the implementation of the functor
> >>     will have to traverse the lists of fields dynamically and invoke
> >>     accessors one by one. My current implementation directly pattern
> >>     matches the input.
> >>   * It is just really complicated and does too much at runtime.
> >> 
> >>> 
> >>> For example of the top of my head:
> >>> 
> >>> module Foo = sig
> >>>    type t
> >>>    val set_x : t -> int -> t
> >>>    val get_x : t -> int
> >>> end
> >>> 
> >>> Then I can do:
> >>> 
> >>> module Foo_protobuf = Protobuf.Make(Foo)
> >>> 
> >>> In this case I stole how most people to functors to make it clear the
> >>> translation is actually module to module.
> >>> 
> >>> The reason I prefer this is because I can also do:
> >>> 
> >>> module Foo_xml = Xml.Make(Foo)
> >>> module Foo_json = Json.Make(Foo)
> >>> 
> >>> By separating the mechanism for creating the decoders from the type
> >>> definition, I can add decoders for any type I want without disturbing
> >>> the original definition.  This feels more right to me.  But I have no
> >>> idea how to do it.
> >>> 
> >>> 
> >>> Peter Zotov <whitequark@whitequark.org> writes:
> >>> 
> >>>> On 2014-05-03 20:08, Malcolm Matalka wrote:
> >>>>> Nice, great work!
> >>>>> 
> >>>>> I'm not actually a huge fan of mixing type definitions and the 
> >>>>> protocols
> >>>>> they can be encoded/decoded from.  How hard would it be to take a 
> >>>>> module
> >>>>> definition accessors on a type and produce a new module with
> >>>>> encode/decode functions?  That way I could create JSON, XML, 
> >>>>> Protobufs,
> >>>>> etc modules from one module.
> >>>> 
> >>>> Do you suggest generating the following signature instead of the 
> >>>> current
> >>>> one?
> >>>> 
> >>>> type t = ... [@@protobuf]
> >>>> module Protobuf_t : sig
> >>>>   val decode : Protobuf.Decoder.t -> t
> >>>>   val encode : Protobuf.Encoder.t -> t -> unit
> >>>> end
> >>>> 
> >>>> This would be similar to what deriving currently does.
> >>>> 
> >>>> In principle, this is not a complex change. It would add just a few 
> >>>> lines
> >>>> to ppx_protobuf.
> >>>> 
> >>>> However, I don't like it conceptually. I think the flat signature is
> >>>> more natural, it mimics what one would usually write by hand without
> >>>> introducing too much deep nesting of modules. You may notice how
> >>>> ppx_protobuf doesn't generate the signature items for you; this is
> >>>> because ppx_protobuf is a mere implementation detail, a convenient
> >>>> way to generate the serializer/deserializer.
> >>>> 
> >>>> I'm not going to oppose addition of such a mode for two reasons:
> >>>>   * I don't like fighting over minute details.
> >>>>   * More importantly, deriving, when rewritten with ppx in mind,
> >>>>     will surely contain this mode for compatibility. ppx_protobuf
> >>>>     will be (ideally) rewritten over deriving some day.
> >>>> 
> >>>> I will happily merge a PR adding such a mode to ppx_protobuf.
> >>>> 
> >>>>> 
> >>>>> Just an idea!
> >>>>> 
> >>>>> Peter Zotov <whitequark@whitequark.org> writes:
> >>>>> 
> >>>>>> Greetings.
> >>>>>> 
> >>>>>> I have just released the first version of ppx_protobuf, a complete
> >>>>>> Protocol Buffers implementation. Unlike Google's implementation,
> >>>>>> ppx_protobuf derives the message structure directly from OCaml 
> >>>>>> type
> >>>>>> definitions, which allows a much more seamless integration with
> >>>>>> OCaml's types. In particular, ppx_protobuf natively supports
> >>>>>> sum types, while maintaining full backwards compatibility with
> >>>>>> protoc.
> >>>>>> 
> >>>>>> ppx_protobuf uses the extension points API, and thus requires
> >>>>>> a recent (>= 2014-04-29) 4.02 (trunk) compiler. It also requires
> >>>>>> an unreleased version of ppx_tools. It is probably easiest
> >>>>>> to install both from the source repositories[1][2].
> >>>>>> 
> >>>>>> The API is extensively documented at [3].
> >>>>>> 
> >>>>>> [1]: https://github.com/whitequark/ocaml-ppx_protobuf.git
> >>>>>> [2]: https://github.com/alainfrisch/ppx_tools.git
> >>>>>> [3]: 
> >>>>>> https://github.com/whitequark/ocaml-ppx_protobuf/blob/master/README.md
> >>>>>> 
> >>>>>> --
> >>>>>>   WBR, Peter Zotov.
> 
> -- 
> Peter Zotov
> sip:whitequark@sipnet.ru
> 

-- 
------------------------------------------------------------
Gerd Stolpmann, Darmstadt, Germany    gerd@gerd-stolpmann.de
My OCaml site:          http://www.camlcity.org
Contact details:        http://www.camlcity.org/contact.html
Company homepage:       http://www.gerd-stolpmann.de
------------------------------------------------------------



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-04 15:18             ` Malcolm Matalka
@ 2014-05-04 22:21               ` Peter Zotov
  2014-05-04 22:38                 ` Daniel Bünzli
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zotov @ 2014-05-04 22:21 UTC (permalink / raw)
  To: Malcolm Matalka; +Cc: caml-list, info

On 2014-05-04 19:18, Malcolm Matalka wrote:
> In my fantasy scenario you could annotate the accessor functions in a
> module.

I have just found this article:
http://cedeela.fr/universal-serialization-and-deserialization.html

Hopefully it can give some inspiration on how to implement such
a serialization library.

> 
> Peter Zotov <whitequark@whitequark.org> writes:
> 
>> On 2014-05-04 08:49, Malcolm Matalka wrote:
>>> Not exactly. I don't mean I want a functor, I just used that style to
>>> express that I think it would be best if these sort of things worked 
>>> on
>>> a module-to-module level rather than type.  That way I can separate 
>>> out
>>> the data type and it's business logic from its encoding/decoding 
>>> logic.
>>> I want to decouple a type definition from all of the transformations
>>> that can be done on the type.  Everything an still happen at a
>>> preprocessor point, but I just want it to happen on a module level.
>> 
>> Still not a good idea. Consider the annotations like @key and 
>> @encoding:
>> where would you specify them? If right on the type signature, then 
>> what
>> is the point of separation?
>> 
>>> 
>>> 
>>> Peter Zotov <whitequark@whitequark.org> writes:
>>> 
>>>> On 2014-05-03 22:46, Malcolm Matalka wrote:
>>>>> The idea I mean is more to do this at the module level than the 
>>>>> type
>>>>> level, like a functor.  So rather than defining protobuf for a type
>>>>> definition, define it for a module, and have some convention for 
>>>>> how to
>>>>> pick out setter/getter functions.  Then create a new module from 
>>>>> that.
>>>> 
>>>> Oh! You want a functor which would be able to examine the structure
>>>> of the module that was passed to it.
>>>> 
>>>> It's probably technically feasible (you need a syntactic extension
>>>> which would essentially serialize the module that will be passed), 
>>>> but
>>>> it is a really horrible solution:
>>>> 
>>>>   * You won't be able to report some interesting errors (such as
>>>>     incorrect annotations... [@key -1] until runtime.
>>>>   * It will be really slow, because the implementation of the 
>>>> functor
>>>>     will have to traverse the lists of fields dynamically and invoke
>>>>     accessors one by one. My current implementation directly pattern
>>>>     matches the input.
>>>>   * It is just really complicated and does too much at runtime.
>>>> 
>>>>> 
>>>>> For example of the top of my head:
>>>>> 
>>>>> module Foo = sig
>>>>>    type t
>>>>>    val set_x : t -> int -> t
>>>>>    val get_x : t -> int
>>>>> end
>>>>> 
>>>>> Then I can do:
>>>>> 
>>>>> module Foo_protobuf = Protobuf.Make(Foo)
>>>>> 
>>>>> In this case I stole how most people to functors to make it clear 
>>>>> the
>>>>> translation is actually module to module.
>>>>> 
>>>>> The reason I prefer this is because I can also do:
>>>>> 
>>>>> module Foo_xml = Xml.Make(Foo)
>>>>> module Foo_json = Json.Make(Foo)
>>>>> 
>>>>> By separating the mechanism for creating the decoders from the type
>>>>> definition, I can add decoders for any type I want without 
>>>>> disturbing
>>>>> the original definition.  This feels more right to me.  But I have 
>>>>> no
>>>>> idea how to do it.
>>>>> 
>>>>> 
>>>>> Peter Zotov <whitequark@whitequark.org> writes:
>>>>> 
>>>>>> On 2014-05-03 20:08, Malcolm Matalka wrote:
>>>>>>> Nice, great work!
>>>>>>> 
>>>>>>> I'm not actually a huge fan of mixing type definitions and the 
>>>>>>> protocols
>>>>>>> they can be encoded/decoded from.  How hard would it be to take a 
>>>>>>> module
>>>>>>> definition accessors on a type and produce a new module with
>>>>>>> encode/decode functions?  That way I could create JSON, XML, 
>>>>>>> Protobufs,
>>>>>>> etc modules from one module.
>>>>>> 
>>>>>> Do you suggest generating the following signature instead of the 
>>>>>> current
>>>>>> one?
>>>>>> 
>>>>>> type t = ... [@@protobuf]
>>>>>> module Protobuf_t : sig
>>>>>>   val decode : Protobuf.Decoder.t -> t
>>>>>>   val encode : Protobuf.Encoder.t -> t -> unit
>>>>>> end
>>>>>> 
>>>>>> This would be similar to what deriving currently does.
>>>>>> 
>>>>>> In principle, this is not a complex change. It would add just a 
>>>>>> few lines
>>>>>> to ppx_protobuf.
>>>>>> 
>>>>>> However, I don't like it conceptually. I think the flat signature 
>>>>>> is
>>>>>> more natural, it mimics what one would usually write by hand 
>>>>>> without
>>>>>> introducing too much deep nesting of modules. You may notice how
>>>>>> ppx_protobuf doesn't generate the signature items for you; this is
>>>>>> because ppx_protobuf is a mere implementation detail, a convenient
>>>>>> way to generate the serializer/deserializer.
>>>>>> 
>>>>>> I'm not going to oppose addition of such a mode for two reasons:
>>>>>>   * I don't like fighting over minute details.
>>>>>>   * More importantly, deriving, when rewritten with ppx in mind,
>>>>>>     will surely contain this mode for compatibility. ppx_protobuf
>>>>>>     will be (ideally) rewritten over deriving some day.
>>>>>> 
>>>>>> I will happily merge a PR adding such a mode to ppx_protobuf.
>>>>>> 
>>>>>>> 
>>>>>>> Just an idea!
>>>>>>> 
>>>>>>> Peter Zotov <whitequark@whitequark.org> writes:
>>>>>>> 
>>>>>>>> Greetings.
>>>>>>>> 
>>>>>>>> I have just released the first version of ppx_protobuf, a 
>>>>>>>> complete
>>>>>>>> Protocol Buffers implementation. Unlike Google's implementation,
>>>>>>>> ppx_protobuf derives the message structure directly from OCaml 
>>>>>>>> type
>>>>>>>> definitions, which allows a much more seamless integration with
>>>>>>>> OCaml's types. In particular, ppx_protobuf natively supports
>>>>>>>> sum types, while maintaining full backwards compatibility with
>>>>>>>> protoc.
>>>>>>>> 
>>>>>>>> ppx_protobuf uses the extension points API, and thus requires
>>>>>>>> a recent (>= 2014-04-29) 4.02 (trunk) compiler. It also requires
>>>>>>>> an unreleased version of ppx_tools. It is probably easiest
>>>>>>>> to install both from the source repositories[1][2].
>>>>>>>> 
>>>>>>>> The API is extensively documented at [3].
>>>>>>>> 
>>>>>>>> [1]: https://github.com/whitequark/ocaml-ppx_protobuf.git
>>>>>>>> [2]: https://github.com/alainfrisch/ppx_tools.git
>>>>>>>> [3]: 
>>>>>>>> https://github.com/whitequark/ocaml-ppx_protobuf/blob/master/README.md
>>>>>>>> 
>>>>>>>> --
>>>>>>>>   WBR, Peter Zotov.

-- 
Peter Zotov
sip:whitequark@sipnet.ru

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-04 22:21               ` Peter Zotov
@ 2014-05-04 22:38                 ` Daniel Bünzli
  0 siblings, 0 replies; 15+ messages in thread
From: Daniel Bünzli @ 2014-05-04 22:38 UTC (permalink / raw)
  To: Peter Zotov; +Cc: Malcolm Matalka, caml-list, info

On Monday, 5 May 2014 at 00:21, Peter Zotov wrote:
> Hopefully it can give some inspiration on how to implement such
> a serialization library.

If you are interested in combinator-based approaches, you can have a look at the following papers (sorry for the ocamldoc):

 {b References.}  

Martin Elsman.  
{{:http://www.cs.ioc.ee/tfp-icfp-gpce05/tfp-proc/07num.pdf}
{e Type-Specialized Serialization with Sharing}}.  
Proceedings of the 6th Symposium on Trends in Functional Programming,  
2005, pp. 88-102.  


Andrew J. Kennedy.  
{{:http://dx.doi.org/10.1017/S0956796804005209}{e Pickler combinators}}.
J. Functional Programming, 2004, 14 (6), pp. 727-739.



Best,

Daniel



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-02 14:29 [Caml-list] [ANN] ppx_protobuf Peter Zotov
  2014-05-03 16:08 ` Malcolm Matalka
@ 2014-05-06  4:29 ` Alain Frisch
  2014-05-06  4:59   ` Peter Zotov
  2014-05-06 10:42   ` Malcolm Matalka
  1 sibling, 2 replies; 15+ messages in thread
From: Alain Frisch @ 2014-05-06  4:29 UTC (permalink / raw)
  To: Peter Zotov, caml-list, wg-camlp4

On 5/2/2014 4:29 PM, Peter Zotov wrote:
> I have just released the first version of ppx_protobuf, a complete
> Protocol Buffers implementation.

This is a very cool project, and a good first public use of extension 
points!

An aspect of attributes that is not fully settled is how to use 
namespacing in order to ensure that multiple tools interact nicely.
This topic will hopefully be explored by the community to reach a good 
consensus.

For instance, ppx_protobuf relies on attributes with quite generic names 
such as @default or @key, which might also be useful to other tools.  It 
might very well be the case that the same @default attribute (with the 
same value) would actually be useful to both ppx_protobuf and another 
deriving-like extension.  This is good, since attributes are not 
designed to be necessarily targeted at only one specific tool.  But in 
some cases, one might want to use a different @default attribute for 
different tools.  What about supporting both a short form, @default, and a 
more qualified one, @protobuf.default?  That should cover both situations.
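
(Purely to illustrate the two spellings side by side; the record and field
names here are invented:)

   type span = {
     off : int [@key 1] [@default 0];                    (* short form *)
     len : int [@protobuf.key 2] [@protobuf.default 0];  (* qualified form *)
   } [@@protobuf]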

Another point: for record fields, you interpret attributes at the 
toplevel of their type. I did not look precisely at the semantics of 
ppx_protobuf, but it seems that it might be more logical to attach them 
to the field directly (do you confirm?):

   type defaults = {
      results [@key 1] [@default 10]: int;
   } [@@protobuf]

I understand that this form is syntactically "more intrusive" in the 
non-decorated type definition.  Is that the reason to use:

   type defaults = {
      results : int [@key 1] [@default 10];
   } [@@protobuf]

?

I don't see anything wrong with doing so, although it might be worth 
supporting both forms.


-- Alain

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-06  4:29 ` Alain Frisch
@ 2014-05-06  4:59   ` Peter Zotov
  2014-05-06  7:33     ` Alain Frisch
  2014-05-06 10:42   ` Malcolm Matalka
  1 sibling, 1 reply; 15+ messages in thread
From: Peter Zotov @ 2014-05-06  4:59 UTC (permalink / raw)
  To: Alain Frisch; +Cc: caml-list, wg-camlp4

On 2014-05-06 08:29, Alain Frisch wrote:
> On 5/2/2014 4:29 PM, Peter Zotov wrote:
>> I have just released the first version of ppx_protobuf, a complete
>> Protocol Buffers implementation.
> 
> This is a very cool project, and a good first public use of extension 
> points!

Thanks!

> 
> An aspect of attributes that is not fully settled is how to use
> namespacing in order to ensure that multiple tools interact nicely.
> This topic will hopefully be explored by the community to reach a good
> consensus.
> 
> (a suggestion to recognize both [@x] and [@protobuf.x])

I have designed ppx_protobuf's usage of attributes with exactly this in
mind; [@default] especially would be useful for a wide range of
type-driven code generators.

I actually intended to release it with support for namespaced attribute
variants ([@protobuf.key]); it was simply forgotten. I'll include it
in the next release.

> 
> Another point: for record fields, you interpret attributes at the
> toplevel of their type. I did not look precisely at the semantics of
> ppx_protobuf, but it seems that it might be more logical to attach
> them to the field directly (do you confirm?):
> 
>   type defaults = {
>      results [@key 1] [@default 10]: int;
>   } [@@protobuf]
> 
> I understand that this form is syntactically "more intrusive" in the
> non-decorated type definition.  Is it the reason to use:
> 
>   type defaults = {
>      results : int [@key 1] [@default 10];
>   } [@@protobuf]
> 
> ?
> 
> I don't see anything wrong with doing so, although it might be worth
> supporting both forms.

The issue here is that I want to support "immediate tuples" (i.e.
"field : int * int" or, more importantly, "A of int * int"), which
are semantically equivalent to, and represented as, a Protobuf message.
As such, I felt it would only be consistent to have the same syntax
for specifying options on an immediate tuple of several elements:

   results : int [@encoding zigzag] * int [@encoding bits32]

and for specifying options on a "tuple of one element":

   results : int [@encoding zigzag]

I'm not entirely happy with this scheme; the way it gives rise to the
message structure is at best confusing, as adding or removing a tuple
element can add or remove nesting and thus break protocol compatibility.
In addition, the [@key] attribute on, for example, a field itself would
currently be ignored. While this behavior can be fixed for the most
common misplacements, I feel like it's a drawback intrinsic to the
extension points mechanism: misplaced or misnamed attributes are going
to be silently ignored.

Do you have any ideas for a solution? I have toyed with the idea of
a "verifier extension" which would ascertain the lack of attributes
after all the rewriter passes have presumably removed the attributes
known to them, but it wouldn't work with generic attributes like
[@default] that must be shared between extensions.

> 
> 
> -- Alain

-- 
Peter Zotov
sip:whitequark@sipnet.ru


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-06  4:59   ` Peter Zotov
@ 2014-05-06  7:33     ` Alain Frisch
  0 siblings, 0 replies; 15+ messages in thread
From: Alain Frisch @ 2014-05-06  7:33 UTC (permalink / raw)
  To: Peter Zotov; +Cc: caml-list, wg-camlp4

On 05/06/2014 06:59 AM, Peter Zotov wrote:
> While this behavior can be fixed for the most common misplacements, I
> feel like
> it's a drawback intrinsic to the extension points mechanism: misplaced
> or misnamed
> attributes are going to be silently ignored.

Indeed.

> Do you have any ideas for a solution? I have toyed with an idea of
> a "verifier extension" which would ascertain the lack of attributes after
> all the rewriter passes have presumably removed the attributes known
> to them, but it wouldn't work  with generic attributes like [@default] that
> must be shared between extensions.

I'm not convinced by the idea of checking the absence of attributes 
after -ppx rewriters are applied.  You mention one reason (generic 
attributes, which shouldn't be discarded by any specific -ppx), but 
there are others:

  - Even non-generic attributes might be better left in the Parsetree 
sent to OCaml, so that they appear in particular in the dumped typedtree 
(.cmt/.cmti), which could be useful for further processing (e.g. a 
version of ocamldoc based on .cmti files).

  - Attributes are not only designed to support -ppx rewriters.  They 
can also be used by tools which inspect compiled .cmt/.cmti/.cmi files, 
or stand-alone tools which work on source files but are not used as 
preprocessors.

  - There is also the case of "optional" -ppx rewriters (e.g. a code 
instrumentation tool which could be applied or not).


An "attribute checker" (either integrated in a generic style-checking 
tool such as Mascot or as a stand-alone tool,  or maybe even as an 
"identity" -ppx so that it is always included in the compilation chain) 
would need some way to know which attributes are allowed and in which 
context (it's fair to let each tool check the advanced conditions on the 
payload and constraints such as nesting conditions).  For instance, each 
tool could come with a small text file describing the attributes it 
recognizes (and in which contexts), maybe also with a rough description 
of admissible payloads and -- why not -- some succinct documentation 
about the attribute.  This information could be useful not only for the 
attribute checker, but potentially by other tools as well (e.g. an IDE).
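
(One possible shape for such a description, sketched here as an OCaml type
rather than as a concrete file format; every field name below is invented:)

   type attribute_spec = {
     attr_name : string;        (* e.g. "protobuf.key" *)
     contexts  : string list;   (* e.g. ["core_type"; "label_declaration"] *)
     payload   : string;        (* informal description, e.g. "integer literal" *)
     doc       : string;        (* a line of documentation, e.g. for an IDE *)
   }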


-- Alain

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Caml-list] [ANN] ppx_protobuf
  2014-05-06  4:29 ` Alain Frisch
  2014-05-06  4:59   ` Peter Zotov
@ 2014-05-06 10:42   ` Malcolm Matalka
  1 sibling, 0 replies; 15+ messages in thread
From: Malcolm Matalka @ 2014-05-06 10:42 UTC (permalink / raw)
  To: Alain Frisch; +Cc: caml-list, Peter Zotov, wg-camlp4


I think making the application of a ppx look more like a functor
application solves the namespacing issue for the most part.
On 6 May 2014 at 06:29, "Alain Frisch" <alain@frisch.fr> wrote:

> On 5/2/2014 4:29 PM, Peter Zotov wrote:
>
>> I have just released the first version of ppx_protobuf, a complete
>> Protocol Buffers implementation.
>>
>
> This is a very cool project, and a good first public use of extension
> points!
>
> An aspect of attributes that is not fully settled is how to use
> namespacing in order to ensure that multiple tools interact nicely.
> This topic will hopefully be explored by the community to reach a good
> consensus.
>
> For instance, ppx_protobuf relies on attributes with quite generic names
> such as @default or @key, that might also be useful to other tools.  It
> might very well be the case that the same @default attribute (with the same
> value) would actually be useful to both ppx_protobuf and another
> deriving-like extension.  This is good, since attributes are not designed
> to be necessarily targeted to only one specific tool.  But in some cases,
> one might want to use a different @default attribute for different tools.
>  What about supporting both a short form @default and a more qualified one
> @protobuf.default?  This should support both situations.
>
> Another point: for record fields, you interpret attributes at the toplevel
> of their type. I did not look precisely at the semantics of ppx_protobuf,
> but it seems that it might be more logical to attach them to the field
> directly (do you confirm?):
>
>   type defaults = {
>      results [@key 1] [@default 10]: int;
>   } [@@protobuf]
>
> I understand that this form is syntactically "more intrusive" in the
> non-decorated type definition.  Is it the reason to use:
>
>   type defaults = {
>      results : int [@key 1] [@default 10];
>   } [@@protobuf]
>
> ?
>
> I don't see anything wrong with doing so, although it might be worth
> supporting both forms.
>
>
> -- Alain
>
> --
> Caml-list mailing list.  Subscription management and archives:
> https://sympa.inria.fr/sympa/arc/caml-list
> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
> Bug reports: http://caml.inria.fr/bin/caml-bugs
>


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2014-05-06 10:42 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-05-02 14:29 [Caml-list] [ANN] ppx_protobuf Peter Zotov
2014-05-03 16:08 ` Malcolm Matalka
2014-05-03 16:24   ` Peter Zotov
2014-05-03 18:46     ` Malcolm Matalka
2014-05-03 18:52       ` Peter Zotov
2014-05-04  4:49         ` Malcolm Matalka
2014-05-04  8:55           ` Peter Zotov
2014-05-04 15:18             ` Malcolm Matalka
2014-05-04 22:21               ` Peter Zotov
2014-05-04 22:38                 ` Daniel Bünzli
2014-05-04 20:34             ` Gerd Stolpmann
2014-05-06  4:29 ` Alain Frisch
2014-05-06  4:59   ` Peter Zotov
2014-05-06  7:33     ` Alain Frisch
2014-05-06 10:42   ` Malcolm Matalka
