And now it is time to write a Java API over the top of it. That doesn't seem so bad, does it?
Well, yes. Yes, it does.
If you have a C++ entity in memory (an "entity" sits at the top level of the model), I want a Java object that corresponds to that entity and provides some methods. Some of those methods hand you a new Java object that wraps one of the things the entity owns -- say, the metadata that describes the time periods in the model -- and provides its own methods. To change the time metadata, you'd get a copy of the time object, call the methods its Java wrapper provides, then commit it back to the entity object so that the changes to the metadata and the related data are saved in the entity.
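For concreteness, here's roughly the shape I'm picturing. Every class and method name below is made up for illustration, and the native methods stand in for whatever the real JNI bindings into the C++ layer would be:

```java
// Sketch of the hierarchical API I have in mind. All names are hypothetical.
// (TimeMetadata would live in its own source file.)
public final class Entity implements AutoCloseable {
    private final long handle;  // opaque pointer to the C++ entity

    Entity(long handle) { this.handle = handle; }

    /** Hands back a Java wrapper around a fresh copy of the time metadata. */
    public TimeMetadata getTimeMetadata() {
        return new TimeMetadata(copyTimeMetadata(handle));
    }

    /** Commits an edited copy back into the entity so the changes are saved. */
    public void commit(TimeMetadata time) {
        commitTimeMetadata(handle, time.handle);
    }

    @Override public void close() { destroyEntity(handle); }

    private static native long copyTimeMetadata(long entityHandle);
    private static native void commitTimeMetadata(long entityHandle, long timeHandle);
    private static native void destroyEntity(long entityHandle);
}

final class TimeMetadata implements AutoCloseable {
    final long handle;  // opaque pointer to the C++ copy

    TimeMetadata(long handle) { this.handle = handle; }

    public void setPeriodCount(int periods) { setPeriodCount(handle, periods); }

    /** Abandoning your changes is just closing (or dropping) the wrapper. */
    @Override public void close() { destroyTimeMetadata(handle); }

    private static native void setPeriodCount(long timeHandle, int periods);
    private static native void destroyTimeMetadata(long timeHandle);
}
```

Client code then reads the way you'd hope: get the time object from the entity, call its setters, commit it back -- and if you change your mind, you just close the wrapper (or let it go) and the C++ copy goes away with it.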
Fundamentally, this is a lot like a COM interface, in that you open an interface, then open other interfaces that the first interface gives you access to. It's a reasonably well-understood model.
The rest of the group doesn't want to do that. They want one big (and it will be big) interface that allows you to do anything you might ever want to do to the entity. Aside from being, in my opinion, unmanageably large, it also takes all of the bookkeeping that Java would normally do for you and requires that you code it by hand. The C++ layer now has to keep track of the fact that you asked for a new copy of the time object, and if you decide to abandon your changes you have to ask the entity interface to destroy that copy -- instead of just destroying the Java wrapper object or letting it go out of scope.
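By contrast, the one-big-interface design looks something like this -- again, every name is invented, but the shape is the point: the caller holds raw handles and does all of the lifetime bookkeeping by hand:

```java
// Sketch of the flat, do-everything interface. All names are hypothetical.
public final class EntityApi {
    // Every operation on the entity, and on everything the entity owns,
    // hangs off this one class, keyed by raw handles.
    public native long copyTimeMetadata(long entityHandle);
    public native void setTimePeriodCount(long timeHandle, int periods);
    public native void commitTimeMetadata(long entityHandle, long timeHandle);
    public native void destroyTimeMetadata(long timeHandle);
    // ...plus a method here for everything else you might ever want to do.

    // What the caller has to write: nothing ties the C++ copy's lifetime
    // to a Java object, so even abandoning an edit means remembering to
    // tell the C++ layer to destroy the copy.
    public void exampleEdit(long entityHandle) {
        long time = copyTimeMetadata(entityHandle);
        try {
            setTimePeriodCount(time, 12);
            commitTimeMetadata(entityHandle, time);
        } finally {
            destroyTimeMetadata(time);
        }
    }
}
```

Multiply that by every sub-object the entity owns and every place in client code that touches one, and the bookkeeping is everywhere.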
I think the latter approach is a really bad idea.
Am I just wrong? Or is there something that I'm missing that will cause me to see the light?
*sigh*