The server sends back a selection of binary C++ objects. These are then converted to XML and passed back through the Java web service to the COM object, which may (or may not) then massage the XML before passing it back to the C# layer.
This is kind of sort of tolerable for a variety of architectural reasons, none of which I will go into here.
The problem is that we're not writing consistent XML when we issue requests through the COM / Java / C++ / server layers, nor reading it consistently when we convert the binary objects returned by the server into XML. For the most part (not always, but often enough to be annoying), each programmer working on a request that needs to go to the back-end server writes a brand-new XML command, even when we're hitting a back-end call we already hit somewhere else. This is because (again, for the most part) we don't have a common set of C# utility functions to write and submit the XML for us, and because the C++ objects we get back from the server don't know how to render themselves as XML.
And thus, we keep reinventing the wheel.
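To make that concrete, here is the sort of single shared request builder I have in mind, sketched in C++ (in our stack the equivalent would live in the C# utility layer; the <request>/<param> schema, the function names, and the command name are all made up for the example):

    #include <map>
    #include <sstream>
    #include <string>

    // Escape the characters XML reserves so parameter values can't break the document.
    std::string XmlEscape(const std::string& text) {
        std::string out;
        out.reserve(text.size());
        for (char c : text) {
            switch (c) {
                case '&':  out += "&amp;";  break;
                case '<':  out += "&lt;";   break;
                case '>':  out += "&gt;";   break;
                case '"':  out += "&quot;"; break;
                case '\'': out += "&apos;"; break;
                default:   out += c;        break;
            }
        }
        return out;
    }

    // Build the request document for a named back-end command from a flat set of
    // parameters. Every caller that targets the same server call goes through this
    // one function, so the XML it emits is consistent by construction.
    std::string BuildCommandXml(const std::string& command,
                                const std::map<std::string, std::string>& params) {
        std::ostringstream xml;
        xml << "<request command=\"" << XmlEscape(command) << "\">";
        for (const auto& kv : params) {
            xml << "<param name=\"" << XmlEscape(kv.first) << "\">"
                << XmlEscape(kv.second) << "</param>";
        }
        xml << "</request>";
        return xml.str();
    }

The point is just that a request like BuildCommandXml("GetAccount", {{"accountId", "12345"}}) comes out the same no matter which programmer issues it.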
I just started seriously working on this project a few weeks ago and I have spent far too much of my time trying to write utility functions on the C# side and XML renderers on the C++ side. (And occasionally, a C++ constructor that accepts an XML description of the object so that I can create the object from the client-side XML.)
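For the C++ side, the pattern I've been writing looks roughly like this (the class, its fields, and the XML shape are invented for illustration, and real code would use a proper XML parser and escaping rather than naive string searching):

    #include <string>
    #include <utility>

    // Hypothetical server-side object; the real classes are richer, but the pattern
    // is the same: the object knows how to emit its own XML and how to rebuild
    // itself from the XML the client sends.
    class Account {
    public:
        Account(std::string id, std::string owner)
            : id_(std::move(id)), owner_(std::move(owner)) {}

        // Construct the object from the client-side XML description.
        explicit Account(const std::string& xml)
            : id_(ExtractElement(xml, "id")), owner_(ExtractElement(xml, "owner")) {}

        // Render the object as the same XML shape the constructor accepts.
        std::string ToXml() const {
            return "<account><id>" + id_ + "</id><owner>" + owner_ + "</owner></account>";
        }

    private:
        // Naive element extraction, good enough for a sketch.
        static std::string ExtractElement(const std::string& xml, const std::string& tag) {
            const std::string open = "<" + tag + ">";
            const std::string close = "</" + tag + ">";
            const auto start = xml.find(open);
            if (start == std::string::npos) return "";
            const auto begin = start + open.size();
            const auto end = xml.find(close, begin);
            return end == std::string::npos ? "" : xml.substr(begin, end - begin);
        }

        std::string id_;
        std::string owner_;
    };

The appealing property is that ToXml() and the XML-accepting constructor are inverses of each other, so the wire format for the object is defined in exactly one place.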
But it seems like the only sane way to do this.
Unless I'm seriously off base here.
Thoughts?