Then if someone comes along later to implement the Dog class,
that person does not take the Cat class and change it so that it acts as both a Cat and a Dog.
Sure, but that's not an example of my problem.
In fact I don't want a dog, I want a cat.
Let's say evolution goes on, and my cat has a slightly different genome than the original cat.
Actually I'm lucky, because your functionality requires no modification.
This is actually a sign that both your functionality, and my new genome, are probably good.
This is the upside, and what actually matters.
Now the downside: I cannot easily implement my new genome, even though it is just a plain and simple modification of data. This is what I personally dislike and often associate with 'OOP' and its problems.
I am well aware that my modification of data violates higher goals as expressed by principles and paradigms.
I know that my modification will break if you make changes.
I know that from the perspective of those higher goals, my way of doing it is wrong.
But I know even more that those higher goals are just wishful thinking.
No matter how good we are at it, or how gracefully we follow SOLID etc.,
our software still does not end up as modular, reusable, and easy to maintain as we had hoped. No?
Thus we should not argue about those things. OOP does not always work well, and nothing else does either.
Actually I'm sorry if I brought you into a situation where you felt a need to defend yourself. That's not my intent.
I just want to provide feedback, so you can add this to your list of potential issues and eventual future improvements.
Anyway, from what I can see, what I get from your explanation is that the class ndShapeStatic_bvh does what you need, but the builder is the problem.
Not quite.
Problems:
* Using the builder is the ONLY way to generate a static BVH at all. But I don't need a builder, because my geometry is already 'built'. My geometry processing pipeline is responsible for outputting this data respecting Newton's low level data structures, for reasons of efficiency, which weigh higher than encapsulation principles.
So there should be a way to access this low level data, plus some documentation about it.
It is expected that this causes the maintenance cost of updating the geometry processing pipeline on breaking changes on Newton's side, which is fine.
* ndShapeStatic_bvh makes assumptions which are only guaranteed to hold within your tight coupling to the builder and its intended use.
The assumption I mean is that adjacency is missing and has to be computed on your side.
That's not necessarily the case. Any preprocessing pipeline must have adjacency information anyway and wants to pass it as input.
Notice: in my case, your adjacency algorithm even fails in special cases.
The reason is a double sided triangle forming the tip of the toe of the Armadillo model.
Your algorithm does not know that one edge should be open to connect to the toe.
It thinks the edge is closed, and that the adjacent face is the other side of the tip instead of the actual toe.
My meshes were cut at this spot, so your algorithm has no way to know about the toe and fails for my case of a segmented 'mega mesh'.
That's just one reason why I need to take care of the geometry in detail and transfer it to Newton precisely.
If this is correct, then you only need to override the builder, this class below
No, not correct.
By replacing the builder, I still have no way to pass adjacency information.
ndShapeStatic_bvh calculates this from scratch no matter what.
I can fix this easily using this:
Code:
D_COLLISION_API ndShapeStatic_bvh(const ndPolygonSoupBuilder& builder, const bool computeAdjacency = true);
That's a little change I can redo on every update, so I'm not requesting that you add it.
That aside, my current hacky solution is close to optimal. I'm happy.
But in the long run, segmented mega meshes might be used more often. (I came up with that term just now, but it feels pretty descriptive.)
We have seen what heightfields and instances of small scale models give us, but we are not really happy with their technical and visual restrictions.
Plus, modeling / scanning all those small scale models is expensive.
Thus, I think what we want is to generate worlds mostly procedurally, but using proper simulations instead of dumb Perlin noise.
Imitating the real world, what we get is a huge, connected mesh. There should be no divisions and no seams.
At this point we must segment our mesh into multiple smaller patches to process it.
That's where I am, and I expect some problems from established legacy standards.
If you follow the idea, you can clearly see the problem: our segment must describe the geometry across the splitting boundary.
That's why I solve this by adding extra normals for those missing faces and linking them on the edges.
So that's a very small change, not affecting things like the acceleration structure.
Ideally I would be able to make those small changes without a need to hack around OOP principles.
This is what I do to get the pointer:
Code:
#if 1 // HACK: access private bvh->m_indices via its known byte offset
ndInt32* indices = *( (ndInt32**) (((uint8_t*)bvh) + 160) );
#else // determine the offset (only compiles if m_indices is made accessible)
ndInt32* indices = bvh->m_indices;
ndInt32** ppindices = &bvh->m_indices;
size_t offset = offsetof(ndShapeStatic_bvh, m_indices); // MSVC debug+release, Clang release: 160
uint8_t* pptr = (uint8_t*)bvh; pptr += offset;
ndInt32* indices2 = *((ndInt32**)pptr);
SystemTools::Log("offset of ndShapeStatic_bvh::m_indices: %zu\nppindices %p == pptr %p\nindices %p == indices2 %p\n",
    offset, (void*)ppindices, (void*)pptr, (void*)indices, (void*)indices2);
#endif
Fu@#, not even offsetof works with private members, since access control applies even to just naming them!
Why the hell?
I hope the folks on the C++ committee just like to torture me.
Otherwise they take their higher goals just a little bit too seriously, imho.
So now you have been warned.
If you don't stop this, I'll add #define private public to my preprocessor definitions and call it a lucky day without further obstacles.
if you decide to do what I said in the previous post, you can save your data using that function,
and then load it with a mesh viewer to see if it is right.
No, no. I can understand, visualize, and verify your related data structures at this point, at least those related to my issue.
Consider it solved for now. This is just meant as feedback.