Memory Usage for NewtonTreeCollisionEndBuild


Memory Usage for NewtonTreeCollisionEndBuild

Postby Adversus » Sat Dec 01, 2012 5:09 pm

After upgrading from core 200 to core 300 (.2), I'm getting a crash due to lack of memory in the function "NewtonTreeCollisionEndBuild(treeCollision, 1);" whenever it tries to optimize the mesh.

I don't think anything is wrong, except that my target device is short on memory. However, it used to work in core 200, so is there a define somewhere I could set to help me out?
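
For context, that call is the last step of the usual tree-collision build sequence; here is a minimal sketch of it, where 'world', 'faceCount', and 'vertices' are placeholders for the application's own data:

Code: Select all
// minimal sketch of the tree-collision build path
NewtonCollision* const treeCollision = NewtonCreateTreeCollision (world, 0);
NewtonTreeCollisionBeginBuild (treeCollision);
for (int i = 0; i < faceCount; i ++) {
   // three vertices per face, stride given in bytes
   NewtonTreeCollisionAddFace (treeCollision, 3, &vertices[i * 9], 3 * sizeof (dFloat), 0);
}
// the second argument enables the optimization pass that allocates the
// intermediate memory in question; this is where the crash occurs
NewtonTreeCollisionEndBuild (treeCollision, 1);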
Adversus
 
Posts: 29
Joined: Tue Oct 12, 2010 8:39 am

Re: Memory Usage for NewtonTreeCollisionEndBuild

Postby Adversus » Sat Dec 01, 2012 5:15 pm

BTW, the mesh I'm using has around 2000 faces, and before calling that function there are 43 MB left.

The world size/scale is huge, and I used to be able to set a min/max for that; however, that setting no longer exists. Maybe that helped before?

It also has nothing to do with my previous issue of running out of stack memory, which I was able to increase.
Adversus
 
Posts: 29
Joined: Tue Oct 12, 2010 8:39 am

Re: Memory Usage for NewtonTreeCollisionEndBuild

Postby Julio Jerez » Sat Dec 01, 2012 5:31 pm

In Newton core 300, the intermediate data type for the collision tree mesh is double, so it uses twice as much memory as core 200 did. However, the final mesh size is about the same as it was in core 200.
2000 faces is a very small number of faces to be running out of memory on a PC; are you using this on a different platform?
You can always load pre-serialized collision trees, which use zero intermediate memory to build.

BTW, the final optimized meshes of core 300 are actually smaller than they were in core 200, because the optimization can now make more rigorous checks when calculating the constrained optimization of the polygon meshes.
This leads to better convex shape approximation. On the other hand, the mesh now also carries explicit connectivity information, which makes the face indices twice as large.
All in all, it is difficult to predict which is smaller in size, but the one thing that is certain is that core 300 meshes are by far better. My guess is that you are running out of memory during the build process.
In my tests on win32 I ran out of memory building a mesh for a heightfield of 1000 x 1000 (two million faces), but it works fine on win64.

I suggest serializing your meshes and preloading them at run time.
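
A minimal sketch of that, assuming the stock NewtonCollisionSerialize / NewtonCreateCollisionFromSerialization entry points (the file name and helper names here are illustrative):

Code: Select all
#include <stdio.h>
#include "Newton.h"

// file callbacks matching NewtonSerializeCallback / NewtonDeserializeCallback
static void SaveData (void* const handle, const void* const buffer, int size)
{
   fwrite (buffer, size, 1, (FILE*) handle);
}

static void LoadData (void* const handle, void* const buffer, int size)
{
   fread (buffer, size, 1, (FILE*) handle);
}

// offline: build the tree once with optimization on, then save it
void SaveTree (NewtonWorld* const world, NewtonCollision* const treeCollision)
{
   FILE* const file = fopen ("level.bin", "wb");
   NewtonCollisionSerialize (world, treeCollision, SaveData, file);
   fclose (file);
}

// run time: load the prebuilt tree, skipping the build step entirely
NewtonCollision* LoadTree (NewtonWorld* const world)
{
   FILE* const file = fopen ("level.bin", "rb");
   NewtonCollision* const collision = NewtonCreateCollisionFromSerialization (world, LoadData, file);
   fclose (file);
   return collision;
}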


In core 300 the worlds are unbounded in size; the end application is responsible for controlling the world size.
This is a good thing, and it was almost the number one request from users.
Julio Jerez
Moderator
 
Posts: 12426
Joined: Sun Sep 14, 2003 2:18 pm
Location: Los Angeles

Re: Memory Usage for NewtonTreeCollisionEndBuild

Postby Adversus » Sat Dec 01, 2012 7:24 pm

Thanks Julio for your quick response.

Yeah, I know I can serialize them, so that's an option, but I was still hoping I could change something so I don't have to for now. I just wanted to get core 300 up and running, as I remember you saying that its broadphase is much faster, and I'm at the point in my port where everything is done and I'm just trying to win back as many FPS as possible. The device I have has only 64 MB total.

Yeah, it crashes the second time it hits the dgPolygonSoupDatabaseBuilder constructor, i.e. in the instances created temporarily during the optimization step, so it might be that.

However, I needed to change two functions in dgTypes, which are called here, to the code below. Is that right? I'm not using multithreading.

Even assuming it takes double the memory of core 200, it still seems strange that it runs out now, because this is the smallest level; the larger levels are probably twice as big and they ran fine.

What do you mean by "build process"; what should I be looking for? Remember that it uses the same project settings as core 200. And how can the "application" control the world size? Is there a function I'm missing?

Code: Select all
DG_INLINE dgInt32 dgAtomicExchangeAndAdd (dgInt32* const addend, dgInt32 amount)
{
   // it is a pity that pthread does not support cross-platform atomics, it would be nice if it did
   #if (defined (_WIN_32_VER) || defined (_WIN_64_VER) || defined (_MINGW_32_VER) || defined (_MINGW_64_VER))
      return _InterlockedExchangeAdd((long*) addend, long (amount));
   #elif (defined (_POSIX_VER) || defined (_MACOSX_VER))
      return __sync_fetch_and_add ((int32_t*)addend, amount);
   #else
      // non-atomic fallback: only safe single-threaded
      dgInt32 temp = *addend;
      *addend += amount;
      return temp;
   #endif
}

DG_INLINE dgInt32 dgInterlockedExchange(dgInt32* const ptr, dgInt32 value)
{
   // it is a pity that pthread does not support cross-platform atomics, it would be nice if it did
   #if (defined (_WIN_32_VER) || defined (_WIN_64_VER) || defined (_MINGW_32_VER) || defined (_MINGW_64_VER))
      return _InterlockedExchange((long*) ptr, value);
   #elif (defined (_POSIX_VER) || defined (_MACOSX_VER))
      __sync_synchronize();
      return __sync_lock_test_and_set((int32_t*)ptr, value);
   #else
      // non-atomic fallback: only safe single-threaded
      dgInt32 temp = *ptr;
      *ptr = value;
      return temp;
   #endif
}
Adversus
 
Posts: 29
Joined: Tue Oct 12, 2010 8:39 am

Re: Memory Usage for NewtonTreeCollisionEndBuild

Postby Julio Jerez » Sat Dec 01, 2012 8:21 pm

If you want to get something going, you can skip optimizing the collision mesh.
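
That is just the second argument of the end-build call, e.g.:

Code: Select all
// pass 0 instead of 1 to skip the constrained optimization pass
NewtonTreeCollisionEndBuild (treeCollision, 0);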
Julio Jerez
Moderator
 
Posts: 12426
Joined: Sun Sep 14, 2003 2:18 pm
Location: Los Angeles

Re: Memory Usage for NewtonTreeCollisionEndBuild

Postby Adversus » Sat Dec 01, 2012 8:53 pm

I just changed dgVertexArray to use dgArray<dgBigVector>(1024 * 32, allocator) (and likewise for the index array), and it works fine now. Instead of using 10 MB per vertex array, it now uses a fraction of that.
Adversus
 
Posts: 29
Joined: Tue Oct 12, 2010 8:39 am

Re: Memory Usage for NewtonTreeCollisionEndBuild

Postby Julio Jerez » Sat Dec 01, 2012 10:34 pm

How much was it using before, or what value did you set? 10 MB sounds like a bug.

Oh, I see, you mean this class:

Code: Select all
   class dgVertexArray: public dgArray<dgBigVector>
   {   
      public:
      dgVertexArray(dgMemoryAllocator* const allocator)
         :dgArray<dgBigVector>(1024 * 256, allocator)
      {
      }
   };

   class dgIndexArray: public dgArray<dgInt32>
   {
      public:
      dgIndexArray(dgMemoryAllocator* const allocator)
         :dgArray<dgInt32>(1024 * 256, allocator)
      {
      }
   };



Yes, that default is too large. I changed it to this:

Code: Select all
   class dgVertexArray: public dgArray<dgBigVector>
   {   
      public:
      dgVertexArray(dgMemoryAllocator* const allocator)
         :dgArray<dgBigVector>(1024 * 32, allocator)
      {
      }
   };

   class dgIndexArray: public dgArray<dgInt32>
   {
      public:
      dgIndexArray(dgMemoryAllocator* const allocator)
         :dgArray<dgInt32>(1024 * 32, allocator)
      {
      }
   };


The problem is that the optimizer uses a few of those simultaneously as temporary variables, and at 10 MB a piece, yes, it could easily go over 64 MB.
But that container grows on demand, so there is no reason to make it that large initially. It can even be smaller than the value you tried.
Thanks for the observation; the fix is checked in now.
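
For scale, assuming dgBigVector is four doubles (32 bytes per entry), the arithmetic works out like this:

Code: Select all
// default capacity:  1024 * 256 entries * 32 bytes = 8 MB per array
// reduced capacity:  1024 * 32  entries * 32 bytes = 1 MB per array
// a few 8 MB temporaries alive at once can easily exhaust a 64 MB device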
Julio Jerez
Moderator
 
Posts: 12426
Joined: Sun Sep 14, 2003 2:18 pm
Location: Los Angeles

