Status on the Player Controller

A place to discuss everything related to Newton Dynamics.

Moderators: Sascha Willems, walaber

Re: Status on the Player Controller

Postby Julio Jerez » Fri Jan 20, 2012 7:04 am

It may be that I have not done the trigger part for compounds yet. I will take a look.
Julio Jerez
Moderator
Moderator
 
Posts: 12426
Joined: Sun Sep 14, 2003 2:18 pm
Location: Los Angeles

Re: Status on the Player Controller

Postby Spek » Sat Jan 28, 2012 8:23 am

Hey again.

Have you already had a chance to look at that bug, or at the AI demo?

I made the Delphi header for the AI functions so I could see a bit of what it offers. Clean and simple. But one thing: how do you connect an AI agent with a Newton body? Most functions I saw are for constructing a state machine and maintaining agents...

Rick
Spek
 
Posts: 66
Joined: Sat Oct 04, 2008 8:54 am

Re: Status on the Player Controller

Postby Julio Jerez » Sat Jan 28, 2012 11:10 am

Wow, you are fast, excellent. I am going to put a lot of attention on the NewtonAI module.

I am plugging in the functionality now.
I wanted to start with the vehicle, but I realized that I have to add back support that I removed in version 1.5 of Newton: the ability to sub-sample the solver at a variable time step.
This is a powerful feature that is good for continuous collision and complex joint contraptions.

With core 200 I removed that, because the iterative solver does not like it, since the sub time steps are converted into variable high-frequency accelerations. I replaced it with more advanced continuous collision.

Removing that powerful feature forced me to make special joints that try to solve the problem on the joint itself, by trying to predict events with convex casts and ray casts.
For that reason I removed the car joint.

The truth is that accuracy in numerical integration is achieved by reducing the step more than anything else, so I am bringing that functionality back.
It turns out that what I thought was a fundamental problem in the iterative solver was in fact a bug in the way I was averaging the derivatives. I discovered that thanks to Sweenie, who showed me a demo where, even running at the same rate, the iterative solver was yielding a very different response.
I fixed that bug, and now both solvers can take a variable time step, so it is not a technology limitation anymore.
In addition, I have already added the ability to run different joints on different solvers, all selected by the user.

What does this have to do with the car and the player? Well, the car and continuous collision were the two features that were using that functionality of core 100.

So when I started modifying the car to use the new feature, I realized I had not implemented it yet.

I thought maybe the car is too complex, so I would make the player first, but then I realized that the player uses the same support from the engine.

So I said I need to have that functionality first, so that I can continue with the high level stuff.

The problem is that, as opposed to most other functionality of core 300, which is evolved functionality converted from core 200, core 200 does not have that functionality; that functionality is in core 100, and core 100 is gone.

I lost the SourceSafe database with the old computer, and SVN started with late core 200.
So I am implementing the feature again from the beginning.

I started and I made progress, but it is not exposed to the interface yet.

It will be only a few days before I get to the point where it is usable by the joints.

This is why you see the car and player not usable yet.
Julio Jerez
Moderator
Moderator
 
Posts: 12426
Joined: Sun Sep 14, 2003 2:18 pm
Location: Los Angeles

Re: Status on the Player Controller

Postby Spek » Sun Jan 29, 2012 7:49 am

Take your time to fix things :) But do you have an idea already how a Body will be used in combination with an AI agent? Asking this so I can already start coding a few things. I suppose that:
- You create a body, as usual (using a cylinder or something like that)
- Apply a "Player" joint on it? ("CustomPlayerController" class as shown in the demo)
- Create an AI agent and define its state machine
- Somehow link the body and agent
- Let custom callbacks help control the agent/body, and use them to trigger animations, sounds, et cetera

Don't know if a lot will change, but otherwise I'll have to start implementing a PlayerController class as done in the demo.
Spek
 
Posts: 66
Joined: Sat Oct 04, 2008 8:54 am

Re: Status on the Player Controller

Postby Julio Jerez » Sun Jan 29, 2012 11:40 am

This is how it is set up in general.

The NewtonAI is a self-contained world for NewtonAIAgents, very much like the Newton world is a self-contained world for rigid bodies.
The NewtonAI manages only NewtonAI agents, and these agents can be connected to other agents with links, the same way the Newton world connects bodies with joints.
This conception then lets the user establish relations between agents that can be permanent or temporary.
An agent can be anything, for example a particle, a rigid body, a sound, even an input handler.
What gives personality to a NewtonAI agent is the user data assigned to it.

If you look at the folder NewtonTool, I am making a high level manager for the AI where I have started some high level agents, like the car and the player. In the future there will be others, like navigation and path finding, which I will use to demonstrate the player and car demos.

The NewtonAI has an Update function, and it also has a special agent called the GameStateAgent.

The GameStateAgent is where the application codes the general game logic that controls the entire application. The update function for this agent is called first, and then the update functions for all other agents are called in a round-robin way.

Basically the agent system is a graph that connects all the agents, and the game state is always the root node of the graph.

Here is an example of how it can work.

Say you are making a first person shooter game.

You have game elements that are all different, and you need to relate them all together.

The first element is the general game control. This handles input, the camera, and the state of the game, like organizing enemies into groups and so on.

You code that in the callback of the GameStateAgent. That agent can have a pointer to the player agent, which is simply another agent.
It does not know what the player is or how it is supposed to work; it simply sends signals to that agent, if it has one, and hopes the agent obeys the order.
An example of that is when you hit the "pick weapon" key.

The game state will read that key input and, after some processing in its update function, it determines that this key means to tell the player to pick up some weapon.

If the GameStateAgent has a pointer to a player, it will send the signal "pick weapon".
The player may or may not have a way to pick up a weapon.
Say for example this player has a way to pick up a weapon (and we will see how that works later).
The agent will do so if it decides that this is a sensible order.
You can have different players that do different actions in response to the same order:
for player A "pick weapon" means pick up a pistol, for player B it could be pick up a rifle;
a player could be a transport vehicle, and "pick weapon" could mean something else or simply be ignored;
a player could be a weapons guy, and "pick weapon" could cycle between weapons, and so on.
You can even implement a player who is a moron like Gomer Pyle, and "pick weapon" is literally picking a weapon up from the floor.

You can see that decoupling the game logic from what you control already simplifies the logic. This strategy makes it possible to implement the game even if the player, or the players, do not exist yet.
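A rough sketch of what that decoupling could look like in code, reusing the dAI/dAIAgent classes from the header quoted further down in this post. The signal ids, SendSignal and ReadPickWeaponKey are made up for illustration only, since the actual messaging API is not shown in this thread.
Code: Select all
// Illustrative only: a game-state agent that forwards a "pick weapon" order
// to whatever player agent it points to. dAI and dAIAgent come from the SDK
// headers; the signal ids, SendSignal and ReadPickWeaponKey are hypothetical.
enum
{
   SIGNAL_NONE,
   SIGNAL_PICK_WEAPON,
   SIGNAL_JUMP,
};

bool ReadPickWeaponKey ();   // hypothetical input helper provided by the application

class MyGameStateAgent: public dAIAgent
{
   public:
   MyGameStateAgent (dAI* const manager)
      :dAIAgent (manager)
      ,m_player (NULL)
   {
   }

   virtual void Update (dFloat timestep, int threadID)
   {
      // translate raw input into a game-level order; the game state does not
      // know how this particular player picks a weapon, it only sends the
      // order and hopes the player agent obeys
      if (m_player && ReadPickWeaponKey ()) {
         m_player->SendSignal (SIGNAL_PICK_WEAPON);
      }
   }

   dAIAgent* m_player;   // the player agent, if any, that receives the orders
};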

Now let us say we make the game player.

The player is another AI agent that can be linked to a series of sub AI agents.
Remember I mentioned that agents can be connected.

The idea is taken from how the brain of a mammal works. The brain is organized in layers.
Take for example the human brain: there are a few regions that can be identified, like the cerebral cortex, the locomotion system, the immune system, vision, olfactory, etc.

These layers function largely independently, and they solve many of the mundane tasks by themselves, without telling the cerebral cortex what they did. However, in the presence of some event that they do not know how to handle, they ask the cerebral cortex for instructions. The cerebral cortex analyzes the event and may order that part to take some action, or it may even order other systems to assist in responding to that event.

An example of that is the olfactory system.
A mammal does not think about breathing; it does it all the time while it is alive.
However, if a mammal comes across a smell it is not familiar with, the cerebral cortex is immediately notified.
The cerebral cortex analyzes the signal and if, for example, the smell is something it has smelled before and it was food, it orders the locomotion system to head in the direction of the smell. It also orders the olfactory system to focus on that smell so that the locomotion system can get better guidance.
If, for example, the smell is an odor it has no record of, it can order the sensory system to cover the nose, and order all systems to be cautious as they approach in the direction of the event.
A smart cerebral cortex may record that event for future responses.
If the event comes from a predator and there is a prior record of it, it may instruct the locomotion system to run away.
The smell can be some harmful fume that can kill the agent, and it may react to escape, or to protect itself, and so on.
This can extend to the vision system, the locomotion system and all the other systems.

I hope that by now you start to see how a complex agent can be coded. Agents can be programmed to learn, or to be stupid; you can even make them evolve, for example the offspring of an agent can inherit the database of the parent.

The way this is achieved with Newton AI is that the agent represents the brain system, with AI agents that are linked together.
Each one of these sub agents has its own independent states that only respond to that system. They can even have sub AI agents that do complex tasks and report only to the parent AI; an example of that will be the new player.

If you look at the class:
Code: Select all
// the "cerebral cortex" of the player, coordinating its sub agents
class dAIAgentCharacterController: public dAIAgent
{
   // locomotion states handled by the sub agent
   class dLocomotionAgent: public dAIAgent
   {
      public:
      enum
      {
         m_idle,
         m_walk,
         m_strafe,
      };

      dLocomotionAgent (dAI* const manager);
      virtual ~dLocomotionAgent ();
      virtual void Update (dFloat timestep, int threadID);
   };

   public:
   dAIAgentCharacterController (dAI* const manager, NewtonBody* const playerBody);
   virtual ~dAIAgentCharacterController ();
   virtual void Update (dFloat timestep, int threadID);


   // the locomotion sub agent; sensory and other sub systems will be added later
   dLocomotionAgent* m_locomotionSystem;
};

So far it only has a locomotion subsystem, but I will add the sensory system and some other optional ones.
The locomotion system is where a Newton player controller will live.
This system will use a physics joint to make the player walk, run, swim, jump, and do all of the stuff a player does. It is here where all that stuff will be coordinated.

For example, when the class dAIAgentCharacterController, which is the cerebral cortex of the player, tells the player to walk in some direction, the dLocomotionAgent will go about its business walking; as long as it does not get a different instruction, it will keep walking. The dLocomotionAgent will use the physics engine to scan the environment (ray cast, convex cast, collision, etc.) and, as long as the input it gets matches the order it was given by the cerebral cortex, it will continue walking without telling anyone.
If the input it receives indicates an obstacle or something different, like a step, a ramp, a wall, another player, anything, it will notify the cerebral cortex, dAIAgentCharacterController, and wait for instructions.
The cerebral cortex may or may not know what that obstacle is.
If it knows, say for example it is a small object, it may decide to just continue walking, and it may send a signal to that other AI agent to be pushed away.
If the obstacle is a step or a ramp, it will tell the locomotion system to change to a state for that action.
If the dAIAgentCharacterController gets an instruction from another agent (the game state, for example), say the order is "jump", the dAIAgentCharacterController will simply tell the locomotion system to stop what it was doing and do a jump.
The locomotion system will get the signal, and it may act immediately if it is in a condition to do so, or it may bring whatever it was doing to a point where the jump can be executed, or it may simply do nothing if it is not capable of jumping.
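To make that flow concrete, here is a speculative sketch of how those update functions could relay a jump order. Only the class names come from the header above; the signal and state plumbing (m_pendingOrder, GotoState, GetCurrentState, the m_jump state) is invented for illustration.
Code: Select all
// Speculative sketch: the "cerebral cortex" relays a jump order to its
// locomotion sub agent. Only the class names exist in the header above;
// m_pendingOrder, GotoState, GetCurrentState and the m_jump state are
// hypothetical (m_jump would be an extra state added to the enum).
void dAIAgentCharacterController::Update (dFloat timestep, int threadID)
{
   // orders arrive from other agents, the game state for example
   if (m_pendingOrder == SIGNAL_JUMP) {
      // the cortex does not jump by itself; it instructs the locomotion
      // system and lets it decide if and when the jump can be executed
      m_locomotionSystem->GotoState (dLocomotionAgent::m_jump);
      m_pendingOrder = SIGNAL_NONE;
   }
}

void dAIAgentCharacterController::dLocomotionAgent::Update (dFloat timestep, int threadID)
{
   switch (GetCurrentState ()) {
      case m_walk:
         // scan ahead with ray casts and convex casts; keep walking while the
         // environment matches what the cortex asked for, otherwise notify
         // the parent agent and wait for new instructions
         break;

      case m_jump:
         // execute the jump if the body is in a condition to do so, otherwise
         // finish the current action first or simply ignore the order
         break;
   }
}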

Does that give you an idea?
Julio Jerez
Moderator
Moderator
 
Posts: 12426
Joined: Sun Sep 14, 2003 2:18 pm
Location: Los Angeles

Re: Status on the Player Controller

Postby Julio Jerez » Sun Jan 29, 2012 2:24 pm

This AI idea is something I have been thinking about for a long time.
It is not just for games; it is aimed at complex systems in general. In reality there is not much difference between the brain of a mammal, the organization of a corporation, a multiprocessor computer, a government, or any other complex organization.
Basically they are made from different systems that all work together under the direction of a leader or a president, but each system is autonomous.
It is the fact that they can work autonomously that makes them function as a unit.
It is only in video games that game programmers and designers still have the mentality of one unit per game entity.
As for the player and how the controller will work in Newton:
the locomotion system will have a high level model, which will be a skeleton passed in by the application.
The locomotion system will create one AI agent for each of those body limbs.

Say for example the player passes a skeleton that is a simplified human skeleton,
something like two capsules for the legs, one capsule for the body, two more capsules for the arms and one capsule for the head.
Each one of those capsules will work as a player controller, each doing its own collision and passing messages.
So the problem of standing on a ramp, or stepping onto an obstacle, or even going around a corner, is automatically resolved, because each capsule will do what capsules do.
If you wonder how to prevent entanglement: that is the job of the sensory system.
The sensory system can be a big capsule that is doing closest-distance queries and path finding to see when something could present a problem, things like getting too close to a wall, or to another player.
That system will notify those events to the main cerebral cortex, which will send a signal to the locomotion system to do something else, like stop or change direction.
No longer will the player work in a perpetual collision state where a capsule hits a wall and scrapes against it; those systems are error prone and bound to fail.
All it takes is a wall with a relatively complex mesh for the system to either fail or slow down to a crawl.
With this method the sensory system can detect the complexity of the mesh and take action before the player hits the complex mesh.
If you think about it, a human or any other animal does not walk around the world scraping against walls.
If they did that and failed once, they could get hurt; a system can fail in a human, but there are other systems that come to the rescue.
All of this may sound too complex and expensive, but in reality it is not. I will argue it will be cheaper than what everyone does now, which is one function solving everything; I am thinking of many functions solving simple things, and the combination of those simple things makes a complex behavior.
Plus the system is designed to work in parallel, so all those sub systems will be running simultaneously.
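A purely illustrative sketch of that limb-per-agent idea; none of these classes exist in the engine at this point, the code only mirrors the structure described above, reusing the dAIAgent base class from the earlier post.
Code: Select all
// Purely illustrative: each limb capsule gets its own sub agent that resolves
// its local collisions and only escalates events it cannot handle to the
// parent locomotion agent. dLimbAgent and dSkeletonLocomotionAgent are
// hypothetical names, not part of the SDK.
class dLimbAgent: public dAIAgent
{
   public:
   dLimbAgent (dAI* const manager, NewtonBody* const limbCapsule)
      :dAIAgent (manager)
      ,m_capsule (limbCapsule)
   {
   }

   virtual void Update (dFloat timestep, int threadID)
   {
      // handle local contacts (steps, ramps, small obstacles) here and report
      // anything unexpected to the parent agent, the same way a reflex does
      // not bother the cerebral cortex
   }

   NewtonBody* m_capsule;
};

class dSkeletonLocomotionAgent: public dAIAgent
{
   public:
   // one sub agent per capsule of the simplified skeleton:
   // two legs, torso, two arms, head
   dLimbAgent* m_limbs[6];

   // a larger capsule used by the sensory system for closest-distance and
   // path queries, so problems are detected before any limb scrapes a wall
   dLimbAgent* m_sensoryVolume;
};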
Julio Jerez
Moderator
Moderator
 
Posts: 12426
Joined: Sun Sep 14, 2003 2:18 pm
Location: Los Angeles

Re: Status on the Player Controller

Postby Spek » Sun Jan 29, 2012 3:28 pm

It takes a little while before I can absorb the text, I'm not a fast learner :) But let me make an attempt to summarize things, to see if I get it a bit:

* AgentWorld
Basically the manager of all (active) AI entities, whatever they are. Responsible for updating them, calling their callbacks, et cetera.

* Agents / sub Agents
Not sure why we would want to link agents right away, though you can think of a complex system as a set of sub-systems. A car for example could have the "driver", "climate control", "cruise control", et cetera. If done well, it allows you to code independent, reusable modules in a clean way.

In my typical case, a split-up might be "pathfinding", "sensory", "movement", and "decision making", which basically tells where we want to go, depending on a substate ("idle", "sleep", "combat", "searching", ...).

* Game State
Top level states. For a common game example, that could be "Main menu", "Playing", "Paused", "Using inventory" or "Control vehicle".



Since the AI functions are pretty abstract, you can make it as simple or as complex as you like. The magic trick is always to find the right balance in between. I agree that making one huge function that controls the entire player is problematic when it comes to solving bugs and reusing it for another AI thing that works slightly differently. Having dozens of mini-subsystems makes the whole thing complex too, as you need to understand the relations and responsibilities between all of them. But choosing the right strategy is something no engine can fix with pre-baked functions :)


In my case, there are already state machines that work in pretty much the same way, thus a tree of state machines (global game status --> playing --> normal / driving / ..., and so on). The most interesting part will be the combination of an agent triggered by sensing the Newton world (collision, raycast, trigger volumes, et cetera), or what you call "locomotion". Also, sensing the environment with (convex) raycast checks is important.

I wonder, does Newton itself provide a "player" that does the environment sensing? So far I used some raycasts to detect stairs and such, but doing it right is difficult! Then again, I understand you can't just make a universal "player" class. Super Mario is a different business than controlling a giant with 4 legs.

Thanks for the explanation!
Spek
 
Posts: 66
Joined: Sat Oct 04, 2008 8:54 am

Re: Status on the Player Controller

Postby Julio Jerez » Mon Jan 30, 2012 2:38 pm

Spek wrote:In my case, there are already state machines that work in pretty much the same way, thus a tree of state machines (global game status --> playing --> normal / driving / ..., and so on). The most interesting part will be the combination of an agent triggered by sensing the Newton world (collision, raycast, trigger volumes, et cetera), or what you call "locomotion". Also, sensing the environment with (convex) raycast checks is important.

Yes, a state machine is very much a standard feature of any complex application,
be it a math app, a physics engine, a business app, a game or even a word processor.
Believe it or not, a matrix is also a state machine.
Basically a state machine is a graph with a pointer that indicates the current node.
However, do not underestimate the implementation details. We have not talked about the functionality of the AI engine yet.
AI agents and states are simply the atomic blocks upon which the whole AI system is built.
But the engine will have core functionality that will provide stuff like paths, navigation, and scripting.
This is where the ability to connect AI agents to other agents becomes important.

Similar to collision trees, the engine will have functions to build navigation maps.
There will be navigation map types, similar to the collision shapes in the physics engine.

Say for example you already have your player and NPCs represented as AI agents.
The next step is the creation of a navigation map.
Let us say we will use a polygonal mesh for that.
Basically you will pass the faces of a mesh to a NewtonMeshEffect, you will add IDs to the faces,
and then you will call the AI engine to build a navigation map object.

This navigation map will be similar to a collision tree: basically, inside, it will hold the map as a set of sectors, each one being an AI agent.

Now, when a gameplay AI agent is placed in the world, it will be connected to the AI agent of the sector it is in.
This feature automatically connects all AI agents, because any AI agent will be connected to at least one navigation agent.
Polygonal maps are just one type of map; you can have paths made of lines, heightfields, or anything you want, and they will all work under the same interface.
As you can see, the skeleton of the Newton AI engine is very similar to the skeleton of the physics engine.

The engine will also use auto sleep to manage AI agent updates, in the same way it uses auto sleep for physics objects.

Navigation agents that are active get updates, so physics triggers are no longer necessary; I never liked that idea anyway. An active navigation agent can run a script of any kind, LUA can be a candidate.
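A hypothetical sketch of that navigation-map workflow, just to visualize the steps above. The mesh and navigation functions named here (NewtonMeshEffectCreate, NewtonMeshEffectAddFace, CreateNavigationMap, dAINavigationMap) are placeholders, since the actual interface did not exist yet at the time of this post.
Code: Select all
// Hypothetical sketch of the navigation map workflow described above.
// All mesh/navigation calls are placeholder names, not real API.
dAINavigationMap* BuildNavigationMap (dAI* const ai, const dFloat* const vertices,
   const int* const triangles, int triangleCount)
{
   // 1. feed the walkable faces of the level mesh to a mesh effect object,
   //    tagging every face with an id so it can be grouped into a sector
   NewtonMeshEffect* const mesh = NewtonMeshEffectCreate ();
   for (int i = 0; i < triangleCount; i ++) {
      dFloat face[3][3];
      for (int j = 0; j < 3; j ++) {
         const dFloat* const point = &vertices[triangles[i * 3 + j] * 3];
         face[j][0] = point[0];
         face[j][1] = point[1];
         face[j][2] = point[2];
      }
      NewtonMeshEffectAddFace (mesh, 3, &face[0][0], i);
   }

   // 2. ask the AI engine to bake the mesh into a navigation map; every
   //    sector inside the map is itself an AI agent
   dAINavigationMap* const map = ai->CreateNavigationMap (mesh);

   // 3. gameplay agents placed in the world get linked to the sector agent
   //    they stand in, so every agent is reachable through the agent graph
   return map;
}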
But in the future there is also Newton Script, which is an object oriented language that combines the best of Java and C# and discards what is bad.
But do not be scared, all of this comes after the player controller is up and running.
The goal is to make Newton Physics and Newton AI the best friends of the independent game programmer.
Julio Jerez
Moderator
Moderator
 
Posts: 12426
Joined: Sun Sep 14, 2003 2:18 pm
Location: Los Angeles

Re: Status on the Player Controller

Postby Spek » Thu Feb 02, 2012 7:07 am

That's a noble aspiration :) I probably won't use the Newton state machines throughout the application, as I would have to rebuild a lot then. But it's a good start for new programs, as it forces you into thinking the right way. Especially game logic and AI programming quickly become a gigantic mess with thousands of conditions if not done properly.



---edit---
Another weird thing I just found while trying to port the player controller code: NewtonWorldConvexCast doesn't find anything (I tried different colliders and target positions), and the filter callback is never called either. When running the same thing with Newton 2.33 it works though...

---edit2---
Got the playerController fully ported & working in Delphi now (hurray!), but in Newton 2. When using Newton 3, the issues from above play a role:
- ConvexCast does not hit anything (thus you're always falling)
- Can't create a custom joint with the parent set to NULL (NewtonConstraintCreateUserJoint (... aParent))
- Couldn't compile the new dJointLibrary DLL to use the Newton 3(?) character controller; VS2008 gave some errors when trying to build the project...
Spek
 
Posts: 66
Joined: Sat Oct 04, 2008 8:54 am

Re: Status on the Player Controller

Postby Sweenie » Tue Feb 28, 2012 6:50 pm

Regarding ConvexCast in 3.00.
Just like you, I couldn't get it to work; I tried for a long time and couldn't understand what was wrong... until I decided to debug into the Newton source...
Code: Select all
dgInt32 dgBroadPhaseCollision::ConvexCast (dgCollision* const shape, const dgMatrix& p0, const dgVector& p1, dgFloat32& timetoImpact, OnRayPrecastAction prefilter, void* const userData, dgConvexCastReturnInfo* const info, dgInt32 maxContacts, dgInt32 threadIndex) const
{
   _ASSERTE (0);
   return 0;
}

Now that is some fast convex cast code. :lol:
Unfortunately not very accurate though. :mrgreen:
Sweenie
 
Posts: 503
Joined: Mon Jan 24, 2005 7:59 am
Location: Sweden

Re: Status on the Player Controller

Postby Spek » Tue Feb 28, 2012 7:02 pm

That's revolutionary code indeed :D
Still using Newton2 for now, so I can happily run (and jump) around with my player controller
Spek
 
Posts: 66
Joined: Sat Oct 04, 2008 8:54 am

Re: Status on the Player Controller

Postby Julio Jerez » Wed Feb 29, 2012 10:08 am

Sweenie wrote:Regarding ConvexCast in 3.00.
Code: Select all
maxContacts, dgInt32 threadIndex) const
{
   _ASSERTE (0);
   return 0;
}
Now that is some fast convex cast code. :lol:
Unfortunately not very accurate though. :mrgreen:


Yes, that is really fast; now you see why I could not complete the player and the car.

I will complete the raycast car first, before I convert and port the convex cast from 2.00.
Julio Jerez
Moderator
Moderator
 
Posts: 12426
Joined: Sun Sep 14, 2003 2:18 pm
Location: Los Angeles

Re: Status on the Player Controller

Postby pHySiQuE » Mon Mar 05, 2012 6:48 pm

I have a player controller with crouching, jumping, and an adjustable max angle they won't slide off of. I am converting the code to C++, but after I do I will post it here for you guys.
pHySiQuE
 
Posts: 608
Joined: Fri Sep 02, 2011 9:54 pm

Re: Status on the Player Controller

Postby shybovycha » Wed Mar 07, 2012 6:55 am

pHySiQuE wrote:I have a player controller with crouching, jumping, and an adjustable max angle they won't slide off of. I am converting the code to C++, but after I do I will post it here for you guys.


Waiting impatiently =)
shybovycha
 
Posts: 52
Joined: Fri Oct 23, 2009 6:15 am
Location: Poland

Re: Status on the Player Controller

Postby Spek » Wed Mar 07, 2012 11:16 am

Talking about players, do you guys have problems with stair climbing? Either I made a little bug somewhere while converting the C++ PlayerControllerJoint code to Delphi, or...

In the attachment you see the stair I'm talking about. I can climb it, but then suddenly somewhere halfway I get stuck. When rotating or moving backward and forward a bit, the bastard proceeds. But that's a bit annoying of course. Maybe the physical (collision tree) shape is a difficulty here? The stair steps are quite high, but not higher than the "maxStepHeight" setting.
Attachments
newtonStair.jpg
newtonStair.jpg (37.11 KiB) Viewed 3296 times
Spek
 
Posts: 66
Joined: Sat Oct 04, 2008 8:54 am
