How might Sentis be used for physics? - Some afternoon thoughts
After going down a rabbit hole of deep learning in chemistry, I got to wondering how you could use it in physics simulations in general. (Note: this is not a deep-learning model; it is a different use of the API to utilise the GPU.) How could we use an ONNX model not as a deep learning model, but as a way to run simulations on the GPU?
Galaxy Simulation
Here is an N-body simulation running 500 stars with a simple ONNX model run every frame, where the input is an array of positions and the output is an array of accelerations.
Galaxy simulation 500 stars - Unity Sentis
About 70 FPS (NVIDIA 1080).
This model has no trainable weights, so what is the connection to machine learning?
Energy functions
As people probably know, physical systems can be described by an energy function called the Hamiltonian, which is a function of position x and momentum p. For example, the kinetic energy of a single particle is H = p^2/2m. For a system of gravitationally bound particles the energy function is:
H(x,p) = \sum\limits_i \frac{p_i^2}{2m_i} - \sum\limits_{i<j} G\frac{m_i m_j}{|x_i-x_j|}
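As a minimal sketch (my own, not code from the demo), assuming positions x of shape (N, 3), momenta p of shape (N, 3), and masses m of shape (N,), this energy function might look like the following in PyTorch:

```python
import torch

def hamiltonian(x, p, m, G=1.0, eps=1e-6):
    # Kinetic term: sum_i |p_i|^2 / (2 m_i)
    kinetic = (p.pow(2).sum(dim=1) / (2.0 * m)).sum()
    # Pairwise separations and distances |x_i - x_j| (eps avoids the i == j singularity)
    diff = x.unsqueeze(0) - x.unsqueeze(1)            # (N, N, 3)
    dist = diff.norm(dim=2) + eps                     # (N, N)
    # Potential term: -G * sum_{i<j} m_i m_j / |x_i - x_j|
    mm = m.unsqueeze(0) * m.unsqueeze(1)              # (N, N)
    pairs = torch.triu(mm / dist, diagonal=1)         # keep each pair i < j once
    potential = -G * pairs.sum()
    return kinetic + potential
```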
The equations of motion which keep the energy constant over time are:
\frac{\partial x}{\partial t} = \frac{\partial H}{\partial p}
\frac{\partial p}{\partial t} = -\frac{\partial H}{\partial x}
So these are almost, but not quite, the equations of gradient descent: the momentum update carries the familiar minus sign, while the position update does not. That is because we want to keep the energy constant rather than drive it down to zero. We are not trying to train the model; we are just going to use it to run a simulation. The learning rate is now just the timestep, so we want to keep it steady.
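To make the analogy concrete, here is a hedged sketch of a single update step that takes both gradients with torch.autograd rather than by hand; it reuses the `hamiltonian` helper sketched above, and dt plays the role of the learning rate. This is an illustration of the idea, not the code behind the demo.

```python
import torch

def hamiltonian_step(x, p, m, dt=1e-3):
    # One explicit Euler step of Hamilton's equations:
    #   dx/dt = dH/dp,   dp/dt = -dH/dx
    x = x.detach().requires_grad_(True)
    p = p.detach().requires_grad_(True)
    H = hamiltonian(x, p, m)                     # energy of the current state
    dHdx, dHdp = torch.autograd.grad(H, (x, p))
    with torch.no_grad():
        p_new = p - dt * dHdx                    # the gradient-descent-like minus sign
        x_new = x + dt * dHdp                    # plus sign: keeps the energy roughly constant
    return x_new, p_new
```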
We could write our energy function in PyTorch and use a script to export the training ONNX, and when updating the (x, p) pair just remember to use a minus sign when updating p. But because our energy function is so simple, we can just calculate the gradients by hand:
\Delta p_i = \sum\limits_{j \neq i} G\frac{m_i m_j(x_j-x_i)}{|x_i-x_j|^3}
\Delta x_i = p_i/m_i
In other words, update the momentum in the direction towards the other gravitational bodies, and update the position by an amount proportional to the momentum. These are well-known equations for most people doing physics simulations in games.
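As a sketch of how those hand-derived updates can be vectorised (again my own illustration, with a small softening constant eps added as an assumption to keep the i = j term finite):

```python
import torch

def nbody_update(x, p, m, dt=1e-3, G=1.0, eps=1e-2):
    # Pairwise separations x_j - x_i, shape (N, N, 3)
    diff = x.unsqueeze(0) - x.unsqueeze(1)
    # Softened distances |x_i - x_j|, shape (N, N)
    dist = (diff.pow(2).sum(dim=2) + eps ** 2).sqrt()
    # Δp_i = Σ_j G m_i m_j (x_j - x_i) / |x_i - x_j|^3
    mm = (m.unsqueeze(0) * m.unsqueeze(1)).unsqueeze(2)          # (N, N, 1)
    dp = G * (mm * diff / dist.unsqueeze(2).pow(3)).sum(dim=1)   # (N, 3)
    p_new = p + dt * dp
    # Δx_i = p_i / m_i
    x_new = x + dt * p_new / m.unsqueeze(1)
    return x_new, p_new
```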
One question is: what is the best way to utilise the GPU for the top equation, \Delta p? I found creating a torch model quite a nice way of doing it, but perhaps other people might find it easier to write a compute shader or do it some other way.
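For instance, the torch-model route might look something like the sketch below: wrap the acceleration computation in a module with no trainable weights and export it to ONNX for Sentis to run every frame. The class name, the assumption of equal unit masses, and the export arguments are all mine, not the post's actual script.

```python
import torch

class GravityStep(torch.nn.Module):
    """Maps positions (N, 3) to accelerations (N, 3), assuming equal unit masses."""
    def __init__(self, G=1.0, eps=1e-2):
        super().__init__()
        self.G = G
        self.eps = eps

    def forward(self, x):
        diff = x.unsqueeze(0) - x.unsqueeze(1)                    # x_j - x_i
        dist = (diff.pow(2).sum(dim=2) + self.eps ** 2).sqrt()    # softened |x_i - x_j|
        return self.G * (diff / dist.unsqueeze(2).pow(3)).sum(dim=1)

# Trace with a 500-body dummy input; Unity Sentis can then load gravity_step.onnx
# and feed it the current positions each frame to get the accelerations back.
positions = torch.randn(500, 3)
torch.onnx.export(GravityStep(), (positions,), "gravity_step.onnx",
                  input_names=["positions"], output_names=["accelerations"])
```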
Conclusion
So we see here a connection between energy functions in deep learning (which we want to minimise) and energy functions in physics (which we may want to keep constant).
The takeaway is that the following use essentially the same ideas:
physics simulations ↔ training
Whether this turns out to be useful remains to be seen.
P.S. I can get to about 1200 stars at 25 FPS. Can anybody run an N-body simulation with 1000 stars at 60 FPS?