How can I perform a deterministic physics simulation?
I'm creating a physics game involving rigid bodies in which players move pieces and parts to solve a puzzle. A hugely important aspect of the game is that when players start a simulation, it runs the same everywhere, regardless of their operating system, processor, etc.
There is room for a lot of complexity, and simulations may run for a long time, so it's important that the physics engine is completely deterministic with regard to its floating-point operations; otherwise a solution may appear to "solve" on one player's machine and "fail" on another.
How can I achieve this determinism in my game? I am willing to use a variety of frameworks and languages, including JavaScript, C++, Java, Python, and C#.
I have been tempted by Box2D (C++) as well as its equivalents in other languages, as it seems to meet my needs, but it lacks floating-point determinism, particularly with trigonometric functions.
The best option I've seen thus far has been Box2D's Java equivalent (JBox2D). It appears to make an attempt at floating-point determinism by using StrictMath rather than Math for many operations, but it's unclear whether this engine will guarantee everything I need, as I haven't built the game yet.
Is it possible to use or modify an existing engine to suit my needs? Or will I need to build an engine on my own?
EDIT: I'll further explain how the game is supposed to work. The player is given a puzzle or level, which contains obstacles and a goal. Initially, a simulation is not running. They can then use pieces or tools provided to them to build a machine. Once they press start, the simulation begins and they can no longer edit their machine. If the machine solves the puzzle, the player has beaten the level. Otherwise, they will have to press stop, alter their machine, and try again.
Tags: physics, physics-engine, floating-point
Is there any way you can provide more detail on how this game is supposed to work? If you need all of the objects to fall into the EXACT same place at the beginning of a puzzle, you could bake the start of the simulation. That way, all of the objects play a pre-determined animation to get them into their initial positions. Then, when you want the player to interact with the objects in a non-deterministic way, just turn on the physics simulation for those objects.
– JPSmithpg, 7 hours ago

I've edited the OP to further explain how the game is supposed to work. The challenge isn't getting everything in the right place when the user first sees the level; it's making sure that when the user presses start, each simulation (given the exact position and orientation of every piece of the machine) produces the same result across all platforms.
– jvn91173, 7 hours ago

You have not set yourself an easy task if you want to target many different CPUs and want to be able to publish updates to your game without changing any past version's results. Before we get too deep into this, it's worth investigating whether there's anything you can do at the game design level to simplify the problem. For instance, can you snap puzzle piece positions/orientations to a coarser resolution than raw floating point, restrict the vocabulary of the moving parts/interactions, or add rounding/coalescing steps?
– DMGregory♦, 7 hours ago

@DMGregory Snapping positions and/or rounding is an interesting thought. However, isn't this still subject to the same struggles I faced before? For example, if sine on architecture 1 produces 0.49999999, and sine on architecture 2 produces 0.50000001, don't we have a problem once we round to the nearest unit?
– jvn91173, 7 hours ago

I'd strongly recommend keeping transcendentals out of your inner loop if you can. The rounding suggestion was more about limiting accumulation of drift. If two simulations are going to differ, you want them to do it early and obviously, so you can spot the error (e.g. in unit tests) and identify the exact simulation step that you need to correct, rather than letting small differences accumulate quietly in the lowest bits, where they usually don't produce a visible deviation (until one player finds a situation 10,000 steps later where they do, and who knows where the initial divergence happened).
– DMGregory♦, 6 hours ago
asked 8 hours ago by jvn91173 (new contributor), edited 7 hours ago
3 Answers
I will end up telling you to use fixed-point numbers. However, this is a ride I am taking you on.
Floating point is deterministic. Well, it should be. It is complicated.
There is plenty of literature on floating point numbers:
- What Every Computer Scientist Should Know About Floating-Point Arithmetic
- THE NEW IEEE-754 STANDARD FOR FLOATING POINT ARITHMETIC
- IEEE 754-2008 revision
And how they are problematic:
- Consistency of Floating Point Results or Why doesn’t my application always give the same answer?
- Cross-Platform Issues With Floating-Point Arithmetics in C++
- The pitfalls of verifying floating-point computations
- Is floating point math deterministic?
- What could cause a deterministic process to generate floating point errors
- Consistency: how to defeat the purpose of IEEE floating point
In short: at least on a single thread, the same operations, on the same data, happening in the same order, should be deterministic. Thus, we can start by worrying about inputs and reordering.
One such input that causes problems is time.
First of all, you should always compute with the same timestep. I am not saying not to measure time; I am saying that you should not pass measured time to the physics simulation, because variations in time are a source of noise in the simulation.
Why measure time at all, if you are not passing it to the physics simulation? You want to measure the elapsed time to know when a simulation step should be run, and - assuming you are using sleep - how long to sleep.
Thus:
- Measure time: Yes
- Use time in simulation: No
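A sketch of that idea - the classic fixed-timestep accumulator loop (function and variable names here are my own, not from any particular engine):

```python
FIXED_DT = 1.0 / 60.0  # the simulation always advances by exactly this step


def simulate_step(state, dt):
    # Deterministic integration: dt is a constant, never a measurement.
    state["x"] += state["v"] * dt
    return state


def run(state, frame_time, accumulator):
    """Feed measured wall-clock time into the accumulator only;
    the physics itself always sees the same FIXED_DT."""
    accumulator += frame_time
    while accumulator >= FIXED_DT:
        state = simulate_step(state, FIXED_DT)
        accumulator -= FIXED_DT
    return state, accumulator
```

Measured time only decides *how many* fixed steps to take this frame; the steps themselves are identical on every machine.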
Now, instruction reordering. There are two sources of reordering: 1) the CPU...
It is possible to ensure that instructions happen in the same order on each CPU core, up until the point where the cores need to interact. The problem is as follows: if a CPU core needs to execute an instruction that requires a value that has to be retrieved from the cache of another core, it is likely to postpone that instruction in favor of one that uses values already in its own cache, while it waits for the value from the other core.
Physics has plenty of opportunities for parallelism. However, if you want to avoid these problems - and keep the code easy to follow anyway - make your physics single-threaded.
Addendum: this reminds me, you want to minimize trips to RAM. They have a similar effect on instruction order and a bigger effect on performance.
Thus:
- Single threaded: Yes
- Optimize for CPU cache: Yes
... and 2) the compiler.
It could decide that f * a + b is the same as b + f * a, which may nevertheless produce a different result. It could also compile the expression to fmadd, or take multiple lines like that that happen together and rewrite them with SIMD, or apply some other optimization I cannot think of right now. Remember, we want the same operations to happen in the same order, so it stands to reason that we want to control which operations happen.
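A tiny illustration of why reordering matters: floating-point addition is not associative, so a compiler (or a different platform) that regroups the same expression can produce a different value.

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # sums in one grouping
right = a + (b + c)  # mathematically identical, different rounding

print(left == right)  # False: 0.6000000000000001 vs 0.6
```

Both orderings are "correct" under IEEE 754; they are simply different sequences of roundings, which is exactly what a deterministic simulation cannot tolerate.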
And no, using double will not save you.
You need to worry about the compiler and its configuration, in particular if you want to synchronize floating-point results across the network. You need to get all the builds to agree to do the same thing.
Arguably, writing assembly would be ideal; that way you decide exactly which operations happen. However, that could be a problem for supporting multiple platforms.
Thus:
- Ern... Hmm... Use a compiler that lets you configure how it deals with floating-point numbers. For example, see /fp (Specify floating-point behavior).
That brings me to a different idea: keep your values small and truncate the results.
Due to the way floats are represented in memory, large values lose precision. It stands to reason that keeping your values small (clamp them) mitigates the problem. Thus, no huge speeds and no large rooms. Which also means you can use discrete physics, because there is less risk of tunneling.
On the other hand, small errors will accumulate. So, truncate. I mean, scale and cast to an integer type. That way you know nothing is building up. There will be operations you can do while staying in the integer type. When you need to go back to floating point, you cast back and unscale.
Note I say scale. The idea is that 1 unit will actually be represented as a power of two (16384, for example). Whatever it is, make it a constant and use it. You are basically using it as a fixed-point number. In fact, if you can use proper fixed-point numbers from some reliable library, so much the better.
I also say truncate. Because of the rounding problem, you cannot trust the last bit of whatever value you get after the cast. So, before the cast, scale to get one bit more than you need, and truncate it afterwards.
Thus:
- Keep values small: Yes
- Careful rounding: Yes
- Fixed point numbers when possible: Yes
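A minimal sketch of the scale-and-truncate idea (the Q14 scale and the function names are my own assumptions, not from any particular engine):

```python
SCALE = 1 << 14          # 1 unit == 16384, a power of two, kept as a constant


def to_fixed(x: float) -> int:
    # Scale with one extra guard bit, then truncate it away, so the
    # untrusted last bit of the float conversion never survives.
    return int(x * SCALE * 2) >> 1


def to_float(q: int) -> float:
    return q / SCALE


def fixed_mul(a: int, b: int) -> int:
    # Renormalize after multiplying two Q14 values; pure integer math.
    return (a * b) >> 14


pos = to_fixed(1.5)       # 24576
vel = to_fixed(0.25)      # 4096
pos += vel                # integer add: exact and deterministic
```

Once the values are integers, every addition and multiplication is exact, so there is nothing platform-dependent left to drift.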
Wait, why do you need floating point at all? Could you not work only with an integer type? Oh, right: trigonometry and roots. You can compute tables for trigonometry and roots and bake them into your source. Or you can implement the algorithms used to compute them, except with fixed-point numbers instead of floating-point ones. Yes, you need to balance memory, performance, and precision. Yet you can stay out of floating-point numbers, and stay deterministic.
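For instance, a baked sine table in the same fixed-point format. This is only a sketch: in a real game you would generate the integer constants once, offline, and commit them to source, rather than calling math.sin at runtime.

```python
import math

SCALE = 1 << 14   # Q14 fixed point: 16384 == 1.0
TABLE_SIZE = 256  # one full turn split into 256 steps

# Generated once, offline; the resulting integer constants are what you ship.
SIN_TABLE = [int(round(math.sin(2 * math.pi * i / TABLE_SIZE) * SCALE))
             for i in range(TABLE_SIZE)]


def fixed_sin(angle: int) -> int:
    """Sine of an angle expressed in table steps (256 steps == 360 degrees).
    Integer in, integer out: identical on every platform."""
    return SIN_TABLE[angle % TABLE_SIZE]


fixed_sin(64)  # 90 degrees -> 16384, i.e. 1.0 in Q14
```

Angles become integers too (table steps instead of radians), which removes the last transcendental from the inner loop.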
Did you know they did stuff like that for the original PlayStation? Please Meet My Dog, Patches.
By the way, I am not saying not to use floating point for graphics - just not for the physics. Sure, the positions will depend on the physics; however, as you know, a collider does not have to match a model, and we do not want the results of truncation to be visible in the models.
Thus: USE FIXED POINT NUMBERS.
See also:
- 3D Graphics on Mobile Devices - Part 2: Fixed Point Math
- The neglected art of Fixed Point arithmetic
- CORDIC Algorithm Simulation Code
Use double floating-point precision instead of single floating-point precision. Although not perfect, it is accurate enough to be deemed deterministic for your physics.
If you truly need perfect determinism, use fixed-point math. This will give you less precision, but deterministic results. I am not aware of any physics engines that use fixed-point math, so you may need to write your own if you want to go this route. (Something I would advise against.)
I recommend my other answer over this answer: gamedev.stackexchange.com/a/174324/41345
– Evorlor, 5 hours ago

The double-precision approach runs afoul of the butterfly effect. In a dynamical system (like a physics sim), even a tiny deviation in initial conditions can amplify through feedback, snowballing up to a perceptible error. All the extra digits do is delay this a little longer - forcing the snowball to roll a bit further before it gets big enough to cause problems.
– DMGregory♦, 2 hours ago
Use the Memento Pattern.
In your initial run, save off the positional data each frame, or whatever benchmarks you need. If that is too costly for performance, only do it every n frames.
Then, when you reproduce the simulation, follow the arbitrary physics, but overwrite the positional data every n frames.
Overly simplified pseudo-code:
function Update():
    if (firstRun) then SaveData(frame, position)
    else if (reproducedRun) then position = GetData(frame)
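A slightly fuller sketch of that record/replay idea (all names hypothetical; note this only helps when an authoritative run exists ahead of time):

```python
class SimulationMemento:
    """Record authoritative positions every n frames on the first run,
    and snap a replayed simulation back to them at the same frames."""

    def __init__(self, keyframe_interval: int = 10):
        self.keyframe_interval = keyframe_interval
        self.snapshots: dict[int, float] = {}

    def record(self, frame: int, position: float) -> None:
        # First run: store a keyframe every n frames.
        if frame % self.keyframe_interval == 0:
            self.snapshots[frame] = position

    def restore(self, frame: int, position: float) -> float:
        # Replay: keyframes override whatever the local physics produced;
        # between keyframes, the local (possibly drifting) value stands.
        return self.snapshots.get(frame, position)
```

Between keyframes the local simulation may drift, but each keyframe pulls it back onto the recorded trajectory.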
I don't think this works for OP's case. Let's say you and I are both playing the game on different systems. Each of us places the puzzle pieces in the same way - a solution that was not predicted in advance by the developer. When you click "start," your PC simulates the physics such that the solution is successful. When I do the same, some small difference in the simulation leads to my (identical) solution not being graded as successful. Here, I don't have the opportunity to consult the memento from your successful run, because it happened on your machine, not at dev time.
– DMGregory♦, 2 hours ago

@DMGregory That's correct. Thank you.
– jvn91173, 2 hours ago
3 Answers
$begingroup$
I will end up telling you to use fixed-point numbers. However, this is a ride I am taking you on.
Floating point is deterministic. Well, it should be. It is complicated.
There is plenty of literature on floating point numbers:
- What Every Computer Scientist Should Know About Floating-Point Arithmetic
- THE NEW IEEE-754 STANDARD FOR FLOATING POINT ARITHMETIC
- IEEE 754-2008 revision
And how they are problematic:
Consistency of Floating Point Results or Why doesn’t my application always give the same answer?.
Cross-Platform Issues With Floating-Point Arithmetics in C++.
The pitfalls of verifying floating-point computations.
Is floating point math deterministic?.
What could cause a deterministic process to generate floating point errors.
Consistency: how to defeat the purpose of IEEE floating point.
In short: at least on a single thread, the same operations, with the same data, happening in the same order, should be deterministic. Thus, we can start by worrying about inputs and reordering.
One such input that causes problems is time.
First of all, you should always use the same fixed timestep. I am not saying not to measure time; I am saying that you should not pass measured time to the physics simulation, because variations in time are a source of noise in the simulation.
Why measure time if you are not passing it to the physics simulation? You measure elapsed time to know when a simulation step should run, and - assuming you are using sleep - how long to sleep.
Thus:
- Measure time: Yes
- Use time in simulation: No
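A minimal sketch of that fixed-timestep accumulator pattern (the names advance and simulate_step are mine, not from the answer):

```python
DT = 1.0 / 60.0  # fixed simulation timestep; never derived from measured time

def advance(accumulator, frame_time, simulate_step):
    """Consume a measured frame_time in fixed DT-sized steps.

    Returns the leftover accumulator. The simulation callback only ever
    receives the constant DT, so jittery frame times cannot leak noise
    into the physics.
    """
    accumulator += frame_time
    while accumulator >= DT:
        simulate_step(DT)
        accumulator -= DT
    return accumulator

# Jittery measured frame times still produce identical DT-sized steps.
steps = []
acc = 0.0
for frame_time in (0.017, 0.040, 0.005):
    acc = advance(acc, frame_time, steps.append)
assert all(step == DT for step in steps)
```

Real time decides only *how many* steps to run; the step size itself never varies.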
Now, instruction reordering. There are two sources of reordering: 1) the CPU...
It is possible to ensure that instructions happen in the same order in each CPU core, up until the point where the cores need to interact. The problem is as follows: if a CPU core needs to execute an instruction that requires a value retrieved from the cache of another core, it is likely to postpone that instruction in favor of one that uses values in its own cache while it waits for the value from the other core.
Physics has plenty of opportunities for parallelism. However, if you want to avoid these problems - and keep the code easy to follow anyway - have your physics be single threaded.
Addendum: this reminds me, you want to minimize trips to RAM. They have a similar effect on instruction order and a bigger effect on performance.
Thus:
- Single threaded: Yes
- Optimize for CPU cache: Yes
... and 2) the compiler.
It could decide that f * a + b is the same as b + f * a; however, those may give different results. It could also compile it to fmadd, or decide to take multiple lines like that which happen together and rewrite them with SIMD, or apply some other optimization I cannot think of right now. Remember that we want the same operations to happen in the same order; it stands to reason that we want to control which operations happen.
And no, using double will not save you.
You need to worry about the compiler and its configuration, in particular if you want to synchronize floating-point results across the network. You need to get the builds to agree to do the same thing.
Arguably, writing assembly would be ideal: that way you decide exactly which operations to perform. However, that could be a problem for supporting multiple platforms.
Thus:
- Ern... Hmm... Use a compiler that lets you configure how it deals with floating-point numbers. For example, see /fp (Specify floating-point behavior).
That brings me to a different idea: keep your values small and truncate the results.
Due to the way floats are represented in memory, large values lose precision. It stands to reason that keeping your values small (clamping them) mitigates the problem. Thus, no huge speeds and no large rooms. That also means you can use discrete physics, because there is less risk of tunneling.
On the other hand, small errors will accumulate. So, truncate. I mean: scale and cast to an integer type. That way you know nothing is building up. There will be operations you can do while staying in the integer type. When you need to go back to floating point, you cast and unscale.
Note I say scale. The idea is that 1 unit will actually be represented as a power of two (16384, for example). Whatever it is, make it a constant and use it. You are basically using it as a fixed-point number. In fact, if you can use proper fixed-point numbers from a reliable library, so much the better.
I also say truncate. Regarding the rounding problem: it means you cannot trust the last bit of whatever value you got after the cast. So, before the cast, scale to get one bit more than you need, and truncate it afterwards.
Thus:
- Keep values small: Yes
- Careful rounding: Yes
- Fixed point numbers when possible: Yes
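A sketch of that scale-and-truncate idea with a power-of-two scale (the constant SCALE and the helper names are my choices, not from the answer):

```python
SCALE = 16384  # 1 unit == 16384, a power of two, as suggested above

def to_fixed(x):
    # Scale and truncate toward zero; from here on, only integer math.
    return int(x * SCALE)

def from_fixed(q):
    return q / SCALE

def fixed_mul(a, b):
    # The product of two scaled values carries SCALE twice; divide one out.
    return (a * b) // SCALE

# Integer operations stay exact and platform-independent:
half = to_fixed(0.5)
three = to_fixed(3.0)
assert from_fixed(fixed_mul(half, three)) == 1.5
```

In a language with sized integers you would also pick a type wide enough that the intermediate product a * b cannot overflow.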
Wait, why do you need floating point at all? Could you not work only with an integer type? Oh, right: trigonometry and root extraction. You can compute tables for trigonometry and roots and bake them into your source. Or you can take the algorithms used to compute them with floating-point numbers and implement them with fixed-point numbers instead. Yes, you need to balance memory, performance, and precision. Yet you could stay out of floating-point numbers, and stay deterministic.
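One hedged way to bake such a table: a coarse sine table in the same fixed-point format (the table resolution and the names are my choices for illustration):

```python
import math

SCALE = 16384
TABLE_SIZE = 1024  # the whole circle divided into 1024 angle steps

# Baked once (at build time in practice); every platform then reads the
# same integers instead of calling libm at runtime.
SINE_TABLE = [int(math.sin(2 * math.pi * i / TABLE_SIZE) * SCALE)
              for i in range(TABLE_SIZE)]

def fixed_sin(angle_index):
    # angle_index is an integer count of 1/1024-turn steps
    return SINE_TABLE[angle_index % TABLE_SIZE]

assert fixed_sin(256) == SCALE   # a quarter turn: sin(90 deg) == 1.0
assert fixed_sin(0) == 0
```

The table trades memory for determinism; CORDIC (linked below) is the classic alternative when memory is tight.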
Did you know they did stuff like that for the original PlayStation? Please Meet My Dog, Patches.
By the way, I am not saying not to use floating point for graphics, just not for the physics. Sure, the positions will depend on the physics. However, as you know, a collider does not have to match a model, and we do not want to see the results of truncation on the models.
Thus: USE FIXED POINT NUMBERS.
See also:
- 3D Graphics on Mobile Devices - Part 2: Fixed Point Math
- The neglected art of Fixed Point arithmetic
- CORDIC Algorithm Simulation Code
$endgroup$
edited 27 mins ago
answered 1 hour ago
Theraot
7,489 • 3 gold badges • 19 silver badges • 28 bronze badges
$begingroup$
Use double floating-point precision instead of single floating-point precision. Although not perfect, it is accurate enough to be deemed deterministic in your physics.
If you truly need perfect determinism, use fixed-point math. This will give you less precision, but deterministic results. I am not aware of any physics engines that use fixed-point math, so you may need to write your own if you want to go this route (something I would advise against).
$endgroup$
$begingroup$
I recommend my other answer over this answer: gamedev.stackexchange.com/a/174324/41345
$endgroup$
– Evorlor
5 hours ago
$begingroup$
The double-precision approach runs afoul of the butterfly effect. In a dynamical system (like a physics sim), even a tiny deviation in initial conditions can amplify through feedback, snowballing up to a perceptible error. All the extra digits do is delay this a little longer - forcing the snowball to roll a bit further before it gets big enough to cause problems.
$endgroup$
– DMGregory♦
2 hours ago
edited 5 hours ago
answered 5 hours ago
Evorlor
2,506 • 4 gold badges • 27 silver badges • 70 bronze badges
$begingroup$
Use the Memento Pattern.
In your initial run, save off the positional data each frame, or whatever benchmarks you need. If that is too costly, only do it every n frames.
Then, when you reproduce the simulation, follow the arbitrary physics, but overwrite the positional data with the saved values every n frames.
Overly simplified pseudo-code:
function Update():
    if (firstRun) then SaveData(frame, position)
    else if (reproducedRun) then this.position = GetData(frame)
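A hedged Python sketch of that record/replay idea (the class and its field names are mine, not from the answer):

```python
class MementoReplay:
    """Record authoritative positions on the first run; on replay, snap
    the object back to the recorded value every n frames so arbitrary
    physics cannot drift away from the original outcome."""

    def __init__(self, n=1):
        self.n = n
        self.recorded = {}

    def update(self, frame, position, first_run):
        if frame % self.n != 0:
            return position            # between checkpoints: leave physics alone
        if first_run:
            self.recorded[frame] = position   # SaveData
            return position
        return self.recorded.get(frame, position)  # GetData

replay = MementoReplay(n=2)
for frame, pos in enumerate([0.0, 1.1, 2.2, 3.3]):
    replay.update(frame, pos, first_run=True)
# A later, slightly divergent run gets corrected on checkpoint frames:
assert replay.update(2, 2.25, first_run=False) == 2.2
assert replay.update(3, 3.35, first_run=False) == 3.35  # not a checkpoint frame
```

As the comment below notes, this only works when the authoritative run is available, e.g. for replays recorded on the same machine.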
$endgroup$
$begingroup$
Use the Memento Pattern.
In your initial run, save off the positional data each frame, or whatever benchmarks you need. If that is too costly for performance, only do it every n frames.
Then, when you reproduce the simulation, run the physics as usual, but overwrite the positional data with the saved values every n frames.
Overly simplified pseudo-code:
function Update():
    if (firstRun) then SaveData(frame, position)
    else if (reproducedRun) then position = GetData(frame)
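The pseudo-code above can be sketched as a minimal runnable example (all names here are illustrative, not from any engine API). The first run records positions every N frames; a replay runs its physics step as usual but snaps back to the recorded values, so small numerical drift cannot accumulate beyond one snapshot interval:

```python
SNAPSHOT_INTERVAL = 10  # assumed: record/restore every 10 frames


class MementoSim:
    def __init__(self, record=True, snapshots=None):
        self.record = record  # True on the first (recording) run
        self.snapshots = {} if snapshots is None else snapshots
        self.frame = 0
        self.position = 0.0

    def step(self, velocity, dt):
        # Stand-in for the real (possibly platform-dependent) physics step.
        self.position += velocity * dt
        if self.frame % SNAPSHOT_INTERVAL == 0:
            if self.record:
                # First run: save a snapshot of the state.
                self.snapshots[self.frame] = self.position
            else:
                # Replay: overwrite the state with the recorded value.
                self.position = self.snapshots[self.frame]
        self.frame += 1
```

A replay constructed with the first run's snapshots stays close to the recorded trajectory even if its floating-point arithmetic differs slightly, because any drift is reset at every snapshot.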
edited 5 hours ago
answered 5 hours ago
Evorlor
2,506 reputation • 4 gold badges • 27 silver badges • 70 bronze badges
I don't think this works for OP's case. Let's say you and I are both playing the game on different systems. Each of us places the puzzle pieces in the same way - a solution that was not predicted in advance by the developer. When you click "start," your PC simulates the physics such that the solution is successful. When I do the same, some small difference in the simulation leads to my (identical) solution not being graded as successful. Here, I don't have the opportunity to consult the memento from your successful run, because it happened on your machine, not at dev time.
– DMGregory♦, 2 hours ago
@DMGregory That's correct. Thank you.
– jvn91173, 2 hours ago
jvn91173 is a new contributor. Be nice, and check out our Code of Conduct.
Thanks for contributing an answer to Game Development Stack Exchange!
Is there any way you can provide more detail on how this game is supposed to work? If you need all of the objects to fall into the EXACT same place at the beginning of a puzzle, you could bake the start of the simulation. That way, all of the objects are playing a pre-determined animation to get them into their initial positions. Then, when you want the player to interact with the objects in a non-deterministic way, just turn on the physics simulation for those objects.
– JPSmithpg, 7 hours ago
I've edited the OP to further explain how the game is supposed to work. The challenge isn't getting everything in the right place when the user first sees the level; it's making sure that when the user presses start, each simulation (given the exact position and orientation of every piece of the machine) produces the same result across all platforms.
– jvn91173, 7 hours ago
You have not set yourself an easy task if you want to target many different CPUs and to be able to publish updates to your game without changing any past version results. Before we get too deep into this, it's worth investigating whether there's anything you can do at the game design level to simplify the problem. For instance, can you snap puzzle piece positions/orientations to a coarser resolution than raw floating point, restrict the vocabulary of the moving parts/interactions, or add rounding/coalescing steps?
– DMGregory♦, 7 hours ago
@DMGregory Snapping positions and/or rounding is an interesting thought. However, isn't this still subject to the same struggles that I faced before? For example, if sine on architecture 1 produces 0.49999999, and sine on architecture 2 produces 0.50000001, don't we have a problem once we round to the nearest unit?
– jvn91173, 7 hours ago
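One way to make the snapping suggestion concrete is fixed-point arithmetic: quantize inputs to integer "ticks" once, then keep all subsequent math in integers, which is exact and bit-identical on every platform. A minimal sketch (the scale factor and helper names are illustrative assumptions):

```python
SCALE = 1024  # assumed resolution: 1/1024 of a world unit per tick


def to_fixed(x: float) -> int:
    # The only rounding step: quantize a float input to integer ticks.
    # Done once on input, not repeatedly inside the simulation loop.
    return round(x * SCALE)


def to_float(a: int) -> float:
    # Convert back to float for rendering only, never for simulation.
    return a / SCALE


def fixed_add(a: int, b: int) -> int:
    return a + b  # exact integer arithmetic, identical everywhere


def fixed_mul(a: int, b: int) -> int:
    # Integer multiply, then rescale to keep the fixed-point format.
    return (a * b) // SCALE
```

This does not eliminate the boundary worry raised above - an input that lands exactly on a tick boundary can still quantize differently across platforms - but because quantization happens only once on input, later arithmetic can no longer drift apart, which is the accumulation problem the rounding suggestion targets.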
I'd strongly recommend keeping transcendentals out of your inner loop if you can. The rounding suggestion was more about limiting accumulation of drift. If two simulations are going to differ, you want them to do it early and obviously so you can spot the error (eg. in unit tests) and identify the exact simulation step that you need to correct, rather than letting small differences accumulate quietly in the lowest bits where they usually don't produce a visible deviation (until one player finds a situation 10000 steps later where they do, and who knows where the initial divergence happened).
– DMGregory♦, 6 hours ago
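The "fail early and obviously" advice can be wired into unit tests by hashing the exact bit patterns of the simulation state each step and comparing per-step digests between two runs or platforms; any divergence, even a single-ulp difference in the lowest mantissa bits, then shows up at the first step where it occurs. A sketch (function names are illustrative, not from any particular engine):

```python
import hashlib
import struct


def state_digest(positions):
    # Hash the exact IEEE-754 bit patterns of the state, so even a
    # one-ulp difference produces a completely different digest.
    h = hashlib.sha256()
    for p in positions:
        h.update(struct.pack('<d', p))  # little-endian binary64
    return h.hexdigest()


def first_divergence(digests_a, digests_b):
    # Compare per-step digests from two runs; return the index of the
    # first step where they differ, or None if the runs match.
    for i, (a, b) in enumerate(zip(digests_a, digests_b)):
        if a != b:
            return i
    return None
```

Recording one digest per simulation step is cheap enough to leave on in automated tests, and pinpoints the exact step to investigate instead of a visible deviation thousands of steps later.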