Step size updating scheme for adaptive embedded RK methods
If I have an RK method $y$ of order $p$ and an RK method $z$ of order $p-1$, I have read that I can estimate the local error as $r_{n+1} = y_{n+1} - z_{n+1}$. First of all, I don't see how this estimates the local error, defined as the error we make in a single step using correct previous values. What does a lower-order method have to do with previous correct values? I can accept that this is the error we make using a lower-order method instead of a higher-order method, but what is the relevance of this?
Further, I have read that the step size should be updated as
$$h_{n+1} = \left( \frac{TOL}{r_{n+1}}\right)^{\frac{1}{p}}h_n.$$
But I don't understand why this would work: since we update our step size so that $r_{n+1} = TOL$, wouldn't this just result in $h_{n+1} = h_n$ after a while? What am I missing?
error-estimation runge-kutta adaptive-timestepping
asked 7 hours ago by Heuristics
1 Answer
> First of all, I don't see how this estimates the local error, defined as the error we make in a single step using correct previous values. What does a lower-order method have to do with previous correct values? I can accept that this is the error we make using a lower-order method instead of a higher-order method, but what is the relevance of this?
Both methods are converging to the same true solution, but at slightly different rates. You can think of convergence at order $p$ as matching the first $p$ terms of the Taylor series: when proving convergence to the true solution, you expand $u(t_{n+1})$ in its Taylor series in $h = t_{n+1} - t_n$ and show that the method matches the first $p$ terms. Given that, in Taylor-series form the difference is $r_{n+1} = \alpha_p h^p + \mathcal{O}(h^{p+1})$, since both methods have exactly the same expansion for the first $p-1$ terms but only one of them matches the $p$th term. Dropping the higher-order terms, since $h$ is considered small enough, we have $r_{n+1} \approx \alpha_p h^p$. But understand what this term means. Since the $p$th-order method has the correct Taylor-series expansion through order $p$, this term is exactly the deviation of the $(p-1)$th-order method from the true Taylor term, meaning that $\alpha_p$ is the coefficient of the leading (largest) term in the local truncation error of the $(p-1)$th-order method, and thus $r_{n+1}$ is a good approximation of that method's local truncation error.
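To make the $r_{n+1} \approx \alpha_p h^p$ claim concrete, here is a small sketch of my own (not part of the original answer) using the simplest embedded pair: Heun's method (order 2) together with the Euler step (order 1) it already contains. The difference between the two results tracks the actual local error of the Euler step and shrinks like $h^2$:

```python
import numpy as np

# Hypothetical illustration: one step of the embedded Heun (order 2) /
# Euler (order 1) pair on y' = y, y(0) = 1.  The two methods share the
# stage k1, so the lower-order result comes for free.
def embedded_heun_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + h * (k1 + k2) / 2   # order 2 (Heun)
    y_low  = y + h * k1              # order 1 (Euler), same stages
    return y_high, y_low

f = lambda t, y: y                   # exact solution: y(t) = exp(t)
for h in [0.1, 0.05, 0.025]:
    y_high, y_low = embedded_heun_step(f, 0.0, 1.0, h)
    r = abs(y_high - y_low)                  # embedded error estimate
    true_err_low = abs(np.exp(h) - y_low)    # actual local error of Euler
    print(f"h={h:6.3f}  estimate={r:.2e}  true Euler error={true_err_low:.2e}")
```

Halving $h$ roughly quarters both columns, which is the $h^2$ scaling of the Euler step's local truncation error.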
So now we have two things: $r_{n+1}$ is a good approximation of one of the methods' local truncation error, and $r_{n+1} \approx \alpha_p h^p$. The true path to the formula you have up there is a result for the optimal stepsize due to Cechino, where you prove that it would give you the error/time-optimal stepsize in the next step if the error estimate were exact. This is also known as proportional control; it doesn't take stability concerns into account, so it is not the method actually used in a lot of ODE solver software, but it's a good teaching tool. You can get to this result without the optimal-stepsize proof just through a bit of reasoning, though. Proportional control makes sense because it means you want to correct the stepsize in proportion to the size of the error: the larger the error, the more correction you want. So you could take $TOL/r_{n+1}$ as the correction factor, that is, TOL divided by your error estimate, meaning that if your error estimate is twice the tolerance, you would halve your stepsize. That's reasonable, but recall that $r_{n+1} \approx \alpha_p h^p$, so dimensionally it makes sense to take the $p$th root so that the factor actually scales like $1/h$. Of course, this is heuristic and the actual proof is more in-depth. So there you go: now you have a reasonable error estimate and a reasonable way to turn it into a new stepsize.
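As a rough sketch of what this proportional controller looks like in code (my own illustration; the `safety` factor and the clamping limits are conventional extras I'm adding, not part of the formula above):

```python
# Proportional step-size controller, hypothetical sketch.
# r_est is the embedded error estimate for the step just taken, and p is the
# order of the higher-order method, so r_est ~ alpha_p * h**p.
def propose_step(h, r_est, tol, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    if r_est == 0.0:
        return h * fac_max                        # error negligibly small: grow
    factor = safety * (tol / r_est) ** (1.0 / p)  # (TOL / r)^(1/p)
    factor = min(fac_max, max(fac_min, factor))   # keep changes moderate
    return h * factor
```

In practice the step is rejected and retried with the smaller $h$ whenever $r_{n+1} > TOL$; the safety factor and clamps are exactly the kind of stability-motivated tweaks that production solvers layer on top, often in the form of PI or PID controllers rather than this pure proportional rule.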
> But I don't understand why this would work: since we update our step size so that $r_{n+1}=TOL$, wouldn't this just result in $h_{n+1}=h_n$ after a while?
If the error of the methods were independent of the stepsize and of where you are along the ODE solution, this would happen. But that's never true. Instead, the local truncation error of the methods varies depending on many factors, such as the derivatives of the right-hand-side function $f$, the size of the current state values, etc., and thus $\alpha_p$ is always changing, causing $r_{n+1}$ and the optimal stepsize to change as well. But for some simple enough equations, when adaptivity is working correctly, the stepsize will stabilize around some "optimal" value.
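Putting the two sketches above together gives a hypothetical end-to-end adaptive loop (again my own illustration, reusing `embedded_heun_step` and `propose_step` from above; the accept/reject logic is a common convention, not something specific from this answer). On a simple problem like $y' = -2ty$, the accepted step sizes keep changing because the local error coefficient changes along the solution:

```python
import numpy as np

# Adaptive integration with the Heun/Euler pair and the proportional
# controller sketched above.  Exact solution of y' = -2 t y is exp(-t^2).
def integrate_adaptive(f, t0, y0, t_end, h0, tol, p=2):
    t, y, h = t0, y0, h0
    ts, hs = [t], []
    while t < t_end:
        h = min(h, t_end - t)                    # don't overshoot the endpoint
        y_high, y_low = embedded_heun_step(f, t, y, h)
        r = abs(y_high - y_low)
        if r <= tol:                             # accept the step
            t, y = t + h, y_high
            ts.append(t)
            hs.append(h)
        h = propose_step(h, r, tol, p)           # accepted or not, update h
    return np.array(ts), np.array(hs), y

f = lambda t, y: -2.0 * t * y
ts, hs, y_end = integrate_adaptive(f, 0.0, 1.0, 3.0, h0=0.1, tol=1e-6)
print("final error:", abs(y_end - np.exp(-9.0)))
print("accepted step sizes ranged from", hs.min(), "to", hs.max())
```

The printed range of accepted step sizes shows the controller taking small steps where the solution has large curvature and stretching out as the solution flattens, which is exactly the "always changing $\alpha_p$" effect described above.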
answered 5 hours ago by Chris Rackauckas