Does using currying result in lower performance in F#?
When writing a function that can be curried, you can write it as a single-argument function that returns a function. For example:
let add x =
    let inner y = x + y
    inner
So you can either do:
add 3 4
or:
let add3 = add 3
add3 4
My question is: because you return a function, you are conceptually calling a function twice (the outer function and the inner function). Is this slower than:
let add x y = x + y
or does the compiler optimise invocations of add 3 4 in the curried definition?
f# currying
edited 9 hours ago
asked 10 hours ago
Cthutu
Oops! I was typing from memory. I will edit – Cthutu, 9 hours ago
2 Answers
I tested this in LINQPad 5.
When compiler optimizations are turned off, the F# compiler will produce different IL for each snippet. In other words, if there are any optimizations going on, it's left up to the JITter, and it may very well be slower to call the first form.
However, when compiler optimizations are turned on, both forms produce identical IL output in every scenario I could think of to test. In fact, with both forms, calling add 3 4 yields the IL equivalent of a hard-coded 7, with the entire function call optimized away:

ldc.i4.7
In other words, the F# compiler is pretty thorough when it comes to optimizing logically identical code blocks.
This is not an exhaustive answer, of course, and there could be some case where they are actually treated differently by the compiler.
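As a minimal sketch of the comparison (the names addCurried and addFlat are mine, not from the question, and the result described above assumes a build with optimizations enabled):

// Curried form written with an explicit inner function.
let addCurried x =
    let inner y = x + y
    inner

// The usual multi-parameter form (also curried in F#, just written directly).
let addFlat x y = x + y

// With optimizations on, both calls reportedly compile down to the constant 7
// (ldc.i4.7); the function invocations themselves are optimized away.
let a = addCurried 3 4
let b = addFlat 3 4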
answered 9 hours ago by p.s.w.g
let f x = fun y -> x + y
let g x y = x + y
Looking at these function definitions in dnSpy for an optimized build reveals them to be:
public static int f(int x, int y)
{
    return x + y;
}

public static int g(int x, int y)
{
    return x + y;
}
This is not that strange, because g is actually a shorthand definition for f, which is the general case. In F#-like languages, functions conceptually always take a single value and return a single value; values might themselves be functions. This is easier to see if one parenthesizes the function signature for f and g:

val f: int -> int -> int
// Actually is
// val f: int -> (int -> int)
// i.e. f is a function that takes a single int and returns a function that takes a single int and returns an int.
In order to make F# execute faster on .NET, the physical representation of f in an assembly is:

public static int f(int x, int y)

A more natural representation of the F# function would be:

public static Func<int, int> f(int x)

but it would perform poorly.
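To make that shape concrete, here it is written out in F# (the name fNatural is mine, purely for illustration; whether the intermediate function value actually survives depends on the optimizer):

// Hypothetical illustration: applying the first argument yields a function
// value (a closure over x) that must then be applied to the second argument.
let fNatural x : int -> int =
    fun y -> x + y

// Two visible steps: fNatural 3 produces a closure, which is then applied to 4.
let seven = (fNatural 3) 4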
Usually F# is clever enough to avoid the overhead of the abstraction through optimizations like the ones above, both at the definition and at the call site. However, there are situations where F# can't optimize for you.

Imagine that you are implementing fold:
let rec fold f s vs =
    match vs with
    | v::vs -> fold f (f s v) vs
    | []    -> s
Here F# can't fully optimize f s v. The reason is that f might have a more complex implementation than the one above, one that returns a different function depending on s.
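For instance (a hypothetical folder of my own, not from the answer), the partially applied f s can be a genuinely different function value at each step:

// Hypothetical example: the function value produced by (folder s) depends on s,
// so the compiler cannot assume one flattened two-argument call is always valid.
let folder s =
    if s % 2 = 0 then fun v -> s + v
    else fun v -> s - v

// fold folder 0 [1; 2; 3] has to evaluate (folder s) before applying the result
// to v on every iteration.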
If you look in dnSpy, you'll note that F# invokes the function using InvokeFast, but this does an internal test to see if it can be invoked fast. In fold we then do this test for each value, even though it is the same function throughout.

This is the reason one might sometimes see fold written like this:
let fold f s vs =
    let f = OptimizedClosures.FSharpFunc<_, _, _>.Adapt f
    let rec loop s vs =
        match vs with
        | v::vs -> loop (f.Invoke (s, v)) vs
        | []    -> s
    loop s vs
Adapt tests once, before the loop, whether f can indeed be optimized, and then returns an efficient adapter. In the general case it might still be a bit slower, but then this is what the caller intended.
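As a usage sketch of the fold above (same names as in the answer's snippet), the Adapt check is now paid once per call to fold rather than once per list element:

// Both calls produce 10; the adapted fold differs only in how often the
// "can this be invoked fast?" check runs.
let sum1 = fold (fun s v -> s + v) 0 [1; 2; 3; 4]
let sum2 = fold (+) 0 [1; 2; 3; 4]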
Note: this potential performance degradation doesn't happen for simple function values like 'T -> 'U; those can always be invoked efficiently.
Hope this helps.
answered 9 hours ago by Just another metaprogrammer