How to optimize performance of NFS running inside strongSwan IPsec transport on a 10Gb network
Host1: Ubuntu 18.04
Host2: FreeBSD 11.2
Here is my situation: I have two hosts on a 10G LAN with a strongSwan IPsec transport connection configured between them to secure NFSv3. (Yes, I know NFSv3 is old and I should move on, but reasons...) As soon as I set the MTU to 9000 on the last link in the chain (host2 interface, 10G switch interface, other 10G switch interface, host1 interface), my NFS mount seems to hang.
I believe I have two problems to solve. First, once my IPsec connection is up, throughput as measured with iperf3 drops from 9.4 Gb/s to ~800 Mb/s. Second, the NFS mount can't do anything once the tunnel is up and all related interfaces are using MTU 9000.
So, what should I do to increase my IPsec performance over the 10G LAN, and what is wrong with my NFS?
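For reference, here is the quick path-MTU check I was planning to run to see whether ESP overhead plus jumbo frames is creating a black hole (the -s values are rough guesses; the exact ESP overhead depends on the ciphers the SA negotiated):

# From host1 (Linux), ping host2 with DF set.
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); should work without IPsec.
ping -M do -s 8972 -c 3 192.168.0.5
# With ESP in transport mode the usable payload shrinks by roughly 50-70 bytes
# (SPI, sequence number, IV, padding, ICV), so retry smaller once the SA is up:
ping -M do -s 8900 -c 3 192.168.0.5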
Host1 is mounting host2 via NFS using this fstab entry:
host2:/exports/share /mnt/storage nfs _netdev,nofail,noatime,nolock,tcp,actimeo=1800 0 0
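(If it matters, I also considered pinning the NFS transfer sizes instead of letting them auto-negotiate; the 1M values below are just an assumption on my part, not something I have verified helps:)

host2:/exports/share /mnt/storage nfs _netdev,nofail,noatime,nolock,tcp,actimeo=1800,rsize=1048576,wsize=1048576 0 0

# On host1, this shows the rsize/wsize and proto the mount actually negotiated:
nfsstat -m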
Since this is a 10G LAN, I updated the kernel options on both systems to be better tuned for 10G.
Host1 kernel tunings in /etc/sysctl.d/10-mychanges.conf
# Maximum receive socket buffer size
net.core.rmem_max = 134217728
# Maximum send socket buffer size
net.core.wmem_max = 134217728
# Minimum, initial and max TCP Receive buffer size in Bytes
net.ipv4.tcp_rmem = 4096 87380 134217728
# Minimum, initial and max buffer space allocated
net.ipv4.tcp_wmem = 4096 65536 134217728
# Maximum number of packets queued on the input side
net.core.netdev_max_backlog = 300000
# Auto tuning
net.ipv4.tcp_moderate_rcvbuf = 1
# Don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# The Hamilton TCP (HighSpeed-TCP) algorithm is a packet loss based congestion control and is more aggressive pushing up to max bandwidth (total BDP) and favors hosts with lower TTL / VARTTL.
net.ipv4.tcp_congestion_control=htcp
# If you are using jumbo frames set this to avoid MTU black holes.
net.ipv4.tcp_mtu_probing = 1
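These live in /etc/sysctl.d/10-mychanges.conf and are applied like this (just showing how I load and spot-check them):

sudo sysctl --system
# spot-check a couple of values
sysctl net.ipv4.tcp_congestion_control net.core.rmem_max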
Host1 ipsec.conf
# ipsec.conf - strongSwan IPsec configuration file

# basic configuration
config setup
    charondebug="ike 4, knl 4, cfg 4"

conn %default
    ikelifetime=60m
    keylife=20m
    rekeymargin=3m
    keyingtries=1
    keyexchange=ikev2
    mobike=no

conn host-host
    left=192.168.0.4
    leftid=@host1
    leftcert=/etc/ipsec.d/certs/host_cert.pem
    right=192.168.0.5
    rightid=@host2
    auto=add
    authby=rsasig
    type=transport
    compress=no
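One change I have been considering but have not applied yet: pinning an AEAD ESP proposal so the SA uses AES-GCM, which should be cheaper than separate AES-CBC + HMAC when both CPUs have AES acceleration. The proposal strings below are an assumption on my part, not what the hosts currently negotiate:

# added to conn host-host (or conn %default) on both hosts
    ike=aes128gcm16-prfsha256-modp2048
    esp=aes128gcm16-modp2048

# check for hardware AES first
grep -m1 -o aes /proc/cpuinfo     # host1 (Linux)
kldstat | grep -i aesni           # host2 (FreeBSD); kldload aesni if it is not loaded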
Host2 (FreeBSD) kernel tunings in /etc/sysctl.conf
# $FreeBSD: releng/11.2/etc/sysctl.conf 112200 2003-03-13 18:43:50Z mux $
#
# This file is read when going to multi-user and its contents piped thru
# ``sysctl'' to adjust kernel values. ``man 5 sysctl.conf'' for details.
#
# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
#security.bsd.see_other_uids=0
# set to at least 16MB for 10GE hosts
kern.ipc.maxsockbuf=16777216
# set autotuning maximum to at least 16MB too
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
# enable send/recv autotuning
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1
# increase autotuning step size
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
# set this on test/measurement hosts
net.inet.tcp.hostcache.expire=1
# Set congestion control algorithm to Cubic or HTCP
# Make sure the module is loaded at boot time - check loader.conf
# net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.cc.algorithm=htcp
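The H-TCP module itself is loaded at boot via /boot/loader.conf, and I verify it after a reboot like this:

# /boot/loader.conf
cc_htcp_load="YES"

# verify
sysctl net.inet.tcp.cc.available
sysctl net.inet.tcp.cc.algorithm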
Host2 ipsec.conf
# ipsec.conf - strongSwan IPsec configuration file

# basic configuration
config setup
    charondebug="ike 4, knl 4, cfg 4"

conn %default
    ikelifetime=60m
    keylife=20m
    rekeymargin=3m
    keyingtries=1
    keyexchange=ikev2
    mobike=no

conn host-host
    left=192.168.0.5
    leftid=@host2
    leftcert=/usr/local/etc/ipsec.d/certs/host_cert.pem
    right=192.168.0.4
    rightid=@host1
    auto=add
    authby=rsasig
    type=transport
    compress=no
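Finally, in case the negotiated ciphers are relevant, this is how I check what the SAs actually ended up using on each side once the connection is up (output obviously varies with the negotiation):

# host1 (Linux): kernel ESP state and strongSwan's view of the SA
sudo ip xfrm state
sudo ipsec statusall

# host2 (FreeBSD): dump the kernel SADB (run as root); ipsec statusall works
# here too if the strongSwan ipsec(8) wrapper is installed from ports
setkey -D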