<h1>Considerations for client parallelism for NetWorker server</h1>
<p><em>Published 24 June 2009 &#8211; <a href="https://nsrd.info/blog/2009/06/24/considerations-for-client-parallelism-for-networker-server/">nsrd.info</a></em></p>
<p>While doing a few tests for this blog on a lab server, I noticed what looked like odd behaviour: I had started a manual save running on the NetWorker server for local data. That backup was writing to tape, and while it ran I kicked off a group for an altogether different client.</p>
<p>The backup for the client ran, but then seemed to hang on completion. Since the backup to tape was merely a test of filling a tape, and could therefore be restarted at any time, I cancelled it on a hunch, and the savegroup completed almost immediately.</p>
<p>The group had been &#8220;hung&#8221; waiting for a free unit of parallelism on the NetWorker server in order to write the client indices. It turned out that I&#8217;d forgotten a change I&#8217;d made on Friday to test some other settings: reducing the parallelism of the NetWorker server&#8217;s own client instance to 1.</p>
<p>With this in place, the backup server couldn&#8217;t complete the savegroup because it couldn&#8217;t write its indices, and it couldn&#8217;t write its indices because it was only allowed a client parallelism of 1, and that single unit of parallelism was occupied writing to tape.</p>
<p>So it led me to think: how easy would it be, given this, for companies to experience delays in their backups due to too <em>low</em> a client parallelism setting for the NetWorker server? The answer: quite easy.
After all, the first and most fundamental rule of client performance tuning in NetWorker is to reduce client parallelism to 1, then work your way up based on the client&#8217;s hardware and data configuration.</p>
<p>This means it&#8217;s actually <em>fairly critical</em> that the NetWorker server have sufficient parallelism to ensure that index backups do not become an impediment to groups finishing. Based on this, I&#8217;d recommend that the <em>client parallelism</em> for the NetWorker server:</p>
<ul>
<li>Never be set to 1.</li>
<li>For small environments (under 30 clients), be set to at least 4.</li>
<li>For medium environments (say, 31&#8211;100 clients), be set to at least 8.</li>
<li>For larger environments (100+ clients), be set to at least 8, but preferably to one of:
<ul>
<li>the same value as the actual server parallelism<em>, or</em></li>
<li>the same value as the highest group parallelism, if group parallelism is used.</li>
</ul>
</li>
</ul>
<p>Note that the above <em>entirely assumes</em> the backup server is a dedicated backup server. If the backup server is also, say, a file server*, then obviously different settings will need to be considered to avoid swamping the system.</p>
<p>In essence, while the main goal for regular clients is to achieve <em>as low</em> a client parallelism as possible &#8211; i.e., to optimise the balance between the number of savesets and throughput &#8211; for the <em>backup server</em> the goal should be a client parallelism high enough that index backups are never delayed, so that groups finish when they are ready to finish.</p>
<hr />
<p>* For what it&#8217;s worth, my recommendation is that a backup server should always be <em>dedicated</em>. 
That is, the primary and sole function of the server should be to act as a backup server.</p>
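<p>For what it&#8217;s worth, the sizing guidelines above can be sketched as a small helper. This is purely an illustrative sketch &#8211; the function name and structure are my own invention, and it simply encodes the recommendations in this post; any real change would be made against the NetWorker server&#8217;s own client resource (e.g. via <code>nsradmin</code>), not through code like this.</p>

```python
def recommended_client_parallelism(num_clients,
                                   server_parallelism=None,
                                   max_group_parallelism=None):
    """Suggest a client parallelism value for the NetWorker server's
    own client resource, per the guidelines above (hypothetical helper)."""
    if num_clients <= 30:
        # Small environment: at least 4.
        suggestion = 4
    elif num_clients <= 100:
        # Medium environment: at least 8.
        suggestion = 8
    else:
        # Large environment: at least 8, but preferably match the
        # server parallelism or the highest group parallelism, if known.
        candidates = [8]
        if server_parallelism is not None:
            candidates.append(server_parallelism)
        if max_group_parallelism is not None:
            candidates.append(max_group_parallelism)
        suggestion = max(candidates)
    # Never 1: a single unit of parallelism can be consumed by a data
    # saveset, leaving index backups queued and groups apparently hung.
    return max(suggestion, 2)
```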