Dec 30 2017

With just a few more days of 2017 left, I thought it opportune to make the last post of the year a summary of some of what we’ve seen in the field of data protection in 2017.

2017 Summary

It’s been a big year, in a lot of ways, particularly at DellEMC.

Towards the end of 2016, but definitely leading into 2017, NetWorker 9.1 was released. That meant 2017 started with a bang, courtesy of the new NetWorker Virtual Proxy (NVP, or vProxy) backup system. This replaced VBA, allowing substantial performance improvements, and some architectural simplification as well. I was able to generate some great stats right out of the gate with NVP under NetWorker 9.1, and that applied not just to Windows virtual machines but to Linux ones, too. NetWorker 9.1 with NVP allows you to recover tens of thousands of files or more from an image level backup in just a few minutes.

In March I released the NetWorker 2016 usage survey report – the survey ran from December 1, 2016 to January 31, 2017. That reminds me – the 2017 Usage Survey is still running, so you’ve still got time to provide data to the report. I’ve been compiling these reports now for 7 years, so there’s a lot of really useful trends building up. (The 2016 report itself was a little delayed in 2017; I normally aim for it to be available in February, and I’ll do my best to ensure the 2017 report is out in February 2018.)

Ransomware and data destruction made some big headlines in 2017 – repeatedly. Gitlab hit 2017 running with a massive data loss in January, which they subsequently blamed on a backup failure, when in actual fact it was a staggering process and people failure. It reminds one of the old management 101 credo, “If you ASSuME, you make an ASS out of U and ME”. Gitlab’s issue may have, at a very small level, been a ‘backup failure’, but only in so much as everyone in the house thinking it was someone else’s turn to fill the tank of the car, and running out of petrol, is a ‘car failure’.

But it wasn’t just Gitlab. Next generation database users around the world – specifically, MongoDB – learnt the hard way that security isn’t automatically or properly enabled out of the box. Large numbers of MongoDB administrators found their databases encrypted or lost as default security configurations were exploited on databases left accessible in the wild.

In fact, Ransomware became such a common headache in 2017 that it fell prey to IT’s biggest meme – the infographic. Do a quick Google search for “Ransomware Timeline” for instance, and you’ll find a plethora of options around infographics about Ransomware. (And who said Ransomware couldn’t get any worse?)

Appearing in February 2017 was Data Protection: Ensuring Data Availability. Yes, that’s right, I’m calling the release of my second book on data protection a big event in the realm of data storage protection in 2017. Why? This is a topic which is insanely critical to business success. If you don’t have a good data protection process and strategy within your business, you could literally lose everything that defines the operational existence of your business. There are three defining aspects I see in data protection considerations now:

  • Data is still growing
  • Product capability is still expanding to meet that growth
  • Too many businesses see data protection as a series of silos, unconnected – storage, virtualisation, databases, backup, cloud, etc. (Hint: They’re all connected.)

So on that basis, I do think a new book whose focus is to give a complete picture of the data storage protection landscape is important to anyone working in infrastructure.

And on the topic of stripping the silos away from data protection, 2017 well and truly saw DellEMC cement its lead in what I refer to as convergent data protection. That’s the notion of combining data protection techniques from across the continuum to provide new methods of ensuring SLAs are met, impact is eliminated, and data hops are minimised. ProtectPoint was first introduced to the world in 2015, and has evolved considerably since then. ProtectPoint allows primary storage arrays to integrate with data protection storage (e.g., VMAX3 to Data Domain) so that those really huge databases (think 10TB as a typical starting point) can have instantaneous, incremental-forever backups performed – all application integrated, but with no impact on the database server itself. ProtectPoint, though, was just the starting position. In 2017 we saw the release of Hypervisor Direct, which draws a line in the sand on what convergent data protection should be and do. Hypervisor Direct is there for your big, virtualised systems with big databases, eliminating any risk of VM-stun during a backup (an architectural constraint of VMware itself) by integrating RecoverPoint for Virtual Machines with Data Domain Boost, all while still being fully application integrated. (Mark my words – Hypervisor Direct is a game changer.)

Ironically, in a world where target-based deduplication should be a “last resort”, we saw tech journalists get irrationally excited as a company heavy on marketing but light on functionality promoted its exclusively target-deduplication data protection technology as somehow novel or innovative. Apparently, combining target based deduplication with the need to scale to potentially hundreds of 10Gbit ethernet ports is both! (In the same way that releasing a 3-wheeled Toyota Corolla for use by the trucking industry would be both ‘novel’ and ‘innovative’.)

Between VMworld and DellEMC World, there were some huge new releases by DellEMC this year. The Integrated Data Protection Appliance (IDPA) was announced at DellEMC World. IDPA is a hyperconverged backup environment – a combined unit delivered to your datacentre with data protection storage, control, reporting, monitoring, search and analytics, which can be stood up and ready to start protecting your workloads in just a few hours. As part of the support programme you don’t have to worry about upgrades – they’re done as an atomic function of the system. And there’s no need to worry about software licensing vs hardware capacity: that’s all handled as a single, atomic function, too. For sure, you can still build your own backup systems, and many people will – but for businesses who want to hit the ground running in a new office or datacentre, or maybe replace some legacy three-tier backup architecture that’s limping along and costing hundreds of thousands a year just in servicing media servers (AKA “data funnel$”), IDPA is an ideal fit.

At DellEMC World, VMware running in AWS was announced – imagine that, just seamlessly moving virtual machines from your on-premises environment out to the world’s biggest public cloud as a simple operation, and managing the two seamlessly. That became a reality later in the year, and NetWorker and Avamar were the first products to support actual hypervisor level backup of VMware virtual machines running in a public cloud.

Thinking about public cloud, Data Domain Virtual Edition (DDVE) became available in both the Azure and AWS marketplaces for easy deployment. Just spin up a machine and get started with your protection. That being said, if you’re wanting to deploy backup in public cloud, make sure you check out my two-part article on why Architecture Matters: Part 1, and Part 2.

And still thinking about cloud – this time specifically about cloud object storage, you’ll want to remember the difference between Cloud Boost and Cloud Tier. Both can deliver exceptional capabilities to your backup environment, but they have different use cases. That’s something I covered off in this article.

There were some great announcements at re:Invent, AWS’s yearly conference, as well. Cloud Snapshot Manager was released, providing enterprise grade control over AWS snapshot policies. (Check out what I had to say about CSM here.) Also released in 2017 was DellEMC’s Data Domain Cloud Disaster Recovery, something I need to blog about ASAP in 2018 – that’s where you can actually have your on-premises virtual machine backups replicated out into a public cloud and instantiate them as a DR copy with minimal resources running in the cloud (e.g., no in-Cloud DDVE required).

2017 also saw the release of Enterprise Copy Data Analytics – imagine having a single portal that tracks your Data Domain fleet world wide, and provides predictive analysis to you about system health, capacity trending and insights into how your business is going with data protection. That’s what eCDA is.

NetWorker 9.2 and 9.2.1 came out as well during 2017 – that saw functionality such as integration with Data Domain Retention Lock, database integrated virtual machine image level backups, enhancements to the REST API, and a raft of other updates. Tighter integration with vRealize Automation, support for VMware image level backup in AWS, optimised object storage functionality and improved directives – the list goes on and on.

I’d be remiss if I didn’t mention a little bit of politics before I wrap up. Australia got marriage equality – I, myself, am finally now blessed with the challenge of working out how to plan a wedding (my boyfriend and I are intending to marry on our 22nd anniversary in late 2018 – assuming we can agree on wedding rings, of course), and more broadly, politics again around the world managed to remind us of the truth of that saying by the French philosopher Albert Camus: “A man without ethics is a wild beast loosed upon this world.” (OK, I might be having a pointed glance at Donald Trump over in America when I say that, but it’s still a pertinent thing to keep in mind across the political and geographic spectrums.)

2017 wasn’t just about introducing converged data protection appliances and convergent data protection, but it was also a year where more businesses started to look at hyperconverged administration teams as well. That’s a topic that will only get bigger in 2018.

The DellEMC data protection family got a lot of updates across the board that I haven’t had time to cover this year – Avamar 7.5, Boost for Enterprise Applications 4.5, Enterprise Copy Data Management (eCDM) 2, and DDOS 6.1! Now that I sit back and think about it, my January could be very busy just catching up on things I haven’t had a chance to blog about this year.

I saw some great success stories with NetWorker in 2017, something I hope to cover in more detail into 2018 and beyond. You can see some examples of great success stories here.

I also started my next pet project – reviewing ethical considerations in technology. It’s certainly not going to be just about backup. You’ll see the start of the project over at Fools Rush In.

And that’s where I’m going to leave 2017. It’s been a big year and I hope, for all of you, a successful year. 2018, I believe, will be even bigger again.

Talking about Ransomware

Sep 06 2017

The “Wannacry” Ransomware strike saw a particularly large number of systems infected and garnered a great deal of media attention.


As you’d expect, many companies discussed ransomware and their solutions for it. There was also backlash from many quarters suggesting people were using a ransomware attack to unethically spruik their solutions. It almost seems to be the IT equivalent of calling lawyers “ambulance chasers”.

We are (albeit briefly, I am sure) between major ransomware outbreaks. So, logically, that means it’s OK to talk about ransomware.

Now, there are a few things to note about ransomware and defending against it. It’s not as simplistic as “I only have to do X and I’ll solve the problem”. It’s a multi-layered issue requiring user education, appropriate systems patching, appropriate security, appropriate data protection, and so on.

Even focusing just on data protection, that’s a multi-layered approach as well. In order to have a data protection environment that can assuredly protect you from ransomware, you need to do the basics, such as operating system level protection for backup servers, storage nodes, etc. That’s just the beginning. The next step is making sure your backup environment itself follows appropriate security protocols. That’s something I’ve been banging on about for several years now. That’s not the full picture though. Once you’ve got operating systems and backup systems secured via best practices, you then need to look at hardening your backup environment. There’s a difference between standard security processes and hardened security processes, and if you’re worried about ransomware this is something you should be thinking about doing. Then, of course, if you really want to ensure you can recover your most critical data from a serious hacktivism and ransomware (or outright data destruction) breach, you need to look at IRS as well.

But let’s step back, because I think it’s important to make a point here about when we can talk about ransomware.

I’ve worked in data protection my entire professional career. (Even when I was a system administrator for the first four years of it, I was the primary backup administrator as well. It’s always been a focus.)

If there’s one thing I’ve observed in my career in data protection, it’s that a “head in the sand” approach to data loss risk is lamentably common. Even in 2017 I’m still hearing things like “We can’t back this environment up because the project which spun it up didn’t budget for backup”, and “We’ll worry about backup later”. Not to mention the old chestnut, “it’s out of warranty so we’ll do an Icarus support contract“.

Now the flipside of the above paragraph is this: if things go wrong in any of those situations, suddenly there’s a very real interest in talking about options to prevent a future issue.

It may be a career limiting move to say this, but I’m not in sales to make sales. I’m in sales to positively change things for my customers. I want to help customers resolve problems, and deliver better outcomes to their users. I’ve been doing data protection for over 20 years. The only reason someone stays in data protection that long is because they’re passionate about it, and the reason we’re passionate about it is because we are fundamentally averse to data loss.

So why do we want to talk about defending against or recovering from ransomware during a ransomware outbreak? It’s simple. At the point of a ransomware outbreak, there’s a few things we can be sure of:

  • Business attention is focused on ransomware
  • People are talking about ransomware
  • People are being directly impacted by ransomware

This isn’t ambulance chasing. This is about making the best of a bad situation – I don’t want businesses to lose data, or have it encrypted and see them have to pay a ransom to get it back – but if they are in that situation, I want them to know there are techniques and options to prevent it from striking them again. And at that point in time – during a ransomware attack – people are interested in understanding how to stop it from happening again.

Now, we have to still be considerate in how we discuss such situations. That’s a given. But it doesn’t mean the discussion can’t be had.

To me this is also an ethical consideration. Too often the focus on ethics in professional IT is around the basics: don’t break the law (note: law ≠ ethics), don’t be sexist, don’t be discriminatory, etc. That’s not really a focus on ethics, but a focus on professional conduct. Focusing on professional conduct is good, but there must also be a focus on the ethical obligations of protecting data. It’s my belief that if we fail to make the best of a bad situation to get an important message of data protection across, we’re failing our ethical obligations as data protection professionals.

Of course, in an ideal world, we’d never need to discuss how to mitigate or recover from a ransomware outbreak during said outbreak, because everyone would already be protected. But harking back to an earlier point, I’m still being told production systems were installed without consideration for data protection, so I think we’re a long way from that point.

So I’ll keep talking about protecting data from all sorts of loss situations, including ransomware, and I’ll keep having those discussions before, during and after ransomware outbreaks. That’s my job, and that’s my passion: data protection. It’s not gloating, it’s not ambulance chasing – it’s “let’s make sure this doesn’t happen again”.


On another note, sales are really great for my book, Data Protection: Ensuring Data Availability, released earlier this year. I have to admit, I may have squealed a little when I got my first royalty statement. So, if you’ve already purchased my book: you have my sincere thanks. If you’ve not, that means you’re missing out on an epic story of protecting data in the face of amazing odds. So check it out, it’s in eBook or Paperback format on Amazon (prior link), or if you’d prefer to, you can buy direct from the publisher. And thanks again for being such an awesome reader.

Ransomware is a fact of life

Feb 01 2017

The NetWorker usage survey for 2016 has just finished. One of the questions I asked in this most recent survey was as follows:

Has your business been struck by ransomware or other data destructive attacks in the past year?

(_) Yes

(_) No

(_) Don’t know

(_) Prefer not to say

With the survey closed, I wanted to take a sneak peek at the answer to this question.

Ransomware, as many of you would know, is the term coined for viruses and other attacks that leave data erased or encrypted, with prompts to pay a ‘ransom’ in order to get the data back. Some businesses may choose to pay the ransom, others choose not to. If you’ve got a good data protection scheme you can save yourself from a lot of ransomware situations, but the looming threat – something that has already occurred in some instances – is ransomware combined with systems penetration, resulting in backup servers being deliberately compromised and data-destructive attacks happening on primary data. I gave an example of EMC’s solution to that sort of devastating 1-2 punch attack last November.

Ransomware is not going away. We recently saw massive numbers of MongoDB databases being attacked, and law enforcement agencies are considering it a growing threat and a billion dollar a year or more industry for the attackers.

So what’s the story then with NetWorker users and ransomware? There were 159 respondents to the 2016 NetWorker usage survey, and the answer breakdown was as follows:

  • No – 48.43%
  • Don’t know – 11.32%
  • Prefer not to say – 9.43%
  • Yes – 30.82%

An August 2016 article in the Guardian suggested that up to 40% of businesses had been hit by ransomware, and by the end of 2016 other polls were suggesting the number was edging towards 50%.

Ransomware Percentages

I’m going to go out on a limb and suggest that at least 50% of respondents who answered “Prefer not to say” were probably saying it because it’s happened and they don’t want to mention it. (It’s understandable, and very common.) I’ll also go out on a limb and suggest that at least a third of respondents who answered “Don’t know” probably had been hit, but the attack might have been resolved through primary storage or other recovery options that left individual respondents unaware.

At the very base numbers though, almost 31% of respondents knew they definitely had been hit by ransomware or other data-destructive attacks, and with those extrapolations above we might be forgiven for believing that the number was closer to 38.9%.
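(For those wanting to check the working: with 159 respondents, those percentages translate to 49 “Yes”, 15 “Prefer not to say” and 18 “Don’t know” answers. Add half of the 15 (rounding down, 7) and a third of the 18 (6) to the 49, and you get 62 of 159 respondents – just a shade under 39%.)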

The Guardian article was based on a survey of Fortune 500 senior IT executives, and ransomware at its most efficacious is targeted and combined with other social engineering techniques such as spear phishing, so it’s no wonder the “big” companies report high numbers of incidents – they’re getting targeted more deliberately. The respondents on the NetWorker survey however came from all geographies and all sizes, ranging from a few clients to thousands or more.

Bear in mind that being hit by ransomware is not a case of “lightning never strikes twice”. At a briefing I went to in the USA last year, we were told that one business alone had been hit by 270+ cases of ransomware since the start of the year. Anecdotally, even those customers of mine who mention having been hit by ransomware talk about it in terms of multiple incidents, not just a single one.

Now as much as ever before, we need robust data protection, and air-gapped data protection for sensitive data – the Isolated Recovery Site (IRS) is something you’ll hear more of as ransomware gets more prevalent.

NetWorker users have spoken – ransomware is a real and tangible threat to businesses around the world.

I’ll be aiming to have the full report published by mid-February, and I’ll contact the winner of the prize at that time too.

Basics – Configuring a reports-only user

May 25 2015

Something that’s come up a few times in the last year for me has been a situation where a NetWorker user has wanted to allow someone to access NetWorker Management Console for the purpose of running reports, but not allow them any administrative access to NetWorker.

It turns out it’s very easy to achieve this, and you actually have a couple of options on the level of NetWorker access they’ll get.

Let’s look first at the minimum requirements – defining a reports-only user.

To do that, you first go into NetWorker Management Console as an administrative user, and go across to the Setup pane.

You’ll then create a new user account:

New User Account in NMC

Within the Create User dialog, be certain to only select Console User as the role:

NMC new user dialog

At this point, you’ve successfully created a user account that can run NMC reports, but can’t administer the NetWorker server.

However, you’re then faced with a decision. Do you want a reports-only user that can “look but don’t touch”, or do you want a reports-only user that can’t view any of the NetWorker configuration (or at least, anything other than what can be ascertained from the reports themselves)?

If you want your reports user to be able to run reports and you’re not fussed about the user being able to view the majority of your NetWorker configuration, you’re done at this point. If however your organisation has a higher security focus, you may need to look at adjusting the basic Users NetWorker user group. If you’re familiar with it, you’ll know this has the following configuration:

NetWorker Users Usergroup

This usergroup in the default configuration allows any user in the NetWorker datazone to:

  • Monitor NetWorker
  • Recover Local Data
  • Backup Local Data

The key there is the ‘any user’ entry – *@*. Normally you want this to be set to *@*, but if you’re a particularly security focused organisation you might want to tighten this down to only those users and system accounts authorised to perform recoveries. The same principle applies here. Let’s say I didn’t want the reports user to see any of the NetWorker configuration, but I did want any root, system or pmdg user in the environment to still have that basic functionality. I could change the Users usergroup to the following:

Modified NetWorker Users usergroup
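(As an aside, if you prefer the command line for this sort of change, the same edit can be made with nsradmin. A rough sketch only – ‘backupsvr’ is a placeholder server name, and you should sanity check the attribute values against your own environment before committing:

# nsradmin -s backupsvr
. type: NSR usergroup; name: Users
update users: root@*, system@*, pmdg@*

The first nsradmin directive selects the Users usergroup resource; the update then replaces the wide-open *@* entry with just the named accounts.)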

With this usergroup modified, logging in as the reports user will show a very blank NMC monitoring tab:

NMC-monitoring reports user

Similarly, the client list (as an example) will be quite empty too:

NMC-config reports user

Now, it’s worth mentioning there is a key caveat you should consider here – some modules may be designed in anticipation that the executing user for the backup or recovery (usually an application user with sufficient privileges) will at least be a member of the Users usergroup. So if you tighten the security against your reports user to this level, you’ll need to be prepared to increase the steps in your application onboarding processes to ensure those accounts are added to an appropriate usergroup (or a new usergroup).

But in terms of creating a reports user that’s not privileged to control NetWorker, it’s as easy as the steps above.

Mar 02 2015


Backup security is – or rather, should be – a big requirement in any organisation. Sadly we still see periodic examples of organisations failing to fully comprehend the severity of a backup breach. In a worst case scenario, a backup breach involving physical theft might be the equivalent of someone having permanent and unchecked access to a snapshot of your entire network.

There are two distinct aspects to backup security to consider:

  • Physical
  • Electronic

For each type of backup security, we need to consider two key areas:

  • At rest
  • In transit

This usually leads businesses to start backup security planning by considerations such as:

  • Do we encrypt backup media?
  • Do we used security guards for movement of backup media?
  • Are on-disk backups encrypted?

Oddly enough, there’s a bigger gorilla in the room for backup security that is less often thought of: your backups are only as secure as the quality of your security policies, and your adherence to them.

A long time ago in a state far, far away, a colleague was meeting with a system administrator in the offices of an environmental organisation. She needed to ensure the security restrictions for system access could be drastically lowered from the default install criteria. “Everyone here is an anti-authority hippy” she said (or words to that effect), “If we give them hard passwords they’ll just write them in permanent marker on their monitors.”

The solution was to compromise to a mid-point of security vs ease-of-access.

These days few organisations would yield to their users’ disdain for authority so readily, but it serves to highlight that a system is only as secure as you choose to make it. A backup environment does not sit in isolation – it resides on the hosts it is being used to protect (in some form or another), and it will have a host-based presence within your network at some point. If someone can breach that security and get onto one of those hosts, there’s a good chance a significant aspect of your backup security protocols has been breached as well.

That’s why all backup security has to start at a level outside the backup environment … rather, it requires consideration at all layers. It doesn’t start with the complexity of the password required to access an administrator interface, and nor does it end with enabling data-at-rest encryption. So if you’re reading this thinking your backups are reasonably secure but your organisation only has mediocre access restrictions to get onto the network, you may have closed the gates after the horse has bolted.

Records retention and NMC

Dec 10 2014

For those of us who have been using NetWorker for a very long time, we can remember back to when the NetWorker Management Console didn’t exist. If you wanted reports in those days, you wrote them yourself, either by parsing your savegroup completion results, processing the NetWorker daemon.log, or interrogating mminfo.

Over time since its introduction, NMC has evolved in functionality and usefulness. These days there are still some things that I find easier to do on the command line, but more often than not I find myself reaching for NMC for various administrative functions. Reporting is one of those.


(Just a quick interrupt. The NetWorker Usage Survey is happening again. Every year I ask readers to participate and tell me a bit about their environment. It’s short – I promise! – you only need around 5 minutes to answer the questions. When you’re finished reading this article, I’d really appreciate if you could jump over and do the survey.) 


There’s a wealth of reports in NMC, but some of the ones I find particularly useful often end up being:

  • User auditing
  • Success/failure results and percentages
  • Backup volume over time
  • Deduplication statistics

In order to get maximum use out of those, you want to make sure those details are kept for as long as you need them. In newer versions of NetWorker, if you go to the Enterprise Console and check out the Reports menu, you’ll see an option labelled “Data Retention”, and the default values are as follows:

default NMC data retention values

Those values are OK for using NMC reporting just for casual checking, but if you’re intending to perform longer-term checking, reporting or compliance based auditing, you might want to extend those values somewhat. Based on conversations with a couple of colleagues, I’m inclined to extend everything except for the Completion Message section to at least 3 years in sites where longer-term compliance and auditing reporting is required. The completion messages are generally a little bigger in scope, and I’d be inclined to limit those to 3 months at the most. So that means the resulting fields would look like:

alternate NMC data retention values

Ultimately the values you set in the NMC Reports Data Retention area should be specific to the requirements of your business, but be certain to check them out and tweak the defaults as necessary to align them with your needs.


(Hey, now you’ve finished reading this article, just a friendly reminder: The NetWorker Usage Survey is happening again. Every year I ask readers to participate and tell me a bit about their environment. It’s short – I promise! – you only need around 5 minutes to answer the questions, so I’d really appreciate if you could jump over and do the survey.)


 

Oct 06 2014

As I mentioned in an earlier article, Data Domain OS 5.5 introduced a new feature, in-flight encryption. This applies to Data Domain Boost connections, and requires v3 of the DDBoost Libraries, which are thankfully found in NetWorker 8.2.

Currently you won’t find the controls for in-flight encryption within NetWorker – it’s something that needs to be enabled or disabled from the Data Domain itself. Thankfully, it’s relatively trivial to do:

Sample in-flight encryption configuration

There are two different commands used here:

# ddboost clients show config

That command lists the current configuration setting for in-flight encryption. The second command I used was to enable in-flight encryption for a host, viz.:

# ddboost clients add centaur encryption-strength high

As is the case with any Data Domain command, you can get a full list of the options for a command by typing help command – in this case, help ddboost clients. A basic dump of the options can be triggered by an incomplete command, too:

All in-flight encryption options

Once in-flight encryption has been enabled, all you have to do is ensure the client direct option is enabled for the client. So long as the client can reach the Data Domain directly via the Boost device names in NetWorker (and the client is running NetWorker 8.2), you’ll get encryption of data during transit.
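(For what it’s worth, you can check or flip the client direct setting from the command line via nsradmin too. A quick sketch, using ‘backupsvr’ as a placeholder server name and my lab client centaur – verify the attribute spelling against your own NetWorker 8.2 install:

# nsradmin -s backupsvr
. type: NSR client; name: centaur
update client direct: Enabled
)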

Those who have been using NetWorker for a while will know that there are other options for triggering encrypted backups, but of course such options are incompatible with successful deduplicated storage on a Data Domain – this method still allows for distributed stream processing and effective deduplication, but keeps the data secure during transit.

As you’d expect, the in-flight encryption works for recoveries as well as backup. Using tcpdump on the Data Domain I captured a couple of recoveries, one with encryption disabled, and one with encryption enabled. The files I recovered were those you’ll typically find in /usr/share/doc on a Linux system, and the results look quite different when viewed in Wireshark.
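(If you want to repeat the test yourself, note that packet capture on a Data Domain is performed through the administrative CLI rather than a root shell. The outline below is from memory – treat the exact arguments as assumptions and confirm with help net tcpdump on your own system first:

# net tcpdump capture ddboost-test interface eth0a
…run the recovery from the NetWorker client…
# net tcpdump stop

The resulting capture file can then be copied off the Data Domain and opened in Wireshark.)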

First, the unencrypted data:

Unencrypted DDBoost Backup Content

With in-flight encryption, the traffic looks considerably different:

Encrypted DDBoost Backup Content

That’s all there is to it – it’s simple and straightforward to enable (and yes, as you may have guessed from the very first screen-capture, you can indeed use wild-cards in client definitions).

External NetWorker Authentication without AD

Aug 18 2014

One of the least used features in NetWorker is the option for external authentication of user accounts for use with NMC. This is normally discussed in the context of integrating NMC authentication into an Active Directory environment, but in theory, other LDAP v3 compliant directory services are compatible.

So over the weekend, I gave myself two goals: learn enough on the Ukulele to be able to play a song my boyfriend would recognise and integrate a lab NetWorker environment with the directory services provided by my OS X Server (10.9).

Surprisingly, I managed both – though perhaps unsurprisingly, the NMC/LDAP authentication was the trickier goal to get sorted out.

The first step I followed was to create a new group in LDAP called ‘nsradmin’, and placed into that group the user accounts that I wanted to be able to administer the NetWorker server. With that done, I switched back to NMC:

External authentication 1

From within NMC’s main window, go to Setup > Configure Login Authentication… and choose to configure an external repository, as shown below:

External Authentication 2

My external repository is pretty basic; as a home server, it’s a fairly flat structure, so the configured repository resembled the following:

External Authentication 3

In the distinguished name, I referenced the full DN to the directory administrator. This is normally undesirable; a preferred option would be to configure another directory user that has appropriate read permissions but limited to no modification permissions. I didn’t feel like diving into that level of control within LDAP and it was only a lab server so I plunged ahead with the actual directory administrator.

The user and group search paths are both straightforward:

  • User Search Path: cn=users,dc=miranda,dc=turbamentis,dc=int
  • Group Search Path: cn=groups,dc=miranda,dc=turbamentis,dc=int

For Apple’s directory services, you need to modify most of the options in the Advanced field, viz:

  • User ID Attribute becomes ‘uid’ for non-AD servers
  • User Object Class is ‘apple-user’
  • Group Object Class is ‘apple-group’
  • Group Member Attribute is ‘memberUid’

For what it’s worth, I confirmed those settings by using the ldapsearch tool on the directory server:

ldapsearch -LLL -h miranda.turbamentis.int -b "cn=users,dc=miranda,dc=turbamentis,dc=int" -D "uid=diradmin,cn=users,dc=miranda,dc=turbamentis,dc=int" -W
...snip...
dn: uid=services,cn=users,dc=miranda,dc=turbamentis,dc=int
uid: services
uidNumber: ...
homeDirectory: /Users/services
cn: Services User
sn: User
loginShell: /bin/bash
givenName: Services
objectClass: person
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: posixAccount
objectClass: shadowAccount
objectClass: top
objectClass: extensibleObject
objectClass: apple-user
# ldapsearch -LLL -h miranda.turbamentis.int -b "cn=Groups,dc=miranda,dc=turbamentis,dc=int" -D "uid=diradmin,cn=users,dc=miranda,dc=turbamentis,dc=int" -W
...snip...
dn: cn=nsradmin,cn=groups,dc=miranda,dc=turbamentis,dc=int
objectClass: top
objectClass: posixGroup
objectClass: extensibleObject
objectClass: apple-group
apple-group-realname: nsradmin
cn: nsradmin
apple-ownerguid: ...
apple-generateduid: ...
gidNumber: ...
apple-group-memberguid: ...
apple-group-memberguid: ...
memberUid: pmdg
memberUid: services

If you’re encountering issues with the configuration (and more importantly, subsequent testing), I’d recommend setting the LDAP Debug Level to 1 so that you can see what sort of LDAP searches NMC is performing – these can be seen from the gstd.raw file in the NetWorker Management Console logs directory. If you’re not sure whether you’ve got all the details correct, by the way, just hit the Next> button … you can’t progress to the next screen unless NMC can successfully query the list of groups and names based on the details you’ve entered.

Clicking Next, you’ll be prompted to confirm which users and roles will have the ‘Console Security Administrator Role’ – this is a critical field: it defines the users who can re-invoke this form after the switchover to external authentication has happened:

External Authentication 4

Make sure there’s at least one actual user account defined in there. This is where I came a cropper the first few times – I assumed I could just use the group in there and it would be sufficient. (I’d need to go back and check against an Active Directory associated NMC server to confirm whether it’s any different there, as I can’t recall off-hand.)

Click Next> again once you’ve populated that – again, NMC will query and confirm the validity of the entered details before it lets you progress:

External Authentication 5

You’ll then be prompted to confirm which servers you want to distribute the authority file to – in my case, since GST services are running on the same host as the backup server itself, it’s two instances of the same server, NetWorker and NMC. The distribution should log as follows:

External Authentication 6

Click Finish, but whatever you do, don’t yet exit NMC. There’s a few more bits and pieces you need to do. Specifically, you have to do the following:

  1. Add at least the referenced security console administrator user (from above) as a user in NMC, assigning the user all security roles.
  2. Equally, add that user (e.g., user=pmdg,host=hostName) to the NetWorker Application Administration list (within the NetWorker Administration console).
  3. Test the login of that user using another browser or RDP session. Once you exit the console session you’ve been using, internally defined accounts will be disabled. (In fact, they already are – it’s just that your current session remains authenticated for as long as you stay connected.)

In my testing, I found that (at least with OSX 10.9 Server LDAP), I couldn’t successfully define administrative NetWorker control via the External Roles field in the User Groups list, viz.:

External Authentication 7

That is, it wasn’t sufficient to define ‘group=nsradmin’ or an external role of ‘nsradmin’ to grant NetWorker administrative rights to anyone in that external group. (I suspect as much as anything that this is a peculiarity between the operation of OS X 10.9 Server directory services and NMC than a failing in NMC itself.)

Even with the slightly less integrated approach, where administrative accounts will need to be named individually within the User group for NetWorker, there are still definite advantages of external authentication integration:

  1. Reducing the number of passwords you have to remember in your overall environment
  2. Auditor satisfaction that an account disabled in directory services will be disabled from NMC access
  3. Auditor satisfaction of named user account tracking (rather than local-to-NMC and possibly generic accounts) in NMC

In case you’re wondering – if someone with a directory account tries to log in and there hasn’t been an account defined in NMC, NMC will automatically create the account, but not assign any privileges to it. This allows a previously authenticated administrator to quickly edit the privileges.

One final note – if you do happen to mess up the authentication process and can’t log in, the short-term solution is quite straight forward:

  • Stop the NetWorker Management Console services
  • On the NMC server, touch a file called ‘authoverride’ in the Management Console ‘cst’ directory.
  • Restart the NMC services
  • Log in as administrator
  • Either switch back to local authentication, or adjust the external roles/etc as appropriate
  • Stop NMC services
  • Remove the ‘authoverride’ file
  • Restart the NMC services
  • Verify it’s working
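On a Linux NMC server, that sequence might look something like the following. Treat it as a sketch – the service script name and install path below are the defaults I’d expect, not gospel, so adjust to your site:

# /etc/init.d/gst stop
# touch /opt/lgtonmc/cst/authoverride
# /etc/init.d/gst start
…log in as administrator and make the required changes…
# /etc/init.d/gst stop
# rm /opt/lgtonmc/cst/authoverride
# /etc/init.d/gst start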

Keeping all that in mind, it’s relatively straightforward to jump into the realm of external user authentication with NMC – and the procedure above is your get-out-of-gaol card if, for some reason, your directory service goes down.

NetWorker Tunnels

Apr 01 2013


NetWorker and firewalls have always been a bit of a challenging combination. It’s become increasingly simplified over time – to the point where even a network luddite such as myself can readily configure port access across a firewall – so long as the firewall administrators or interface are cooperative.

But the rub has always been the need for multiple ports. Many firewall administrators would like to only have to open one port across a DMZ. This is a feature some competing products have cited as an advantage in secure environments over NetWorker and to be honest, in situations where port minimisation is a key required feature, it’s been difficult to argue against that.

A new feature in NetWorker 8 that I hadn’t noticed before however is a new option – tunnelling. As per typical network tunnels, the scenario available to NetWorker now is specifying a single IP address and port number on either side of the DMZ to pass all traffic through.

This functionality designates a communications proxy on either side of the firewall – a new NetWorker daemon, nsrtund, comes into play, and access is configured via aliases and the server network interface option within clients. Currently this looks to be restricted in terms of operating systems – the tunnel proxy and the NetWorker server need to be running any of the following:

  • Solaris/SPARC (10 or 11)
  • Solaris/AMD64 (10 or 11)
  • RHEL on x86/x64
  • SLES on x86/x64

I’m hoping Windows gets added to that mix soon – we know Windows represents a substantial aspect of NetWorker server deployments these days.

I’ve not yet had a chance to run up a configuration to test tunnelling, but the documentation looks comprehensive, and if you’re interested in using it yourself, you’ll find it in the Technical Notes section on the support.emc.com website.
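Conceptually though, the client-side piece appears to build on attributes long-time NetWorker administrators already know. Purely as an illustration – the server and endpoint names below are invented, and the authoritative steps are in the technical note:

# nsradmin -s backupsvr
. type: NSR client; name: dmz-client
update server network interface: tunnel-proxy
update aliases: dmz-client, dmz-client.dmz.example.com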

(Oh, and while I’m at it – kudos to EMC for renaming their documentation files to sensible names reflective of the title and content, rather than the old standard of part numbers. It’s great to see technology companies adjusting things to suit customers better.)

 

Feb 22 2011

I’ve been involved with an increasing number of NetWorker 7.6 SP1 configurations on Windows 2008 R2, and I’m not sure whether what I’ve encountered is specific to Windows 2008 R2 or just a general deficiency in the NetWorker installer’s firewall configuration process. Either way, since it caused some challenges for me, I wanted to note down the issues I’ve observed.

First, the firewall configuration is only applied to the “Public” profile. This is OK for single-interface servers, but if your system has multiple interfaces, it isn’t sufficient – you need to edit the rules to apply to all three of “Domain”, “Private” and “Public”:

Firewall configuration 1

The next issues encountered were relating to tape libraries on storage nodes. In particular, it appeared that the default automatic NetWorker firewall configuration on at least Windows 2008 R2 didn’t add support for the nsrmmgd or nsrlcpd daemons to communicate.

To create these rules:

  • On the server:
    • Copied two of the existing rules – one for TCP, one for UDP – and updated the “Programs and Services” pane to reference X:\path\to\bin\nsrmmgd.exe.
  • On each storage node:
    • Copied two of the existing rules – one for TCP, one for UDP – and updated the “Programs and Services” pane to reference X:\path\to\bin\nsrlcpd.exe.
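(If you’d rather script the rules than clone them in the GUI, netsh can create the equivalents. A sketch, assuming a default install path – substitute your actual NetWorker bin directory:

netsh advfirewall firewall add rule name="NetWorker nsrmmgd (TCP)" dir=in action=allow profile=domain,private,public protocol=TCP program="C:\Program Files\EMC NetWorker\nsr\bin\nsrmmgd.exe"
netsh advfirewall firewall add rule name="NetWorker nsrmmgd (UDP)" dir=in action=allow profile=domain,private,public protocol=UDP program="C:\Program Files\EMC NetWorker\nsr\bin\nsrmmgd.exe"

Repeat on each storage node for nsrlcpd.exe.)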

With these sets of changes in play, NetWorker has behaved a lot more normally.

(Obviously, any firewall changes you make in your environment should be considered against site requirements.)
