This month, as we approach the end of 2017, I am reflecting on one of the areas that has come up throughout the year. It's around IT assets: what they are, where they are, how they are being used, what dependencies surround them, and what risks are posed by not knowing this information. When speaking with organizations, we hear the same recurring concerns: What do I have? Where is it? Who is using it? What are my risks around using these technologies or applications? In many cases the answer is, “I don’t know.” And as you can imagine, “I don’t know” doesn’t work well as an answer to leadership, especially when something happens. But “I don’t know” is not an uncommon answer today in our increasingly complex IT and multi-cloud environments. Let’s unwrap that a bit this month.
Since IT is the bedrock of all we do, it seems logical, and just about self-evident, to state that we must have a good understanding of what we have. You’d be right in thinking this, and while that seems like such a basic expectation, it’s not as simple as it sounds. With on-prem technology, multi-cloud stacks, and applications that depend upon each other, it can be hard to know for sure what’s out there and what it’s doing.
I’ll give you a great real-world example we see that you can probably relate to: a server or switch that no one is quite sure how it's being used. An organization will have a server count (physical, virtual, cloud, etc.) where they thought they had 56 servers, and it turns out they find 61. What are those other 5 about? The natural tendency (and what many want to do) is to just shut them down to see what happens, see who calls and complains. But they can’t, because who knows what will happen when they disconnect them? Are they running a critical workload? What if other IT workloads and/or applications depend on that server? Think of the potential implications if you just shut it down. Same with a switch: are you sure which IT services and applications are connected through it? What happens if you just pull the plug? Most organizations leave these devices running until they get good visibility into what they are doing.
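At its core, that 56-versus-61 gap is just a set difference between what your records say you have and what discovery actually finds. A minimal sketch of that reconciliation, using made-up host names and an entirely hypothetical inventory list:

```python
# Sketch: reconcile a recorded inventory (e.g., an export from a CMDB or
# spreadsheet) against hosts actually discovered on the network.
# All host names and counts below are hypothetical illustrations.

def reconcile(recorded, discovered):
    """Return (unknown, missing): hosts discovered but never recorded,
    and hosts recorded but not discovered (possibly retired or offline)."""
    recorded, discovered = set(recorded), set(discovered)
    unknown = discovered - recorded   # the mystery servers
    missing = recorded - discovered   # candidates for retirement review
    return sorted(unknown), sorted(missing)

# The 56 servers you think you have...
recorded = [f"srv-{i:02d}" for i in range(1, 57)]
# ...and the 61 that a discovery sweep actually turns up.
discovered = recorded + [
    "legacy-app-01", "test-db-02", "backup-03", "lab-switch-01", "unknown-09",
]

unknown, missing = reconcile(recorded, discovered)
print(f"{len(unknown)} hosts on the network with no record: {unknown}")
```

The hard part in practice is the discovery sweep itself and mapping each unknown host's dependencies before anyone touches it; the reconciliation logic is the easy bit.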
This is why a tool that gives you visibility into your environment is important. Some organizations have tools to do this; many do not. Some have tools they feel are just “good enough” to provide basic information, and they are OK with that. But is “good enough” really OK?
Visibility into your IT environment, your applications, and how they work together is not just a “nice to have”; it’s critical to mitigating and helping prevent risks, though, to be frank, nothing can “prevent” everything in IT. It comes down to this: if you don’t know what you have, how do you know who’s using it? It could be shadow IT gone rogue, since workloads and applications may no longer sit in the same location. Workloads can be multi-cloud and/or hybrid (with interdependencies), so what does that mean for your security posture? The threat landscape will only continue to get worse, as we all know.
Without being sure of where your critical assets, workloads, and applications sit and what they depend upon, think of the potential effect on data security and governance. Does your critical and sensitive data sit within an environment that you control and have visibility into? The answer to that question can reveal a lot.
A comprehensive strategy and tools for discovery can help you deal with what most operations teams live by: you can’t monitor what you don’t know about, and you can’t manage and remediate what you don’t monitor. You can’t even get to the monitoring stage if you don’t know what you have, and you certainly can’t remediate what you don’t know you have. Some of the areas in which discovery tools and a strong strategy can help are:
Reduce costs. Knowing where your assets and workloads reside, and the dependencies that surround them, will help you avoid overpaying for the resources you need. You might find that you have too many assets or resources for your workloads, or places where you’ve over-provisioned. Good, accurate visibility will help you plan better and spend only what you need. And if it turns out that you are overspending, at least you'll have the visibility to understand why. It can also help mitigate costs around shadow IT.
Reduce complexity. Knowing what you have in your environment lets you strive for what we all want in our IT environments: reducing complexity and making things easier. You can’t make things less complex without deep visibility. This is particularly true today, when environments are inherently more complex than ever, spanning on-prem, multi-cloud, and hybrid, often at the same time, so application and workload dependencies are not as clear as they once were.
Retire old assets. You might find that you have lots of old assets that are no longer needed that can be removed from production and retired. This saves maintenance and operational cost and can free up valuable space.
Keep systems current. Systems must be patched regularly, and how can you patch them if you don’t know what and where they are? Leaving them unpatched opens up the potential for security risks and exploits.
Keep track of where your data sits. Knowing your assets and applications, where they sit, who uses them, and their dependencies helps you understand your exposure and risk, and supports your data governance and compliance strategies.
Prevent vendor lock-in. A good discovery tool (or suite of tools) should be completely vendor independent and agnostic, working across all platforms regardless of manufacturer. We see organizations use tools that come integrated with manufacturer solutions. Some of these tools work well, but in many cases only for the particular manufacturer or product they were included with. That can lock organizations into certain vendors, which is not a good thing, or force them to purchase multiple tools from multiple vendors to perform the same function across an enterprise. Talk about complexity. The right tools and strategy can prevent both.
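To make the point above about keeping systems current a bit more concrete: once you have an accurate inventory, flagging hosts overdue for patching becomes a mechanical report. A minimal sketch, with hypothetical hosts, dates, and threshold:

```python
from datetime import date

# Hypothetical inventory records: (host, last_patched). A host you never
# discovered can never show up in a report like this, which is the whole
# point of discovery.
inventory = [
    ("srv-01", date(2017, 11, 20)),
    ("srv-02", date(2017, 6, 3)),
    ("legacy-app-01", date(2016, 1, 15)),
]

def overdue(inventory, today, max_age_days=90):
    """Return hosts not patched within max_age_days of today."""
    return [host for host, patched in inventory
            if (today - patched).days > max_age_days]

print(overdue(inventory, today=date(2017, 12, 1)))
# → ['srv-02', 'legacy-app-01']
```

Real discovery and patch-management suites do far more than this, of course, but every one of them depends on the same precondition: a complete, trustworthy list of what you have.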
I mentioned it earlier: risks will continue to evolve, so we must all continually be prepared. Good visibility into what we have is the baseline that everything else should be built upon.
This simple concept of knowing exactly what you have and what it does is certainly more complex than it seems on the surface. That’s why we recommend exploring different tools and vendors to see how they compare in functionality, visibility, and value, and to find what's right for you and your environment. The value these tools bring can be very pronounced, and oftentimes they pay for themselves quickly by reducing operational costs. But it’s not just about reducing costs, which a good discovery tool will do; it’s about peace of mind.
What we all do in IT is complex enough, so let’s try to make it a bit easier. This is where good discovery tools and a strong strategy will help. Don't forget to lean on your partners to guide you, as they will. Visibility into your assets, workloads, applications, environment, and dependencies is a good place to start.
Andy Jonak's Blog: www.andyjonakblog.com