Latest News

Intelligent Automation for ITOps Starts with Intelligent Data

Written by Elise Carmichael, CTO, Lakeside Software

There is a growing belief that automation will stand in for certain IT service tasks. While there may be a detractor or two who hold firm that assisting end users will never become automated, most hope that automation will allow IT teams to do more with less by handing Level 1-2 tickets over to trusty bots. Let me be clear, though: automation does not mean replacing humans or their expertise. Instead, I’m talking about automating certain tasks. A recent Gartner inquiry noted that unnecessary interactions will be replaced by automation, but not everything fits this use case. At the same time, the type of engagement is also shifting toward things such as tech bars that supply more empowerment and enablement — a shift away from L1 for L1’s sake, not a shift away from human interaction.

How do we get automation for ITOps right? Considering the essential steps for successful ITOps automation, I think it is imperative to ensure you have the right data to inform automated workflows. Without comprehensive data, automation will require too much human intervention or will be too risky to deploy, in turn negating the inherent value of automation use cases.

In addition to reducing service ticket volume (and, by extension, lowering costs), automation enables IT managers to level up their teams by shifting their collective focus and time to more strategic projects. Upskilling in this way can position IT as an important facilitator of digital transformation projects while still efficiently supporting end users and keeping the IT lights on, so to speak.


But what is the right data upon which to build automated workflows? We know that organizations are swimming in data, whether they fail to use most of it or are outright drowning in it — the ones-and-zeroes equivalent of the water in “The Rime of the Ancient Mariner”: everywhere, but not a drop to drink.

How do we bring data’s value to the surface? I suggest the following three steps:


  1. Start with human expertise.

I would argue that throwing out a life preserver to reel in the right data is really about the analysis of that data. Therein lies the need for humans to ensure that automations — before they roll out on a large scale — are built and trained with relevant data to trigger meaningful and effective service desk workflows. Intelligent automation for ITOps starts with intelligent data. You can’t bot that deep well of human expertise — not even when it comes to generative AI.

If you don’t have the right context for data, one of two things can go wrong:

  • You can misdiagnose the cause of the end-user-facing problem, in turn applying the wrong solution.
  • You miss the mark on attribution — that is, you know there’s a problem, but you don’t know who’s actually experiencing the problem.

The risk in either case is applying a sweeping solution for all end-user systems, potentially affecting productivity or causing disruption from updates and reboots.
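To make the attribution point concrete, here is a minimal sketch of scoping a fix to only the devices actually exhibiting the symptom, rather than pushing a sweeping change fleet-wide. All device names, field names, and thresholds are hypothetical illustrations, not any vendor’s API:

```python
# A hedged sketch of the attribution problem: target remediation at the devices
# actually experiencing the issue instead of every endpoint in the estate.
# All names and thresholds below are illustrative.

FLEET = [
    {"device": "laptop-001", "app_crashes_24h": 0},
    {"device": "laptop-002", "app_crashes_24h": 7},
    {"device": "laptop-003", "app_crashes_24h": 5},
]

CRASH_THRESHOLD = 3  # crashes per day before a device counts as "affected"

def affected_devices(fleet: list[dict]) -> list[str]:
    """Return only the devices that are actually experiencing the problem."""
    return [d["device"] for d in fleet if d["app_crashes_24h"] >= CRASH_THRESHOLD]

# Remediation (patch + reboot) goes only to these devices, sparing everyone else
# the disruption of an unnecessary update.
print(affected_devices(FLEET))  # → ['laptop-002', 'laptop-003']
```

Without the telemetry to compute that target list, the only safe-seeming option is the sweeping rollout the paragraph above warns against.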


  2. Consider the data science hierarchy of needs.

There’s only so much human expertise at hand, however, so it’s important not to try to boil the data ocean. The quantity of the data collected for ITOps applications is less important than being able to answer the “so what?” of your discovery via actionable insights garnered from that data.

Let’s consider how this progression up the classical “data science hierarchy of needs” plays out for a use case related to end-user digital experience:

Raw data: Say you have raw performance data from a laptop (“Laptop is running slow”).

Context: That data starts to become meaningful information when context is applied (“A particular executable is running on the system”).

Analysis: From there, you apply analysis to that information, moving up the hierarchy (“The end user is using a new version of Microsoft Excel, which is slowing down their workstation”).

Actionable Insight: Finally, you arrive at actionable data insights that answer the ever-important “So what?” question (“This particular version of Excel is causing problems, so we need to patch the versions already installed across the IT estate”).

As you start to ascend that data science hierarchy of needs, you can leverage data insights to build out automated workflows, ultimately extracting greater applicable value from that ocean of raw data.


  3. Capture data closest to the end user.

ITOps is all about the end user. Historically, however, hard metrics related to IT uptime have been based on network and server data. As employees branch out to numerous types of remote work environments, IT must be able to capture end-user device data related to performance issues in order to pivot to a proactive IT posture.

Where does automation enter the picture? You can build automation models for discovering issues and auto-deploying fixes. Other use cases include building out service desk workflows or creating chatbots to deploy more solutions directly to the end users, so they can solve certain problems themselves — without even opening an IT ticket. For ITOps data to be intelligent data, it must be collected at the endpoint. Only then can you build the complete picture IT needs to start to connect the data dots, infuse analysis, and take action.

For example, say you gather raw data on the performance of apps end users need for their jobs. With the right context, this data can reveal which apps are working and which are crashing, serving up broader meaning for the enterprise: “It is specifically X application, version X, that’s causing problems.” Automated actions can shift proactive response even further to the left, often before the end user even notices a problem.
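That crash-telemetry example can be sketched as a small aggregation: pool crash events from across the estate, pinpoint which app/version pair is at fault, and hand that finding to a remediation workflow before tickets start arriving. The app names, versions, and event shape are all hypothetical:

```python
# A hedged sketch: aggregate crash telemetry across the estate to find which
# specific app/version is causing problems. All names below are illustrative,
# not a particular vendor's data model.
from collections import Counter

CRASH_EVENTS = [
    {"device": "laptop-001", "app": "AcmeCRM", "version": "2.3.1"},
    {"device": "laptop-002", "app": "AcmeCRM", "version": "2.3.1"},
    {"device": "laptop-003", "app": "AcmeCRM", "version": "2.2.9"},
    {"device": "laptop-004", "app": "AcmeCRM", "version": "2.3.1"},
]

def worst_offender(events: list[dict]) -> dict:
    """Return the (app, version) pair responsible for the most crashes."""
    counts = Counter((e["app"], e["version"]) for e in events)
    (app, version), n = counts.most_common(1)[0]
    return {"app": app, "version": version, "crashes": n}

offender = worst_offender(CRASH_EVENTS)
# A real pipeline would hand this result to an automated remediation workflow
# (patch or rollback) before end users open tickets.
print(f'{offender["app"]} {offender["version"]}: {offender["crashes"]} crashes')
```

This is the shift-left move in miniature: the fleet-wide data names the specific faulty version, so the fix can be targeted and proactive rather than reactive and sweeping.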


From technologists to business enablers

To my mind, there is no reason to fear that automation is going to supplant humans, especially as businesses continue to transform and digitize. Automating routine IT tasks improves operational efficiency and opens wide doors of opportunity for IT to emerge as a true business enabler. The reward for the enterprise at large is an IT team of creative business practitioners who can saddle up with the lines of business as proactive IT consultants instead of ticket-crisis agents.