As the fourth TBM Conference approaches (Nov 7-10, San Diego), along with my own five-year anniversary at Apptio, I'd like to share a few personal observations about how Apptio and TBM have grown and matured to fit IT organizations that aren't so big or mature. This is the final post in a four-part series; make sure to read the first, second, and third posts as well.

When it comes to IT data, just about everyone thinks their own baby is ugly. Yet the premise of Apptio Cost Transparency (like just about any analytics offering) is that it will generate new value from existing data. No wonder the #1 objection to IT cost transparency is "our data isn't _______ (complete, detailed, broad, clean, consistent...) enough to be __________ (meaningful, actionable, defensible...)."

But a funny thing has happened. "Data quality" has quietly morphed from the biggest obstacle to cost transparency into an objective of cost transparency. There was no singular software breakthrough or proclamation of new best practice: it happened organically among TBM practitioners as they discovered the power of showing the desired state — what their baby could be capable of with a little guidance. This is the evolution I have witnessed.

"My dad has an awesome set of tools. We can fix it!"

When someone expressed skepticism that their data was good enough, our first instinct as Apptians might have been to talk about ourselves. We may have proudly explained how Apptio TBM was purpose-built to ingest raw IT and financial data. We might have explained how our Extract, Load, and Transform approach (ELT, versus the industry-typical ETL) preserves data integrity and trust with a visible chain of custody back to the original raw form. Or how our inference engine could find correlations between disparate datasets (which, when not carefully explained, sounded suspiciously like magic). Or how layers of abstraction between original "raw" datasets, transformed datasets, master datasets, model objects, and the reporting layer enabled Apptio to adapt quickly to inevitable changes in systems and schemas. We'd point to real examples of analytic agility in action, like the customer who switched from one vendor's CMDB to another and had everything working as before within 10 minutes, simply by reconfiguring data mappings.

All true. We are, after all, a software company. And data management is part of what makes Apptio unique; it's why so many customers who tried other approaches found their first success with Apptio. Apptio itself can, indeed, make data better by normalizing, rationalizing, and correlating. Customers didn't just "overcome" data issues; they used Apptio to fix them. And so customers, as well as Apptio, would advise skeptics: "Don't wait for your data to get better before getting started with Apptio." In other words, "don't get fit before you go to the gym." Yet I still don't think we really "got it." We might easily confuse people by talking technology when the real concern was about data that simply wasn't there. We might describe some clever features that genuinely summon forth data that wasn't there before, like using IP ranges to fill in missing location data, but if we overgeneralized, we might inadvertently imply we had IBM Watson-class heuristics capable of reading between the lines of any data. For the record, we do not have a secret Watson. Apptio software cannot, for example, figure out by itself that a GL entry for sports arena naming rights is miscategorized as "IT Facilities" (true story). It does not launch its own application dependency discovery tool. It cannot glean from developers' surnames who is supporting which apps.
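To make the IP-range example above concrete, here is a minimal sketch of that kind of rule-based gap filling, assuming a hypothetical subnet-to-site table; the names and ranges are invented for illustration, and this is not Apptio's implementation:

```python
import ipaddress

# Hypothetical subnet-to-site table that an admin might maintain offline.
SITE_RANGES = {
    "10.1.0.0/16": "Seattle data center",
    "10.2.0.0/16": "Denver data center",
    "192.168.50.0/24": "London office",
}

def infer_location(ip):
    """Return a site name if the IP falls inside a known range, else None."""
    addr = ipaddress.ip_address(ip)
    for cidr, site in SITE_RANGES.items():
        if addr in ipaddress.ip_network(cidr):
            return site
    return None

# Fill in missing location fields without overwriting values we already have.
servers = [
    {"name": "app01", "ip": "10.1.4.23", "location": None},
    {"name": "db02",  "ip": "10.2.9.77", "location": "Denver data center"},
]
for server in servers:
    server["location"] = server["location"] or infer_location(server["ip"])

print(servers)
```

The point of the sketch is how mechanical such inference is: it fills a specific, rule-shaped gap, which is very different from reading between the lines of arbitrary data.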

Static, assumptive data

Technology's inability to autonomously plug data holes does not stop cost transparency from generating value with as-is data. Even the most advanced customers fill in some live data gaps with static values based on some combination of offline data gathering and implicit assumptions. For example, if an app support team does not track hours against applications, the manager may still provide generalized rules about who tends to spend what proportion of their time where.  Whether that is "good enough" depends on the use case. Want to use trends in developer utilization to optimize project throughput? You can't do that with static assumptions about time. Want to include bug fix costs as part of application total costs for application rationalization?  Informed assumptions may be good enough. Then replace them with live data over time, if and where you need it.
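To picture what such a static assumption looks like in practice, here is a hypothetical back-of-the-envelope example (the team size, costs, and percentages are made up) of spreading a support team's cost across applications using a manager's estimated time split:

```python
# Hypothetical rule of thumb: a five-person support team with no time tracking,
# whose manager estimates a 60/30/10 split of effort across three applications.
team_annual_cost = 5 * 150_000  # assumed fully loaded cost per person

assumed_time_split = {
    "Order Management": 0.60,
    "Billing Portal":   0.30,
    "Legacy Reporting": 0.10,
}

# Spread the team's cost by the static percentages until live data exists.
app_support_cost = {
    app: round(team_annual_cost * share)
    for app, share in assumed_time_split.items()
}

print(app_support_cost)
# {'Order Management': 450000, 'Billing Portal': 225000, 'Legacy Reporting': 75000}
```

Whether numbers like these are defensible enough depends entirely on the decision they feed, which is exactly the "good enough for the use case" test described above.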

Shrinking the haystack

How do you know when and where you need better source data?

We've seen a pattern among customers creating their own data quality dashboards, turning the Apptio analytic lens onto the source data itself. Now there are several generations of improvements in out-of-the-box data quality analytics that can spot gaps, duplicates, and inconsistencies within and across datasets. For example, the CIO at First American kept his eye on "unknown server count" (how many servers could not be associated with any application or other use) as a proxy for his organization's level of transparency. And allocation rules could tell you how many dollars are affected by each data issue, so you could prioritize which data improvements to focus on: for example, how many dollars are tied to unknown servers, or how much operational spend has no visible business purpose.
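A metric like "unknown server count," and the dollars riding on it, is simple to express. The sketch below is a hypothetical, stripped-down version of that calculation (the sample records and costs are invented, and this is not Apptio's reporting logic):

```python
# Hypothetical sketch: count servers with no application mapping and total the
# cost riding on them, to prioritize which data gaps are worth fixing first.
servers = [
    {"name": "web01", "app": "Billing Portal", "monthly_cost": 900},
    {"name": "web02", "app": None,             "monthly_cost": 1200},
    {"name": "db03",  "app": None,             "monthly_cost": 2500},
]

unknown = [s for s in servers if not s["app"]]
unknown_count = len(unknown)
unknown_dollars = sum(s["monthly_cost"] for s in unknown)

print(f"{unknown_count} unknown servers carrying ${unknown_dollars:,}/month")
# -> 2 unknown servers carrying $3,700/month
```

Expressing data gaps in dollars rather than record counts is what turns a generic hygiene exercise into a ranked to-do list.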

Self-service data quality analytics are also a powerful tool for pushing data quality responsibilities closer to the front lines, where the data is best understood and fixes are possible. This dollar-driven, top-down prioritization accelerates data improvement by focusing effort where it matters, a welcome relief from "data hygiene" exercises with vague future benefits and no clear end.

“What we had done several times in the past was get a whole central data-cleansing team to come in and do a big project to map infrastructure to applications. They would do it, and declare victory. However, six months later the data quality would be rubbish again as the underlying issues had not been addressed…  (With TBM) we made people inside Shell who are going to be accountable for this forever do their own data quality cleanup. This meant application owners first and foremost, and then indirectly on project owners and technical service owners who would feel pressure from application owners to bubble up the correct project and operational data… Do the data quality work in parallel to and as part of your TBM program.”  Mary Jarrett, IT Manager for Functional Excellence, Royal Dutch Shell, per case study published on TBMC.org.

Power of the Possible

Focus is a huge help to the team maintaining and feeding the Apptio software. But sometimes data source owners need more motivation, especially if filling the gap would require a larger effort, such as changes to process, tooling, or staff responsibilities. Many TBM practitioners describe a dynamic in which sharing the information that is known spawns stakeholder questions about what they wish they knew: to understand, to persuade, to make a decision.

“Data improves as people begin using it. You get to see the synergy when you start showing it and people at first are skeptical, and then they start thinking it's useful and they say, ‘Oh, well. If you can capture this, can you capture that?’” Stacy Shifflett, Director of Business Insights, Freddie Mac

“Everyone has gaps and inaccuracies in their data. Don't let that be your crutch. TBM lets you tie dollars to those gaps so data owners know where to focus their improvement efforts. And they can use the reporting to see the progress.” Sheena Patel, Client Engagement Manager for Service & Performance Management (SPM), Fannie Mae

“Your data's never going to be perfect, and you're never going to have 100% buy-in before you start.  But just throw the information out there because when people see it, their eyes will be wide open. They’ll see what needs to be fixed, not just for TBM but for all the other reasons that system and data are there in the first place.”  Gboyega Adebayo, Lead TBM Analyst, Fannie Mae

Stakeholders signal which data gaps matter, not by judging whether a report or the model is complete, but by asking whether the information is persuasive or actionable enough. Is it better to use an assumption, or to classify a cost as unallocated? Does it need to be actionable, or is directional good enough to spark the right investigation? What threshold of confidence does the stakeholder need to make decisions? Is it better than the information they use now for those decisions?

How cool would it be if software could tell you all that! Apptio can turn raw data into information, but only people can tell you what information they need to understand better, believe more, act faster. But they can't do that from a blank sheet of paper. They need to see what's possible.

To learn what TBM is all about, visit the Learn TBM page.