www.apress.com

17-3-22

Measuring Happiness, the IT Way

By Michael S. Cuppett

People want to be recognized for work well done; it is just human nature. The challenge for IT folks is being able to quantify how application, infrastructure, and operational changes improve the business by making customers happier, making processes leaner, or contributing to revenue increases or cost decreases. Cumulative degradation mathematically reveals what IT leaders have struggled against when defining metrics to prove IT’s value to the business. Imagine the CIO sharing uptime metrics with other CXOs, knowing that the CXOs are hearing daily from their teams that the application systems are unstable and slow. A CIO stating 99.9% uptime undoubtedly receives flak from the other chiefs. Uptime, which means that the computer is running and the application works, is much different from a measure showing productive use of the application to meet business and consumer demands.
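Cumulative degradation can be shown with simple arithmetic: end-to-end availability is roughly the product of the availabilities of every component in the delivery chain, so a stack of individually impressive tiers still falls short of the number the CIO quotes. The tier names and figures below are hypothetical, purely for illustration:

```python
# Hypothetical per-tier availabilities; each tier alone achieves "three nines."
tiers = {
    "network": 0.999,
    "load_balancer": 0.999,
    "app_server": 0.999,
    "database": 0.999,
    "storage": 0.999,
}

end_to_end = 1.0
for availability in tiers.values():
    end_to_end *= availability  # availabilities multiply across the chain

print(f"End-to-end availability: {end_to_end:.4%}")  # about 99.50%
```

Five tiers at 99.9% each deliver roughly 99.5% end to end: about 44 hours of unavailability per year instead of the under 9 hours that "three nines" implies, and that is before slow-but-technically-up transactions are counted.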

The trick is converting technical outcomes into financial or customer benefit measurements. Take time to consider how to communicate the value message. For example, reducing overall database server load may delay capital outlay or reduce monthly DBaaS costs. Decreasing transaction times leads to improved customer satisfaction, which may be reflected in the company’s net promoter score, reduced complaints, or increased revenue. Transform the technical measure into financial- or customer-impacting news.

Customer Experience

Time is everything for IT when measuring customer experience. Cumulative degradation does not account for transaction times, which are the determinant that customers actually observe. Marketing analyzes the number of clicks to purchase, cross-selling, conversions, and more under the customer experience management (CEM) umbrella. Marketing manages the unique needs of customers, while IT needs to make sure transactions are fast! All marketing metrics suffer when transaction performance degrades: response times lengthen and the expectations of every customer go unmet. Although it may not be intuitive how clicks to purchase and cross-selling are affected by performance, consider the likelihood of customer abandonment. Customers become impatient and discard slow transactions, quickly jumping to competitor sites or waiting to try again in the future.

Although systems performing poorly are technically “up” or “available,” customer frustration reveals a different sentiment. Applications not performing to the service level agreement (SLA) or to other expectations should be labeled degraded. When performance worsens, the applications should be considered unavailable. Patience is a virtue, but it is a character attribute not exercised by most people who are waiting for an application to respond. 

Fortunately, the industry has recognized this disconnect and has moved to customer experience—internal and external—as a measure of application delivery success. Online retailers design infrastructure and applications with the intent that a customer never leaves the site due to application unresponsiveness. The goal is to deliver an experience in which the customer finds, reviews, decides, and purchases items quickly and effortlessly. Beyond performance, cumbersome navigation, slowness, or ambiguous checkout form fields can also drive customers to competing sites. Now that DBAs are DevOps team members, issues such as navigation and poorly performing web forms become areas of concern because each team member is responsible for every component contributing to the product. The opportunity to push responsibility to another team—finger pointing—does not exist in Agile and DevOps.

DBAs need to contribute to customer experience improvements and to report those contributions. Measuring customer experience with the cumulative model provides paths forward in determining where to invest time, money, and people. Raising the availability of any segment decreases degradation, leading to improved customer experience. DBA participation traditionally involved database tuning first, operating system and storage improvements second, and the full IT supply chain serving the application third.

Measuring customer experience by transaction times tells a better story. Delivering 99% of transactions in 10 seconds versus delivering 95% of transactions in 1 second makes for two very different customer experience stories. The 99% of transactions in the first scenario all reflect a poor customer experience when matched against the industry standard of 3 seconds. The second scenario represents excellent customer experience from very fast response times, but with the caveat that 5% of the transactions were too slow, which is an unacceptable number. In both cases, unavailability is never good, but my observations show that customers are more gracious when a site is not available (they know that problems can occur) than when a site is slow. As mentioned before, it may be wiser to take a site down to perform a fix than to try to push through the performance problem. When the site is available, it needs to perform excellently to maintain acceptable customer experience levels.
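The contrast between the two scenarios becomes concrete when each transaction is scored against the 3-second standard mentioned above. The workloads below are fabricated solely for illustration:

```python
def share_within_target(times, target=3.0):
    """Fraction of transactions completing within the target response time."""
    return sum(1 for t in times if t <= target) / len(times)

# Fabricated workloads of 100 transactions each (times in seconds).
scenario_a = [10.0] * 99 + [60.0]      # 99% "delivered," but each takes 10 s
scenario_b = [1.0] * 95 + [60.0] * 5   # 95% delivered in 1 s; 5% far too slow

print(share_within_target(scenario_a))  # 0.0  -> every transaction misses 3 s
print(share_within_target(scenario_b))  # 0.95
```

Judged by uptime alone, scenario A looks stronger; judged by the customer's clock, not a single scenario A transaction meets expectations.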

Good News: DevOps and Virtualization

The very good news comes in two flavors, DevOps and virtualization, which are tremendously exciting options for addressing performance degradation. Dynamic resource management is one reason why virtualization became popular. Adding CPU or memory to a struggling guest provides a quick and easy fix for performance problems, at least in many cases. Whether performance actually improves depends on the database product and version: the implementation must be able to consume the additional resources dynamically. If the database supports dynamic memory expansion, it should be able to take advantage of an additional memory outlay.

DevOps presents the opportunity to spin up additional servers to increase capacity, especially at the application tier. Templates can be used to build servers for new projects or to build servers to expand capacity to resolve performance challenges. 
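The capacity decision described above often reduces to a simple sizing rule: given the current load and the capacity of one templated server, how many instances keep utilization inside a safety margin? The function below is a minimal sketch with assumed, illustrative numbers, not a prescription for any particular platform:

```python
import math

def servers_needed(current_load, capacity_per_server, headroom=0.2):
    """Smallest server count keeping utilization at or below (1 - headroom)."""
    usable_per_server = capacity_per_server * (1 - headroom)
    return max(1, math.ceil(current_load / usable_per_server))

# Illustrative numbers: 900 req/s of load, 250 req/s per templated server,
# reserving 20% headroom for spikes.
print(servers_needed(900, 250))  # 5
```

With those assumed figures, spinning up a fifth server from the template resolves the shortfall without touching the four already running, which is exactly the kind of fast, repeatable response templates make possible.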

Application and infrastructure play different roles, depending on where your customer is in the application process. The application presentation and functionality in which customers step through different selections is driven more by the application than the infrastructure. In contrast, once a customer initiates a transaction, infrastructure delivery capability increases in importance. 

Customer experience during screen navigation depends on the compute environment (phone, tablet, laptop, kiosk, etc.) and the client-side interface application code. If the application makes frequent service calls to back-end systems to populate fields as the consumer progresses through the screen, the infrastructure contributes to the experience, good or bad. For demonstration purposes, let’s assume the application is a simple registration form with no back-end calls. The user experience then depends on the application flow from one input field to the next and on the user’s smartphone or computer. User frustration might come from unexplained data entry formatting, but it is unlikely to be related to performance. Once the user clicks Submit, response time becomes everything, making the infrastructure and database germane to the customer experience. As newly anointed DevOps team members, DBAs must ensure that customer data inserts quickly and securely into the database, and must work with other infrastructure teams to drive network and server speediness.

About the Author

Mike Cuppett is a Business Resiliency Architect for a Fortune 25 healthcare organization, where he currently strives to apply DevOps methodologies to Disaster Recovery programs. Previously, he was charged with application and infrastructure reliability, availability, recoverability, and performance as a Solutions Engineer. Cuppett draws on three decades of experience as a DBA and IT engineer in the US Army and the private sector, culminating in a succession of management and senior technology positions at large companies in database administration, solutions engineering, and disaster recovery. Cuppett writes frequent articles on Oracle DBA issues and the business dimension of DevOps for LogicalRead, Oracle Technology Network, and APM Digest. He earned his BS in Management and Computer Information Systems from Park University.

This article is excerpted from DevOps, DBAs, and DBaaS: Managing Data Platforms to Support Continuous Integration by Michael S. Cuppett, ISBN 978-1-4842-2207-2.