After nearly 30 years in the software industry, I have learned a tremendous amount about how organizations use and manage both third-party (purchased) and in-house (self-built) software. There are applications used to grow sales, improve products, and empower employees. There are applications to manage costs, communicate with customers, sync with suppliers, and inform investors. Any way you slice it, software plays a pivotal role in how modern enterprises function, and as such, more and more products are being offered by more and more vendors. Increasingly, those products also come with a message of out-with-the-old (on-premises) and in-with-the-new (cloud).
This “tool renewal” process has cycled many times in the past, with software moving from mainframes to minicomputers to client/server models. The current revolution leans toward moving software and processing to the cloud. The justification for purchase is usually tied to some or all of the following savings:
- lowering the cost of administration (people savings)
- shifting from large upfront licenses to annual subscriptions (cash-flow savings)
- leveraging new technology such as mobile (bring-your-own-device savings)
- improving ease of use (training savings)
- reducing or eliminating server hardware (equipment savings)
However, these benefits don’t come without cost. As the Chief Financial Officer of TIBCO, a provider of enterprise software products, I’m often asked about the expenditures involved in adding/replacing technology across the enterprise and how to justify the business case for new software. The particulars of my answer depend a lot on the size of the organization.
For example, smaller organizations (less than $500Mn in revenues) can usually replace their in-house software with the latest third-party applications fairly painlessly, as they will likely have limited process customizations to maintain.
The answer for larger organizations is decidedly different. Though the current tool renewal push may be screaming ‘all new, all cloud, all the time,’ the reality for most organizations is much more complex.
In larger companies, CIOs are forced to function with a combination of older on-premises software products, some of which have been highly customized, and state-of-the-art (but still immature) cloud products, which tend toward vanilla functionality. This is the situation we face at TIBCO. Our justification for purchasing new software applications is driven less by expense savings or the timing of cash flows than by the need to fit the software tightly to our business processes. Unfortunately, many newer cloud-based applications are thin on functionality, so adopting them wholesale would force us to become quite generic in our approach to process—how we sell and connect with our customers and partners, build and enhance our products, and manage our internal operations.
Our philosophy has been to leverage new cloud applications for some of our external-facing processes, while connecting those applications to the many existing on-premises applications that handle our product management, configure-to-order, quote-to-cash, and purchase-to-payment cycles, financial accounting, and employee management processes. We leverage our own messaging, business process management, and integration technologies to connect all of the disparate apps.
We’ve relied on some of the unique characteristics of our analytics products to ensure that this blended approach is delivering on functionality expectations. In addition to the standard desktop analytics solutions used by developers to build dashboards and visualize trends, our Analytic Fabric Service aggregates data from our business applications and collaboration platforms in order to combine insights from all those sources—the aggregate of which can be delivered in a usable form to mobile or PC platforms.
This aggregation ability greatly facilitated our recent decision to implement Apttus for our configure-to-order and quote-to-cash processes. We leveraged our technology to integrate Salesforce.com with Apttus, our internal product management systems, our internal support systems, our internal entitlement systems, and Oracle. We focused most of the customization effort on our own integration and business process management products, and kept the cloud applications as generic as possible.
This helps us in two ways. First, it becomes far easier to replace applications if new contenders come along that present a better fit with our processes. Second, it becomes much simpler to upgrade versions of the cloud applications we are using. Cloud applications are often revised several times per year, so this is an important consideration. If we needed to retest all our customizations every time our cloud apps were updated, we would have time for nothing else!
As with most large enterprises, we are moving to the cloud—methodically. We are increasingly deploying our internal applications on hybrid cloud infrastructure. We purchase CPU cycles, memory, and storage on a capacity-subscription basis from third-party datacenters like Amazon Web Services, and virtualize our applications on that infrastructure. This offers significant hardware and datacenter cost savings for our company, and we plan to push most of our internal applications to hybrid clouds using virtualized servers over time.
This is not quite the same as being on a true SaaS system, such as Salesforce or Apttus, but it does offer some of the same data security and availability advantages. In addition, we can continue to customize as we desire, ensuring a tighter fit between our processes and our applications without introducing unnecessary complexity. A focus on connecting is driving this process as well.
For example, one of the challenges with moving applications from on-premises to hybrid cloud environments is the need to retool connections between those applications. Again, we are leaning on our own technology to ease the way forward. Mashery cloud application programming interfaces (APIs) enable us to rapidly connect and disconnect applications to/from each other in our datacenter as well as in the cloud. APIs specify how software interacts with other software, so rather than individually addressing each application’s interactions with one another, we use an API management tool to handle the process. Thus, once applications have been connected through Mashery, information can flow among them regardless of where they are housed—in the cloud, a hybrid cloud, or a traditional datacenter.
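To make the pattern concrete, the sketch below shows the general idea behind API-mediated integration: each application calls a named service through one managed routing layer instead of wiring point-to-point connections, so a backend can be swapped (on-premises to cloud, or one vendor for another) by re-pointing a single route. This is a minimal illustration of the pattern, not Mashery's actual interface; all names and payloads here are hypothetical.

```python
# Minimal sketch of the API-management pattern: applications call one
# managed route per service instead of connecting to each other directly.
# All service names, handlers, and payloads below are hypothetical.

class ApiGateway:
    """Routes named service calls to whichever backend currently hosts them."""

    def __init__(self):
        self.routes = {}  # service name -> handler (on-prem or cloud backend)

    def register(self, service, handler):
        # Swapping a backend is just re-pointing the route; callers
        # never learn where the service is actually hosted.
        self.routes[service] = handler

    def call(self, service, payload):
        if service not in self.routes:
            raise KeyError(f"no backend registered for {service!r}")
        return self.routes[service](payload)


# Example: an order flows from one app to a quoting service without
# either side knowing where the other runs.
gateway = ApiGateway()
gateway.register("quoting", lambda order: {"quote_id": 1, "items": order["items"]})

quote = gateway.call("quoting", {"items": ["license", "support"]})
# quote -> {"quote_id": 1, "items": ["license", "support"]}
```

The benefit described in the article follows directly: because callers only know the route name, replacing or upgrading the backend behind `"quoting"` requires no changes in the applications that call it.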
This focus on connecting and blending technologies has led us down some interesting avenues. Our next project is to enable smarter analytics engines to generate alerts when business rules are violated during our key operational processes. In addition, we are looking for ways to feed social media chatter into our data collection and alerting processes, so that customer questions that do not filter directly to our support forums can still get picked up and responded to. The possibilities are practically limitless once you start envisioning how you can connect and exploit disparate technologies and applications to best suit your preferences.
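The business-rule alerting idea above can be sketched very simply: each operational event is checked against a list of rules, and any violation produces an alert message. The specific rule (a discount cap) and the event fields are illustrative assumptions, not actual TIBCO business rules.

```python
# Hypothetical sketch of rule-based alerting over operational events.
# The discount-cap rule and event fields are illustrative assumptions.

def check_rules(event, rules):
    """Return alert messages for every rule the event violates."""
    return [message for predicate, message in rules if not predicate(event)]

# Example rule: quotes may not be discounted more than 40%.
RULES = [
    (lambda e: e.get("discount", 0) <= 0.40, "discount exceeds approved cap"),
]

alerts = check_rules({"quote_id": 7, "discount": 0.55}, RULES)
# alerts -> ["discount exceeds approved cap"]
```

In practice such checks would sit inside an event-processing or business process management layer, firing notifications as events stream through rather than being called one at a time.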
At TIBCO, we are optimistic that we can blend cloud and on-premises applications together in seamless ways to accomplish all our business objectives. While we are lucky to have a strong portfolio of products that we can leverage to do this, I believe many other large organizations have the opportunity to do the same—whether they leverage our products, those of our competitors, or their own novel solutions.