3 reasons why your Enterprise should start moving to the Cloud right now

 

The cloud is no longer a nice-to-have, but increasingly a necessity.

Traditionally, enterprises have been reluctant to embrace cloud technologies, especially on security grounds. But enterprises are now moving past their fears about security and regulatory risk and embracing the business agility of public cloud services.

The wave of fintech startups, digital challengers and tech giants presents a threat to big enterprises that is unlikely to go away any time soon. Cloud computing can help these enterprises respond more effectively by driving down costs, enabling innovation and creating the flexibility needed to respond to change.
The main reasons to start embracing the cloud are:

 

1. Not Your Core Competence

Managing infrastructure is not your core competence. Banks, insurers, pharmaceutical companies, automotive manufacturers and the like need to focus on the core products that give them a competitive advantage and differentiate them from the competition. Managing data centers is not a differentiator; it is a supporting service. Having your own power plant to produce electricity won’t engage your customers or make them buy more of your products, unless you are in the electricity business.

Technological advances these days are very hard to keep up with. This puts additional pressure on your IT operations staff to deliver faster than ever before. The problem is that your IT operations people are probably already underwater trying to keep up with existing demand from the business. It’s no longer enough to provision a server and wire it up to the rest of the network. Your IT operations need to handle modern requirements: deploying applications packaged in Docker containers onto highly virtualized environments that auto-scale up and down based on demand. They need to set up fully automated continuous integration and deployment pipelines so that your enterprise can deliver new products to the market faster than ever. They might need to support new PaaS environments so that your self-operated data center can one day be seamlessly expanded into a hybrid cloud.
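At its core, the auto-scaling mentioned above boils down to a simple decision rule. A minimal sketch in Python – all names and thresholds here are hypothetical illustrations, not any particular cloud provider’s API:

```python
def desired_instances(current: int, avg_cpu: float,
                      min_instances: int = 2, max_instances: int = 20,
                      scale_up_at: float = 0.75, scale_down_at: float = 0.25) -> int:
    """Hypothetical scale-out/scale-in rule: add one instance when average
    CPU utilization crosses the upper threshold, remove one when it falls
    below the lower threshold, and always stay within configured bounds."""
    if avg_cpu >= scale_up_at:
        current += 1
    elif avg_cpu <= scale_down_at:
        current -= 1
    return max(min_instances, min(max_instances, current))
```

In a real cloud setup this policy lives in the provider’s auto-scaling service rather than in your own code, but the decision it makes is essentially this one.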

When (or rather, if) the IT folks finally bring their internal shop in order, the time will come to refresh the whole hardware and software infrastructure and to rationalize the application inventory, which will only add more pressure. And yes, all of that needs to be done while still cutting IT operational costs.

Businesses cannot wait for IT operations to catch up while more technologies are thrown at them. What often happens is that the business eventually gives up and forms a shadow IT organization that can fulfil its demands. That’s usually the beginning of the end for your IT organization.

Operating modern data centers is a job for top-notch engineers. Top-notch engineers typically want to work for infrastructure and platform specialists, not for banks, insurers, pharmaceutical companies and the like.

Unless your hands are tied by existing contracts, there is no reason to keep doing “the plumbing” that cloud providers do far better than you, especially when it is not your core competence.

 

2. Cost Optimization

Cloud computing means enterprises no longer have to invest heavily in dedicated hardware, software and manpower. By using the cloud, they can scale up vast amounts of technology infrastructure on demand and pay only for what they use. Capacity planning no longer needs to be part of the enterprise vocabulary.

Having such vast compute capacity available on demand is extremely important in industries where milliseconds can mean millions in profit. But even if you are not in such an industry, the pay-as-you-go model that cloud vendors offer is more than appealing. What’s the point of keeping your Dev, Test, UAT and other non-production environments up and running after 6pm when everybody has gone home? You can simply switch them off and save costs, then switch them back on at 9am the next day when people come back to work.
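That switch-off policy can be expressed as a small scheduling rule. A hedged sketch in Python – the environment names and office hours are illustrative assumptions, and a real implementation would call the cloud provider’s API to actually stop and start the instances:

```python
from datetime import datetime

# Hypothetical policy: which environments run around the clock,
# and which run only during office hours (09:00-18:00 on weekdays).
ALWAYS_ON = {"prod"}
OFFICE_HOURS_ONLY = {"dev", "test", "uat"}

def should_be_running(env: str, now: datetime) -> bool:
    """Return True if the given environment should be up at `now`."""
    if env in ALWAYS_ON:
        return True
    if env in OFFICE_HOURS_ONLY:
        return now.weekday() < 5 and 9 <= now.hour < 18
    return True  # unknown environments: leave them alone
```

A scheduled job evaluating this rule every few minutes is enough to capture most of the non-production savings described above.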

GE Oil & Gas says that on average, applications moved into the cloud had a 52% reduced cost of ownership compared to on-premises applications.

FINRA, one of the largest independent securities regulators in the United States, has now moved about 75% of its operations to the cloud. It estimates it will save up to $20 million annually by using cloud instead of a physical data center infrastructure.

 

3. Boost Innovation

For most businesses, innovation means first gaining an understanding of product offerings or key processes, and then working out what to change to get a better result. The implementation of the innovation almost always becomes an IT project. In other words, the last mile of most innovation that makes a difference goes through IT.

The real gold mine in any enterprise is the data buried in numerous databases throughout the company. Making sense of this valuable information to generate new business prospects is where big data analytics gains more traction every day. However, setting up a full big data analytics cluster in your own data center is no easy task. It requires a large number of machines as well as the skills to set them up and operate them. While IT teams spend months deploying Hadoop clusters, the business is losing valuable time and the value of the data is deteriorating each day.

Adopting cloud-based IT can eliminate the usual IT roadblocks to innovation. Business users and developers can scale resources up and down as needed, freeing IT to spend more time creating and less time configuring. In the cloud, the new IT paradigm is simplicity and flexibility.

 

Which cloud vendor to choose?

According to Gartner, there are 14 cloud vendors out there. The big players are Amazon Web Services (AWS), Microsoft Azure and Google Cloud.

AWS and Azure are true enterprise players, while Google is still paving its way. Google has traditionally been the platform of choice for startups, academia and researchers, but lately it has been investing a lot of effort in winning bigger enterprise customers.

I would recommend going with AWS as the primary cloud provider, with Azure as the backup solution. Modern application packaging solutions (such as Docker) provide vendor neutrality that enables your applications to run seamlessly on any cloud provider.

AWS is the definite market leader, as the numbers show:

  • 37% of all Internet traffic goes via Amazon cloud
  • $8 billion in revenue last year
  • All 14 other cloud providers combined have 1/5th the aggregate capacity of AWS
  • Serving 1 million private customers and 600 government agencies
  • 11 AWS regions worldwide and more than 30 data centers.
  • Big names already using AWS: World Bank, Capital One, Salesforce, J.P. Morgan Chase, Nasdaq, FINRA, Pacific Life Insurance, Unilever, Philips, Novartis, Siemens, AON, Tesco Bank, RSA Insurance, Airbnb, Netflix etc.

 

So, what’s your excuse for not embracing the cloud right now?

Enterprise API – The only IT strategy that matters

We all know there is a problem with IT in today’s enterprises. The CIO knows, the CFO knows, the CEO knows and everyone below them knows. Architects, Operations, Developers definitely know. The problem however still persists.

We would all love to have perfectly connected IT systems behaving like clockwork. Systems that are maintainable, extendable, affordable, performant and many other XXXables.

 

Figure 1: How we wish IT systems would work together

 

Unfortunately, the reality of a typical IT landscape in a large enterprise is much more chaotic. Systems that are anything but the XXXables above.

 

Figure 2: How IT systems are interconnected in reality

 

The reason the problem still persists in today’s enterprises is complexity. Years and years of project after project, each adding new functionality on top of old legacy.

Even though everyone accepts that there is a problem, nobody is willing to address it properly. It’s a huge elephant in the room that everyone ignores, because there is no obvious solution. It’s almost impossible to untangle all those dependencies. It’s hard even to document them properly, let alone fix them.

Many enterprises just accept the situation as-is. The IT systems they use are outdated but still good enough to do the job.

As Tom Goodwin stated in his article, we are at the peak of complexity – and it sucks. We live in a hybrid of old-world and new-world systems. We use Slack and WhatsApp to communicate with other developers, but then send status reports as emails. We use fancy mobile apps to book our flights, but then check in at the airport where the staff are still looking at green-screen terminal applications.

Legacy IT systems are built to last. It’s more likely that an enterprise itself will cease to exist before its IT systems stop working. Even if an enterprise survives the competition and market pressure, the problem won’t be the IT systems themselves, but the people who have the knowledge to maintain those systems. Try finding a COBOL developer these days who is not retired. If your key systems are written in COBOL, you already have an issue.

The solution is not an IT strategy that decommissions a few big systems just to replace them with newer ones. The dependencies remain. The problem still exists; it is just postponed to another era.

The real solution to tackling complexity has been known and used by programmers for a very long time. That solution is the Facade Pattern. A facade hides the complexities of a system and provides an interface to a client; the client accesses the system through the facade. The role of the Facade Pattern is to provide different high-level views of subsystems whose details are hidden from users. Hiding detail is a key programming concept. What makes the Facade Pattern different from other patterns is that the interface it builds up can be entirely new. It is not coupled to existing requirements, nor must it conform to existing interfaces.

Facade Pattern thinking can be applied to whole enterprises as well, not just to low-level code. API services are an example of the Facade Pattern. The only IT strategy that can reduce the complexity produced over many years of development, mergers and acquisitions is to hide all that complexity and start thinking about the API services that your enterprise has and cares about.

API stands for “Application Programming Interface”. The name has roots in the 1970s, when the C programming language was just picking up, and alludes to the externalization of low-level application interfaces. The name works fine at the application level, where different components need to be loosely coupled and separated by interfaces. At a higher level, however, it is a bit misleading. When we talk about enterprise API management, we do not care about internal application APIs. We care about functions that are useful to a user, client or customer.

As an example, take a typical enterprise system: a document management system. Document management systems have hundreds of internal APIs, but as users we basically need only two functions – GetDocument and PutDocument.

Another example might be a Human Resources system that manages the records of all internal employees of a company. Such systems have hundreds of functions. But from the outside, we can start by externalizing only the few that are useful to, say, an employee portal – FindEmployee, AddEmployee, RemoveEmployee.
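The idea can be sketched in a few lines of Python. The `LegacyHRSystem` class, its internal methods and the three high-level functions are all hypothetical names, chosen only to illustrate the facade:

```python
class LegacyHRSystem:
    """Stand-in for a legacy back-office system with hundreds of
    internal calls; only three are shown here."""
    def __init__(self):
        self._records = {}
        self._next_id = 1

    def _insert_row(self, data):
        emp_id = self._next_id
        self._next_id += 1
        self._records[emp_id] = data
        return emp_id

    def _query(self, emp_id):
        return self._records.get(emp_id)

    def _delete_row(self, emp_id):
        self._records.pop(emp_id, None)


class HRFacade:
    """The enterprise API: three high-level functions that hide
    everything else about the legacy system."""
    def __init__(self, backend: LegacyHRSystem):
        self._backend = backend

    def add_employee(self, name: str) -> int:
        return self._backend._insert_row({"name": name})

    def find_employee(self, emp_id: int):
        return self._backend._query(emp_id)

    def remove_employee(self, emp_id: int) -> None:
        self._backend._delete_row(emp_id)
```

An employee portal talks only to `HRFacade`; when the legacy system is eventually replaced, only the facade’s internals change, not its consumers.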

When we start thinking in this direction, our complexity suddenly becomes more manageable.

 

Figure 3: Exposing Enterprise APIs

 

New projects should no longer be another brick in the complexity wall. They should be just temporary views into our existing enterprise API services. Applications will come and go, as they should, but the enterprise API services need to stay.

A new customer portal should only use the enterprise APIs that are exposed. It should not be concerned with the legacy complexity. Internal complexity needs to be managed by a set of aggregated services that orchestrate access to all the internal systems needed to fulfill the high-level API call.

 

Figure 4: New Customer Portal using Enterprise APIs

 

Over time, internal systems can be replaced or decommissioned without impacting the newly built applications that use their services. Enterprise operations continue without interruption while the technical debt is paid off gradually.

With this architecture in place, when we need to upgrade one of the existing applications to a new version, or replace the entire application with a new one, the existing HR Services, Document Management Services and Policy Management Services layers protect our API consumers from the change. It is in those layers that we can reliably define reusable business rules and convert between the logical interfaces of the new Customer Portal Application and the application-specific interfaces of our back-office applications.
So what happens when we publish only logical, application-neutral services through our API management layer? It turns out that we will naturally be publishing APIs that make sense to our enterprise. We will be building an interface to our enterprise. And we will be doing all of our customers a favor, since it gives them a greater chance of being set on the right path from the very beginning.

API management can be taken even a few steps further. Enterprises can implement three levels of API management:

  • Client API Management – entry point to all front-end services used by client applications
  • Enterprise API Management – entry point to all backend services used by front-end services
  • Data API Management – entry point to all data sources used by backend services

Implementing an Enterprise API strategy is never a technical issue. Many modern software products already expose most of their functionality via APIs. They are usually built with a microservice architecture, where APIs are at the heart of the solution: they are the contract between the microservices. Older monoliths are not there yet, but there are still ways to “interact” with them – e.g. building a layer around the monolith that exposes REST API endpoints and internally maps them to native calls to the business components (see the Strangler Application pattern).
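Such a strangler-style layer can be sketched in Python. Everything below is illustrative – the route table and the legacy and new handlers are hypothetical – but it shows the key property: consumers call one stable interface while routes are re-pointed from the monolith to new services, one at a time:

```python
# Hypothetical strangler layer: every call goes through a single router
# that maps a REST-style route to a handler. Routes start out pointing
# at the legacy monolith and are re-pointed to new services gradually.

def legacy_get_policy(policy_id):
    # Stand-in for a native call into the monolith's business components.
    return {"id": policy_id, "source": "monolith"}

def new_get_policy(policy_id):
    # Stand-in for a call to a newly built microservice.
    return {"id": policy_id, "source": "microservice"}

ROUTES = {("GET", "/policies"): legacy_get_policy}

def handle(method, path, *args):
    """The stable interface that all consumers use."""
    return ROUTES[(method, path)](*args)

# Migrating a single route touches only the route table,
# not any of the consumers calling handle():
ROUTES[("GET", "/policies")] = new_get_policy
```

In practice the router would be an HTTP layer or an API Gateway rather than an in-process dictionary, but the migration mechanics are the same.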

The problem with Enterprise API strategy is the same as with any other IT strategy. It requires buy-in and support from the highest levels in the company.

In 2002, Jeff Bezos (then Amazon CEO) sent an email to all employees mandating the use of API services:

1. All teams will henceforth expose their data and functionality through service interfaces.

2. Teams must communicate with each other through these interfaces.

3. There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.

4. It doesn’t matter what technology they use.

5. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

6. Anyone who doesn’t do this will be fired.

As an outcome of this mandate, Amazon Web Services (AWS), the leading cloud provider, was born; every service in AWS is API-driven.

API management requires ownership. It requires a dedicated API Management Office with API evangelists who can teach development teams how to externalize their functions as APIs. It requires new governance processes, rules, standards and guidelines to monitor the implementation of the enterprise API strategy.

There are many API management tools on the market. The common name that has emerged for them recently is API Gateway.

API Gateway services have a multitude of features. Their main focus is to make designing, deploying and managing an API easier, as well as to ensure that it is safe, secure and functional.

Some of the benefits of using API Gateway solutions are:

  • Documentation – one of the most common problems for developers is figuring out how an API works. An API management service has to provide an easy way to read the documentation and enable developers to “try before they buy”.
  • Analytics and Statistics – it is critical to understand how people use our API and to get insights for our business.
  • Deployment – should be flexible and support public or private clouds, on-premises implementations, or combinations of these.
  • Developer engagement – engaging with API consumers, developers or partners is important. An easily accessible developer portal significantly facilitates onboarding.
  • Security – APIs carry sensitive data, so it is important to protect the exposed information. The service has to provide at least identity and access management for users and developers.
  • Availability – the service should be highly available, scalable and redundant. An API environment can become demanding, and the service should be able to handle errors, problems and temporary traffic spikes.

Popular tools on the market for API management are:

To be flexible and cost-effective, and to adapt rapidly to changing customer expectations and behavior, all enterprises should consider and adopt an Enterprise API strategy. Being able to answer “what are my key Enterprise APIs?” might be the difference between failing and succeeding in the millennial and post-millennial era.