A Guide to AWS Certifications


Many of my customers have asked me what type of AWS certification would be the right one for their job roles. In this article I will try to map IT roles that are typically found in enterprises today to the existing AWS certifications. There will also be a brief explanation of each AWS certificate with some recommended study guidelines. Hopefully that can help a bit in deciding which one to take.

AWS certifications are considered to be one of the toughest in the IT industry. Having achieved at least one of them is beneficial to show your expertise, to give you recognition and credibility with your employer and customers, as well as to advance your career and provide more options for your next job.

Currently there are 9 AWS certifications. However, not all of them are needed for the job roles you are currently doing in your company. Unless you are a “certification junkie” or see achieving all 9 as a personal challenge, there really is no reason to go down that route. It took me about 2 years to get all 9 certificates. However, it is a special feeling having achieved all of them.

Here is a brief description of each AWS certificate (for a more detailed explanation, check out the official AWS certification page):

  1. Cloud Practitioner

This is an entry level certificate. It is not a technical exam; it covers the benefits of the cloud, cloud economics, cloud migration, security in the cloud and a basic understanding of compute and storage services. The intended audience is those who are just starting their cloud journey. This is the easiest certificate to get compared to the other AWS ones. On the difficulty scale from 1 (easy) to 10 (hard), I would rate it as level 2.

  2. Associate – Solutions Architect

Solutions Architect Associate verifies your knowledge of designing applications in AWS. Questions are very technical and you would need to know a broad set of AWS services in order to pass the exam. It is essential to know how to design scalable, secure and fault tolerant applications by applying AWS best practices. On the difficulty scale, I would rate this one as 4.

  3. Associate – Developer

The Developer certificate is similar to the Solutions Architect one, but the focus is more on developing, deploying and debugging applications in AWS. You would need to know all the concepts/services from Solutions Architect as well as some more developer-oriented ones, such as CI/CD pipelines, working with SDKs and CLIs, etc. On the difficulty scale, I would rate this one as 4.

  4. Associate – SysOps Administrator

The SysOps Administrator exam tests your knowledge of deploying, maintaining and troubleshooting applications in AWS. I personally found this one to be more difficult than the other two associate certifications, since you really need to know how to solve problems in production. You still need to understand all the services that are covered by the Solutions Architect and Developer certifications, plus have experience in solving real deployment/production issues. On the difficulty scale, I would rate this one as 5.

  5. Specialty – Big Data

Specialty certifications focus on a specific domain, and this one is all about data analytics. You would need to know how to ingest streaming data into AWS, how to process it in real time and how to store and visualize it. If you are a seasoned data scientist who is already using Hadoop/Spark extensively, you might find this exam easier than others, but you still need to learn how to apply all these concepts in an AWS environment. The Big Data Specialty certification requires that you already have either Cloud Practitioner or one of the Associate levels. On the difficulty scale, I would rate this one as 7.

  6. Specialty – Security

Security Specialty tests your knowledge of building secure applications in AWS. There are plenty of services and best practices for protecting your environment at the perimeter, in transit and at rest, and for building applications that are compliant with different standards. Together with the Advanced Networking certification, I think this is the most important certificate to have. Security should be the highest priority, and mastering these concepts is essential for building bullet-proof applications. The Security Specialty certification requires that you already have either Cloud Practitioner or one of the Associate levels. On the difficulty scale, I would rate this one as 7.

  7. Specialty – Advanced Networking

As mentioned previously, this is one of the most important certifications in AWS. It covers the foundational principles of AWS networking. Whether you are building hybrid or cloud-native applications, you will need to understand almost everything that this exam covers. The Advanced Networking Specialty certification requires that you already have either Cloud Practitioner or one of the Associate levels. On the difficulty scale, I would rate this one as 10.

  8. Professional – Solutions Architect

Professional level certifications, such as Solutions Architect, assume you are already experienced in designing cloud applications and understand most of the AWS services out there. Here you are tested on different scenarios: how to design applications based on a specific set of business requirements. The Professional Solutions Architect certification requires that you already have the Solutions Architect Associate level. On the difficulty scale, I would rate this one as 8.

  9. Professional – DevOps Engineer

Similar to SysOps Administrator, DevOps Engineer requires you to have hands-on experience in provisioning, operating and managing AWS environments. You are tested on many scenarios where a deep understanding of how to troubleshoot operational issues is required. The Professional DevOps Engineer certification requires that you already have either the Developer Associate or the SysOps Administrator Associate. On the difficulty scale, I would rate this one as 9.

The difficulty of each certification is an individual thing that depends on your professional background. I have been a developer and a solutions architect for a long time, so those types of exams were easier for me. However, I have never worked in the networking field before, so I found the Advanced Networking Specialty exam to be the most difficult one.

Companies starting their cloud journey with AWS need to mobilize their entire IT and business departments and educate them on AWS. The best way to prepare their staff for the journey is by going through the certification process. But how do you select the right certification for you and your employees?

The following table maps job roles to the AWS certifications recommended for each role:



Job Role | Recommended AWS Certification
---------|------------------------------
Software Engineer (also: application programmer, developer, system programmer, system engineer) | Associate – Developer; Associate – Solutions Architect
Database Administrator | Associate – SysOps Administrator
Operations Engineer (also: DevOps specialist, infrastructure engineer) | Associate – SysOps Administrator; Professional – DevOps Engineer
Data Scientist | Cloud Practitioner; Specialty – Big Data
Systems Analyst (also: product specialist, solutions specialist, technical designer) | Associate – Solutions Architect
Business Analyst (also: business architect, enterprise-wide information specialist) | Cloud Practitioner
Enterprise Architect | Cloud Practitioner; Associate – Solutions Architect
Solution Architect (also: software architect, IT architect) | Associate – Solutions Architect; Professional – Solutions Architect
Network Engineer (also: hardware engineer, network designer) | Cloud Practitioner; Specialty – Advanced Networking
Security Engineer (also: CISO) | Cloud Practitioner; Specialty – Security
CIO/CTO | Cloud Practitioner
IT Director (also: head of IT, IT manager) | Cloud Practitioner
Technical Sales (also: sales manager, account manager, sales executive) | Cloud Practitioner; Associate – Solutions Architect
Project Manager (also: product planner, project leader, master scheduler) | Cloud Practitioner; Associate – Solutions Architect
Web Developer (also: web designer, web producer, multimedia architect) | Associate – Developer
Software Tester (also: test analyst, software quality assurance tester) | Cloud Practitioner


Recommended study guide

There is no substitute for experience. The best way to prepare for the certification exams is to get your hands dirty on AWS. That means opening an account and starting to play around with services.

AWS services are broad and it can be overwhelming for beginners to find their way around the web console. The way I started with AWS was by reading the book “Amazon Web Services in Action” by Michael and Andreas Wittig. It’s a practical, hands-on introduction to AWS that will speed up your ramp-up.

Once you start developing an understanding of what AWS is, you can continue with reading AWS official certification preparation books. At the moment there are only three of them:

  1. AWS Certified Solutions Architect Official Study Guide: Associate Exam
  2. AWS Certified SysOps Administrator Official Study Guide: Associate Exam
  3. AWS Certified Advanced Networking Official Study Guide: Specialty Exam

I haven’t read the first two so can’t really comment, but looking at customer reviews it seems that they are a great source for Associate level exams. I did read the Advanced Networking book and was simply blown away! I can highly recommend this book to anyone who plans on doing anything on AWS. The AWS networking fundamentals are so clearly explained that even beginners in this area can fully grasp all the concepts and details about designing AWS systems.

An additional way to ramp up your knowledge of AWS services and effectively prepare for the exams is to attend digital or classroom courses. There are over 100 digital courses that are free and available to anyone, ranging from short 5-minute courses to several-hour ones. The classroom courses are delivered by AWS or AWS Training Partners, taught by skilled trainers with many years of experience who will explain most AWS services in a practical way. Both types of courses can be discovered and booked on the AWS Training and Certification portal.

Once you feel ready to take on the challenge, I can recommend testing yourself with a few official practice exams (and sample questions) from the same AWS Training and Certification portal.

And finally, the ultimate sources of all knowledge are the AWS documentation pages and whitepapers. Make sure you spend the majority of your time here, as these are the most comprehensive and up-to-date sources of information.

Don’t just prepare to pass the exams. Study, understand and apply the knowledge you gain. AWS is the biggest revolution in the IT industry ever, and it’s great to have such a toolset at your disposal to build great things that we were not able to build in the past. Certifications are just a reward for the knowledge you gained.

“If you build it, they will come.”  – Field of Dreams

Good luck!

3 reasons why your Enterprise should start moving to the Cloud right now


The cloud is no longer a nice to have, but increasingly a necessity.

Traditionally enterprises have been reluctant to embrace cloud technologies, especially on security grounds. But now, enterprises are moving past their fears about security and regulatory risks and embracing the business agility of public cloud services.

The wave of Fintech startups, digital challengers and tech giants presents a threat to all the big enterprises that is unlikely to go away any time soon. Cloud computing can help these enterprises to respond more effectively to these challenges by driving down costs, enabling innovation and creating the flexibility needed to respond to change.
The main reasons to start embracing the cloud are:


1. Not Your Core Competence

Managing infrastructure is not your core competence. Banking, insurance, pharmaceutical, automotive and similar enterprises need to focus on the core products that give them competitive advantage and differentiate them from the others. Managing data centers is not a differentiator; it is a supporting service. Having your own power plant to produce electricity won’t engage your customers and make them buy more products from you, unless you are in the electricity business.

Technological advances these days are very hard to keep up with. This puts additional pressure on your IT operations staff to deliver faster than ever before. The problem is that your IT operations people are probably already underwater trying to keep up with existing demand from the business. It’s not enough anymore just to provide a server and wire it up with the rest of the network. Your IT operations need to be able to handle modern requirements and deploy applications packaged in Docker containers in highly virtualized environments that can auto-scale up and down based on demand. They need to be able to set up fully automated continuous integration and deployment pipelines so that your enterprise can deliver new products to the market faster than ever. They might need to support new PaaS environments so that your self-operated data center could seamlessly be expanded into a hybrid cloud one day.

When (or rather, if) IT folks finally bring their internal shop in order, the time will come to refresh the whole hardware/software infrastructure and to rationalize the application inventory, which will just put more pressure on them. And yes, all of that needs to be done while still cutting IT operational costs.

Businesses cannot wait for IT operations to keep up while more technologies are being thrown at them. What often happens is that the business eventually gives up and forms a shadow IT organization that can fulfil its demands. That’s usually the beginning of the end for your IT organization.

Operating modern data centers is a job for top-notch engineers. Top-notch engineers typically want to work for infrastructure and platform specialist companies, not banks, insurers, pharmaceuticals and similar companies.

Unless your hands are tied by existing contracts, there is no reason why you should keep trying to do “the plumbing” that cloud providers are doing way better than you, especially when this is not your core competence.


2. Cost Optimization

Cloud computing means enterprises no longer have to invest heavily in dedicated hardware, software and manpower. By using the cloud, they can scale up vast amounts of technology infrastructure on demand and pay only for what they use. Capacity management should no longer be in the glossary of terms for enterprises.

Having such vast compute capacity available on demand is extremely important in industries where milliseconds can mean millions in profit. But even if you are not in such an industry, the pay-as-you-go model that cloud vendors offer is more than appealing. What’s the point of keeping your Dev, Test, UAT and other non-production environments up and running after 6pm when everybody has gone home? You can just switch them off and save cost, then switch them back on at 9am the next day when people come back to work.
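As a rough sketch of this after-hours shutdown idea, consider tagging instances with an Environment tag and filtering on it. The instance records and tag names below are illustrative, not a real API; in practice the list would come from the EC2 DescribeInstances call (e.g. via boto3) and the selected IDs would be passed to StopInstances on a schedule.

```python
# Which non-production instances should be stopped outside office hours?
# Tag names and instance records are hypothetical stand-ins for EC2 data.
NON_PROD_ENVS = {"dev", "test", "uat"}

def instances_to_stop(instances, hour):
    """Pick running non-production instances outside 09:00-18:00."""
    if 9 <= hour < 18:
        return []  # office hours: leave everything running
    return [
        inst["id"]
        for inst in instances
        if inst["state"] == "running"
        and inst["tags"].get("Environment", "").lower() in NON_PROD_ENVS
    ]

fleet = [
    {"id": "i-dev1",  "state": "running", "tags": {"Environment": "Dev"}},
    {"id": "i-prod1", "state": "running", "tags": {"Environment": "Prod"}},
    {"id": "i-uat1",  "state": "stopped", "tags": {"Environment": "UAT"}},
]

print(instances_to_stop(fleet, hour=20))  # after 6pm -> ['i-dev1']
print(instances_to_stop(fleet, hour=10))  # office hours -> []
```

Wired to a scheduled trigger (e.g. a nightly cron or CloudWatch Events rule), this is the whole "switch off at 6pm, back on at 9am" saving in a few lines.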

GE Oil & Gas says that on average, applications moved into the cloud had a 52% reduced cost of ownership compared to on-premises applications.

FINRA, one of the largest independent securities regulators in the United States, has now moved about 75% of its operations to the cloud. It estimates it will save up to $20 million annually by using cloud instead of a physical data center infrastructure.


3. Boost Innovation

For most businesses, innovation means first gaining an understanding of product offerings or key processes, then what to change to get a better result. The implementation of the innovation almost always becomes an IT project. In other words, the last mile of most innovation that makes a difference goes through IT.

The real gold mine in any enterprise is the data that is buried in numerous databases throughout the company. Making sense of this valuable information to generate new business prospects is the area where big data analytics gains more traction every day. However, setting up a full big data analytics cluster in your data center is not an easy task. It requires a large number of machines as well as the skills to set them up and operate them. While IT teams spend months deploying Hadoop clusters, the business is losing valuable time and the value of the data is deteriorating each day.

Adopting cloud-based IT can eliminate the usual IT roadblocks to innovation. Business users and developers can scale resources up and down as needed, freeing IT to spend more time creating and less time configuring. In the cloud, the new IT paradigm is simplicity and flexibility.


Which cloud vendor to choose?

According to Gartner, there are 14 cloud vendors out there. The big players are Amazon Web Services (AWS), Microsoft Azure and Google Cloud.

AWS and Azure are true enterprise players, while Google is still paving its way. Google has traditionally been the platform of choice for startups, academia and researchers, but lately Google is investing a lot of effort in winning bigger enterprise customers.

I would recommend going with AWS as the primary cloud provider, with Azure as the backup solution. Modern application packaging solutions (such as Docker) provide vendor neutrality that enables your applications to run seamlessly on any cloud provider.

AWS is the definite market leader, as some of the numbers show:

  • 37% of all Internet traffic goes via Amazon cloud
  • $8 billion in revenue last year
  • All 14 other cloud providers combined have 1/5th the aggregate capacity of AWS
  • Serving 1 million private customers and 600 government agencies
  • 11 AWS regions worldwide and more than 30 data centers.
  • Big names already using AWS: World Bank, Capital One, Salesforce, J.P. Morgan Chase, Nasdaq, FINRA, Pacific Life Insurance, Unilever, Philips, Novartis, Siemens, AON, Tesco Bank, RSA Insurance, Airbnb, Netflix etc.


So, what’s your excuse for not embracing the cloud right now?

Enterprise API – The only IT strategy that matters

We all know there is a problem with IT in today’s enterprises. The CIO knows, the CFO knows, the CEO knows and everyone below them knows. Architects, Operations, Developers definitely know. The problem however still persists.

We would all love to have the perfectly connected IT systems behaving like clockwork. Systems that are maintainable, extendable, affordable, performable and many other XXXable’s.


Figure 1: How we wish IT systems would work together


Unfortunately, the reality of a typical IT landscape in a large enterprise is much more chaotic. Systems that are everything but the above XXXable’s.


Figure 2: How IT systems are interconnected in reality


The reason why the problem still persists in today’s enterprises is complexity. Years and years of project after project that added new functionality on top of the old legacy.

Even though everyone accepts that there is a problem, nobody is willing to address it properly. It’s a huge elephant in the room that everyone is ignoring. The reason is that there is no obvious solution to the problem. It’s almost impossible to untangle all those dependencies. It’s even impossible to document it properly, let alone to fix it.

Many enterprises just accept the as-is situation. The IT systems they use are outdated but still good enough to do the work.

As Tom Goodwin stated in his article: We are at the peak of complexity – and it sucks. We live in a hybrid of old and new world systems. We use Slack and WhatsApp to communicate with other developers but then send status reports as emails. We use fancy mobile apps to book our flights but then we do the check-in at the airport where the staff is still looking at screens with green desktop applications.

Legacy IT systems are built to last. It’s more probable that an enterprise itself will cease to exist before its IT systems stop working. Even if an enterprise survives the competition and the market pressure, its problems won’t be the IT systems themselves, but the people who have the knowledge to maintain those systems. Try to find a COBOL developer these days who is not retired. If your key systems are written in COBOL, you already have an issue.

The solution is not an IT strategy that decommissions a few big systems just to replace them with newer ones. The dependencies stay. The problem still exists; it is just postponed to another era.

The real solution in tackling complexity has been known and used by programmers for a very long time. That solution is known as the Facade Pattern. The Facade Pattern hides the complexities of the system and provides an interface to a client. The client can access the system using the facade. The role of the Facade Pattern is to provide different high-level views of subsystems whose details are hidden from users. Hiding detail is a key programming concept. What makes the Facade Pattern different from other patterns is that the interface it builds up can be entirely new. It is not coupled to existing requirements, nor must it conform to existing interfaces.
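A toy illustration of the pattern (all subsystem names invented): the client talks only to the facade, which presents an entirely new, simple interface while the messy subsystems stay hidden behind it.

```python
# Facade Pattern sketch: three subsystems, one simple client-facing interface.

class InventorySystem:
    def reserve(self, sku):
        return f"reserved {sku}"

class PaymentSystem:
    def charge(self, amount):
        return f"charged {amount}"

class ShippingSystem:
    def dispatch(self, sku):
        return f"dispatched {sku}"

class OrderFacade:
    """The single entry point a client uses. Note that its interface
    (place_order) is entirely new - it is not coupled to any subsystem."""

    def __init__(self):
        self._inventory = InventorySystem()
        self._payment = PaymentSystem()
        self._shipping = ShippingSystem()

    def place_order(self, sku, amount):
        # The facade orchestrates the subsystems; the client never sees them.
        return [
            self._inventory.reserve(sku),
            self._payment.charge(amount),
            self._shipping.dispatch(sku),
        ]

print(OrderFacade().place_order("SKU-1", 99))
```

The client code stays one line long no matter how many subsystems the facade coordinates internally, which is exactly the property that makes the pattern attractive at enterprise scale.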

Facade Pattern thinking can be applied to whole enterprises as well, not just to low-level code. API services are an example of a Facade Pattern. The only IT strategy that can reduce the complexity produced over many years of development, mergers and acquisitions is to hide all that complexity and start thinking about the API services that your enterprise has and cares about.

The “API” stands for “Application Programming Interface”. The name itself has roots in the 1970s when the C programming language was just picking up. The name alludes to externalization of low-level application interfaces. The name API works fine on the application level where different components need to be loosely coupled and separated by interfaces. On a higher level however, the name API is a bit misleading. When we are talking about enterprise API management, we do not care about internal application API’s. We care about functions that are useful to a user/client/customer.

As an example, a typical enterprise system is a document management system. Document management systems have hundreds of internal API’s, but as a user we basically need only two functions – GetDocument and PutDocument.

Another example might be a Human Resources system that manages the records of all internal employees of a company. Such systems have hundreds of functions. But from the outside, we can start by externalizing only the few that are useful to some employee portal – FindEmployee, AddEmployee, RemoveEmployee.
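The HR example above can be sketched in code. Everything here is hypothetical (the legacy system and its internal calls are invented stand-ins); the point is only the shape: hundreds of internal functions collapse behind the three functions a portal actually needs.

```python
# Enterprise API facade sketch: a legacy HR system with many internal
# functions, fronted by only the three calls consumers care about.

class LegacyHRSystem:
    """Stands in for a system with hundreds of internal functions."""

    def __init__(self):
        self._records = {}

    # A few of the many internal calls (validation, auditing, storage...)
    def validate(self, name): ...
    def audit(self, action, name): ...
    def write_record(self, name, data): self._records[name] = data
    def read_record(self, name): return self._records.get(name)
    def delete_record(self, name): self._records.pop(name, None)

class EmployeeAPI:
    """The enterprise-level facade: only FindEmployee / AddEmployee /
    RemoveEmployee are visible from the outside."""

    def __init__(self, hr):
        self._hr = hr

    def add_employee(self, name, role):
        self._hr.validate(name)
        self._hr.write_record(name, {"role": role})
        self._hr.audit("add", name)

    def find_employee(self, name):
        return self._hr.read_record(name)

    def remove_employee(self, name):
        self._hr.delete_record(name)
        self._hr.audit("remove", name)

api = EmployeeAPI(LegacyHRSystem())
api.add_employee("Ada", "Engineer")
print(api.find_employee("Ada"))  # -> {'role': 'Engineer'}
```

An employee portal written against EmployeeAPI never learns, or cares, what the legacy system looks like, so the legacy system can later be replaced without touching the portal.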

When we start thinking in this direction, our complexity suddenly becomes more manageable.


Figure 3: Exposing Enterprise API’s


New projects should not be another brick in the complexity wall anymore. They should be just temporary views into our existing enterprise API services. Applications will come and go as they should, but the enterprise API services need to stay.

A new customer portal should only use enterprise API’s that are exposed. It should not be concerned with the legacy complexity. Internal complexity needs to be managed by a set of aggregated services that are orchestrating access to all the internal systems that are needed to fulfill the high level API call.


Figure 4: New Customer Portal using Enterprise API’s


Over time, internal systems can be replaced/decommissioned without impacting the newly built applications that use their services. Enterprise operations continue to work without interruption while the whole technical debt is gradually paid down.

With this architecture in place, when we need to upgrade one of the existing applications to a new version, or replace the entire application with a new one, the existing HR Services, Document Management Services and Policy Management Services layers protect our API consumers from being affected by the change. It is in those layers that we can reliably define reusable business rules and convert between the logical interfaces for the new Customer Portal Application and the application-specific interfaces of our back-office applications.
So what happens when we publish only logical, application neutral services through our API Management layer? It turns out that we will naturally be publishing APIs that make sense to our enterprise. We will be building an interface to our enterprise. We will be doing all of our customers a favor by doing so, since it has a greater chance of setting them on the right path from the very beginning.

API management can be taken even a few steps further. Enterprises can implement three levels of API management:

  • Client API Management – entry point to all front-end services used by client applications
  • Enterprise API Management – entry point to all backend services used by front-end services
  • Data API Management – entry point to all data sources used by backend services

Implementing an Enterprise API strategy is never a technical issue. Many modern software products already expose most of their functionality via API’s. They are usually built with a microservice architecture where API’s are at the heart of the solution; they are the contract between each microservice. Older monoliths are not there yet, but there are still ways to “interact” with them – e.g. building a layer around the monolith that exposes REST API endpoints but internally maps them to native calls to the business components (see the Strangler Application pattern).

The problem with Enterprise API strategy is the same as with any other IT strategy. It requires buy-in and support from the highest levels in the company.

In 2002, Jeff Bezos (Amazon CEO) sent an email to all employees mandating the use of service interfaces:

1. All teams will henceforth expose their data and functionality through service interfaces.

2. Teams must communicate with each other through these interfaces.

3. There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.

4. It doesn’t matter what technology they use.

5. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

6. Anyone who doesn’t do this will be fired.

As the outcome of this, the leading cloud provider Amazon Web Services (AWS) was born where every service in AWS is API driven.

API management requires ownership. It requires a dedicated API Management Office with API evangelists who can teach development teams how to externalize their functions as API’s. It requires setting up new governance processes, rules, standards and guidelines to monitor the implementation of the enterprise API strategy.

There are many API management tools on the market. The common name that has emerged recently for them is API Gateway.

API Gateway services have a multitude of features. Their main focus is to make designing, deploying and managing an API easier, as well as to ensure that it is safe, secure and functional.

Some of the benefits of using API Gateway solutions are:

  • Documentation – one of the most common problems of developers is figuring out how an API works. An API management service has to provide an easy way to read the documentation and enable developers to “try before they buy”.
  • Analytics and Statistics – it is critical to understand how people use our API and get insights for our business.
  • Deployment – should be flexible and support public or private clouds, on-premises implementations, or combinations.
  • Developer engagement – engaging with API consumers, developers or partners is important. An easily accessible developer portal will significantly facilitate on-boarding.
  • Security – API’s carry sensitive data, so it is important to protect the exposed information. The service has to at least provide identity and access management for users and developers.
  • Availability – should be available, scalable and redundant. An API environment can become demanding and the service should be able to deal with any kind of errors, problems or temporary traffic spikes.


To be flexible and cost effective, and to adapt rapidly to changing customer expectations and behavior, all enterprises should consider and adopt an Enterprise API Strategy. Being able to answer “what are my key Enterprise APIs?” might be the difference between failing and succeeding in the millennial and post-millennial era.

What is the value of Enterprise Architecture?

Get the poster here.


Usually the hardest thing about Enterprise Architecture is “selling” it to senior executives. What value does Enterprise Architecture provide? What does Enterprise Architecture do? How much is it going to cost us? This poster can answer those questions.

The Business Model Canvas

The Business Model Canvas (by Strategyzer) is a strategic management and entrepreneurial tool. It allows you to describe, design, challenge, invent, and pivot your business model. If you haven’t seen it before, take a look at this short video that explains it briefly.

Now that the Canvas is explained, let’s focus on what a business model for Enterprise Architecture might look like.


Value Proposition

Value Proposition is about “products” that Enterprise Architecture sells. The value that Enterprise Architecture delivers can be classified in two groups: Guidance and Control.

1. Guidance
Enterprise Architecture guides enterprises in identifying core competencies and business priorities. It also provides guidance on how to create practical and efficient means to manage information technology portfolios, how to rationalize existing systems and projects to gain cost reductions, and how to remove redundancy in information systems. Enterprise Architecture demonstrates which technology investments and assets directly support business goals, strategies and needs.

2. Control
Enterprise Architecture controls proposed solutions, services or changes. Since Enterprise Architecture sees the whole picture (from business, data, information and technology angles), it can address all areas affected and reduce exposure to the risk of unintended impacts.

Customer Segments

Enterprise Architecture takes input from C-level people but also feeds information back to them. The customer base for Enterprise Architecture is broad, but the most common customers are:
1. CIO’s and COO’s
2. Business Stakeholders
3. Heads of IT Departments
4. Project Managers
5. Solution Architects


Usual ways that Enterprise Architects deliver value to their customers are:

1. Portals – such as Enterprise Architecture Repository that holds relevant information about the enterprise: application and technology inventory, data sources, business capabilities and services, standards, principles, architecture and solution building blocks and decision logs.

2. Personal Assistance – establishing the role of a trusted advisor by directly assisting the stakeholders and/or becoming a project team member

3. Governance Processes – rigid compliance and quality assurance through integration with PMO, Procurement and Supplier processes

Customer Relationships

The relationship between customers and Enterprise Architecture is maintained in one of these ways:

1. Communities – different communities of practice (per domain, per technology, per initiative etc.) are one way of keeping communication open between Enterprise Architects and customers

2. Portals/Dashboards – reporting dashboards customized per stakeholder groups

Key Resources

The key resources that form the Enterprise Architecture value proposition are:

1. Domain Architects – Domain Architects are members of the Enterprise Architecture team and are usually split by the Business, Data, Application and Technology domains

2. EA Platform – the Enterprise Architecture Platform (also known as the Architecture Repository) provides insights into architectural artifacts and the reports generated from those artifacts

Key Activities

Key Activities that Enterprise Architects perform could be summarized in four steps:
1. Observe – collect current information from as many sources as practically possible
2. Orient – analyze this information, and use it to update your current reality
3. Decide – determine a course of action
4. Act – follow through on your decision

These four steps are known as the OODA (Observe, Orient, Decide, Act) loop. It is a continuous cycle of improvement: observing the results of actions, checking whether they achieved the intended results, reviewing and revising the initial decision, and moving on to the next action.

Observe – Enterprise Architecture observes what is currently happening in the enterprise environment. Can current internal or external events impact the existing strategy, directly or indirectly? If yes, is the enterprise ready to adapt to the new environment? Observing is key to a successful decision: if this step is flawed, it will lead architects to a flawed decision and a flawed subsequent action. While speed is important, so is improving analytical skills and being able to see what is really happening.

Orient – Enterprise Architecture needs to properly interpret the previously observed situation. Many factors influence orientation and ultimately shape the decisions taken. The usual influences are: market changes, technology disruptors, internal company culture, personal past experiences, internal supporters and adversaries, organisational structure, and new information coming in.

Decide – decisions are based on the observations made and the orientation applied. For example, Enterprise Architecture can decide to prioritise investments in certain business capabilities in order to better meet new market conditions that were previously observed.

Act – this is the stage where decisions get implemented. Enterprise Architecture guides and controls the implementation of the decisions made in the previous step. An example would be implementing a new system to support new business capabilities.
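The OODA cycle described above can be sketched as a simple control loop. This is only an illustrative sketch: the `observe`, `orient`, `decide` and `act` functions below are hypothetical placeholders standing in for real data gathering, analysis, decision making and implementation, not part of any actual EA tooling.

```python
# Illustrative sketch of the OODA loop as a continuous improvement cycle.
# All four step functions are hypothetical placeholders.

def observe(environment):
    # Collect current information from as many sources as practically possible.
    return {"events": environment.get("events", [])}

def orient(observations, context):
    # Interpret the observations against the existing context
    # (market changes, company culture, past experience, etc.).
    context.update(observations)
    return context

def decide(context):
    # Determine a course of action, e.g. prioritise an investment
    # when observed events signal changed market conditions.
    return "invest" if context.get("events") else "hold"

def act(decision):
    # Implement the decision; its outcome becomes the environment
    # that the next iteration of the loop observes.
    return {"events": []} if decision == "invest" else {"events": ["pending"]}

def ooda_cycle(environment, iterations=3):
    # Run the loop a few times, feeding each action's result back in.
    context = {}
    decision = "hold"
    for _ in range(iterations):
        observations = observe(environment)
        context = orient(observations, context)
        decision = decide(context)
        environment = act(decision)
    return decision
```

The point of the sketch is the feedback: each `act` produces a new environment that the next `observe` picks up, which is what makes OODA a continuous cycle rather than a one-off analysis.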

Key Partnerships

In order to successfully deliver value to its customers, Enterprise Architecture needs to form partnerships with different enterprise groups in the form of tight process integration. The key partnerships of Enterprise Architecture are:

1. Suppliers – third party vendors delivering application and infrastructure services

2. PMO – the Project Management Office, which keeps track of current and planned projects and the funds allocated to them

3. Procurement – purchasing group within an enterprise responsible for acquiring new resources, software licenses etc.

Cost Structure

Enterprise Architecture requires budget for:

1. EA platform – development and management of Architecture Repository and reporting dashboard

2. Human Resources – salaries of Enterprise Architects and organisation of communities of practice

Revenue Streams

Enterprise Architecture does not directly generate any revenue. However, it can indirectly lead to significant cost savings. Some of the ways Enterprise Architecture (EA) helps keep spending under control are:
– EA reduces technology costs and accelerates time to market by facilitating common approaches
– EA shifts IT spending from temporary stop-gap projects to strategic initiatives
– EA flags redundant, non-strategic and high risk projects before they get funding
– EA ensures IT spending is aligned with business strategy and goals

DevOp(tion)s for Enterprises


How to introduce DevOps in large enterprises?
What options do enterprises have considering their legacy and outsourced environments?

Read all about it in the new EntArchs White Paper, which includes practical tips on a roadmap to DevOps, the logical and physical architectures of DevOps, how microservices play a role in enterprises, and which tools we recommend.

Download your free copy of “DevOp(tion)s For Enterprises – by EntArchs.pdf”