October 20, 2020
In this new video series from Levvel, our industry experts discuss the ways cloud services can support digital innovation at small to medium-sized financial institutions, the differences between cloud and on-premises security approaches, and the benefits that come with adopting a cloud architecture. We’ll share relevant stories from the field and how we’ve helped financial institutions overcome their biggest challenges.
Over the coming weeks, new episodes will be released around the importance of implementing a cloud architecture and the direct impact it has on business success.
Pursuing a digital innovation effort is an exciting endeavor for any company, especially larger companies. To help prepare you for your journey, Levvel has composed a list of key questions and considerations you can use to drive discussion across your organization.
Chris Rigoni: So a lot of talk about the cloud focuses on IT and how it makes IT’s job easier, which is great. But unfortunately, all of that focus on IT isn’t really going to generate a lot of investment from the business side. So in institutions, especially financial institutions, the struggle with a cloud transformation is that they don’t want to invest in it, because IT doing their job faster isn’t really a good business case.
What they often don’t realize is the business agility that comes along with it. So when you’re talking about experimenting with different products, or being able to scale and spin up environments to code and develop in an experimental way, that allows the business to shift focus very quickly. This also applies when interacting with customers and having everything available to do that, whether it be usability testing or different proofs of concept.
The other thing, from an operational or business process standpoint: if a business process changes, and changes quickly, you can actually make that change from an IT perspective without worrying about resources, because resources are very agile in a cloud environment. So you can shift focus and adhere to that new process from an IT perspective a whole lot quicker than you could with a traditional mainframe setup.
Belal Bayaa: Different financial institutions have different ways of calculating a return on investment. For many of them, it’s the famous OpEx versus CapEx argument, but with the cloud, IT spending is shifted to a pay-as-you-go kind of model. You have this concept of on-demand pricing, where you don’t necessarily have to guesstimate what capacity you need beforehand when you’re designing your cloud architecture. A good example of that is the ability some cloud providers offer to scale horizontally to adapt to different demands in network performance.
So this scaling is good not only for performance, but also for cost optimization, because once the demand dies down, these virtual machines are de-provisioned, and that helps financial institutions avoid paying extra for what they don’t necessarily need. These cloud service providers offer a good amount of documentation and transparency for their product offerings, which makes the financial institutions themselves a bit more involved in the total cost of ownership calculations.
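The pay-as-you-go point above can be sketched with some made-up numbers. This is purely illustrative: the $1-per-server-hour rate and the demand curve are invented, not any provider’s actual pricing.

```python
# Sketch: comparing fixed peak-capacity provisioning with pay-as-you-go
# pricing. The rate and demand figures are made up for illustration.

HOURLY_RATE = 1  # hypothetical $1 per server-hour

def fixed_capacity_cost(hourly_demand, rate=HOURLY_RATE):
    """On-prem style: provision for peak demand, pay for it every hour."""
    peak = max(hourly_demand)
    return peak * rate * len(hourly_demand)

def on_demand_cost(hourly_demand, rate=HOURLY_RATE):
    """Cloud style: pay only for the servers each hour actually needs."""
    return sum(servers * rate for servers in hourly_demand)

# A day with a short traffic spike: mostly 2 servers, briefly 10.
demand = [2] * 20 + [10] * 4

print(fixed_capacity_cost(demand))  # 240: 10 servers x 24 h x $1
print(on_demand_cost(demand))       # 80: only the hours actually used
```

The gap between the two numbers is exactly the idle peak capacity the institution no longer has to pay for.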
Chris Rigoni: Another way that it supports this innovation is economies of scale. So if you think about a traditional type of legacy environment, it’s a lot of mainframes, it’s a lot of data centers, it’s a lot of resources to be able to manage that. Not only from just a maintenance perspective, but anytime anything’s updated or you have a new code type that comes out or any shifts that you need to do both from a product and business process standpoint, it requires a lot of resources, both money and people. When you think about a cloud environment, there’s a lot less that’s required of that from a resource perspective.
You have not only cost-based pricing, but you also have less resources and actual manpower that’s required to both maintain and actually spin up environments and actually do different things with that. So you’re able to scale very quickly and then also contract very quickly if you need to. If maybe a product you were trying to prove out doesn’t work very well and you move on to another product, you can spin down that environment and spin up another environment in order to do that.
So from a business perspective, economies of scale are definitely a feature that’s going to help you from an innovation perspective. The other thing you think about with economies of scale is the technology purchasing power of a cloud service provider. They’re going to structure the cloud and offer different technologies such that, as newer technologies come out, you’ll be able to take advantage of them without having to do a whole lot of work internally, either operationally or on the IT side. So you’re not worried about buying more infrastructure. You’re not worried about finding resources and investing in that technology, because your cloud service provider already has that for you.
The other thing that’s very beneficial is that newer products and services are going to keep coming out in the financial industry. It’s constantly changing. Payments is constantly changing, whether it be internet of things or real-time payments or open banking. A lot of newer services, technologies, and products are coming, and as those are introduced, a cloud infrastructure is going to allow you to take advantage of them a whole lot faster than you normally would be able to.
Belal Bayaa: Managing physical hardware on-prem at traditional data centers came with its own set of challenges: scheduling maintenance windows for removing old hardware and installing the new hardware, and on the software side, scheduling windows to patch or update the software so that you have the latest and greatest versions out there. All of these came with their corresponding downtimes as well.
On top of that was the need to estimate capacity correctly. These financial institutions had to get the specs right if they wanted to scale their servers vertically, and get the number of servers right if they wanted to scale horizontally to account for different network traffic.
Failure to estimate this properly came at a dire cost in traditional on-premise systems, because from a timing standpoint, you had to wait for these servers to actually be built and then shipped over to the on-prem location. So it all came with harsher consequences. And say the institutions did get those values right: in order to be proactive, they had to consistently operate at peak capacity to prevent failures from happening, and that came at a significant cost.
So with the advent of the cloud, some product offerings take care of a lot of these operations under the hood, and some of them seemingly take minutes, if not seconds, to conduct. This allows the financial institution to focus its energy more on creating value for the customer, giving it a better competitive edge in the market.
Chris Rigoni: Cloud can really help create innovation in multiple different ways. But specifically for your online and mobile banking channels, you’re going to be able to experiment with innovative ways to interact with your customers. And the way you do this is that the data storage capabilities of the cloud allow you to get data points in there and actually do analytics and machine learning in a way that you couldn’t before, because of the storage capacity.
What that allows you to do is see how your customers are using your mobile app and how they’re using online banking. You can target-market them: where you put a banner, for example, for a credit card or a new product or service you’re offering, and how the response changes when you present it in a different way. If you change your mobile banking application in a certain way, change the UI, how did your customers respond? Was it a positive response? Did they use features more than they did before?
If a really great feature that’s a good customer service point is buried away in your application and nobody’s using it and you move it and you immediately see a jump in the usage, that’s going to allow you to obviously tailor that experience for the things that your customers care about. Having the cloud is going to support a customer-centered approach.
When you’re talking about experimenting with online and mobile banking, the other way this helps is that you can control the cost of that experimentation. You basically pay for what you use in the cloud environment. And the way we think about that from a business perspective is that if I want to do something experimental, something new, and I spin up an environment and offer something as a proof of concept to my customers, I can test that, see how it performs, and if it doesn’t perform very well, I can spin it down and do away with it, and I no longer pay for those services.
And so from a business case perspective, it allows you to do things like usability testing and other features and functionality, and actually interact with your customers in a way that’s not going to cost as much as it previously would with data centers and other types of environments.
Belal Bayaa: Financial institutions can take their products from ideation to realization at speeds that were unattainable before the cloud came into play. They’re able to create isolated network environments within which they can design and implement proofs of concept and minimum viable products.
As far as deploying changes to production is concerned, that process is also made a little more seamless with the cloud, whether it’s deploying hot-fixes to production or deploying new features in response to increased customer demand.
Add to that the flexibility to choose the deployment methodology. Some cloud service providers give you the option to configure how you want to deploy: A/B testing, blue-green deployments, or canary or rolling deployments. And with managed services, these financial institutions can rest assured that the right amount of resources is provisioned at any given point in time.
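Two of those deployment styles can be sketched in a few lines. This is a minimal illustration of the routing logic, not any provider’s API; the handler names and the 5% default fraction are invented.

```python
import random

def canary_router(stable_handler, canary_handler, canary_fraction=0.05):
    """Canary deployment: send a small, configurable share of requests
    to the new version and the rest to the stable one."""
    def route(request):
        if random.random() < canary_fraction:
            return canary_handler(request)
        return stable_handler(request)
    return route

def blue_green_switch(live="blue"):
    """Blue-green deployment, by contrast, is an all-or-nothing swap:
    flip every request from one environment to the other at once."""
    return "green" if live == "blue" else "blue"
```

The practical difference: a bad canary release affects only the small fraction of traffic routed to it, while a bad blue-green cutover affects everyone until you flip back.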
Chris Madison: The public cloud may be more secure than your data center because the cloud service providers have a significant incentive to provide a secure infrastructure for their customers. So, they invest heavily in their security, and this is demonstrated through precertification across many compliance frameworks.
The reason they invest in it is the confidence that customers can deploy their applications on the public cloud in a secure manner. The way to understand how that works, though, is the shared responsibility model, which separates the responsibilities: the cloud service provider is responsible for security of the cloud, while the cloud consumer, the financial organization, is responsible for security in the cloud.
How that looks in an IaaS service delivery model is that the service provider is responsible for updating the host operating system and the virtualization technology to eliminate any threats at that level, while the customer is responsible for the guest operating system and everything above it.
The other aspect of that is the inherent security built into the cloud. To isolate different environments, the cloud allows you to build out a multi-account framework, such that you can have an account for production and testing and development, and creating the accounts costs nothing. It’s what you put inside of them that actually starts to build up the cost.
So what that multi-account setup does is isolate the blast radius in case one of those accounts is compromised. Then you go a step further: within each account, you can set up virtual private networks, such that if a particular application is compromised, the blast radius is limited to that particular application.
Finally, you can go even further into micro-segmentation using sub-networks and then software-defined network constructs like security groups that further isolate your blast radius. The benefit of leveraging the cloud then in terms of security is through the shared responsibility model, you can shift the security controls or many of the security controls for the infrastructure over to the cloud service provider where they take care of the hardware and software patching and the compliance frameworks, or at least the initial platform compliance.
That provides cost efficiencies, because you can focus on the application, not the infrastructure. Finally, through the inherent security of the cloud, you can limit the blast radius by implementing multiple accounts, which you couldn’t do in your local data center, because you’re not going to spin up a physical data center per application.
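The security-group style micro-segmentation described above can be sketched as a default-deny rule check. The field names are invented, and the source matching is a simplified string comparison rather than real CIDR arithmetic.

```python
# Sketch of a security-group style ingress check: deny by default,
# allow only traffic matching an explicit rule. Fields are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class IngressRule:
    protocol: str
    port: int
    source_cidr: str  # simplified: exact string match, not real CIDR math

def is_allowed(rules, protocol, port, source_cidr):
    """Default-deny: traffic passes only if some rule explicitly permits it."""
    return any(
        r.protocol == protocol and r.port == port and r.source_cidr == source_cidr
        for r in rules
    )

# A web tier that only opens HTTPS; SSH is implicitly denied.
web_sg = [IngressRule("tcp", 443, "0.0.0.0/0")]
print(is_allowed(web_sg, "tcp", 443, "0.0.0.0/0"))  # True
print(is_allowed(web_sg, "tcp", 22, "0.0.0.0/0"))   # False
```

Because each application gets its own rule set, a compromise of one tier doesn’t automatically grant network reach into another, which is the blast-radius limiting effect described above.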
Belal Bayaa: So, almost every cloud service provider offers some way to code the underlying cloud architecture. Storing the infrastructure as code in a version-controlled repository has several advantages. Not only are you able to spin up and spin down individual virtual machines with immutable deployments, you’re also able to completely destroy entire cloud architectures and reprovision them, simply to refresh or reset the entire cloud infrastructure to a state that you know is compliant and secure to the standards of the financial institution.
Because you have to remember, almost everything is virtualized in the cloud, so the only way to access it is through some sort of console. It can be very tempting for anyone on the operations team, for example, to go in through that console and start clicking around and configuring things the way they see fit, but with infrastructure as code, all these changes are clearly documented, and any delta between any two given points in time can be observed.
Combine this with integrating the infrastructure code into some sort of software development life cycle, or offloading it to its own SDLC, and exposing infrastructure changes to additional security scanning and tooling, and this can form one of the main foundations of operational excellence.
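The “delta between any two given points in time” idea can be sketched as a diff between two declared infrastructure states, in the spirit of what IaC tools compute before applying changes. The dict-of-dicts representation is a deliberate simplification; real tools track far richer resource graphs.

```python
# Sketch: diffing two declared infrastructure states. States are plain
# dicts mapping resource name -> configuration, purely for illustration.

def infra_delta(current, desired):
    """Return which resources to create, destroy, or update."""
    create = {k: desired[k] for k in desired.keys() - current.keys()}
    destroy = {k: current[k] for k in current.keys() - desired.keys()}
    update = {
        k: (current[k], desired[k])
        for k in current.keys() & desired.keys()
        if current[k] != desired[k]
    }
    return create, destroy, update

current = {"web-vm": {"size": "small"}, "old-db": {"size": "large"}}
desired = {"web-vm": {"size": "medium"}, "cache": {"size": "small"}}
create, destroy, update = infra_delta(current, desired)
print(sorted(create))   # ['cache']
print(sorted(destroy))  # ['old-db']
print(sorted(update))   # ['web-vm']
```

Because both states live in version control, the same diff can be reviewed, scanned, and approved before anything is provisioned, which is what closes the door on untracked console changes.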
If you look at traditional on-premise systems and consider the concrete example of a distributed denial of service, or DDoS, handling something like that can be a bit challenging with traditional on-premise architectures. A DDoS attack is where attackers attempt to overwhelm and overpower the resources provisioned in your network architecture, targeting your infrastructure until the services you provide can no longer cope with the demand, rendering your service unusable.
With traditional on-premise systems, that can be challenging in the sense that dynamic scaling of your resources is not as easy, because they are physically located in your data centers. You’d have to have dedicated hardware and dedicated software, configure them to scale to the DDoS attacks, and have internal resources dedicated to performing those tasks. But with cloud service providers, there are solutions that can easily mitigate those kinds of attacks.
For example, you have automatic scaling or horizontal scaling of virtual instances to absorb these types of attacks so that your application can continue performing to the standard that the financial institution likes, but then you also have the ability of provisioning a content delivery network whereby these attacks could be diverted to different edge locations around the world.
And again, this would allow your application to just continue performing at a good pace. With traditional on-premise systems, you would need dedicated hardware and dedicated software to deal with these types of attacks. You would also have to allocate internal resources to perform these configurations to mitigate any type of attack, and dynamically doing so in a traditional on-premise system is obviously challenging because of the static nature of the architecture.
Whereas with the cloud, you have solutions such as auto-scaling, where you can scale the number of instances of your application to absorb the attack being conducted on your system, or some cloud service providers offer CDNs, content delivery networks, whereby the attacks are diverted to different edge locations.
If you look at a financial institution’s login page, for example, attackers could continuously hit that login page to the point where it overwhelms the API. From that standpoint, the overwhelming traffic on that API endpoint would render the page basically unreachable because of the increased demand in network traffic.
Chris Madison: So one of the significant differences between the cloud and on-premises security approaches is network security. So typically in your data center you have control of the physical network, so you can put packet sniffing technology on the physical network and view your traffic for compromised activity or for different types of attacks, and these flow into security monitoring tools.
In the cloud you don’t have ownership of the physical network layer, you have access to a virtualized network, and your virtual images operate on top of those virtual networks. And so what that means is information that flows across the virtual network may not even leave a physical host to travel on the physical network. So that’s one significant difference.
Most cloud service providers only allow traffic meant for a particular image to arrive at that image. So if you put packet sniffing technology on one of your images, you would not be able to actually see network traffic at all. So that’s the primary difference.
So what that means is that there are three different ways to implement network security in a cloud environment. One is physical appliances, and physical appliances are good for your physical network, when you own it, and for your private cloud. When you move to the public cloud, you’re not going to be able to take that physical appliance and put it into the cloud to monitor your traffic.
The next type of appliance is the virtual appliance, which moves into the cloud and can start tracking some of the information moving across your virtual network. I say some of the information because if you’re using serverless technologies, you don’t even have access to the virtual network that your traffic is going to traverse, so you couldn’t monitor that at all.
But let’s say you’re in an infrastructure-as-a-service environment with your virtual appliance. The problem with the virtual appliance approach is that it becomes a bottleneck. All your traffic flows through the virtual appliance, and that increases cost, because your virtual appliance has to scale; ideally it’s cloud-aware so it can scale using auto-scaling groups, etcetera.
But if the virtual appliance is not cloud-aware and traffic is traversing through it, it may also become confused, because in a cloud environment the same address may be reused by multiple hosts, several times in an hour even, so many virtual appliances become confused in that type of environment.
Except for some use cases, what’s typically used in the cloud environment, what’s recommended, are host-based agents. Those host-based agents are lightweight, they’re cloud-native, they’re cloud-aware, and they can track information in terms of file integrity monitoring or potential access to the machine that shouldn’t occur. They take advantage of the native cloud logging infrastructure to deliver that information to a SIEM, which then monitors for intrusions and configuration changes that should not have occurred. So that’s the primary difference.
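File integrity monitoring, one of the checks mentioned above, can be sketched in a few lines: hash a set of files, then report any drift from that baseline. This is a bare-bones illustration of the idea, not how any particular agent is implemented.

```python
# Sketch of file integrity monitoring, the kind of check a host-based
# agent performs: hash files and report any drift from a baseline.

import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each file path to the SHA-256 hash of its contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def drift(baseline, current):
    """Files whose hash changed (or disappeared) since the baseline."""
    return sorted(p for p in baseline if current.get(p) != baseline[p])
```

A real agent would run this continuously over watched directories and ship each drift event through the cloud logging pipeline to the SIEM, rather than comparing snapshots on demand.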
In your private network, or your private cloud, you can use physical appliances, but in the public cloud you don’t have access to that network traffic, because it’s a virtual network, and so the recommended approach is to use host-based agents.
So ultimately, when you’re migrating from your private cloud or data center into the public cloud, the way you implement security is different, and that’s ultimately because of the access to the physical network in the private cloud versus only having access to the virtual network in the public cloud.
Belal Bayaa: The difference between cloud and on-premise security approaches can also show up in managed services. Some cloud service providers offer managed services that can continuously assess the financial institution’s compliance with regulations like GDPR.
These managed services can use machine learning to assess different behavioral patterns, such as access patterns around sensitive data in the cloud, and they can also automatically detect and classify the types of sensitive data that are stored.
And if the FI were to implement such a solution on-premise, it would require a significant investment in resources and in actually buying the necessary software to create these solutions, which is why the cloud can make this process a lot more seamless.
Chris Madison: Moving to the cloud does have a significant impact on regulatory compliance. To understand compliance: organizations have a variety of corporate obligations that come from legislation, such as HIPAA, from industry-specific regulations, such as PCI, from contractual obligations, and even from internal IT governance. And compliance, what it really boils down to, is awareness of those obligations and adherence to them. Auditing is then making sure that the organization is aware of and adhering to those obligations. So really, auditing is proving or disproving compliance.
So what that means in a shared responsibility model is that you do not have access to the platform you’re going to audit. This is where the cloud service provider does pre-certification of its platform; the organization, the financial organization, does not have access to that platform to do auditing. So there’s a significant reliance on third-party assessments of that cloud platform.
So the compliance teams within the organization have to shift the way they do auditing, or their assessors have to shift the way they do auditing, and that means relying on the third-party assessments rather than auditing the whole framework themselves. The term used to describe this is compliance inheritance. Compliance inheritance is where the scope of the audit, or the scope of a compliance assessment, is limited to just the application, plus the assessments of the cloud service provider.
So the application is built upon the cloud service provider’s assessed capabilities and services, and the organization is responsible for just implementing security controls within its application. The caveat there is that compliance inheritance does not mean an application built upon that compliant platform is automatically compliant. You can still deploy a non-compliant application on top of it, because you didn’t implement your own security controls, etcetera.
The change or the difference in compliance assessment is a result of the shared responsibility model where the cloud service provider is responsible for the platform itself, and then third-party assessors audit the set of services in the platform at a certain point in time. And then, organizations build on top of that and are responsible for implementing their own security controls.
But, because they’re leveraging the provider, you get compliance inheritance into the application, so the scope of the audit is reduced just to the application.
So, this is typically not something that auditors are used to seeing. In the past, if you wanted to deploy an application, a PCI application at a financial institution, for example, it was deployed in a data center, and that whole data center was in the scope of the audit.
But public cloud providers will not allow your auditor to come in and look at their entire process, because it is their business process, their IT infrastructure, and their security controls. So what the public cloud has brought about is that compliance inheritance. Some auditors are not used to that, but those that are well-versed in cloud technologies, or that have gone through the process before, are better equipped to handle those types of audits.
When you’re going through an external audit, it’s important to remember the difference in the public cloud, and you want to find an auditor that has experience auditing systems deployed on cloud service providers. Internal organizations, at least those new to the cloud, typically do not have the experience or the foundational knowledge to understand compliance inheritance, so they’re going to try to apply their internal security controls against the public cloud, and there’s an impedance mismatch there that’s not necessarily going to work.
So it’s important to select an external auditor that does have experience with cloud assessments or with applications that are going to run on a public cloud.
Senior Financial Services Consultant
Senior Cloud Consultant
Cloud Capability Lead
Chris is a Senior Financial Services Consultant who works across a variety of companies and industries to create strategic payments advantages. He has over eight years of experience in managing emerging payments and digital platforms and has served as a subject matter expert in tokenization, digital product management, real-time payments, Zelle, and open banking. Chris spent five years at BBVA Compass, most recently leading business efforts in the launch of Google Pay and Samsung Pay, as well as managing their mobile wallet offering. The last three years have focused on tokenization, Zelle, and real-time payments strategies within organizations of different sizes and needs. He currently resides in Charlotte, NC with his wife and three children.
Belal is an AWS Certified Solutions Architect who focuses on infrastructure automation, security, and compliance in the public cloud. Prior to Levvel, he worked in application development in the dental, insurance, and FinTech industries. His DevOps expertise, combined with his application-development experience allow him to work in all stages of the SDLC, from code, to deployment, to infrastructure layout. He holds a B.E. in Electrical Engineering and lives with his wife and children in Dallas, TX.
Chris Madison has over 20 years of experience in the design and development of software solutions. As an early adopter of cloud technologies, Chris has unique insight into constructing elastic solutions across a variety of cloud computing platforms, including Amazon Web Services, Azure, and IBM Cloud. His prior experience as an application and integration architect with IBM Software Group and Watson organizations has developed a customer-centric, disciplined approach to developing strategic plans and application architectures. When not keeping abreast of the breakneck changes in the cloud industry, Chris trains to run 50Ks.