May 30, 2019
Innovation in cloud computing continues to accelerate at a breathtaking pace, with new concepts, strategies, frameworks, and products announced every day. As of January 2019, the Cloud Native Computing Foundation (CNCF) includes more than 600 active projects in its principal open source community. A new project joins, on average, each week. While this can be exciting for some, many business and operations groups find keeping up to date on the latest offerings and capabilities to be a challenge. Financial models must adapt to an inexorable push toward the fluidity of op-ex accounting; support teams must become familiar with new terminology and shifting responsibilities; and ultimately, technology strategy must lead by example.
Enter stage right the dual specters of Digital Transformation and IT Modernization, linked drivers of change—but also sometimes of anxiety and uncertainty. Who champions the transition? What tools and automation do successful teams use? Why do some efforts fail while others flourish? When can progress against goals be objectively measured? How best to re-envision existing products, services, processes, and teams? Taken together, simply assembling answers to those questions can be daunting, even derailing. However, there exists an alternative trail to follow: that of the Cloud Native.
Since its inception around 2010, the term “Cloud Native” has evolved from Netflix’s basic “Resilience, Discovery, and Scalability” to CNCF’s core concepts of Container Packaged, Dynamically Orchestrated, and Microservices-Oriented.
Each of these concepts followed its own evolutionary path: containers stemming from early Sun Microsystems work with Solaris Zones, orchestration emerging from Google’s internal Borg cluster manager, and microservices from experiments at Netflix and Amazon during cloud computing’s formative years. Though large tech companies played a role in surfacing each concept, the open source community brought each to prominence. However, all those companies also shared a common starting point: their workloads existed in their data centers, on their servers, traversing their networks, and hitting their capital budgets for every new feature or capability. They controlled the environment and used its capabilities to, in part, redefine workloads.
Around the same time as “Cloud Native” emerged, DevOps gained momentum as part of the solution to an increasing clash between accelerated time-to-market for software products versus rapidly growing infrastructure footprints (and operating budgets) to run them. This clash was leading to constant challenges with the predominant waterfall method of application development: no room for iteration, regular overbuilding or expediting of infrastructure, compressed product delivery cycles, and swollen work queues. With so many pieces moving so often and in so many directions, the work required to prepare the infrastructure to receive software took on its own set of roles and tasks. Sometimes this work was carefully orchestrated between development and operations teams, but more often it was a contentious affair taking place near—or at—the end of the delivery cycle, accompanied by finger-pointing and blame-storming.
At a 2009 O’Reilly Velocity Conference, a pair of Flickr employees, John Allspaw and Paul Hammond, shared their case for what eventually became known as DevOps, an integrated approach to application development and infrastructure operations focused on communication and transparency. The approach depended on three core attributes:
By the next year, DevOps was an official term, and its ecosystem was growing. The first and most notable addition was building, testing, and deploying software through Continuous Delivery and its intense focus on automation. DevOps quickly gained so much energy it spawned a novel, The Phoenix Project, which dramatized a fictional company’s journey from chaos to order by improving work quality and reducing work quantity through DevOps practices.
Few companies begin their existence operating in the cloud computing space. Sprawling data centers full of servers and switches, humming power supplies, and vast rivers of network cabling are still prevalent in most companies’ operations, whether owned or leased from Co-Lo providers. Introducing change in those data centers can be complex, and managing that change can define the environment’s success or failure. Years before the advent of DevOps, the System Development Life Cycle was formally defined to cover process and procedure development, change management, identification of user experiences and impacts, and proper security procedures for hardware or software systems. Most companies follow an SDLC approach to introduce change to their technology environments, but SDLC eventually became synonymous with the application realm, and “Software” replaced “System” in popular nomenclature.
Software-based SDLC consists of a sequence of activities with active decision points from Planning and Feasibility Analysis to Maintenance and Disposition. SDLC has evolved over the years to include few specific procedural requirements except one: development must adhere to a standard five-stage process of Planning, Analysis, Design, Implementation, and Maintenance. Along the way, the Implementation stage expanded to include Development, Testing, and Deployment to highlight the importance of output quality and accelerate the software release process. Driven by many of the same environmental pressures as those pushing DevOps, SDLC further matured from the dogmatic Waterfall Stage-Gate model to the more flexible, currently dominant Agile development methodology.
Agile is a low-risk, continuous development model which emphasizes software quality through concurrent development and testing. It focuses on the immediate set of software features required by a client. The process is designed to be incremental and continuously evolving, tracking the desired value of a software product through extensive client feedback. It also helps development teams focus on shorter iterations and a subset of product features through methodologies such as Scrum or Extreme Programming (XP).
With its focus on continuous and consistent improvement, Agile aligns perfectly with the process optimization attribute of DevOps. Similarly, DevOps derives considerable benefit from the way work is categorized and executed in an Agile flow. The adoption of Agile practices and DevOps does not depend on the use of public cloud. Having a stable foundation for introducing change and automating work is often viewed as a highly desirable prerequisite and best practice for public cloud workloads. Many features from the Cloud Native toolset—such as Microservices—can be a launchpad for initiating and growing an Agile and DevOps capability within an organization.
The term Microservices became an IT buzzword in the first half of the decade and has since peaked on the hype cycle as most organizations are striving to adopt it as their target application architecture. Many principles and aspirations drive the Microservices paradigm, but most are related to its primary benefit: to increase business capabilities by decoupling the application stack and reducing interdependencies across discrete elements of functionality. What does this mean and what is the impact?
First, a Microservice is a self-contained and loosely coupled process that provides an individual business capability for an application or product. Being self-contained means a group of Microservices can, for example, be scaled independently of each other. Second, developing, testing, and deploying functionality can occur independently, reducing the impact of introducing change and allowing product teams to operate in the same loosely coupled manner as the Microservices. Third, a properly designed Microservice architecture brings the ability to share identical or similar capabilities across multiple applications.
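The self-contained, loosely coupled shape of a Microservice can be sketched in a few lines of Python. Everything here is hypothetical — the “pricing” capability, the route, and the catalog values are chosen only to show one business function owned by one independently deployable process:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical "pricing" capability: one business function, one process.
# Names, routes, and catalog values are illustrative only.
def quote_price(sku, quantity):
    """Pure business logic: independently testable and deployable."""
    unit_prices = {"WIDGET": 2.50, "GADGET": 7.25}  # stand-in catalog
    unit = unit_prices.get(sku)
    if unit is None:
        return {"error": "unknown sku: %s" % sku}
    return {"sku": sku, "quantity": quantity, "total": round(unit * quantity, 2)}

class PricingHandler(BaseHTTPRequestHandler):
    """Thin HTTP wrapper: this service owns only the pricing capability."""
    def do_GET(self):
        # e.g. GET /quote?sku=WIDGET&qty=4
        query = parse_qs(urlparse(self.path).query)
        sku = query.get("sku", [""])[0]
        qty = int(query.get("qty", ["1"])[0])
        payload = json.dumps(quote_price(sku, qty)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To run the service on port 8080 (blocks until stopped):
# HTTPServer(("0.0.0.0", 8080), PricingHandler).serve_forever()
```

Because the business logic is separated from the transport wrapper, the capability can be tested, versioned, and scaled without touching any other part of the application.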
Before the rise of Microservices, virtualization was the primary method for optimizing infrastructure utilization. Both virtual machines and containers can have value depending on business use cases. In traditional virtualization, however, many decisions must be made early in the product cycle: preferred guest OS for implementation, storage space required for host and guest operation, network setup and provisioning. All of these decisions become configuration items which cost money and are time-consuming to manage. Containers, on the other hand, are designed to run microservices application workloads with lower costs, lightweight provisioning, process-level isolation, agnostic storage and network interfaces, and all with flexibility, resiliency, and execution speed better than virtual machines. Using containers for Microservices is further augmented and simplified by open-source and enterprise-grade container orchestration platforms like Kubernetes, OpenShift, and Docker Swarm. These platforms help overcome common business blockers to delivering rapid, reliable technology change by accelerating, standardizing, and sustaining the delivery process.
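To illustrate how declarative these orchestration platforms are, the following Python sketch assembles a minimal Kubernetes Deployment manifest. Kubernetes accepts JSON as well as YAML, and the field names here follow the apps/v1 schema; the service name, registry URL, and replica count are placeholders:

```python
import json

def deployment_manifest(name, image, replicas):
    """Build a minimal Kubernetes Deployment manifest (apps/v1 schema).
    Kubernetes accepts JSON as well as YAML."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Hypothetical service and registry: substitute real values before applying
# the output with `kubectl apply -f`.
manifest = deployment_manifest("pricing", "registry.example.com/pricing:1.0", 3)
print(json.dumps(manifest, indent=2))
```

The point of the exercise: scaling from 3 replicas to 30 is a one-field change in a version-controlled file, not a provisioning project.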
Companies with an interest in cloud computing tend to start their journey due to one or more of the following factors:
Regardless of how many or which ones apply, the underlying drivers are always related to the simple fact that public cloud providers can focus their time, energy, and money on technology evolution while most companies must deal with many other competing priorities. However, too often an enterprise-first approach fails to materialize: significant and costly decisions are made based on existing partnerships instead of what the enterprise needs. These decisions range from jumping into the cloud for the wrong reasons to selecting inappropriate or ineffective toolchains. The situation worsens when compounded by one or more of the following: not enough people engaged and empowered, not enough understanding of the “why” behind changes, no articulation of the values of cloud computing, no accompanying cultural transformation.
So how to address those dual specters of Digital Transformation and IT Modernization without sacrificing other priorities, allocating some already overtaxed resources, or further straining the existing infrastructure?
A simple answer is testing the waters with hybrid cloud. In hybrid cloud solutions, products and services are designed with at least some cloud capabilities in scope. Examples include: burstable capacity requirements where traffic demands are mitigated by cloud-based services; evaluating untested workloads in the “fail fast, fail cheaply” mode; adding high availability and/or disaster recovery capabilities without additional dedicated infrastructure; meeting regulatory requirements for data residency while maximizing application performance; deploying new features or capabilities without impacting existing people, process, or product. Tackling one or more of these solutions opens the door for an enterprise-first mindset to address the essential requirement that infrastructure and operations must be modernized to address the demands of Agile, DevOps and Continuous Delivery. Automation of scalability, consolidation of workload, comprehensive collection of data, and using intelligent design are all part of this modernization. No longer will projects receive the dreaded status update of “the code is ready / has been ready, but we have nowhere to deploy and test.”
With an established Agile practice and a functioning DevOps role, it is time to address the question of where to run the container orchestration platform: on-premises in virtualized infrastructure, in the cloud, or even both? To answer this question, cost, security, compliance, management, monitoring, and automation must all be addressed prior to deploying Microservices, whether on a container orchestration platform or with a cloud provider. For companies with little or no exposure to the cloud, there are many viable reasons to run a container orchestration platform on-premises: specialized needs to run application workloads on specific hardware, the need for complete control over the environment, or the ability to use existing processes to monitor security, compliance, performance, and costs. Note, however, this approach also involves investment in training, expertise, and experience to maintain the infrastructure and orchestration platform.
For organizations actively seeking to move to cloud, this is an ideal starting point to migrate some complex on-premises workloads to the cloud. Many cloud providers have already replaced most of the typical management, monitoring, and security processes of virtualized infrastructure with automation to eliminate the operational burden. This is accomplished by providing managed container services (Amazon EKS, Google GKE, Azure AKS, and others) that are highly scalable with robust performance characteristics. When properly implemented, a fully functional cloud-based Microservices workload features continuous integration and continuous delivery/deployment (CI/CD) with one-click infrastructure provisioning and decommissioning and version-controlled manifests. This means fast and faultless application workload deployments to the container orchestration platform no matter where it runs.
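The core contract of such a CI/CD pipeline — stages run in order, and the first failure stops the release — can be sketched without any provider-specific tooling. The stage names and lambdas below are stand-ins for real build, test, and deploy steps, not any vendor’s API:

```python
# Stage names and lambdas are stand-ins for real build, test, and deploy
# tooling; no provider-specific API is implied.
def run_pipeline(stages):
    """Run stages in order, stopping at the first failure: the core
    contract of a delivery pipeline."""
    for name, step in stages:
        ok = step()
        print("%s: %s" % (name, "ok" if ok else "FAILED"))
        if not ok:
            return False
    return True

stages = [
    ("build", lambda: True),   # e.g. build and tag a container image
    ("test", lambda: True),    # e.g. run unit and integration suites
    ("deploy", lambda: True),  # e.g. apply version-controlled manifests
]
run_pipeline(stages)
```

In a real pipeline each lambda would shell out to build tooling, a test runner, and the orchestration platform, but the stop-on-failure sequencing is the same.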
Proving the validity and competency of a solution requires a successful pilot. Any hybrid cloud use case is an option, but the key initial decision of “what should go where” guides which use case best aligns with strategy and skill. Consider any of the following workloads as candidates for this pilot:
Once the candidate is selected, it is time for an end-to-end build and deploy cycle of the cloud portion. This cycle vets adherence to the Agile, DevOps, and Continuous Delivery model and the level of automation attained. Each of the CNCF core concepts (Container Packaged, Dynamically Orchestrated, Microservices-Oriented) has added relevance here, as deployment of the cloud components is a perfect opportunity to have teams execute in a Cloud Native posture. Regardless of the specific components involved, each should be deployable as a container, have no static dependencies for operation, and rely on microservices for downstream functionality. An example, based on a Hybrid Edge workload candidate:
In this example, public cloud deployment consists of:
This deployment would consist of an automated pipeline to build, test, and deploy the processing containers, implementation of serverless event-based orchestration, and multiple microservices to support handling of data across different stages and states.
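One way to picture the serverless, event-based orchestration is a dispatch table that routes each data-stage event to a registered handler. This is a minimal sketch: the event names and handler bodies are hypothetical, not any provider’s actual event schema:

```python
# Sketch of serverless-style, event-based orchestration for the data-handling
# stages described above. Event names and handlers are illustrative only.
HANDLERS = {}

def on(event_type):
    """Bind a function to an event type, mimicking how serverless
    platforms trigger functions from event sources."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on("object.created")
def validate(event):
    # First stage: raw data lands and is validated
    return {**event, "stage": "validated"}

@on("object.validated")
def transform(event):
    # Second stage: validated data is transformed for downstream use
    return {**event, "stage": "transformed"}

def dispatch(event):
    """Route an incoming event to its registered handler."""
    handler = HANDLERS.get(event["type"])
    if handler is None:
        raise ValueError("no handler for %s" % event["type"])
    return handler(event)

print(dispatch({"type": "object.created", "key": "data/part-0001"}))
```

On a real platform, each decorated function would be an independently deployed serverless function and the dispatch table would be the provider’s event routing, but the decoupling between stages is the same.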
Many enterprises have noble intentions when it comes to spearheading efforts to become fully Cloud Native. Although they select proper toolchains and purchase licenses to support this transition, the transition often fails due to the lack of a Cloud Native mindset. Larger organizations, in particular, tend to silo work across multiple application development teams, engineering and network teams, and even multiple governance and oversight teams. Momentum can stall from lack of support or interaction across all the teams and associated company stakeholders. An organization-wide cultural transformation is required to attain the Cloud Native goal, and these changes are better introduced in smaller increments. Champions need to be selected from various departments to drive the Cloud Native cause and create traction across the enterprise, and the proper communication channels need to be established to build the Cloud Native mentality.
Once truly comfortable with cloud computing, the next step is using cloud as the default stance. Any workload is evaluated for deployment in public cloud before considering either dedicated infrastructure or private cloud. Initially, some exceptions due to legacy constraints may remain: specialized hardware requirements for telephony integration; stringent network latency requirements; a regulatory requirement blocking use of public cloud for specific functions or data; or (sometimes) Total Cost of Ownership (TCO) calculations that do not show a competitive value basis in using public cloud. Each of these can be addressed through building and executing a strong cloud computing roadmap, including a philosophy of change centered on Agile and DevOps best practices.
The need for specialized hardware often stems from restrictions of proprietary protocols such as those found in the majority of telephony systems. A critical component of most customer service operations, telephony remains a necessary function for escalation trees or concierge handling behind the basic automated voice response platforms. Those telephony systems frequently rely on some mix of technology to both manage the calls and provide analytics for their treatment and effectiveness. Removing the proprietary protocols, however, enables a cloud-first posture: all media can be managed and routed without specialized hardware and with the added benefit of almost unlimited scaling. Services such as Amazon Connect feature additional capabilities including ML-backed chatbots, real-time visibility for customer interactions, and unlimited storage for call recording. Imagine never having a hold queue hit its maximum during a flood of incoming phone calls, and always being able to see same-day metrics on hold time, customer flows, transfers/abandons, and more.
Some public cloud providers can also address low network latency requirements through services such as AWS Direct Connect, Azure ExpressRoute, and Google Dedicated Interconnect. Each of these offerings pairs partnerships with multiple backbone providers with dedicated network switching infrastructure to extend any private data center construct into the public cloud. Intensive transactional workloads can expect throughputs from 1 Gbps to 10 Gbps depending on uplink option, and round-trip times of less than 10 milliseconds - in some cases as low as 2 milliseconds. Those metrics match or exceed the performance of most dark fiber MAN (Metropolitan Area Network) infrastructure used by ISPs and CoLo providers around the world.
Even some of the most sensitive workloads and data can be managed given proper vetting and preparation. One notable example is the regulatory requirements placed upon the United States’ financial industry by the Office of the Comptroller of the Currency, or the OCC. With more than $51 trillion in assets being managed by the US financial system, plus additional governance tasks enforced by the Office of Strategic Management (OSM) and the Office of Enterprise Risk Management (OERM), the handling of data related to those assets and their disposition is amongst the most sensitive responsibilities in the private sector. Several large financial institutions have nevertheless implemented solutions leveraging public cloud and US government infrastructure to handle this essential and sensitive transfer of information.
Armed with a growing library of Microservices, an operational Agile practice, and exposure to real-world cloud deployments, the Cloud Native journey is well underway - but by no means complete. Now the focus can move to a combination of organizational and process alignment to Cloud Native best practices. While addressing any of these example exception cases, the importance of introducing change in an Agile methodology and linking it implicitly with DevOps cannot be overstated. Attempt to build the whole solution at once and a multitude of challenges - scope, budget, time, resources - can derail it at the outset. Dividing the work into digestible chunks, however, aligns with a more practical release management strategy and with many aspects common to cloud deployments such as segmentation and isolation, pipeline-based delivery, and blue/green environments. As cloud components mature from inception to design to Infrastructure-as-Code artifacts, they benefit from tight coupling with software artifacts riding an Agile release train enabled by a principled DevOps team and their tools. While this combination alone does not guarantee success, the absence of any of these best practices is likely to bring on any (or all) of the same challenges as taking on too much work at once.
Ultimately, the most visible and sensitive subject in a full cloud stance is TCO, and for good reason: inadequate study before adoption can lead to significant sticker shock. So how to avoid the shock and the subsequent criticism? After the first basic workloads are live—and exceptions such as hybrid cloud use cases are identified—it is time to inventory the remaining portfolio for larger and more complex opportunities. Ideally, a workload with a mix of public and private interfaces, a geographic public user base, and a number of data and storage features goes next into the queue. If possible, alignment to multiple inherent cloud provider services (unique functionality such as Amazon Cognito, Google Dataflow, or Azure Batch) will further explore an organization’s appetite and aptitude for taking a full cloud stance. Define the architecture in as much detail as possible, from transaction volumes to data sets to storage projections, as these drive the calculations required to properly examine TCO for a cloud-based solution.
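Those calculations can begin as a back-of-envelope model before moving to a provider’s estimator. The unit rates below are illustrative placeholders, not actual provider pricing, and the model deliberately ignores discounts, tiers, and region differences:

```python
# All unit rates are illustrative placeholders in USD, not actual provider
# pricing; real numbers belong in a provider's cost estimator.
def monthly_cost(vcpu_hours, storage_gb, egress_gb,
                 vcpu_rate=0.04, storage_rate=0.02, egress_rate=0.09):
    """Simple additive TCO model: compute + storage + network egress."""
    return round(vcpu_hours * vcpu_rate
                 + storage_gb * storage_rate
                 + egress_gb * egress_rate, 2)

# Example: 4 vCPUs running all month (~2,920 vCPU-hours), 500 GB stored,
# 200 GB of egress traffic.
print(monthly_cost(2920, 500, 200))
```

Even a crude model like this surfaces which term dominates the bill - often egress or compute - and therefore which architectural detail deserves the most scrutiny before committing.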
With the opportunity identified and an architecture envisioned, an online cost estimator—available from most public cloud providers—will supply some predictions about TCO for the overall solution. A few prominent examples are the AWS Pricing Calculator, the Azure Pricing Calculator, and the Google Cloud Platform Pricing Calculator.
Keep in mind the specifics of the architecture impact both the initial deployment costs and the forward costs, and comparing between providers can be a challenge. Understanding how each provider excels or falters across their portfolio can be as important as knowing what kind of functionality or how much storage or which services are required. A knowledgeable partner can assess all aspects, create a solid architecture for deployment, and optimize for continual operation.
Levvel has domain experts and trusted advisors to help cut the path and be a guide on the journey no matter where it starts. Our teams provide solutions for Cloud and DevOps adoption, expansion, and maturation for companies of any size and shape. We are digital first cloud natives with an eye on the latest trends shaping the tech industry.
Our Cloud capabilities include security-conscious engineering and architecture for Amazon Web Services (AWS) and Microsoft Azure platforms. We work with teams to identify and optimize environments for compliance, networking, applications, integration, and budgeting in the cloud. Our team understands how to apply design, business, operations, and financial levers to create best-in-class solutions.
Our DevOps capabilities include harnessing automation for CI/CD (Continuous Integration / Continuous Delivery), container strategies, and infrastructure components. We work with teams to assess current DevOps capabilities and provide tangible recommendations to improve and optimize against their objectives. Our deep technical expertise helps you move faster and ensures you build it the right way from the start. We build both big and small applications, from a large-scale enterprise platform to a one-off Stripe integration. We can supplement your team or provide a full team, as we’re always flexible to our clients’ unique needs.
We collaborate across Cloud and DevOps teams to build technology solutions which grow with our clients as they grow their business. We help ensure this growth occurs without negative exposure due to security issues or capacity constraints. We also feature expertise with Agile software development, Digital transformation across multiple industries, compelling and innovative Design, broad Strategy skills, and a renowned Research team, all working together to exceed our clients’ expectations.
Darren is a Senior Cloud Consultant at Levvel with extensive experience in systems and network engineering, application development, security architecture, technology risk/compliance, and multiple architecture frameworks including LEAF, TOGAF, and Zachman. His business domain strengths include disciplined requirements analysis, iterative planning, and strategic transformation. His project delivery background includes implementation patterns from mobile distributed platforms to B2B integration. Darren brings a measured, focused approach to designing and implementing solutions of all shapes and sizes. When he has spare time he enjoys primitive camping, culinary exploration, and playing guitar.
Belal is an AWS Certified Solutions Architect who focuses on infrastructure automation, security, and compliance in the public cloud. Prior to Levvel, he worked in application development in the dental, insurance, and FinTech industries. His DevOps expertise, combined with his application-development experience allow him to work in all stages of the SDLC, from code, to deployment, to infrastructure layout. He holds a B.E. in Electrical Engineering and lives with his wife and children in Dallas, TX.
As an accredited OpenShift delivery specialist, Surya has worked with many clients who are either getting started on a private, public, or hybrid container strategy with CI/CD, or further along but have hit a bump or two along the way and are looking for experienced professionals to evaluate and address their concerns. Surya is interested in meeting people within the DevOps community and learning from their experiences.
Hari is a DevOps Senior Manager at Levvel with in-depth experience in the architecture and deployment of data center solutions comprising, but not limited to, Storage, Virtualization, Networking, Automation, and Public, Private, and Hybrid Cloud technologies.
Ben is a data scientist and AWS Certified Solutions Architect and Developer. As an analyst and data scientist, he has worked in the retail, banking, and automotive industries in consulting and practitioner capacities. In his work as a cloud consultant, he has advised Fortune 50 banks, written a Python library for multiple-account management, and created big-data and machine-learning pipelines for nationally recognized media brands. He holds an M.S. in Economics and lives in New York City.
Daniel Foley is a DevOps Manager at Levvel who is well-versed in many different applications including Apache, MySQL, Puppet, Ansible, Zerto Replication, Centrify, McAfee EPO, Bromium Security, EMC Avamar, OpenShift, Elasticsearch, Prometheus, Docker, Kubernetes, among others. Daniel enjoys scripting to make his life as a Systems Engineer easier and to aid teams and clients. He is extremely familiar with Bash scripting, Python programming, and some Perl and Ruby.