IDP White Paper

Internal Developer Platform for Cloud Deployment and Operations

Introduction

The adoption of Service-Oriented Architecture (SOA) at AWS and Azure gave birth to the original DevOps culture, where developers owned the end-to-end lifecycle of an application, from coding and running deployments to maintaining uptime. Unfortunately, today's DevOps is not about developers owning operations, but rather operators building automation for their own operational efficiency. Developer self-service with respect to cloud infrastructure is scarce in most organizations: developers raise support tickets to DevSecOps and wait days for them to be fulfilled. In organizations where developers are allowed unfettered access, the security of the cloud infrastructure is in disarray: open ports, unchanged passwords, untracked keys, unencrypted disks, and the list goes on. Many enterprises try to address this problem by building an Internal Developer Platform (IDP) with the purpose of improving engineering productivity through developer self-service, but with security "guard rails". A dedicated and experienced team of engineers, often called the Platform team, is assigned this task, and it can take months to years to build and maintain this in-house automation platform.

In this whitepaper, we describe how the DuploCloud platform can be leveraged as an out-of-the-box IDP. Organizations have also built a layer of customization on top of DuploCloud to add workflows not supported natively. The potential savings amount to millions of dollars and years of engineering effort.


IaC alone is not an IDP

Any modern-day application consists of many independent pieces, often called microservices. These include both cloud provider services like S3, SQS, Kafka, and Elasticsearch, as well as application components owned by the organization and deployed as Docker containers in Kubernetes. Cloud providers support hundreds of services for applications to use. While this has obvious advantages of scale, availability and agility, it is extremely hard to manage: too many moving pieces, access controls, thousands of nuances of infrastructure configuration, hundreds of compliance controls and more. Infrastructure-as-Code (IaC) is a scripting approach optimized for building and operating these configurations. But there are several challenges with IaC in its current form:

DevOps is a very difficult skill

DevOps demands that a single individual be proficient in operations and security as well as programming (i.e., Infrastructure-as-Code). These have traditionally been three independent job profiles. Developers are not operators, operators' programming skills are often limited to basic scripting, and most operators don't have a good grasp of compliance standards.

There are ready-made libraries and modules for some standard functions, but an engineer without a sound operations background still cannot build and operate IaC.

IaC cannot enforce compliance by itself

Being a scripting tool that requires attended execution, IaC's scope is limited to the time when the user executes it. There are many scenarios in which the infrastructure may deviate from the desired state, including users making changes directly in the cloud. One therefore needs to build out-of-band systems that monitor for drift and alert a user to take corrective action manually. Compare this with intent-based configuration management systems like Kubernetes, AWS and Azure, where once the intent is configured the platform drives the underlying infrastructure to the goal state, detects drift, and remediates it.

Lack of ability to track Intent

None of the platforms like Azure, AWS, or Kubernetes are built on top of scripting tools; they are all written in higher-level programming languages. IaC is a scripting tool that executes instructions serially and is meant to be attended by a human. A self-service cloud automation solution requires an intent-based platform where you define a higher-level specification and the platform asynchronously applies the configuration to the cloud provider, coordinating the various dependencies in a state machine. One cannot build a self-service cloud management platform using Terraform alone.

IaC does not provide a user interface, RBAC or access control

For ongoing operations and debuggability, multiple users need scoped access to the cloud components. Role-based access, JIT access control following the principle of least privilege, and the integration of various other such operational elements all need to be built and are not in the scope of IaC.

It is unrealistic to expect that with IaC automation alone, developers would own the end-to-end lifecycle of an application, from coding and running deployments to maintaining uptime, thereby achieving developer self-service. Hence the need for an IDP that abstracts things away from developers while providing a self-service system with minimal requirements for operational and security expertise.

Desired Goals and KPIs for an IDP

Like all software projects, it is important to have clear goals and KPIs. In the case of infrastructure automation it is especially important to define these, because the level of automation can span a very wide spectrum. The following are the key goals we set while building DuploCloud, along with the KPIs we have tracked toward those goals:

Reduction in manual labor and Cost Savings

The bottom line for the success of cloud automation is the reduction in the level of human involvement in daily operations. The best way to measure this is the number of DevOps engineers an organization has to employ in proportion to the size of its cloud workload, measured in terms of either virtual machines or cloud services. Figure 1 quantifies this metric. We have also seen that in most organizations SecOps is a dedicated job profile; if the IDP is built right, compliance and security need not require separate headcount.

 

As detailed in the blog Are You Spending Too Much on DevOps? – DevOps (duplocloud.com), 80% of DevOps cost is manual labor while 20% is tools. Reductions in manual labor therefore translate directly into cost savings.

 

Infrastructure Size | In-house DevOps Engineers
Less than 50 VMs and 10 microservices | 0 – 1
50 – 200 VMs and 30 – 50 services | 1 – 2
More than 200 VMs and 100+ services | 2 engineers + (1 engineer for every 200 VMs) + 1 SecOps engineer

Figure 1: KPI for reduction of human labor and operational cost

Comprehensive Automation Platform

An IDP should automate most low-level tasks and only expect users to specify high-level intent. This ensures that developers can get things done without knowing many low-level details. While DevOps automation is a broad spectrum, one should strive to automate 95% or more of the functionality out of the box in the platform. The KPIs for this goal are the number of cloud automation functions, cloud provider services and third-party tools that can be deployed using the platform. Figure 2 shows representative services DuploCloud supports; new services are added on a monthly release cadence, and user-requested services typically take 1 – 2 weeks. Once added to the platform, they are available to all users.
CI/CD: Integrations with Jenkins, GitHub Actions, Bitbucket, GitLab, CircleCI and Azure DevOps; DAST; SAST; self-hosted runner management

Observability and Diagnostics: Central logging with OpenSearch; metrics with Prometheus, Grafana, Azure Monitor and CloudWatch; alerting with PagerDuty, Sentry and New Relic; audit trails

Application Provisioning:
  • Containers: Kubernetes; ECS; Azure Web App; GKE Autopilot
  • Big Data: Airflow on K8s (MWAA); Spark; EMR; Glue; Data Pipeline
  • Serverless: Lambda; Azure Functions; GCP Cloud Functions; AWS Batch; CloudFront
  • AI/ML: SageMaker; Kubeflow; Azure ML Studio

Cloud Platform Services (AWS, Azure and GCP):
  • Managed Services: 200+ cloud PaaS services such as managed databases, Redis, managed Kafka, message queues, SNS, Service Bus and S3
  • Access Control: Single sign-on; just-in-time access; local development; kubectl, app shell and VM SSH
  • Connectivity: Load balancers, ingress, DNS, WAF, security groups
  • Configs and Secrets: Secrets Manager, SSM, K8s ConfigMaps and Secrets, Azure Key Vault
  • Data Protection and Backup: Snapshots, Azure Backup, Log Analytics, database and OpenSearch backups
  • Encryption: KMS, certificates
  • Cost Management: Per-service and per-tenant cost views; resource tagging; resource quotas; billing alerts
  • High Availability: VM auto-scaling; Kubernetes cluster and pod auto-scalers; availability zones; multi-region deployments

Networking and Guard Rails: VNET/VPC subnets and routing; VPN and peering; CloudTrail

Figure 2: Representative services supported by DuploCloud as a KPI for comprehensiveness of the platform (as of 08/15/2022).

Developer Self-service

While this is an important goal and KPI for an IDP, it is also difficult to quantify because developer skill levels vary widely. We have chosen to quantify this goal using the metrics shown in Figure 3: 50,000 infrastructure changes per month are enabled across 75 organizations, with the overwhelming majority of users being developers. Note that across our user base there are only 15 DevOps engineers for 800 developers, a very low number for this scale of infrastructure.

Customers: 75
VMs: 2,200
Developers: 800
Containers: 7,000
DevOps engineers: 15
Unique cloud services: 200
Cloud providers: 4
Avg. infra changes/month: 50,000
Cloud spend under management: $8M/yr
Compliance certifications/yr: 45
170% YoY growth in user base and infrastructure under management

Figure 3: Developer self-service KPIs (as of 08/15/2022). All numbers cumulative across clients.

Time to Compliance

Compliance with regulatory standards has now become a table-stakes requirement for operating cloud infrastructure. Security and compliance cannot be an afterthought for an IDP. Thus an important metric for an IDP is time to compliance, and if the organization operates in multiple verticals, all of those standards need to be supported. In our own experience, for an overwhelming majority of our customers the primary motivation to adopt the DuploCloud platform was to achieve regulatory compliance for their cloud infrastructure. DuploCloud's automation approach is inherently secure and compliant, as the platform bakes compliance controls into infrastructure provisioning.

Standards supported out-of-box: 10+
Avg. time to implement: 2 – 4 weeks
Unique customers certified/yr: 45
Biggest infrastructure certified: 400 VMs, 1,000 containers
Avg. audits per month across the customer base: 4

Compliance KPIs (as of 08/15/2022)

Secure by Design

DuploCloud's strength lies in the fact that the platform controls the end-to-end configuration stack, covering more than 80% of the controls in various security standards.

Standard | Controls Implemented | Detailed Documentation
SOC 2 | 80 | https://duplocloud.com/white-papers/soc-2-compliance-with-duplocloud/
HIPAA | 29 | https://duplocloud.com/white-papers/pci-and-hipaa-compliance-with-duplocloud/
PCI | 79 | https://duplocloud.com/solutions/security-and-compliance/pci-dss/
ISO | 50+ |
HiTrust | 75+ |
NIST | 200+ |

Secure by Design KPIs (as of 08/15/2022)

Cost Savings

As noted above, 80% of DevOps cost is manual labor while 20% is tools. Using DuploCloud, organizations are able to reduce the resources required by an order of magnitude.


Design and Architecture

The founding team at DuploCloud were among the original builders of the public cloud, working at Azure and AWS back in 2008 and helping build platforms where millions of workloads are deployed across the globe yet managed by just a handful of operators. The design of DuploCloud comes from their learnings and experience in this hyper-scale environment. There are six key elements to the DuploCloud design:

Self-Hosted and Single Tenant

The DuploCloud platform is a self-hosted solution deployed within the customer's cloud account. It inherits permissions from the Instance Profile / Managed Identity of the VM and manages the environment through cloud provider APIs. With the customer's permission, DuploCloud provides a fully managed service to maintain uptime, updates and ongoing support. In the case of AWS, each account has a DuploCloud VM and a unique endpoint, in alignment with the IAM architecture that is tied to an account. In the case of Azure, a single DuploCloud VM maps to an Azure AD and can manage multiple subscriptions.

No-code / Low-code UX

DuploCloud gives users the option of either a purely no-code UI or a low-code Terraform provider (for those who prefer IaC). DuploCloud's Terraform provider is like an SDK in Terraform that allows the user to configure cloud infrastructure using DuploCloud constructs, rather than directly using lower-level cloud provider constructs. This gives the user the benefits of Infrastructure-as-Code while significantly reducing the amount of code that needs to be written. The DuploCloud Terraform provider simply calls DuploCloud APIs. Our DevOps White Paper provides detailed examples.

It is important to note that Terraform is a layer on top of DuploCloud; DuploCloud does not generate Terraform underneath to provision the cloud provider. Rather, DuploCloud provisions via native cloud APIs.
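A minimal sketch of this layering, under the assumption of a hypothetical endpoint path and field names (the real API may differ): a provider resource is translated into a single REST call to DuploCloud, which then drives the cloud provider natively.

```python
# Hypothetical sketch (not the documented API): a Terraform provider resource
# becomes one REST call against DuploCloud. Path and field names below are
# illustrative assumptions only.
import json

def build_host_request(tenant_id: str, name: str, capacity: str) -> dict:
    """Shape of the API call a provider resource might make to create a VM host."""
    return {
        "method": "POST",
        "path": f"/v3/subscriptions/{tenant_id}/hosts",  # illustrative path
        "body": json.dumps({
            "FriendlyName": name,
            "Capacity": capacity,  # instance size; AMI, IAM role, SGs, subnets are filled in by the platform
        }),
    }

req = build_host_request("tenant-1234", "web-01", "t3.medium")
```

The point of the sketch is the division of labor: the user supplies high-level intent (a name and a size), and the lower-level construction happens inside DuploCloud rather than in Terraform state.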

Application Focused Constructs / Policy Model

The greatest capability of the DuploCloud platform is the application-centric abstraction created on top of the cloud provider, which enables users to deploy and operate their applications without knowledge of lower-level DevOps nuances. Further, unlike a PaaS such as Heroku, the platform does not get in the way of users consuming cloud services directly from the cloud provider (for example, operating directly on constructs like S3, DynamoDB and Lambda functions), while offering greater scale and unlimited flexibility.

Some concepts relating to security (DevSecOps) are hidden from the end user, for example IAM roles, KMS keys, etc. However, even those are configurable by the operator, and since this is a self-hosted platform running in the customer's own cloud account, the platform is capable of working in tandem with direct changes made on the cloud account by an administrator. This is explained with examples at https://duplocloud.com/white-papers/DevOps/

While there are many concepts in the policy model, the following are the main ones to be aware of:

Infrastructure

Each Infrastructure is a unique VNET/VPC in a region, with a Kubernetes cluster (e.g., AKS) and a Log Analytics workspace among other constructs.

Tenant

The Tenant is the most fundamental construct in DuploCloud. It is essentially a project or workspace and is a child of the Infrastructure. While the Infrastructure provides VNET-level isolation, the Tenant is the next level of isolation, implemented by segregating tenants using security groups, managed identities, Kubernetes namespaces in the parent AKS cluster, key vaults, etc. At the logical level, a Tenant is fundamentally four things:

Container of resources

All resources (except those corresponding to the Infrastructure) are created within the Tenant. If we delete the Tenant, all resources within it are terminated.

Security Boundary

All resources within a tenant can talk to each other. For example, a Docker container deployed in an Azure VM instance within the tenant will have access to storage accounts and SQL instances within the same tenant. SQL instances in another tenant cannot be reached by default. Tenants can expose endpoints to each other either via load balancers or via explicit inter-tenant SG and Managed Identity policies.

User Access Control

Self-service is the bedrock of the DuploCloud platform. To that end, users can be granted Tenant level access.

Billing Unit

Each Tenant is also a billing unit, so customers can see the billing dashboard segregated by Tenant. This helps them know the cost of each application deployment environment, such as dev, staging and production.

Plan

Corresponding to each Infrastructure is the concept of a Plan. A Plan is a placeholder, or template, for configurations. These configurations are applied consistently to all tenants within the Plan (or Infrastructure). Examples of such configurations are:

  • Certificates available to be attached to load balancers in tenants of this plan
  • Machine images
  • WAF web ACLs
  • Common policies and SG rules to be applied to all resources in tenants within the plan
  • Resource Quota: Plan also has a resource quota that is enforced in each of the tenants within that plan
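The Infrastructure / Plan / Tenant hierarchy described above can be sketched as a simple data model. The class and field names here are ours, purely for illustration of how Plan settings fan out to every Tenant:

```python
# Illustrative model of the policy hierarchy: a Plan holds shared configuration
# (certificates, images, SG rules, quotas) that is applied uniformly to every
# Tenant in the corresponding Infrastructure. Names are ours, not the API's.
from dataclasses import dataclass, field

@dataclass
class Plan:
    certificates: list = field(default_factory=list)
    machine_images: list = field(default_factory=list)
    common_sg_rules: list = field(default_factory=list)
    resource_quota: dict = field(default_factory=dict)

@dataclass
class Tenant:
    name: str
    resources: list = field(default_factory=list)       # container of resources
    effective_config: dict = field(default_factory=dict)

@dataclass
class Infrastructure:
    name: str                         # maps to one VNET/VPC + Kubernetes cluster
    plan: Plan = field(default_factory=Plan)
    tenants: list = field(default_factory=list)

    def apply_plan(self):
        # Plan settings are pushed consistently to every tenant in the plan.
        for t in self.tenants:
            t.effective_config = {
                "certificates": self.plan.certificates,
                "quota": self.plan.resource_quota,
            }

infra = Infrastructure("prod-east", Plan(certificates=["arn:cert/web"], resource_quota={"vm": 20}))
infra.tenants = [Tenant("dev"), Tenant("staging")]
infra.apply_plan()
```

Deleting a Tenant in this model would simply drop its resource list, which mirrors the "container of resources" semantics described earlier.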

Rules-based Engine

As the user submits higher level deployment configurations via the application centric interface, an internal rules-based engine translates this to low level infrastructure constructs automatically while also incorporating the desired compliance standard.

State Machine

The fundamental limitation of IaC is that it is a serial execution of steps requiring human supervision. The DuploCloud platform includes an intelligent state machine that applies the lower-level configuration generated by the rules engine to the cloud provider, invoking the APIs asynchronously in multiple threads. Repeated failures are flagged as faults in the user interface.
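The apply loop just described can be sketched as follows. This is our own minimal illustration, not DuploCloud's implementation: worker threads drain a queue of low-level configs, retry transient failures, and flag repeated failures as faults instead of halting.

```python
# Minimal sketch of an asynchronous, multi-threaded apply loop with retry and
# fault flagging. Illustrative only; the real state machine is internal to the platform.
import queue
import threading

MAX_RETRIES = 3

def apply_all(configs, apply_fn, workers=4):
    """Apply each config dict via apply_fn; return names of faulted configs."""
    work = queue.Queue()
    for cfg in configs:
        work.put((cfg, 1))
    faults, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                cfg, attempt = work.get_nowait()
            except queue.Empty:
                return
            try:
                apply_fn(cfg)                   # e.g. an idempotent cloud API call
            except Exception:
                if attempt >= MAX_RETRIES:
                    with lock:
                        faults.append(cfg["name"])   # surfaced as a fault in the UI
                else:
                    work.put((cfg, attempt + 1))     # retry asynchronously
            finally:
                work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    work.join()  # returns once every config is applied or faulted
    return faults
```

The key contrast with attended IaC runs: one failing resource does not block the rest, and the outcome is a fault list rather than a half-finished script.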

Ongoing Reconciliation

The system constantly compares the current state of the infrastructure with the desired state, which includes the compliance standards and security requirements. If there is a difference, DuploCloud either auto-remediates it or raises an alert.
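A reconciliation pass of this kind can be sketched as below. The drift classification (auto-remediate vs. alert) is our own illustrative choice, not the platform's actual policy:

```python
# Sketch of continuous reconciliation: compare desired state (including
# compliance settings) with actual state, auto-remediate routine drift, and
# alert on sensitive drift. The classification rule here is illustrative.
def reconcile(desired, actual, remediate, alert):
    """desired/actual map resource name -> config dict."""
    for name, want in desired.items():
        have = actual.get(name)
        if have == want:
            continue                                  # already at goal state
        if have is None:
            remediate(name, want)                     # missing resource: recreate
        elif have.get("encrypted") != want.get("encrypted"):
            alert(name, "encryption drift detected")  # sensitive: raise an alert
        else:
            remediate(name, want)                     # routine drift: converge
```

Run periodically, a loop like this catches the out-of-band changes (e.g., a user editing the cloud console directly) that attended IaC execution cannot see.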

User Personas and Workflows

There are four main user personas: Administrators, Developers, Security Admins and SREs. Each persona is captured by a set of workflows and features.

Administrators (used by DevOps)

This part of the platform covers the role of the administrator, typically played by an in-house DevOps engineer or a team lead. Three types of activities or workflows involve administrators:

Resource Provisioning

These are resources that are relatively infrequently created and/or updated. A few examples of these are:

  • Infrastructure setup, including VPC/VNETs, subnets, the Kubernetes cluster and, in the case of Azure, Log Analytics and an Azure Automation account, among other things.
  • Kubernetes upgrades, one of the core administrative capabilities.
  • Setup of the centralized diagnostics stack, such as OpenSearch, Prometheus and Grafana, used by the tenants.

Create resources directly in Cloud Provider and Reference them in DuploCloud

Many resources like DNS domains, SSL certificates, WAF rules and hardened images are typically created outside of the platform, and their identifiers are then added to the DuploCloud platform under the "Plan" construct.

User Access and RBAC

Administrators control which users have access to what tenants and their roles.

Resource Quotas

Administrators are able to limit users' ability to create resources of a specific type and size within the tenant.

Foundational Security Controls

Administrators control the setup of various application agnostic security features like AWS CloudTrail, AWS SecurityHub, Azure Defender, and others.

Policies and Guard Rails

There are several policies and guard rails configurable in the system. For example, blocking tenant users from exposing public endpoints, enforcing certain prefixes for S3 buckets and S3 bucket policies that should apply across the system.

Resource Tagging

Administrators can set tags at the Tenant level that will automatically be propagated and applied to all the resources created within the Tenant.
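The tag propagation described above amounts to a simple merge, which can be sketched as follows (resource-specific tags winning over tenant defaults is our assumption for illustration):

```python
# Illustrative sketch of tenant-level tag propagation: tags set on the Tenant
# are merged onto every resource created inside it. We assume explicit
# resource tags take precedence over inherited tenant tags.
def propagate_tags(tenant_tags: dict, resource_tags: dict) -> dict:
    merged = dict(tenant_tags)
    merged.update(resource_tags)   # resource-specific tags override defaults
    return merged

tags = propagate_tags({"tenant": "staging", "cost-center": "eng"}, {"role": "db"})
```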

Developer Role (used by Developers and Data Scientists)

Developers form the majority of our audience as DuploCloud is essentially a Developer Platform. Developers are responsible for deploying, updating and managing their application infrastructure within a given Tenant. Each user can have access to multiple Tenants and each tenant may have multiple users. The main developer workflows can be categorized as follows:

Cloud Service Deployments

These include dozens of cloud provider services like EC2, Azure VMs, S3, Azure Blob Storage, RDS, MSK, Managed OpenSearch, SQS, SNS, Redshift, Azure DB, etc. We support hundreds of services, and new services are added regularly; the typical turnaround time to add a new cloud provider service is about a week.

Config and Secrets Management

Developers can leverage a vast set of cloud-native services for this purpose, like Kubernetes Secrets and ConfigMaps, AWS SSM Parameter Store, Secrets Manager, Azure Key Vault, etc. Developers can thus create, update and manage the secrets referenced by their applications in a self-service way, without dealing with the lower-level nuances of policies, encryption, Kubernetes drivers, etc. See the documentation page Passing Config and Secrets for more detailed information.

Application Deployment

Developers commonly use four application deployment patterns:

Docker

DuploCloud integrates with Cloud managed Kubernetes like EKS, AKS, GKE or cloud provider container orchestrators like ECS and Azure Web App. Almost all complexities of Kubernetes are hidden from the user.

Serverless

Lambda, Azure Functions and GCP Cloud Functions are typical serverless services with which developers deploy their applications.

Big Data

EMR, Apache Airflow, Glue, Azure Databricks are examples of services data scientists use.

AI/ML

Sagemaker, Azure Machine Learning are examples of AI/ML services.

Application connectivity

Exposing applications via load balancers, ingress controllers and API gateways, including configuring SSL certificates (provisioned by admins).

Local Development

JIT Access keys for Local Development

Developers often need to build and test code in a local environment, and for that they need access to cloud provider services via access keys. DuploCloud facilitates this by creating tenant-scoped keys with a limited lifetime. See the documentation page JIT Access: Access Through Command Line for more detailed information.
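The shape of such a credential can be sketched as follows. The field names and the issuing function are hypothetical, purely to illustrate the two properties that matter: tenant scoping and a short expiry. The real flow is described in the JIT Access documentation.

```python
# Hypothetical sketch of a JIT credential for local development: scoped to a
# single tenant and expiring after a short TTL. Field names are illustrative.
import datetime

def issue_jit_credentials(tenant_id: str, ttl_minutes: int = 60) -> dict:
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "scope": f"tenant:{tenant_id}",   # keys only work inside this tenant
        "access_key_id": "ASIA-EXAMPLE",  # placeholder, not a real key
        "expires_at": now + datetime.timedelta(minutes=ttl_minutes),
    }

creds = issue_jit_credentials("dev-tenant", ttl_minutes=30)
```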

Diagnostics Workflows for DevOps, Developers and SRE Personas

There are four key diagnostics functions leveraged by DuploCloud users:

Cloud Portal, Kubectl and Shell Access

Developers sometimes need direct access to cloud portals and services, kubectl, and the application container's shell. DuploCloud creates Just-in-Time (JIT) access into these systems by orchestrating underlying substrates like Kubernetes service accounts, AWS federated login and Azure AD. Access is granted strictly on a need basis, following the principle of least privilege; for example, when a user gets access to kubectl, it is scoped to the tenant's namespace only.

Central Logging

Central logging is implemented by orchestrating Elasticsearch, Kibana and Filebeat. Internally, nuances such as AKS service accounts, ES ILM policies and index lifecycles are automated. Kibana dashboards are displayed per tenant and per service.

Metrics

Metrics are implemented using Prometheus, Grafana, and Azure monitoring with the platform managing the lower level nuances around AKS and Azure for the same.

Monitoring and Alerts

The platform is constantly monitoring the infrastructure for anomalies by default and also allows the user to define custom alerts.

Notifications

DuploCloud consolidates all anomalies in the system, tenant by tenant, into the Faults section, which is sent to one of the many supported alerting tools like Sentry, PagerDuty and New Relic.

Security and Compliance Workflows for the SecOps Persona

Built-in best practices for various security standards are core to the DuploCloud platform. Detailed security whitepapers that describe the implementation of security controls in depth are at https://duplocloud.com/white-papers/

Compliance Standards

The DuploCloud platform implements compliance controls to the level of NIST 800-53, which is a superset of virtually all standards we have come across and subsumes, at the level of cloud infrastructure, most other compliance standards. More than 70% of our user base operates in regulated industries and leverages DuploCloud for the following standards:

  • SOC2
  • HIPAA
  • PCI-DSS
  • ISO
  • GDPR
  • NIST
  • HITRUST

Secure by Design

If one goes through the list of security controls in standards like PCI and SOC 2, one sees that 70% of the controls are to be implemented at provisioning of the resources, while 30% are monitoring controls applied post-provisioning. The advantage of DuploCloud being an end-to-end automation platform is that all the necessary controls are injected into the configuration automatically, both at provisioning time and post-provisioning. This is in contrast to a traditional security approach, where SecOps teams get involved mostly in post-provisioning monitoring.

Examples of Provisioning Time Controls

  • Network Provisioning and Landing zones that includes VPC/VNET/VPN
  • Access control roles and policies using cloud provider IAM
  • Encryption at rest using cloud provider key management systems like KMS, Azure Key Vault etc.
  • Transport Encryption (transit) using certificates that involves configuring load balancers, gateways, and certificate managers
  • Secrets management using secret stores like AWS secret store, Azure Key Vault, Kubernetes secrets
  • Provisioning scores of cloud-native services like S3, DynamoDB, Azure Storage, Kafka, OpenSearch, etc. This includes tying together various access policies, availability considerations, scale and, of course, various compliance configurations. As an example, while setting up S3 the system manages SSE, public access block, versioning (when needed) and IAM access control, among other things.
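Taking the S3 example above, the provisioning-time controls can be expressed as the AWS API parameter payloads an automation platform would inject. The payload shapes below mirror the public S3 PutBucketEncryption, PutPublicAccessBlock and PutBucketVersioning APIs; how DuploCloud assembles them internally is not shown here.

```python
# Sketch of S3 provisioning-time controls as AWS API parameter payloads:
# server-side encryption on by default, all public access blocked, and
# versioning enabled when requested.
def secure_bucket_settings(versioning: bool = True) -> dict:
    return {
        "encryption": {  # PutBucketEncryption payload shape
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
        },
        "public_access_block": {  # PutPublicAccessBlock payload shape
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
        "versioning": {"Status": "Enabled" if versioning else "Suspended"},
    }

settings = secure_bucket_settings()
```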

Examples of Post Provisioning Controls

  • Vulnerability Detection
  • CIS benchmarks
  • Cloud vulnerability and CloudTrail monitoring
  • File Integrity Monitoring
  • Host and Network Intrusion Detection
  • Virus Scanning and Malware detection
  • Inventory management
  • Host Anomaly Detection
  • Email Alerting
  • Incident Management
For a detailed list of security controls categorized by standards please refer to https://duplocloud.com/white-papers/

Foundational Guard Rails and System Setup

Security features like AWS CloudTrail, AWS SecurityHub, Azure Defender and AWS GuardDuty, as well as baseline policies, can be turned on with a simple click.

SIEM (Security Incident and Event Management)

SIEM is a centralized system to aggregate and process all events. We use the open-source Wazuh as the SIEM, orchestrated and integrated into the workflows. The primary functions of the system are:

  • Data Repository
  • Event Processing Rules
  • Dashboard
  • Events and Alerting

Distributed agents of this platform (OSSEC agents) are deployed at various endpoints (VMs in the cloud), where they collect event data from various logs: syslogs, virus scan results, NIDS alerts, file integrity events, etc. The data is sent to a centralized server, where a set of rules produces events and alerts that are typically stored in Elasticsearch, from which dashboards can be generated. Data can also be ingested from sources like AWS CloudTrail, AWS Trusted Advisor, Azure Security Center and other non-VM sources.

Agent Modules

For many of the security features, several agent-based software packages are installed in each VM that is in scope. A few examples are the Wazuh agent that fetches all the logs, the ClamAV virus scanner, AWS Inspector for vulnerability scanning, and the Azure OMS and CloudWatch agents for host metrics. While these agents are installed by default, DuploCloud provides a framework where the user can specify an arbitrary list of agents, and DuploCloud will install them automatically in any launched VM. If any of these agents crashes, DuploCloud sends an alert. This feature can also be used to integrate your own XDR, SIEM and other solutions by adding their agents to the list.
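A hypothetical illustration of such an agent list follows; the concrete format is defined by the product, and the field names here are our own. Each entry names the package to install on every launched VM and the process to watch so that a crash can raise an alert:

```python
# Hypothetical agent-list specification (field names are illustrative, not the
# product's actual schema): package to install plus a process to watch.
AGENTS = [
    {"name": "wazuh-agent", "package": "wazuh-agent", "watch_process": "ossec-agentd"},
    {"name": "clamav", "package": "clamav-daemon", "watch_process": "clamd"},
]

def crashed_agents(running_processes):
    """Agents whose watched process is not running (would trigger an alert)."""
    return [a["name"] for a in AGENTS if a["watch_process"] not in running_processes]
```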

Audit Trails in Application Context

When using raw IaC without a management system like DuploCloud, DevOps teams build cloud deployments from an operations and infrastructure viewpoint rather than that of the application, and most of the time resources are not appropriately tagged with application context. If one then requires an audit trail at the cloud provider level, such as AWS CloudTrail or Azure event logs, it is hard to correlate it with the application. With DuploCloud, audit trails are available per tenant, with detailed application-specific metadata in the trails.

AWS SecurityHub and Azure Defender

DuploCloud integrates natively with cloud providers' solutions like AWS SecurityHub and Azure Defender, including their setup, management and operations.

Inventory

Inventory management is a key element of security and cost management, as well as a compliance need. The DuploCloud platform manages inventory at three levels:

Tagging

All resources are tagged by default with the tenant name, plus any custom tags set by the user at the tenant level. When new resources are created within the tenant, all tags are automatically propagated to the underlying resources associated with the Tenant.

Cloud Inventory

DuploCloud provides a catalog of all the resources, both in an application-centric view and in a flat cloud-service view.

VM Inventory

Through both the SIEM and cloud provider solutions like AWS Inspector or the Azure Monitor agent, we pull OS-level inventory.

Continuous Integration and Deployment (CI/CD)

CI/CD is a layer on top of DuploCloud, and hence any CI/CD system, such as Jenkins, GitHub, GitLab or Azure DevOps, can integrate seamlessly with DuploCloud by either calling our REST APIs or via Terraform. One builds pipelines and CI/CD workflows in these systems, and they invoke DuploCloud via APIs or Terraform.

We have created prepackaged libraries and modules to invoke DuploCloud functionality from CI/CD systems like GitHub Actions. Please see the documentation at https://docs.duplocloud.com/docs/ci-cd/github-actions

Following are the typical integration points between CI/CD systems and DuploCloud:

Cloud Access for Hosted Runners

In this case, builds are executed in the CI/CD platform's SaaS infrastructure, outside the organization's own infrastructure. For the builds to reach the infrastructure, they need either credentials or VPN access. DuploCloud facilitates this by providing JIT (Just-in-Time) access scoped to tenants for the build pipelines. Users create a "CICD" user in the DuploCloud portal with limited access to the desired tenants; a token is then created for this user and added to the CI/CD pipelines. The most common workflow builds a Docker image and pushes it to the cloud provider registry, with access to the registry facilitated via DuploCloud.

Deploying Self-Hosted Runners within the tenant

In this case, a set of build containers is deployed within the same tenant as the application itself. This allows the build to seamlessly access the tenant's resources as if it were the application, including Docker registries, internal APIs, object stores, SQL, etc.

Deployment of new Builds

Within the deployment step, once the Docker image has been built, the build script invokes DuploCloud's service update API with the tenant ID, service name and image ID as parameters, and DuploCloud then executes the deployment. Think of this as the same API the DuploCloud UI calls when a user updates a service image.
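A build script's call to this API can be sketched as below. The URL path and body field names are illustrative assumptions, not the documented endpoint; only the parameters named above (tenant ID, service name, image) are taken from the text.

```python
# Hypothetical sketch of the deployment step: after the image is built, the
# pipeline calls DuploCloud's service-update API. Path and field names are
# illustrative assumptions.
import json
import urllib.request

def build_update_request(base_url, token, tenant_id, service_name, image):
    body = json.dumps({"Name": service_name, "Image": image}).encode()
    return urllib.request.Request(
        f"{base_url}/subscriptions/{tenant_id}/serviceUpdate",  # illustrative path
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )

req = build_update_request("https://duplo.example.com", "CICD-TOKEN",
                           "tenant-1234", "web-api", "myrepo/web-api:build-42")
# urllib.request.urlopen(req) would execute the deployment
```

The token here is the limited-scope "CICD" user token described earlier, so the pipeline can only touch the tenants it was granted.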

Status Checks

In the CI/CD pipelines after a certain build has been deployed, the pipeline can invoke the DuploCloud API to get the overall status of the services.

Environment Create, Delete and Update

Some use cases involve bringing up a whole new environment by triggering a pipeline that underneath executes a Terraform script invoking DuploCloud to deploy the entire environment. Similarly, the environment can be destroyed by a user-triggered pipeline.


Summary

DuploCloud delivers an Internal Developer Platform out of the box, so organizations don't have to build one themselves by writing thousands of lines of code over many months or years. Developers can build, deploy and manage applications in a self-service manner within the guard rails defined by the DevOps and security teams. Compliance controls and security best practices are built in. The biggest impact of the platform is that engineers can own their services without being subject matter experts in operations and security. The platform allows developers to take services and apps from idea to production without needing to involve operations, which drives ownership: product teams are now responsible for configuration, deployment and rollback. Increased visibility and monitoring allow teams to collaborate better and troubleshoot faster.

DuploCloud is the world's first IDP that supports multiple clouds, handles security and compliance, and provides self-service to developers.

The three key advantages of using DuploCloud are:

  1. 10X faster automation
  2. Out-of-box secure and compliant application deployment
  3. Up to 70% reduction in cloud operating costs
