This is the complete product tour & demo of the env0 platform as presented originally on the SweetOps DevOps "Office Hours" (2020-12-16).
Transcript
env0, unlike HashiCorp, is not an advocate of Terraform alone. We acknowledge that Terraform is now the de facto standard for Infrastructure as Code, but we're taking a multi-framework approach.
We just added support for Terragrunt alongside Terraform, and next year we plan to add a few more icons here (Pulumi, CloudFormation, or others), but all in all we aim to support our customers in whatever technology they prefer.
I want to explain env0. The concept in env0 is that you have two types of users: the admins (the platform team, the DevOps team) and everyone else, mainly the developers, but really anybody who needs to provision and update cloud resources.
env0 has a hierarchy of a few entities. We have the global organization, and then we have projects. Each project is a logical entity that combines users, Git repos, permissions, cloud accounts, and policies in one entity called the "project." We usually see our customers create different projects for different types of usage in their cloud accounts: a project for production, a project for dev/staging environments, and so on. And we put a lot of focus on what I'd call our DNA, which is the on-demand, ephemeral developer pull-request environment. So, let's start looking at those.
So each project, even if it's for production or for on-demand environments, has variables that basically connect you to the cloud account and any other settings that you want. Those are sensitive and hidden. Then, the admin can define who can do what with which privileges.
So, we have the "viewer" role for the CFO or product managers, who can look at cloud resources and their cost. The "planner" role is for the approval flow: a planner (say, a developer working against production) can change a variable or a value, but another person or colleague needs to approve it. That's the "deployer," who can actually deploy cloud resources: typically DevOps in production, or the developers themselves in development projects.
An "admin" is the group manager who decides who can do what. And what is the "what"? The "what" is a catalog of Terragrunt- or Terraform-based Git repos that already exist in the customer's source control. But in addition to Terraform or Terragrunt, we have a unique concept called custom flows.
If you add `env0.yaml` to your Git repo, you can do whatever you want, whenever you want. And by "whenever," I mean before or after `terraform init`, `plan`, `apply`, or `destroy`, on success or on error, and some other great places. We see a lot of Chef, Puppet, Ansible, kubectl, the AWS and other CLIs, and Bash, Python, and PowerShell for the Windows-oriented. With all of that, the user can create a new environment from scratch and later on update those environments. So, that's how you create a new environment, and you can see the policies that we have here.
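To make that concrete, here is a purely illustrative example of the kind of script a custom-flow step might run before `terraform plan`. The file name, the checks, and the variable names are invented for this sketch, and the exact `env0.yaml` schema is defined in env0's documentation, not here.

```python
#!/usr/bin/env python3
"""check_required_vars.py - hypothetical pre-plan gate invoked by a custom-flow step.

Fails the deployment early if expected inputs are missing, instead of
letting it fail halfway through an apply.
"""
import os
import sys

# Variables this (hypothetical) template expects to be exposed to the step.
REQUIRED = ["TF_VAR_instance_type", "TF_VAR_owner_email"]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    print(f"Missing required variables: {', '.join(missing)}", file=sys.stderr)
    sys.exit(1)  # a non-zero exit aborts the deployment at this step

print("All required variables are present; continuing to terraform plan.")
```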
So, something very unique for the on-demand environments is the TTL policy, because it's easy to request resources, but those developers, QA engineers, or sales engineers doing demos eventually need to destroy those resources. So we have the concept of default and maximum time-to-live policies.
Okay, so you get the flexibility of self-service for the developers, and you can also give them the flexibility to extend an environment's lifetime, but only up to the maximum. And you cannot go beyond the maximum unless you are an admin. So that's one thing about the TTL policies.
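Conceptually, the default/maximum TTL policy behaves something like the sketch below. This is a mental model, not env0's actual implementation; the function name and the specific policy values are made up.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical policy values; in env0 an admin sets these per project.
DEFAULT_TTL = timedelta(hours=12)
MAX_TTL = timedelta(hours=72)

def expiry_for(requested: Optional[timedelta], is_admin: bool, now: datetime) -> datetime:
    """Return when a newly created or extended environment should be destroyed."""
    ttl = requested if requested is not None else DEFAULT_TTL  # no request -> default TTL
    if not is_admin:
        ttl = min(ttl, MAX_TTL)  # non-admins are capped at the project maximum
    return now + ttl

# A developer asking for a week still gets at most 72 hours:
print(expiry_for(timedelta(days=7), is_admin=False, now=datetime(2020, 12, 16)))
```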
And the other thing is our hierarchy of variables, which we've found very useful. We have four layers of places to put variables: global for the organization, per Git-repo template, per project, and specific to the environment. We can also simplify things for the user if needed with a drop-down list of allowed values. Finally, we have the option to deploy all the way or to stop after the plan, so let's just click here to apply all the way to an actual deployment.
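To make the four layers concrete, here is a tiny sketch of how layered variables typically resolve, with the most specific scope winning. The precedence order shown is the usual convention for this kind of hierarchy, so treat it as an illustration rather than a statement of env0's exact rules; the variable names and values are invented.

```python
# Illustrative only: four variable scopes merged from least to most specific.
organization = {"region": "us-east-1", "owner": "platform-team"}
template     = {"instance_type": "t3.small"}
project      = {"region": "eu-west-1"}            # project overrides the org default
environment  = {"instance_type": "t3.medium"}     # this environment overrides the template

effective = {**organization, **template, **project, **environment}
print(effective)
# {'region': 'eu-west-1', 'owner': 'platform-team', 'instance_type': 't3.medium'}
```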
So while this is being created, let's look at an already existing environment and understand how we manage existing environments.
Okay, so this is an existing environment. The person who wrote the template can define which outputs are exposed to the users so they can access the resources. But we also automatically provide interesting information: we parse the state, so we understand all of the resources that were created, the latest logs, the recent plans, all of the deployment history, the variable values that were used, the number of resources in that workspace, and something very unique to env0, our cost insights. We show actual cost over time, and we add tags.
If you look at our deployments, there is a step that's optional, but if you enable it, it will execute Terratag. Terratag is an open source project (you can check it out in our GitHub) that injects tags directly into Terraform code. We support the three main providers (AWS, Azure, and Google Cloud), and we understand which resources in those providers are taggable and which are not.
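Terratag itself is a standalone CLI that rewrites `.tf` files to add a given set of tags to every taggable resource. Below is a minimal sketch of driving it from a script; the `-dir` and `-tags` flags are the ones described in the Terratag README, but verify them against the version you install, and the directory and tag values here are invented for the example.

```python
import json
import subprocess

# Hypothetical tags to stamp on every taggable resource under ./infra.
tags = {"env0_environment": "demo", "owner": "platform-team"}

# Terratag walks the Terraform code in -dir and injects the -tags map into
# every resource that the provider (AWS, Azure, GCP) considers taggable.
subprocess.run(
    ["terratag", "-dir=./infra", "-tags=" + json.dumps(tags)],
    check=True,
)
```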
So you don't just have the orange line, you also have the vertical green lines that correlate a deployment to the cost. So, you can isolate and understand that this was a bad deployment that increased the cost. Or, that this was a neutral deployment, or this was a great deployment that reduced the cost.
You can easily understand from each deployment what happened and how it affected the cost. If you look at the plan, you will see that there is a good reason why this one affected the cost: the instance size here changed from small to medium.
In addition to our TTL policy, we have scheduling. So if you need to destroy those environments at night and over the weekend to reduce cost, and perhaps start them again automatically on Monday mornings, you can do that as well.
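The "off for the weekend, back Monday morning" pattern is the kind of thing a cron expression captures well. The snippet below only illustrates that schedule (it does not call env0's API, and whether env0 accepts raw cron syntax is something to confirm in its docs); the `croniter` library is used purely to preview when each rule would next fire.

```python
from datetime import datetime
from croniter import croniter  # pip install croniter

schedule = {
    "destroy": "0 20 * * 5",  # Friday 20:00: tear the environment down for the weekend
    "deploy":  "0 8 * * 1",   # Monday 08:00: bring it back before the team starts
}

now = datetime(2020, 12, 16, 12, 0)
for action, expr in schedule.items():
    print(action, "next occurrence:", croniter(expr, now).get_next(datetime))
```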
So now, let's talk about how you update an environment. To update an environment you click "redeploy" (everything you see here is available through the API or CLI as well). You can pull the latest code or change the code it's based on, and you can change the variable values. You can also redeploy automatically on push, and you see the plans on pull requests in your source control, like GitHub. You can filter that depending on whether you have a monorepo and which folders matter, so you tell us whether you want a redeploy for every Git push to that repo or only for changes in the relevant folder. So that's how you update an existing environment.
A few more interesting things… that's an environment we created four minutes ago, and it will be destroyed automatically in 12 hours. Before it's destroyed, the user gets a notification in advance (we have email support, Slack support, and soon Microsoft Teams support): "Your environment is about to be destroyed; do you need an extension?" and the user can extend it if they want. So that's a very useful capability for developers to have.
So now, the approval flow: if you have something waiting for approval, you get a notification for it, and it's also presented in a dedicated way.
With those pending environments, if you look at them you will see that you need to make a decision. The deployer can decide whether or not to deploy, "yes" or "no," and if you have several pending deployments, we manage them in a queue.
I will not go over policy as code in depth, but all in all we are integrated with OPA (Open Policy Agent); that's our way to do policy as code. And because it's an open standard with no vendor lock-in, you can take your OPA code with you later on if you don't want to use env0 anymore, and use it with Jenkins, Scalr, or Spacelift.
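For readers who haven't used OPA: the usual pattern is to write a Rego policy, load it into an OPA server, and then POST an input document (for example, a summary of a Terraform plan) to OPA's data API and act on the decision. The sketch below shows that generic pattern against a locally running OPA; the policy path and input fields are invented for illustration, and this is not env0's internal integration.

```python
import json
import urllib.request

# Hypothetical input: a summary of a Terraform plan we want OPA to judge.
plan_summary = {
    "resource_changes": [{"type": "aws_instance", "change": {"actions": ["create"]}}]
}

# OPA's data API: POST /v1/data/<policy path> with {"input": ...} returns {"result": ...}.
# "terraform/deny" is a made-up policy path for this example.
req = urllib.request.Request(
    "http://localhost:8181/v1/data/terraform/deny",
    data=json.dumps({"input": plan_summary}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    decision = json.load(resp)

violations = decision.get("result", [])
if violations:
    raise SystemExit(f"Policy violations: {violations}")
print("Plan passes policy checks.")
```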
A few more things: we want to add more capabilities to managed self-service, meaning more limits on what users can do. We already have environment limits, so you can say, for example, no more than three environments per user per project.
We also want to add budget limits; that's on our roadmap. Because we have the actual cost data, we can prevent deployments if they would exceed, say, $200 per month per user, $500 per month per team, or $1,000 per project.
I'm pretty much done here. I want to add that we support SAML on every tier, by the way, free or paid, so SMBs can use our SAML too. We're SOC 2 compliant as well, and we have unlimited concurrent runs on all of our tiers, including the free tier.
So that’s our story. One last thing, if you liked it, please follow us on Twitter at @envzero and my personal Twitter handle, @DevOpsOhad.
Thank you very much!