{"componentChunkName":"component---src-templates-tag-jsx","path":"/blog/tag/engineering/4/","result":{"data":{"prismic":{"allFeaturedblogs":{"edges":[{"node":{"featured_blogs_enabled":true,"heading":[{"type":"paragraph","text":"Featured posts","spans":[]}],"featured_blog_1":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":395},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/6d8d81b1-971a-4313-b033-b4e125cb14a0_MondoDB-blog-header-790x395.PNG?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Introducing DigitalOcean Managed MongoDB – a fully managed database as a service for modern apps","spans":[]}],"blog_post_date":"2021-06-29","blog_post_content":[{"type":"paragraph","text":"MongoDB is one of the most popular databases, and it’s ideal for apps that evolve rapidly and need to handle huge volumes of data and traffic. It offers advantages like flexible document schemas, code-native data access, change-friendly design, and easy horizontal scale-out.","spans":[{"start":22,"end":44,"type":"hyperlink","data":{"link_type":"Web","url":"https://db-engines.com/en/ranking","target":"_blank"}}]},{"type":"paragraph","text":"However, building and maintaining MongoDB clusters from the ground up can be a huge undertaking. Developers often complain that they have to spend their valuable time and resources on database management. Well, we’ve been listening and have some great news: accessing and managing MongoDB on DigitalOcean just got a lot simpler!","spans":[]},{"type":"paragraph","text":"We are excited to announce that DigitalOcean Managed MongoDB is now in General Availability. Managed MongoDB is a fully managed database-as-a-service (DBaaS) offering from DigitalOcean, built in partnership with and certified by MongoDB Inc. It provides you with all the technical capabilities that make MongoDB so beloved in the developer community. Together we have ensured that you will get access to all the latest releases of the MongoDB document database as they become available.","spans":[{"start":32,"end":91,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/managed-databases-mongodb/"}},{"start":229,"end":240,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mongodb.com/","target":"_blank"}}]},{"type":"paragraph","text":"Managed MongoDB simplifies MongoDB administration. Developers of all skill levels, even those who do not have prior experience in databases, can spin up MongoDB clusters in just a few minutes. We handle the provisioning, managing, scaling, updates, backups, and security of your MongoDB clusters, allowing you to offload the complex, time-consuming – yet critical – database administration tasks to us. 
This empowers you to focus on what really matters: building awesome apps.","spans":[]},{"type":"embed","oembed":{"height":113,"width":200,"embed_url":"https://www.youtube.com/watch?v=NvHQSV7jnKA","type":"video","version":"1.0","title":"Create a MongoDB Database on DigitalOcean","author_name":"DigitalOcean","author_url":"https://www.youtube.com/c/Digitalocean","provider_name":"YouTube","provider_url":"https://www.youtube.com/","cache_age":null,"thumbnail_url":"https://i.ytimg.com/vi/NvHQSV7jnKA/hqdefault.jpg","thumbnail_width":480,"thumbnail_height":360,"html":"<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/NvHQSV7jnKA?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"}},{"type":"heading2","text":"Benefits of Managed MongoDB","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"list-item","text":"Easy setup and maintenance: We create the database clusters for you. Simply choose the cluster configuration (e.g., memory, disk size, number of nodes, etc.), and the data center in which you want to host the database. Follow a few simple steps and your database cluster will be up and running in a matter of minutes. You can spin up clusters using the cloud control panel, CLI, or API.\n\n","spans":[{"start":0,"end":27,"type":"strong"}]},{"type":"list-item","text":"Automatic daily backups with point-in-time recovery: Data is one of the most important assets of an app, so it’s critical to back up your database. We take backups of your entire clusters automatically on a daily basis, for free. We also provide point-in-time recovery for 7 days, so that if things go wrong due to human error, machine error, or some combination of both, you can easily restore the database as it was at any point in the previous 7 days. \n\n","spans":[{"start":0,"end":52,"type":"strong"}]},{"type":"list-item","text":"Automatic updates and access to latest MongoDB releases: You get access to MongoDB 4.4. This is the latest release of MongoDB and comes packed with numerous enhancements like hedged reads and Rust and Swift drivers. Since we have developed Managed MongoDB in partnership with MongoDB Inc, you will always get access to new releases as they become available. With Managed MongoDB, the updates happen automatically. Just select a date and time for the updates and we take care of the rest. This makes it easy to stay up to date with MongoDB releases without disrupting your business.\n\n","spans":[{"start":0,"end":56,"type":"strong"},{"start":148,"end":169,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mongodb.com/new","target":"_blank"}}]},{"type":"list-item","text":"High availability with automated failover: If your database goes down, it can take down the entire app, leading to bad customer experiences. With Managed MongoDB, you can easily minimize the downtime for your database and make it highly available with standby nodes. Standby nodes add redundancy, so if, for example, the primary node fails, the standby node is immediately promoted to primary and begins serving requests while we provision a replacement standby node in the background.\n\n","spans":[{"start":0,"end":42,"type":"strong"}]},{"type":"list-item","text":"Scale up easily to handle traffic spikes: As your app gains traction and the usage grows, it’s important to have a database that can keep up with the increased demand. 
With Managed MongoDB, you can easily scale up the size of database nodes when needed.\n\n","spans":[{"start":0,"end":41,"type":"strong"}]},{"type":"list-item","text":"Secure by default: Since data is critical, it also needs to be secure. We encrypt data at rest with LUKS and in transit with SSL. When you create a new cluster, it’s placed in a VPC network by default that provides a more secure connection between resources. You can also restrict access to your nodes to prevent brute-force password and denial-of-service attacks.","spans":[{"start":0,"end":18,"type":"strong"},{"start":178,"end":189,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/networking/vpc/"}}]},{"type":"heading2","text":"The need for Managed Databases","spans":[]},{"type":"paragraph","text":"DigitalOcean’s mission is to simplify cloud computing so developers, startups, and SMBs can spend more time building software that changes the world. While databases are a critical component to any application, building, maintaining, and scaling them can be complex and time consuming. For developers that are building apps for their business, database administration is often not a core focus area. But it’s quite common to find developers that write the code and then also roll up their sleeves to maintain databases. Such users would rather offload the tedious database administration and focus their limited time and energy on building and enhancing their apps. ","spans":[]},{"type":"paragraph","text":"With this in mind, we introduced Managed Databases a couple of years ago and are excited to add Managed MongoDB to our portfolio. With this release, DigitalOcean Managed Databases now supports the following engines:","spans":[{"start":33,"end":50,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/managed-databases/"}}]},{"type":"image","url":"https://images.prismic.io/www-static/87745cc1-1c5f-4463-b104-104b7fc30dc7_managed-databases-logos.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":849,"height":104}},{"type":"paragraph","text":"Managed MongoDB launch comes on the heels of DigitalOcean App Platform, a modern, reimagined PaaS (Platform as a Service) that we released a few months ago. App Platform makes it very easy to build, deploy, and scale apps and static sites. You can deploy code by simply pointing to your GitHub and GitLab repos, and App Platform will do all the heavy lifting of managing infrastructure, app runtimes, and dependencies. 
App Platform, along with Managed Databases, helps fulfill DigitalOcean’s mission by empowering developers, startups, and SMBs to focus more on their apps, and less on the underlying infrastructure and databases.","spans":[{"start":45,"end":70,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/app-platform/"}}]},{"type":"heading2","text":"How Managed MongoDB works","spans":[]},{"type":"paragraph","text":"DigitalOcean provides you with various compute options to build your apps, such as:","spans":[]},{"type":"list-item","text":"Droplets: On-demand, Linux virtual machines suitable for production business applications and personal passion projects.","spans":[{"start":0,"end":8,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/droplets/"}}]},{"type":"list-item","text":"DigitalOcean Kubernetes: Managed Kubernetes with automatic scaling, upgrades, and a free control plane.","spans":[{"start":0,"end":23,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/kubernetes/"}}]},{"type":"list-item","text":"DigitalOcean App Platform: A fully managed Platform as a Service.","spans":[{"start":0,"end":25,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/app-platform/"}}]},{"type":"paragraph","text":"No matter which compute option you choose to build your apps, you can easily add Managed MongoDB to it. In addition to this, Managed MongoDB also integrates with the Node.js 1-Click App from DigitalOcean Marketplace, making it a lot easier to build Node.js apps.","spans":[{"start":166,"end":215,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/nodejs"}}]},{"type":"heading2","text":"Simple, predictable pricing","spans":[]},{"type":"paragraph","text":"Just like all DigitalOcean products, Managed MongoDB provides simple, predictable pricing that allows you to control costs and prevent any surprise bills. You can spin up a database cluster for just $15/month, or a highly available three-node replica set for $45/month. Click here for more information.","spans":[{"start":270,"end":301,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/pricing/#managed-databases"}}]},{"type":"heading2","text":"Regional availability","spans":[]},{"type":"paragraph","text":"Managed MongoDB is currently available in the following regions:","spans":[]},{"type":"list-item","text":"NYC3 (New York, USA)","spans":[]},{"type":"list-item","text":"FRA1 (Frankfurt, Germany)","spans":[]},{"type":"list-item","text":"AMS3 (Amsterdam, Netherlands)","spans":[]},{"type":"paragraph","text":"We will be making Managed MongoDB available in other regions soon. 
Please check out the release notes for the most up-to-date information on regional availability.","spans":[{"start":88,"end":101,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/release-notes/"}}]},{"type":"heading2","text":"Join us at deploy, DigitalOcean’s virtual user conference","spans":[]},{"type":"paragraph","text":"Today we have deploy, DigitalOcean’s signature user conference, which focuses on celebrating, educating, and connecting awesome builders from all over the world.","spans":[{"start":14,"end":20,"type":"hyperlink","data":{"link_type":"Web","url":"https://deploy.digitalocean.com/home"}}]},{"type":"paragraph","text":"Check out the keynote session from DigitalOcean's CEO, Yancey Spruill, in which he talks about where we're headed as a company and shares some exciting product updates. His keynote will be followed by sessions from community members, engineers, customers, and other experts that are building technologies and businesses powered by the cloud. With live Q&A and an active Discord server, there’s ample opportunity to engage and learn something new. Click here to attend the deploy conference.","spans":[{"start":14,"end":69,"type":"hyperlink","data":{"link_type":"Web","url":"https://deploy.digitalocean.com/agenda/session/552806"}},{"start":347,"end":384,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/deploy-discord"}},{"start":461,"end":489,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/deploy"}}]},{"type":"paragraph","text":"We are also launching a hackathon for DigitalOcean Managed MongoDB. Learn how you can participate, submit an app, and get a t-shirt.","spans":[{"start":24,"end":66,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/mongodb-hackathon"}}]},{"type":"paragraph","text":"We hope you will give Managed MongoDB a try. Here are some sample datasets and sample apps that you can use to kick the tires. 
Check out the docs and let us know what you think!","spans":[{"start":22,"end":43,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/databases/new?engine=mongodb"}},{"start":59,"end":90,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/do-community/mongodb-resources","target":"_blank"}},{"start":141,"end":145,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.digitalocean.com/products/databases/mongodb/"}}]},{"type":"paragraph","text":"If you’d like to have a conversation about using DigitalOcean and Managed MongoDB in your business, please feel free to contact our sales team.","spans":[{"start":120,"end":142,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/company/contact/sales/"}}]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"André Bearfield","spans":[]},{"type":"paragraph","text":"Director of Product Management","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"André Bearfield","author_image":{"dimensions":{"width":553,"height":547},"alt":"André Bearfield","copyright":null,"url":"https://images.prismic.io/www-static/fdc7c85186f0a850b04083e1d4306bd1c19772e8_andre-bearfield.png?auto=compress,format"},"_meta":{"uid":"andre-bearfield"}},"_meta":{"uid":"introducing-digitalocean-managed-mongodb"}},"featured_blog_2":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":400},"alt":"Droplet Console","copyright":null,"url":"https://images.prismic.io/www-static/710499ae-78cc-4179-afc1-15793637b200_DODX3727-790x400-logo-2.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Securely connect to Droplets with SSH key pairs using a new Droplet Console","spans":[]}],"blog_post_date":"2021-08-10","blog_post_content":[{"type":"paragraph","text":"The famous author Ken Blanchard once said, “Feedback is the breakfast of champions.” This is something we truly believe at DigitalOcean, and we always strive to enhance our products based on customer feedback.","spans":[]},{"type":"paragraph","text":"With this goal in mind, we are excited to introduce a new Droplet Console that will make it much easier to connect to your Droplets securely. The new Droplet Console provides one-click SSH access to your Droplets through a native-like SSH/Terminal experience. It also eliminates the need for a password or manual configuration of SSH keys. Starting today, we’re pleased to announce that the new Droplet Console is now available to all Droplet users.","spans":[]},{"type":"heading2","text":"Why you should be using Secure Shell (SSH) ","spans":[]},{"type":"paragraph","text":"Password-based security is notoriously insecure due to password fatigue and the overuse of passwords such as ‘123456’. Secure Shell or SSH is a network communication protocol that solves this by using passwordless solutions for encryption, enabling two computers to communicate and securely share data. At a high level, SSH works by creating cryptographic key pairs consisting of a public and private key, which are computer generated and stored separately to ensure their security. ","spans":[{"start":80,"end":117,"type":"hyperlink","data":{"link_type":"Web","url":"https://cybernews.com/best-password-managers/most-common-passwords/"}}]},{"type":"paragraph","text":"SSH has become the default encryption protocol for many industries, but it was difficult to use SSH keys with DigitalOcean’s current Recovery (VNC) console, which is why we developed our new Droplet Console. The new Droplet Console is backed by an agent that securely supervises the key pair, while also providing one-click SSH access to our users. You can see the full list of features below.","spans":[]},{"type":"heading2","text":"The new Droplet Console: More time saving, less time wasting ","spans":[]},{"type":"paragraph","text":"The new Droplet Console is for everyone who is looking to build fast, secure apps and avoid hassles with SSH access & usability issues.","spans":[]},{"type":"paragraph","text":"In addition to easier SSH access, the new Droplet Console comes with:","spans":[]},{"type":"list-item","text":"Copy/paste text: Instead of typing lengthy key pairs and text manually, you can use copy/paste to save time. ","spans":[{"start":0,"end":17,"type":"strong"}]},{"type":"list-item","text":"Multi-color support: Multi-color support makes the console more useful and intuitive, and breaks the conventional standard appearance which is black text on a white background. ","spans":[{"start":0,"end":41,"type":"strong"}]},{"type":"list-item","text":"Multi-language support: DigitalOcean’s new Droplet Console supports multiple languages, meaning you can now type and view any content in any language that is supported by UTF-8.","spans":[{"start":0,"end":24,"type":"strong"}]},{"type":"list-item","text":"OS/images supported: Linux distributions (Ubuntu (16.04 - 20.04), Fedora (32 & 33), Debian (9), CentOS (7.6 & 8.3), CentOS 8 Stream, Rocky Linux) and Marketplace images.","spans":[{"start":0,"end":20,"type":"strong"},{"start":150,"end":161,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/"}}]},{"type":"paragraph","text":"The new Droplet Console is available by default on any new Droplets you spin up. You can also enable it manually on older Droplets. Click here to learn more!","spans":[{"start":132,"end":157,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.digitalocean.com/products/droplets/how-to/connect-with-console/"}}]},{"type":"paragraph","text":"Check out this short walkthrough video that shows the new Droplet Console in action: ","spans":[]},{"type":"embed","oembed":{"type":"video","embed_url":"https://www.youtube.com/watch?v=Qt7QihVuxiE","title":"Access Your Droplet Terminal Through the Web Console","provider_name":"YouTube","thumbnail_url":"https://i.ytimg.com/vi/Qt7QihVuxiE/hqdefault.jpg","provider_url":"https://www.youtube.com/","author_name":"DigitalOcean","author_url":"https://www.youtube.com/c/Digitalocean","height":113,"width":200,"version":"1.0","thumbnail_height":360,"thumbnail_width":480,"html":"<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/Qt7QihVuxiE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"}},{"type":"paragraph","text":"We hope you’re excited about the new Droplet Console. 
You’re welcome to spin some Droplets up right now, and try out the new Droplet Console – why wait?","spans":[{"start":72,"end":103,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/droplets/new"}}]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"Harsh Banwait, Senior Product Manager","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"Harsh Banwait","author_image":{"dimensions":{"width":600,"height":399},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/e83ff690-b20c-4d88-a2b6-57e562558cd6_download.png?auto=compress,format"},"_meta":{"uid":"harsh-banwait"}},"_meta":{"uid":"new-droplet-console-ssh-support"}},"featured_blog_3":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":400},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/588e28d3-d41e-480b-937b-8c3b19201f6e_DODX3568-790x400-Blog.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"How to scale your SaaS product without breaking the bank","spans":[]}],"blog_post_date":"2021-06-22","blog_post_content":[{"type":"paragraph","text":"These days, if you are in the business of software, chances are you are delivering or plan to deliver your services using a Software-as-a-Service (SaaS) model. A combination of internet-based delivery, subscription-based pricing, and low-friction product experiences have made SaaS solutions valuable tools for their users, and an excellent vehicle for software builders looking to distribute their products.","spans":[]},{"type":"paragraph","text":"These factors have made SaaS solutions ubiquitous; SaaS is the largest segment in the public cloud market, and is used to provide functionality ranging from personal finance apps for consumers, to productivity software for businesses, and even tools and services for software developers themselves to compose their applications and simplify their workflows. It is also not uncommon to find micro-SaaS applications being built for specific industries such as retail, job functions such as accounting or marketing, or tasks such as event management. ","spans":[]},{"type":"paragraph","text":"The best thing about this SaaS wave has been that it has allowed a new generation of software builders to build and monetize applications and participate in the digital economy. Previously, you had to be a big company with lots of resources, name recognition and distribution networks to successfully sell software products. Now, irrespective of whether you are a single person working on a passion project, a small team of developers in a startup, or a small and medium-sized business (SMB), the SaaS model enables you to express your ideas in the form of software and deliver them to customers anywhere in the world.","spans":[]},{"type":"heading2","text":"The unique challenges of building SaaS solutions","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Despite the opportunities that come with the widespread adoption of SaaS products, software builders still have to answer key questions in their journey to building successful SaaS products. 
Understanding what customers to target, features to prioritize, how to price your product, and how to acquire customers are all critical questions to figure out while you are also doing the important job of actually building and operating the product. ","spans":[]},{"type":"paragraph","text":"Writing the code, testing, deployment, monitoring the usage in production, and ensuring that your apps are able to handle the additional demand when customer base and usage grows are all essential and time-consuming tasks.","spans":[]},{"type":"paragraph","text":"Additionally, being able to test multiple ideas, pivot, and double down on the ideas that actually work is critical in early stages of SaaS development. Once growth comes, it is equally important to scale up without compromising on performance or reliability. Needless to say, all of this needs to be economically viable as well, since not everyone has the resources of large SaaS providers like Salesforce or Adobe.","spans":[]},{"type":"heading2","text":"Cloud Computing enables builders but also poses challenges","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Fortunately, for the act of building and operating your apps, cloud computing can help take some load off your shoulders. Unless you have the scale and resources of Facebook, chances are you are not going to set up your own data centers to host the computing infrastructure that powers your SaaS company. Public cloud infrastructure providers can bring great value to SaaS builders by providing on-demand computing services with usage-based pricing. However, just like how the legacy software companies weren't built for the SaaS model, the early (and big) cloud computing services were not optimized for the unique needs of small SaaS building teams. ","spans":[]},{"type":"paragraph","text":"Smaller SaaS teams face challenges with large cloud computing providers, including:","spans":[]},{"type":"heading4","text":"Too many technology options","spans":[]},{"type":"paragraph","text":"There are just too many options for tech stacks on which to build your SaaS - programming languages, application development frameworks, libraries, runtime environments, architectural patterns, and deployment models - and the list is growing by the day.","spans":[]},{"type":"heading4","text":"Complexity of cloud computing services","spans":[]},{"type":"paragraph","text":"Even when you have decided on a technology stack, there is a lot of cloud vendor-specific terminology you need to learn and heavy lifting you need to do to build on the cloud, not all of which contributes to making your SaaS applications successful.","spans":[]},{"type":"heading4","text":"Unpredictable costs","spans":[]},{"type":"paragraph","text":"The experimentation necessary in early stages of SaaS development, as well as the scaling of applications required during the growth phase, call for affordable and predictable pricing from your cloud provider. The last thing SaaS teams want is surprising and indecipherable bills from your cloud provider. Unfortunately, smaller businesses often experience unpredictable costs with cloud providers who are busy serving only the large enterprises.","spans":[]},{"type":"heading2","text":"DigitalOcean provides a simple, cost effective solution for SaaS builders","spans":[]},{"type":"paragraph","text":"Fortunately, at DigitalOcean we have a laser focus on small software development teams, who are trying to build the next generation of applications. 
Today, DigitalOcean customers are already building SaaS applications which serve all kinds of customers.","spans":[{"start":191,"end":217,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/solutions/saas/"}}]},{"type":"paragraph","text":"We believe SaaS builders should focus on building apps that power their business, and not spend their valuable time on managing infrastructure. That is exactly what we have been able to enable through our intuitive products that are built for scale and reliability.","spans":[{"start":205,"end":223,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/"}}]},{"type":"list-item","text":"Vidazoo is an advertising technology company specializing in video streaming and serving. It serves video ads to thousands of websites and handles close to 10 billion requests per day. \n\n“We are as much a data company as an adtech company. Our business relies on speedy and accurate data processing at massive scale. DigitalOcean provides us the perfect set of tools to operate our SaaS business profitably, while not making us feel the need to become full time system administrators. We plan to move a lot of our apps to DigitalOcean App Platform and other fully managed products.” - Roman Svichar, CTO of Vidazoo","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://vidazoo.com/"}},{"start":187,"end":583,"type":"em"}]},{"type":"paragraph","text":"We believe in meeting customers where they are. If they already have an understanding of cloud infrastructure technologies, they should be able to leverage that knowledge and get started with our products without any further ramp up.","spans":[]},{"type":"list-item","text":"Whatfix is an enterprise SaaS provider that offers a digital adoption platform to businesses. The company helps enterprises gain the full value of their investments in enterprise applications by providing real-time, interactive, and contextual guidance to users of those applications. \n\n“What we really love about the DigitalOcean platform is the ease of use. We feel like we know infrastructure and can handle most of the configuration and management. What we needed from a cloud was not bells and whistles but efficiency and reliability. DigitalOcean provides us a platform to build our apps and then gets out of the way. Just how we like it.” - Achyuth Krishna, Director of Engineering of Whatfix","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://whatfix.com/blog/driving-the-future-now-were-excited-to-announce-our-90-million-series-d-funding/"}},{"start":287,"end":648,"type":"em"}]},{"type":"paragraph","text":"We understand that scaling while maintaining reliability of applications and profitability of business is important, so we provide robust solutions which minimize downtime.","spans":[]},{"type":"list-item","text":"Centra is a SaaS-based e-commerce platform for global direct-to-consumer and wholesale e-commerce brands. Centra provides a powerful e-commerce backend that lets brands build pixel-perfect, custom designed, online flagship stores. \n\n“How do we enable our customers to create differentiated online experiences? How do we ensure their e-commerce apps stay up and running at all times? How do we scale on-demand when traffic grows or new customers come in? These are the questions that we ask ourselves every day. 
Thankfully, we have a partner in DigitalOcean that provides just the platform to answer those questions enabling us to guarantee 99.9% uptime for our clients.” - Martin Jensen, CEO of Centra","spans":[{"start":0,"end":6,"type":"hyperlink","data":{"link_type":"Web","url":"https://centra.com/"}},{"start":233,"end":673,"type":"em"}]},{"type":"paragraph","text":"These are just a few examples of SaaS businesses finding success on DigitalOcean. We are constantly amazed by the creativity and innovation that software builders are utilizing our platform for. If you are interested in learning more about product updates, technical deep-dives and best practices for building SaaS products and businesses, please contact us to learn how we can help you get started. ","spans":[{"start":340,"end":357,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/migrate/?utmmedium=blog","target":"_blank"}}]},{"type":"paragraph","text":"Come build with DigitalOcean!","spans":[]},{"type":"paragraph","text":"Looking to migrate your SaaS to DigitalOcean? Leverage free infrastructure credits, robust training, and technical support to ensure a worry-free migration.","spans":[{"start":0,"end":156,"type":"strong"},{"start":0,"end":156,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/migrate/?utmmedium=blog","target":"_blank"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Raman Sharma","spans":[]},{"type":"paragraph","text":"Vice President, Product & Programs Marketing","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Developer Relations","_linkType":"Link.document","_meta":{"uid":"developer-relations"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"Raman Sharma","author_image":{"dimensions":{"width":512,"height":512},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/497b4b14-d192-493a-8b66-7ae176ba99f3_raman.png?auto=compress,format"},"_meta":{"uid":"raman-sharma"}},"_meta":{"uid":"how-to-scale-your-saas-product-without-breaking-the-bank"}}}}]}}},"pageContext":{"limit":12,"skip":36,"numTagPages":5,"currentPage":4,"uid":"engineering","data":[{"node":{"author":{"_linkType":"Link.document","author_name":"Nick Vigier","author_image":null,"_meta":{"uid":"nick_vigier"}},"blog_header_image":null,"blog_headline":[{"type":"heading1","text":"DigitalOcean, Your Data, and the Cloudflare Vulnerability","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Over the course of the last several hours, we have received a number of inquiries about the Cloudflare vulnerability reported on February 23, 2017. Since the information release, we have been told by Cloudflare that none of our customer data has appeared in search caches. 
The DigitalOcean security team has done its own research into the issue, and we have not found any customer data present in the breach.","spans":[{"start":92,"end":116,"type":"hyperlink","data":{"link_type":"Web","url":"https://blog.cloudflare.com/incident-report-on-memory-leak-caused-by-cloudflare-parser-bug/"}}]},{"type":"paragraph","text":"Out of an abundance of caution, DigitalOcean's engineering teams have reset all session tokens for our users, which will require that you log in again.","spans":[]},{"type":"paragraph","text":"We recommend that you do the following to further protect your account:","spans":[]},{"type":"list-item","text":"Update your password","spans":[]},{"type":"list-item","text":"Rotate your API tokens","spans":[]},{"type":"list-item","text":"Take the opportunity to turn on Two-Factor Authentication (we posted a blog entry earlier this week about our improved process)","spans":[{"start":71,"end":81,"type":"hyperlink","data":{"link_type":"Web","url":"https://assets.digitalocean.com/blog/static/updates-to-digitalocean-two-factor-authentication/"}}]},{"type":"paragraph","text":"Again, we would like to reiterate that there is no evidence that any customer data has been exposed as a result of this vulnerability, but we care about your security, so we are taking this precaution and will continue to monitor the situation.","spans":[]},{"type":"paragraph","text":"Nick Vigier, Director of Security","spans":[]}],"blog_post_date":"2017-02-24","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"digitalocean-your-data-and-the-cloudflare-vulnerability"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":1500,"height":800},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/28ccd8ae-ad02-40aa-a1af-358551aa7d14_goquemu.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Open Source at DigitalOcean: Introducing go-qemu and go-libvirt","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"At DigitalOcean, we use libvirt with QEMU to create and manage the virtual machines that compose our Droplet product. QEMU is the workhorse that enables hundreds of Droplets to run on a single server within our data centers. To perform management actions (like powering off a Droplet), we originally built automation which relied on shelling out to `virsh`, a command-line client used to interact with the libvirt daemon.","spans":[{"start":118,"end":122,"type":"hyperlink","data":{"link_type":"Web","url":"http://wiki.qemu.org/Main_Page"}}]},{"type":"paragraph","text":"As we began to deploy Go into production, we realized we would need simple and powerful building blocks for future Droplet management tooling. 
In particular, we wanted packages with:","spans":[]},{"type":"list-item","text":"Well-thought-out, idiomatic APIs with great documentation","spans":[]},{"type":"list-item","text":"No use of cgo to simplify our build pipelines and allow easy cross compilation","spans":[]},{"type":"list-item","text":"Direct interaction with QEMU monitor sockets to enable maximum control","spans":[]},{"type":"paragraph","text":"We explored several open source packages for managing libvirt and QEMU, but none of them were able to completely fulfill our wants and needs, so we created our own: go-qemu and go-libvirt.","spans":[{"start":165,"end":172,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/go-qemu"}},{"start":177,"end":187,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/go-libvirt"}}]},{"type":"heading2","text":"How Do QEMU and go-qemu Work?","spans":[]},{"type":"paragraph","text":"QEMU provides the hardware emulation layer between Droplets and our bare metal servers. Each QEMU process provides a JSON API over a UNIX or TCP socket, much like a REST API you might find when working with web services. However, instead of using HTTP, it communicates over a protocol known as the QEMU Monitor Protocol (QMP). When you request an action, like powering off a Droplet, the request eventually makes its way to the QEMU process via the QMP socket in the form of `{ \"execute\" : \"system_powerdown\" }`.","spans":[{"start":321,"end":324,"type":"hyperlink","data":{"link_type":"Web","url":"http://wiki.qemu.org/Documentation/QMP"}}]},{"type":"paragraph","text":"go-qemu is a Go package that provides a simple interface for communicating with QEMU instances over QMP. It enables the management of QEMU virtual machines directly, using either the monitor socket of a VM or by proxying the request through libvirt. All go-qemu interactions rely on the qemu.Domain and qmp.Monitor types. A qemu.Domain is constructed with an underlying qmp.Monitor, which understands how to speak to the monitor socket of a given VM.","spans":[{"start":287,"end":298,"type":"hyperlink","data":{"link_type":"Web","url":"https://godoc.org/github.com/digitalocean/go-qemu/qemu#Domain"}},{"start":303,"end":314,"type":"hyperlink","data":{"link_type":"Web","url":"https://godoc.org/github.com/digitalocean/go-qemu/qmp#Monitor"}}]},
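{"type":"paragraph","text":"To make the protocol concrete, here is a minimal, illustrative Go sketch of a raw QMP exchange. The socket path is hypothetical, and in practice the qmp package in go-qemu performs this handshake for you:","spans":[]},{"type":"preformatted","text":"package main\n\nimport (\n    \"bufio\"\n    \"fmt\"\n    \"log\"\n    \"net\"\n)\n\nfunc main() {\n    // Connect to a QEMU monitor socket (path is illustrative).\n    conn, err := net.Dial(\"unix\", \"/var/run/qemu/droplet-1.sock\")\n    if err != nil {\n        log.Fatal(err)\n    }\n    defer conn.Close()\n\n    r := bufio.NewReader(conn)\n    if _, err := r.ReadString('\\n'); err != nil { // QMP greeting banner\n        log.Fatal(err)\n    }\n\n    // QMP requires capabilities negotiation before any other command.\n    fmt.Fprintln(conn, `{ \"execute\": \"qmp_capabilities\" }`)\n    r.ReadString('\\n')\n\n    // Ask QEMU to power down the guest, as described above.\n    fmt.Fprintln(conn, `{ \"execute\": \"system_powerdown\" }`)\n    resp, _ := r.ReadString('\\n')\n    fmt.Print(resp)\n}","spans":[]},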
{"type":"heading2","text":"How Do libvirt and go-libvirt Work?","spans":[]},{"type":"paragraph","text":"libvirt was designed for client-server communication. Users typically interact with the libvirt daemon through the command-line client `virsh`. `virsh` establishes a connection to the daemon either through a local UNIX socket or a TCP connection. Communication follows a custom asynchronous protocol whereby each RPC request or response is preceded by a header describing the incoming payload. Most notably, the header contains a procedure identifier (e.g., \"start domain\"), the type of request (e.g., `call` or `reply`), and a unique serial number used to correlate RPC calls with their respective responses. The payload following the header is XDR encoded, providing an architecture-agnostic method for describing strict data types.","spans":[{"start":271,"end":299,"type":"hyperlink","data":{"link_type":"Web","url":"https://libvirt.org/internals/rpc.html"}},{"start":646,"end":649,"type":"hyperlink","data":{"link_type":"Web","url":"https://tools.ietf.org/html/rfc4506"}}]},
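{"type":"paragraph","text":"For illustration, that fixed-size header might be modeled in Go roughly as follows. The field names are ours, drawn from the libvirt RPC documentation linked above, not go-libvirt's exported API:","spans":[]},{"type":"preformatted","text":"package libvirtrpc\n\n// rpcHeader sketches the header that precedes every libvirt RPC\n// payload (see https://libvirt.org/internals/rpc.html).\ntype rpcHeader struct {\n    Program   uint32 // identifies the RPC program\n    Version   uint32 // protocol version\n    Procedure int32  // procedure identifier, e.g. \"start domain\"\n    Type      int32  // type of request: call, reply, ...\n    Serial    uint32 // correlates calls with their responses\n    Status    int32  // OK, ERROR, or CONTINUE\n}","spans":[]},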
{"type":"paragraph","text":"go-libvirt is a Go package which provides a pure Go interface to libvirt. go-libvirt can be used in conjunction with go-qemu to manage VMs by proxying communication through the libvirt daemon.","spans":[]},{"type":"paragraph","text":"go-libvirt exploits the availability of the RPC protocol to communicate with libvirt without the need for cgo and C bindings. While using the libvirt C bindings would be easier up front, we try to avoid cgo when possible. Dave Cheney has written an excellent blog post which mirrors many of our own findings. A pure Go library simplifies our build pipelines, reduces dependency headaches, and keeps cross-compilation simple.","spans":[{"start":259,"end":268,"type":"hyperlink","data":{"link_type":"Web","url":"https://dave.cheney.net/2016/01/18/cgo-is-not-go"}}]},{"type":"paragraph","text":"By circumventing the C library, we need to keep a close eye on changes in new libvirt releases; libvirt developers may modify the RPC protocol at any time, potentially breaking go-libvirt. To ensure stability and compatibility with various versions of libvirt, we install and run it within Travis CI, which allows integration tests to be run for each new commit to go-libvirt.","spans":[]},{"type":"heading3","text":"Example","spans":[]},{"type":"paragraph","text":"The following code demonstrates usage of go-qemu and go-libvirt to interact with all libvirt-managed virtual machines on a given hypervisor.","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    package main","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    import (  ","spans":[]},{"type":"paragraph","text":"        \"fmt\"","spans":[]},{"type":"paragraph","text":"        \"log\"","spans":[]},{"type":"paragraph","text":"        \"net\"","spans":[]},{"type":"paragraph","text":"        \"time\"","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"        \"github.com/digitalocean/go-qemu/hypervisor\"","spans":[]},{"type":"paragraph","text":"    )","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    func main() {  ","spans":[]},{"type":"paragraph","text":"        driver := hypervisor.NewRPCDriver(func() (net.Conn, error) {","spans":[]},{"type":"paragraph","text":"            return net.DialTimeout(\"unix\", \"/var/run/libvirt/libvirt-sock\", 2*time.Second)","spans":[]},{"type":"paragraph","text":"        })","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"        hv := hypervisor.New(driver)","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"        fmt.Println(\"Domain\\t\\tQEMU Version\")","spans":[]},{"type":"paragraph","text":"        fmt.Println(\"--------------------------------------\")","spans":[]},{"type":"paragraph","text":"        domains, err := hv.Domains()","spans":[]},{"type":"paragraph","text":"        if err != nil {","spans":[]},{"type":"paragraph","text":"            log.Fatal(err)","spans":[]},{"type":"paragraph","text":"        }","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"        for _, dom := range domains {","spans":[]},{"type":"paragraph","text":"            version, err := dom.Version()","spans":[]},{"type":"paragraph","text":"            if err != nil {","spans":[]},{"type":"paragraph","text":"                log.Fatal(err)","spans":[]},{"type":"paragraph","text":"            }","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"            fmt.Printf(\"%s\\t\\t%s\\n\", dom.Name, version)","spans":[]},{"type":"paragraph","text":"            dom.Close()","spans":[]},{"type":"paragraph","text":"        }","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"heading4","text":"Output","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    Droplet-1        2.7.0","spans":[]},{"type":"paragraph","text":"    Droplet-2        2.6.0","spans":[]},{"type":"paragraph","text":"    Droplet-3        2.5.0","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"heading2","text":"What's Next?","spans":[]},{"type":"paragraph","text":"Both go-qemu and go-libvirt are still under active development. In the future, we intend to provide an optional cgo QMP monitor which wraps the libvirt C API using the libvirt-go package.","spans":[{"start":5,"end":12,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/go-qemu"}},{"start":17,"end":27,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/go-libvirt"}},{"start":164,"end":186,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/rgbkrk/libvirt-go"}}]},{"type":"paragraph","text":"go-qemu and go-libvirt are used in production at DigitalOcean, but the APIs should be treated as unstable, and we recommend that users of these packages vendor them into their applications.","spans":[]},{"type":"paragraph","text":"We welcome contributions to the project! In fact, a recent major feature in the go-qemu project was contributed by an engineer outside of DigitalOcean. David Anderson is working on a way to automatically generate QMP structures using the QMP specification in go-qemu. This will save an enormous amount of tedious development and enable contributors to simply wrap these raw types in higher-level types to provide a more idiomatic interface to interact with QEMU instances.","spans":[{"start":152,"end":166,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/danderson"}}]},{"type":"paragraph","text":"If you'd like to join the fun, feel free to open a GitHub pull-request, file an issue, or join us on IRC (freenode/#go-qemu).","spans":[]},{"type":"paragraph","text":"Edit: as clarified by user \"eskultet\" in our IRC channel, libvirt does indeed guarantee API and ABI stability, and the RPC layer is able to detect any extra or missing elements that would cause the RPC payload to not meet a fixed size requirement.  
This blog has been updated to correct this misunderstanding.","spans":[]},{"type":"paragraph","text":"  ","spans":[]},{"type":"paragraph","text":"  by Matt Layher & Ben LeMasurier","spans":[{"start":5,"end":16,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/mdlayher"}},{"start":19,"end":33,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/lemasurier"}}]}],"blog_post_date":"2016-11-21","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"introducing-go-qemu-and-go-libvirt"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Tommy Murphy","author_image":null,"_meta":{"uid":"tommy_murphy"}},"blog_header_image":{"dimensions":{"width":785,"height":419},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/f91f044c-bd4a-44b3-acf8-a2d233442933_vault.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Using Vault as a Certificate Authority for Kubernetes","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"The Delivery team at DigitalOcean is tasked to make shipping internal services quick and easy. In December of 2015, we set out to design and implement a platform built on top of Kubernetes. We wanted to follow the best practices for securing our cluster from the start, which included enabling mutual TLS authentication between all etcd and Kubernetes components.","spans":[{"start":178,"end":188,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-kubernetes-on-top-of-a-coreos-cluster"}}]},{"type":"paragraph","text":"However, this is easier said than done. DigitalOcean currently has 12 datacenters across 3 continents. We needed to deploy at least one Kubernetes cluster to each datacenter, but setting up the certificates for even a single Kubernetes cluster is a significant undertaking, not to mention dealing with certificate renewal and revocation for every datacenter.","spans":[]},{"type":"paragraph","text":"So, before we started expanding the number of clusters, we set out to automate all certificate management using HashiCorp's Vault. In this post, we'll go over the details of how we designed and implemented our certificate authority (CA).","spans":[{"start":112,"end":129,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.vaultproject.io/"}}]},{"type":"heading2","text":"Planning","spans":[]},{"type":"paragraph","text":"We found it helpful to look at all of the communication paths before designing the structure of our certificate authority.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/3436dd99-b5a4-4b08-81cd-b7b4c036b498_communication_paths.png?auto=compress,format","alt":"communication path diagrams","copyright":null,"dimensions":{"width":1999,"height":876}},{"type":"paragraph","text":"All Kubernetes operations flow through the kube-apiserver and persist in the etcd datastore. etcd nodes should only accept communication from their peers and the API server. The kubelets or other clients must not be able to communicate with etcd directly. Otherwise, the kube-apiserver's access controls could be circumvented. 
We also need to ensure that consumers of the Kubernetes API are given an identity (a client certificate) to authenticate to kube-apiserver.","spans":[{"start":288,"end":303,"type":"hyperlink","data":{"link_type":"Web","url":"http://kubernetes.io/docs/admin/authorization/"}},{"start":400,"end":408,"type":"hyperlink","data":{"link_type":"Web","url":"http://kubernetes.io/docs/admin/authentication/"}}]},{"type":"paragraph","text":"With that information, we decided to create two certificate authorities per cluster. The first would be used to issue etcd-related certificates (given to each etcd node and the kube-apiserver). The second certificate authority would be for Kubernetes, issuing the kube-apiserver and the other Kubernetes components their certificates. The diagram above shows the communications that use the etcd CA in dashed lines and the Kubernetes CA in solid lines.","spans":[]},{"type":"paragraph","text":"With the design finalized, we could move on to implementation. First, we created the CAs and configured the roles to issue certificates. We then configured Vault policies to control access to CA roles and created authentication tokens with the necessary policies. Finally, we used the tokens to pull the certificates for each service.","spans":[]},{"type":"heading2","text":"Creating the CAs","spans":[]},{"type":"paragraph","text":"We wrote a script that bootstraps the Vault CAs required for each new Kubernetes cluster. This script mounts new pki backends to cluster-unique paths and generates a 10-year root certificate for each pki backend.","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    vault mount -path $CLUSTER_ID/pki/$COMPONENT pki","spans":[]},{"type":"paragraph","text":"    vault mount-tune -max-lease-ttl=87600h $CLUSTER_ID/pki/$COMPONENT","spans":[]},{"type":"paragraph","text":"    vault write $CLUSTER_ID/pki/$COMPONENT/root/generate/internal \\","spans":[]},{"type":"paragraph","text":"    common_name=$CLUSTER_ID/pki/$COMPONENT ttl=87600h","spans":[]},{"type":"paragraph","text":"`}```","spans":[]},{"type":"paragraph","text":"In Kubernetes, it is possible to use the Common Name (CN) field of client certificates as their user name. We leveraged this by creating different roles for each set of CN certificate requests:","spans":[{"start":41,"end":52,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/X.509#Sample_X.509_certificates"}}]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    vault write $CLUSTER_ID/pki/etcd/roles/member \\","spans":[]},{"type":"paragraph","text":"        allow_any_name=true \\","spans":[]},{"type":"paragraph","text":"        max_ttl=\"720h\"","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"The role above, under the cluster's etcd CA, can create a 30-day cert for any CN.","spans":[]},
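{"type":"paragraph","text":"As an illustration, a node holding an appropriate token could request a member certificate from this role with Vault's Go client. The cluster ID and common name below are placeholders:","spans":[]},{"type":"preformatted","text":"package main\n\nimport (\n    \"fmt\"\n    \"log\"\n\n    \"github.com/hashicorp/vault/api\"\n)\n\nfunc main() {\n    // DefaultConfig reads VAULT_ADDR (and the client reads VAULT_TOKEN)\n    // from the environment.\n    client, err := api.NewClient(api.DefaultConfig())\n    if err != nil {\n        log.Fatal(err)\n    }\n\n    // Placeholder cluster ID and common name.\n    secret, err := client.Logical().Write(\n        \"mycluster/pki/etcd/issue/member\",\n        map[string]interface{}{\"common_name\": \"etcd-01.internal\"},\n    )\n    if err != nil {\n        log.Fatal(err)\n    }\n    fmt.Println(secret.Data[\"certificate\"])\n    fmt.Println(secret.Data[\"private_key\"])\n    fmt.Println(secret.Data[\"issuing_ca\"])\n}","spans":[]},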
{"type":"paragraph","text":"The role below, under the Kubernetes CA, can only create a certificate with the CN of \"kubelet\".","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    vault write $CLUSTER_ID/pki/k8s/roles/kubelet \\","spans":[]},{"type":"paragraph","text":"        allowed_domains=\"kubelet\" \\","spans":[]},{"type":"paragraph","text":"        allow_bare_domains=true \\","spans":[]},{"type":"paragraph","text":"        allow_subdomains=false \\","spans":[]},{"type":"paragraph","text":"        max_ttl=\"720h\"","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"We can create roles that are limited to individual CNs, such as \"kube-proxy\" or \"kube-scheduler\", for each component that we want to communicate with the kube-apiserver.","spans":[]},{"type":"paragraph","text":"Because we configure our kube-apiserver in a high availability configuration, separate from the kube-controller-manager, we also generated a shared secret for those components to use with the `--service-account-private-key-file` flag and write it to the generic secrets backend:","spans":[{"start":45,"end":76,"type":"hyperlink","data":{"link_type":"Web","url":"http://kubernetes.io/docs/admin/high-availability/"}},{"start":228,"end":232,"type":"hyperlink","data":{"link_type":"Web","url":"http://kubernetes.io/docs/admin/kube-controller-manager/"}},{"start":253,"end":276,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.vaultproject.io/docs/secrets/generic/index.html"}}]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    openssl genrsa 4096 > token-key","spans":[]},{"type":"paragraph","text":"    vault write secret/$CLUSTER_ID/k8s/token key=@token-key","spans":[]},{"type":"paragraph","text":"    rm token-key","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"In addition to these roles, we created individual policies for each component of the cluster which are used to restrict which paths individual vault tokens can access. 
Here, we created a policy for etcd members that will only have access to the path to create an etcd member certificate.","spans":[{"start":50,"end":58,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.vaultproject.io/docs/concepts/policies.html"}},{"start":143,"end":155,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.vaultproject.io/docs/concepts/tokens.html"}}]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    cat <<EOT | vault policy-write $CLUSTER_ID/pki/etcd/member -","spans":[]},{"type":"paragraph","text":"    path \"$CLUSTER_ID/pki/etcd/issue/member\" {","spans":[]},{"type":"paragraph","text":"      policy = \"write\"","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    EOT","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"This kube-apiserver policy only has access to the path to create a kube-apiserver certificate and to read the service account private key generated above.","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    cat <<EOT | vault policy-write $CLUSTER_ID/pki/k8s/kube-apiserver -","spans":[]},{"type":"paragraph","text":"    path \"$CLUSTER_ID/pki/k8s/issue/kube-apiserver\" {","spans":[]},{"type":"paragraph","text":"      policy = \"write\"","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    path \"secret/$CLUSTER_ID/k8s/token\" {","spans":[]},{"type":"paragraph","text":"      policy = \"read\"","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    EOT","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"Now that we have the structure of CAs and policies created in Vault, we need to configure each component to fetch and renew its own certificates.","spans":[]},{"type":"heading2","text":"Getting Certificates","spans":[]},{"type":"paragraph","text":"We provided each machine with a Vault token that can be renewed indefinitely. This token is only granted the policies that it requires. We set up the token role in Vault with:","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    vault write auth/token/roles/k8s-$CLUSTER_ID \\","spans":[]},{"type":"paragraph","text":"    period=\"720h\" \\","spans":[]},{"type":"paragraph","text":"    orphan=true \\","spans":[]},{"type":"paragraph","text":"    allowed_policies=\"$CLUSTER_ID/pki/etcd/member,$CLUSTER_ID/pki/k8s/kube-apiserver...\"","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"Then, we built tokens from that token role with the necessary policies for the given node. 
As an example, the etcd nodes were provisioned with a token generated from this command:","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    vault token-create \\","spans":[]},{"type":"paragraph","text":"      -policy=\"$CLUSTER_ID/pki/etcd/member\" \\","spans":[]},{"type":"paragraph","text":"      -role=\"k8s-$CLUSTER_ID\"","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"All that is left now is to configure each service with the appropriate certificates.","spans":[]},{"type":"heading2","text":"Configuring the Services","spans":[]},{"type":"paragraph","text":"We chose to use consul-template to configure services since it will take care of renewing the Vault token, fetching new certificates, and notifying the services to restart when new certificates are available. Our etcd node consul-template configuration is:","spans":[{"start":16,"end":31,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/hashicorp/consul-template"}}]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    {","spans":[]},{"type":"paragraph","text":"      \"template\": {","spans":[]},{"type":"paragraph","text":"        \"source\": \"/opt/consul-template/templates/cert.template\",","spans":[]},{"type":"paragraph","text":"        \"destination\": \"/opt/certs/etcd.serial\",","spans":[]},{"type":"paragraph","text":"        \"command\": \"/usr/sbin/service etcd restart\"","spans":[]},{"type":"paragraph","text":"      },","spans":[]},{"type":"paragraph","text":"      \"vault\": {","spans":[]},{"type":"paragraph","text":"        \"address\": \"VAULT_ADDRESS\",","spans":[]},{"type":"paragraph","text":"        \"token\": \"VAULT_TOKEN\",","spans":[]},{"type":"paragraph","text":"        \"renew\": true","spans":[]},{"type":"paragraph","text":"      }","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"Because consul-template will only write one file per template and we needed to split our certificate into its components (certificate, private key, and issuing certificate), we wrote a custom plugin that takes in the data, a file path, and a file owner. 
Our certificate template for etcd nodes uses this plugin:","spans":[{"start":185,"end":198,"type":"hyperlink","data":{"link_type":"Web","url":"https://gist.github.com/tam7t/1b45125ae4de13b3fc6fd0455954c08e"}}]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    {{ with secret \"$CLUSTER_ID/pki/etcd/issue/member\" \"common_name=$FQDN\"}}","spans":[]},{"type":"paragraph","text":"    {{ .Data.serial_number }}","spans":[]},{"type":"paragraph","text":"    {{ .Data.certificate | plugin \"certdump\" \"/opt/certs/etcd-cert.pem\" \"etcd\"}}","spans":[]},{"type":"paragraph","text":"    {{ .Data.private_key | plugin \"certdump\" \"/opt/certs/etcd-key.pem\" \"etcd\"}}","spans":[]},{"type":"paragraph","text":"    {{ .Data.issuing_ca | plugin \"certdump\" \"/opt/certs/etcd-ca.pem\" \"etcd\"}}","spans":[]},{"type":"paragraph","text":"    {{ end }}","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"The etcd process was then configured with the following options so that both peers and clients must present a certificate issued from Vault in order to communicate:","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    --peer-cert-file=/opt/certs/etcd-cert.pem ","spans":[]},{"type":"paragraph","text":"    --peer-key-file=/opt/certs/etcd-key.pem ","spans":[]},{"type":"paragraph","text":"    --peer-trusted-ca-file=/opt/certs/etcd-ca.pem ","spans":[]},{"type":"paragraph","text":"    --peer-client-cert-auth","spans":[]},{"type":"paragraph","text":"    --cert-file=/opt/certs/etcd-cert.pem ","spans":[]},{"type":"paragraph","text":"    --key-file=/opt/certs/etcd-key.pem ","spans":[]},{"type":"paragraph","text":"    --trusted-ca-file=/opt/certs/etcd-ca.pem ","spans":[]},{"type":"paragraph","text":"    --client-cert-auth","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"The kube-apiserver has one certificate template for communicating with etcd and one for the Kubernetes components, and the process is configured with the appropriate flags: ","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    --etcd-certfile=/opt/certs/etcd-cert.pem ","spans":[]},{"type":"paragraph","text":"    --etcd-keyfile=/opt/certs/etcd-key.pem ","spans":[]},{"type":"paragraph","text":"    --etcd-cafile=/opt/certs/etcd-ca.pem","spans":[]},{"type":"paragraph","text":"    --tls-cert-file=/opt/certs/apiserver-cert.pem ","spans":[]},{"type":"paragraph","text":"    --tls-private-key-file=/opt/certs/apiserver-key.pem ","spans":[]},{"type":"paragraph","text":"    --client-ca-file=/opt/certs/apiserver-ca.pem ","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"The first three etcd flags allow the kube-apiserver to communicate with etcd with a client certificate; the two TLS flags allow it to host the API over a TLS connection; the last flag allows it to verify clients by ensuring that their certificates were signed by the same CA that issued the kube-apiserver certificate.","spans":[]},{"type":"heading2","text":"Conclusion","spans":[]},{"type":"paragraph","text":"Each component of the architecture is issued a unique certificate and the entire process is fully automated. 
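","spans":[]},{"type":"paragraph","text":"(As a hedged aside, one way to spot-check an issued certificate on any node is with standard OpenSSL commands; the paths come from the etcd examples above:)","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    # print the subject and expiry of the current etcd certificate","spans":[]},{"type":"paragraph","text":"    openssl x509 -in /opt/certs/etcd-cert.pem -noout -subject -enddate","spans":[]},{"type":"paragraph","text":"    # confirm it chains back to the CA that consul-template wrote out","spans":[]},{"type":"paragraph","text":"    openssl verify -CAfile /opt/certs/etcd-ca.pem /opt/certs/etcd-cert.pem","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"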
Additionally, we have an audit log of all certificates issued, and frequently exercise certificate expiration and rotation.","spans":[]},{"type":"paragraph","text":"We did have to put in some time up front to learn Vault, discover the appropriate command line arguments, and integrate the solution discussed here into our existing configuration management system. However, by using Vault as a certificate authority, we drastically reduced the effort required to set up and maintain many Kubernetes clusters.","spans":[]},{"type":"paragraph","text":"  ","spans":[]},{"type":"paragraph","text":"  by Tommy Murphy","spans":[{"start":5,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/tam7t"}}]}],"blog_post_date":"2016-09-05","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"vault-and-kubernetes"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Bryan Liles","author_image":null,"_meta":{"uid":"bryan_liles"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/c883d3cf-14fb-4648-8d30-cbd199bcea48_doctl.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Introducing doctl: the Command Line Interface to DigitalOcean","spans":[]}],"blog_post_content":[{"type":"heading3","text":"Why a CLI utility?","spans":[]},{"type":"paragraph","text":"When DigitalOcean entered the market four years ago, our team spent an extraordinary amount of time designing a web user interface that was easy to use and inviting for developers. Simple and elegant design is something we have always strived for as a company. Over time, as the amount of functionality has increased, the ease of use has remained.","spans":[]},{"type":"paragraph","text":"That goal goes beyond just the web interface; we've sought to build an API that is just as easy to use. When we released version 1 of our API, a few popular tools emerged. Tugboat, which allowed you to manage your DigitalOcean resources from the comfort of your command line, was a particular favorite. Late last year, we deprecated V1 and released DigitalOcean API V2. With API V2 came a plethora of improvements and an enhanced developer's portal which provides information on every API endpoint along with usage examples and guides.","spans":[{"start":172,"end":179,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/pearkes/tugboat"}},{"start":349,"end":368,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-api-v2"}},{"start":430,"end":448,"type":"hyperlink","data":{"link_type":"Web","url":"https://developers.digitalocean.com/"}}]},{"type":"paragraph","text":"As developers ourselves, we spend a lot of our time in a terminal. So we have decided to upgrade that experience as well with an official command line interface (CLI) tool entitled `doctl`. `doctl` provides an accessible interface to our API, taking full advantage of improvements introduced in API V2 and support for newer DigitalOcean features like Floating IPs. 
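","spans":[]},{"type":"paragraph","text":"(For example, an illustrative command; subcommands and flags may vary by doctl version:)","spans":[]},{"type":"paragraph","text":"    ```[php]{`doctl compute floating-ip list`}```","spans":[]},{"type":"paragraph","text":"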
It allows us to deliver more complex features and workflows as well.","spans":[{"start":351,"end":363,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-floating-ips-on-digitalocean"}}]},{"type":"heading3","text":"Installation and usage","spans":[]},{"type":"paragraph","text":"`doctl` is available as a precompiled binary for Linux, Mac OS X, and Windows. You can download the release on GitHub.","spans":[{"start":96,"end":117,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/doctl/releases/"}}]},{"type":"paragraph","text":"Getting started with `doctl` is easy. To retrieve your DigitalOcean access token and save it locally, just run:","spans":[]},{"type":"paragraph","text":"    ```[php]{`doctl auth login`}```","spans":[]},{"type":"paragraph","text":"You can view your account settings with:","spans":[]},{"type":"paragraph","text":"    ```[php]{`doctl account get`}```","spans":[]},{"type":"paragraph","text":"As an example of what `doctl` can do, we can create a Debian 8 Droplet in NYC1 with a public SSH key installed for the root user in one line:","spans":[]},{"type":"paragraph","text":"    ```[php]{`doctl compute droplet create webserver01 --region nyc1 --image debian-8-x64 --size 4gb --ssh-keys 1234 --wait`}```","spans":[]},{"type":"paragraph","text":"`doctl` also lets you configure the format of its output. By default, it will be displayed in a table. If you wanted to use the output programmatically, JSON might be a better choice. For instance, you could list all of your Droplets in NYC3 as JSON using:","spans":[]},{"type":"paragraph","text":"    ```[php]{`doctl compute droplet list --region nyc3 --output json`}```","spans":[]},{"type":"paragraph","text":"To learn about all the features available, check out the full tutorial over on our community site.","spans":[{"start":53,"end":97,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-doctl-the-official-digitalocean-command-line-client"}}]},{"type":"heading3","text":"Simple and powerful","spans":[]},{"type":"paragraph","text":"`doctl` is not only an interface to the DigitalOcean V2 API. It also simplifies more complex workflows. Previously, when using the API to snapshot a Droplet, you'd have to separately retrieve the action ID and continuously query the action endpoint to know the status of the snapshot. Now `doctl` can handle that for you. Using the `--wait` flag, it can snapshot a Droplet and block until the action completes. The same concept applies to other activities which don't complete instantaneously, like Droplet creates.","spans":[]},{"type":"paragraph","text":"`doctl` also simplifies activities which do not have an API endpoint. If you create a Droplet and don't assign the IP address in DNS, `doctl` allows you to SSH to your Droplet by name.","spans":[]},{"type":"paragraph","text":"    ```[php]{`doctl compute ssh <droplet name>`}```","spans":[]},{"type":"paragraph","text":"By default, it assumes you are using the `root` user. If you want to SSH as a specific user, you can do that as well:","spans":[]},{"type":"paragraph","text":"    ```[php]{`doctl compute ssh <user>@<droplet name>`}```","spans":[]},{"type":"heading3","text":"Contribute","spans":[]},{"type":"paragraph","text":"Like much of our internal tooling, `doctl` is written in Go. It is completely open source and available on GitHub. 
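","spans":[]},{"type":"paragraph","text":"(If you'd rather build from source than grab a release binary, the Go toolchain route below should work; treat it as a sketch and check the repo's README for the current steps.)","spans":[]},{"type":"paragraph","text":"    ```[php]{`go get github.com/digitalocean/doctl/cmd/doctl`}```","spans":[]},{"type":"paragraph","text":"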
We're excited to be able to share this with our community and look forward to collaborating on building a tool we hope you'll love. Check out the contribution guidelines, and dive into the code.","spans":[{"start":94,"end":113,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/doctl/"}},{"start":257,"end":284,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/doctl/blob/master/CONTRIBUTING.md"}}]},{"type":"paragraph","text":"What else would you like to see from `doctl`? Let us know in the comments.","spans":[]}],"blog_post_date":"2016-03-28","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"introducing-doctl"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Tim Vogler","author_image":null,"_meta":{"uid":"tim_vogler"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"Connect the dots with internet peering at IX points text on graphic of lines","copyright":null,"url":"https://images.prismic.io/www-static/1496c49ab95fa5d92a10923bad9947a663f85289_hero-2.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Connect the Dots with Internet Peering at IX Points","spans":[]}],"blog_post_content":[{"type":"preformatted","text":"This post by DigitalOcean network administrator Tim Vogler explains what IX points are, why they're integral to the medium-sized networks of the Internet, how DigitalOcean uses them, and how you can encourage your local networks (such as your ISP) to be good neighbors.","spans":[]},{"type":"paragraph","text":"All over the world, IX points connect the dots of the Internet.","spans":[]},{"type":"paragraph","text":"IX points, short for Internet Exchange points, are where companies, schools, internet service providers (ISPs), and other organizations connect their traffic directly to each other over a single Local Area Network (LAN).","spans":[]},{"type":"paragraph","text":"Some notable IXs include LINX in the UK, AMS-IX in the Netherlands, and NYIIX in New York City, to name a few. DE-CIX in Germany hits peak traffic of 4.7 Tbps each day with an average of 2.78 Tbps, and the next biggest IX, AMS-IX, is no slouch with 4.27 Tbps peak and 2.49 Tbps average. That's a whole lot of cat videos being pushed.","spans":[]},{"type":"paragraph","text":"Cat video throughput is a whimsical way to measure traffic, but the Internet actually relies on these direct connections between providers. Let's imagine, for a moment, what would happen if nobody used IX points.","spans":[]},{"type":"paragraph","text":"If nobody used IX points, most Internet traffic would go through ISPs on the public Internet.","spans":[]},{"type":"paragraph","text":"The routing protocol that connects different networks together to form the Internet is called BGP, or the Border Gateway Protocol. This protocol handles the giant web-like structure of the Internet and its 500,000+ routes. While necessary for the Internet to work, BGP does have its shortcomings. It leaves out the latency of a route when deciding where to send traffic, and it is also heavily tuned by engineers, who occasionally make mistakes.","spans":[{"start":106,"end":129,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/Border_Gateway_Protocol"}}]},{"type":"paragraph","text":"BGP's main decision-making mechanism is path length. For BGP, this means the fewer networks along the path from origin to destination, the better. 
Instead of counting router hops (the number of physical devices in the path), it counts how many autonomous systems (an organization's network) it crosses to reach the end network. Some examples of autonomous systems include ISPs and large networked enterprises such as AT&T, DigitalOcean, CloudFlare, and NTT.","spans":[{"start":343,"end":361,"type":"em"},{"start":343,"end":361,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/Autonomous_system_(Internet)"}}]},{"type":"paragraph","text":"Here's an example of a route that takes a very convoluted path to reach its end point. (A route is the path traffic takes from start to end):","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/f4f4a92fd18e69c0653573828f566569b6788b9c_route.png?auto=compress,format","alt":"Route","copyright":null,"dimensions":{"width":784,"height":418}},{"type":"paragraph","text":"Here's a text copy of the same route:","spans":[]},{"type":"preformatted","text":"    89.32.120.0/22      *[BGP/170] 00:46:26, MED 1000, localpref 100\n    AS path: **4637 3356 5588 48095** I, validation-state:unverified\n","spans":[]},{"type":"paragraph","text":"The interesting part here is the list of providers after \"AS path,\" which lists four different providers, including the start and end points. So, traffic on this route needs to pass through two intermediary providers before reaching the end user's ISP. Each hop can introduce latency or packet loss.","spans":[]},{"type":"paragraph","text":"How can we address this issue? You might have guessed by the title of this post: IX points, of course!","spans":[]},{"type":"paragraph","text":"IX points send data over self-contained layer 2 networks (that use fast layer 2 switching) instead, shortening the routes data needs to travel, and thus reducing the cost and latency of sending data over the Internet.","spans":[]},{"type":"paragraph","text":"Here's another route showing a direct connection between DigitalOcean and CloudFlare, unencumbered by middleman networks:","spans":[]},{"type":"preformatted","text":"    103.31.5.0/24      *[BGP/170] 2w6d 18:21:37, MED 200, localpref 100\n    AS path: **13335**\n","spans":[]},{"type":"paragraph","text":"As you can see, the traffic hops directly to Cloudflare's network without having to wade through the depths of the public Internet.","spans":[]},{"type":"paragraph","text":"IX points provide a better user experience because traffic follows a shorter, faster path that's easier to control and keep consistent. In fact, the more IX points, the better the Internet functions.","spans":[]},{"type":"paragraph","text":"Setting up a peering relationship on an exchange is extremely simple. You'll need to contact the organization that runs the IX to get a LAN connection and IP address. Once you have that, it's as simple as sending an email with your interface details and asking politely to peer.","spans":[]},{"type":"paragraph","text":"With our newest datacenter in Toronto, for example, all it took was an email with our ASN and an IP address to set up a peering relationship with Cloudflare. We send a lot of traffic their direction, so the mutual benefits made it a no-brainer.","spans":[]},{"type":"paragraph","text":"It'd be great if the entire Internet was fueled by IX points. 
Achieving 100% peering isn't realistic, considering that there are 300 IXs worldwide, you need to be close enough to plug a cable in, and someone has to pay for the switching equipment (usually fueled by membership fees).","spans":[]},{"type":"paragraph","text":"However, connecting any sizable network to its neighbors can help make the Internet better for everyone. If you have access to an IX, reach out to the other members to set up some sessions and connect those dots.","spans":[]},{"type":"paragraph","text":"by Tim Vogler","spans":[]}],"blog_post_date":"2015-12-09","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"connect-the-dots-with-internet-peering-at-ix-points"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Luca Salvatore","author_image":{"dimensions":{"width":250,"height":250},"alt":"Luca Salvatore","copyright":null,"url":"https://images.prismic.io/www-static/fd8fb2a54e8e54d882c33bccac82b22a684d920e_9bb2be860884302b74920173da25866a.jpg?auto=compress,format"},"_meta":{"uid":"luca_salvatore"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/b654ce2f-b279-40ab-b1a7-b7fe43a28fae_ZeroTouchProvisioning-blog.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Zero Touch Provisioning: How to Build a Network Without Touching Anything","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Last month, we proudly launched our 11th datacenter in Toronto, Canada. Building new datacenters is becoming a pretty common occurrence for us; we launched three last year, two this year, and there are plenty more coming in the near future.","spans":[{"start":23,"end":62,"type":"hyperlink","data":{"link_type":"Web","url":"https://assets.digitalocean.com/blog/static/introducing-our-new-canadian-datacenter-tor1/"}},{"start":111,"end":117,"type":"hyperlink","data":{"link_type":"Web","url":"https://assets.digitalocean.com/blog/static/introducing-our-new-european-region-frankfurt/"}},{"start":118,"end":124,"type":"hyperlink","data":{"link_type":"Web","url":"https://assets.digitalocean.com/blog/static/introducing-our-london-region/"}},{"start":125,"end":135,"type":"hyperlink","data":{"link_type":"Web","url":"https://assets.digitalocean.com/blog/static/announcing-nyc3-with-ipv6-support/"}},{"start":136,"end":142,"type":"hyperlink","data":{"link_type":"Web","url":"https://assets.digitalocean.com/blog/static/we-re-excited-to-announce-our-singapore-datacenter-sgp1/"}}]},{"type":"paragraph","text":"This means that building a DC has become a repeatable task, and repeatable tasks are tasks that are begging for automation. This blog is the story of how we have built our last few datacenters without needing to manually log in to the majority of our devices.","spans":[]},{"type":"heading2","text":"The Slow Way","spans":[]},{"type":"paragraph","text":"There is a fair amount of effort that goes into building a network in a brand new location, not to mention the tight timeline. It's critical that the network is up and running early in the build process; without it, there's no connectivity and our platform engineers can't come in and build the hypervisors.","spans":[]},{"type":"paragraph","text":"In any new deployment, there are typically around 50 new switches to configure. 
Most switches have an identical configuration (except for some unique things, like the management IP address), and the new switches will almost always need their software updated to our standard version.","spans":[]},{"type":"paragraph","text":"In our early days, deploying a new network meant logging into every switch via the console port, pasting a config from a template, and then upgrading the software. With so many switches to build, it was time consuming and — let's face it — pretty boring. The whole process was in need of a total overhaul.","spans":[]},{"type":"heading2","text":"Not Touching Anything… Almost","spans":[]},{"type":"paragraph","text":"For our automated network deployment to work, we have to address a chicken and egg problem: there needs to be some form of networking already in place so the new switches can download their updated code and grab their configuration template.","spans":[]},{"type":"paragraph","text":"As a result, a small part of the network still does need to be built by hand. This is typically a small-ish firewall connected to what we call our \"out of band\" (OOB) internet link, plus a few switches to provide connectivity to the management ports of our switches. These devices have a very basic configuration, so it's easy to copy and paste it and get some initial connectivity.","spans":[]},{"type":"paragraph","text":"Additionally, we need to know the MAC address of each switch, which is printed on the side of the chassis. Fortunately, we have a fantastic datacenter team that flies all over the world to do all the physical labor involved with deploying a new location. These folks have racking and stacking down to a fine art, and part of their process is to note down the MAC address of each switch they are racking, saving it to a file for use later on.","spans":[{"start":130,"end":155,"type":"hyperlink","data":{"link_type":"Web","url":"https://instagram.com/p/yxB1Tos8F-/"}}]},{"type":"heading2","text":"The Fast Way, aka Zero Touch Provisioning","spans":[]},{"type":"paragraph","text":"The actual automation of the building process is known as Zero Touch Provisioning (ZTP). Most major networking vendors have some form of ZTP support, and the process is pretty simple. There are a few specific configurations needed on the ZTP server to make everything work.","spans":[]},{"type":"heading3","text":"Setting Up DHCP","spans":[]},{"type":"paragraph","text":"First, we need a DHCP server. We use good old ISC DHCP running on an Ubuntu server, and configure it to give the switch the information it needs once it boots up. 
This is the top of our dhcpd.conf file:","spans":[{"start":185,"end":195,"type":"strong"}]},{"type":"preformatted","text":"    option ztp-file-server code 150 = { ip-address };","spans":[]},{"type":"preformatted","text":"    option space ZTP;","spans":[]},{"type":"preformatted","text":"    option ZTP.image-file-name code 0 = text;","spans":[]},{"type":"preformatted","text":"    option ZTP.config-file-name code 1 = text;","spans":[]},{"type":"preformatted","text":"    option ZTP.image-file-type code 2 = text;","spans":[]},{"type":"preformatted","text":"    option ZTP.transfer-mode code 3 = text;","spans":[]},{"type":"preformatted","text":"    option ZTP-encap code 43 = encapsulate ZTP;","spans":[]},{"type":"preformatted","text":"    option ztp-file-server 10.126.1.1;","spans":[]},{"type":"preformatted","text":"    option ZTP.image-file-name \"/software/switch-image-file.tgz\";","spans":[]},{"type":"preformatted","text":"    option ZTP.transfer-mode \"http\";","spans":[]},{"type":"paragraph","text":"This basically tells a switch what it needs to know to grab its template and where to grab its updated software.","spans":[]},{"type":"paragraph","text":"The next bit of the dhcpd.conf file looks similar to this:","spans":[{"start":20,"end":30,"type":"strong"}]},{"type":"preformatted","text":"         group {","spans":[]},{"type":"preformatted","text":"            host tor1-spine1 {","spans":[]},{"type":"preformatted","text":"            hardware ethernet               5C:45:27:23:2F:01;","spans":[]},{"type":"preformatted","text":"            fixed-address                   10.200.72.138;","spans":[]},{"type":"preformatted","text":"            option routers                  10.200.72.129;","spans":[]},{"type":"preformatted","text":"            option subnet-mask              255.255.255.192;","spans":[]},{"type":"preformatted","text":"            option ZTP.config-file-name \"/tor1-spine1.config\";","spans":[]},{"type":"preformatted","text":"            }","spans":[]},{"type":"preformatted","text":"    }","spans":[]},{"type":"paragraph","text":"This is where the MAC address from the side of the switch's chassis comes into play. We need each switch to pull down the correct configuration template, so the MAC address is used to identify the switch. The `dhcpd.conf` file will have an entry like the one above for every single switch that we want to ZTP.","spans":[]},{"type":"paragraph","text":"Because creating an entry for 50 or so switches would be pretty annoying, we also automate this using a simple Python script that spits out the appropriate `dhcpd.conf` file containing all the correct MAC addresses and IP addresses (a sketch of this idea appears below).","spans":[]},{"type":"heading3","text":"Configuration Templates","spans":[]},{"type":"paragraph","text":"For this process to be fully automated, each new switch needs to have a configuration template ready to go. To make this happen, we use the Jinja2 templating software and some Python, which makes it easy to create a whole bunch of templates quickly. We create a template for every device that is going to be deployed and upload the templates to the ZTP server.","spans":[]},
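{"type":"paragraph","text":"(To make the generation step concrete, here is a minimal sketch of the idea in shell. Our real generator is a Python script; the switches.txt file and its hostname/MAC/IP layout are assumptions for illustration.)","spans":[]},{"type":"preformatted","text":"    #!/bin/sh","spans":[]},{"type":"preformatted","text":"    # Expand one \"hostname MAC IP\" triple per line into dhcpd host entries.","spans":[]},{"type":"preformatted","text":"    while read -r host mac ip; do","spans":[]},{"type":"preformatted","text":"      printf 'host %s {\\n' \"$host\"","spans":[]},{"type":"preformatted","text":"      printf '    hardware ethernet %s;\\n' \"$mac\"","spans":[]},{"type":"preformatted","text":"      printf '    fixed-address %s;\\n' \"$ip\"","spans":[]},{"type":"preformatted","text":"      printf '    option ZTP.config-file-name \"/%s.config\";\\n' \"$host\"","spans":[]},{"type":"preformatted","text":"      printf '}\\n'","spans":[]},{"type":"preformatted","text":"    done < switches.txt >> dhcpd.conf","spans":[]},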
{"type":"heading2","text":"Voila!","spans":[]},{"type":"paragraph","text":"The switch boots up and sends out a DHCP request, which the OOB firewall relays to the ZTP server. The switch then grabs its config template, downloads its software, and that's it!","spans":[]},{"type":"paragraph","text":"Here is the console output from a real Juniper QFX switch going through the process:","spans":[]},{"type":"preformatted","text":"root>","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: DHCP Client Bound interfaces:","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: DHCP Client Unbound interfaces: irb.0  vme.0  et-0/0/0.0  e","spans":[]},{"type":"preformatted","text":"    t-0/0/1.0  et-0/0/2.0  et-0/0/3.0  et-0/0/4.0  et-0/0/5.0  et-0/0/6.0  et-0/0/7","spans":[]},{"type":"preformatted","text":"    .0  et-0/0/8.0  et-0/0/9.0  et-0/0/10.0  et-0/0/11.0  et-0/0/12.0  et-0/0/13.0","spans":[]},{"type":"preformatted","text":"     et-0/0/14.0  et-0/0/15.0  et-0/0/16.0  et-0/0/17.0  et-0/0/18.0  et-0/0/19.0","spans":[]},{"type":"preformatted","text":"    et-0/0/20.0  et-0/0/21.0  et-0/0/22.0  et-0/0/23.0  et-0/1/0.0  et-0/1/1.0  et-","spans":[]},{"type":"preformatted","text":"    0/1/2.0  et-0/1/3.0  et-0/2/0.0  et-0/2/1.0","spans":[]},{"type":"preformatted","text":"","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: No DHCP Client in bound state, reset all enabled DHCP clients","spans":[]},{"type":"preformatted","text":"    ","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: DHCP Options for client interface vme.0:","spans":[]},{"type":"preformatted","text":"    ConfigFile: /nyc3-spine3.config","spans":[]},{"type":"preformatted","text":"    ImageFile: /jinstall-qfx-5-13.2X51-D35.3-domestic-signed.tgz","spans":[]},{"type":"preformatted","text":"    Gateway: 10.198.73.129","spans":[]},{"type":"preformatted","text":"    File Server: 10.1.2.3","spans":[]},{"type":"preformatted","text":"    Options state: All options set","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: DHCP Client Bound interfaces: vme.0","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: Active on client interface: vme.0","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: Interface::   \"vme\"","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: Server::      \"10.1.2.3\"","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: Image File:: \"jinstall-qfx-5-13.2X51-D35.3-domestic-signed","spans":[]},{"type":"preformatted","text":"    .tgz\"","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: Config File:: \"nyc3-spine3.config\"","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: Gateway::     \"10.198.73.129\"","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: Protocol::    \"http\"","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: Start fetching nyc3-a1-spine3.config file from server 10.1.2.3 through vme using http","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: File nyc3-spine3.config fetched from server 10.1.2.3 through vme","spans":[]},{"type":"preformatted","text":"    Auto Image Upgrade: Start fetching jinstall-qfx-5-13.2X51-D35.3-domestic-signed","spans":[]},{"type":"preformatted","text":"    .tgz file from server 10.1.2.3 through vme using http","spans":[]},{"type":"preformatted","text":"    ","spans":[]},{"type":"preformatted","text":"    WARNING!!! On successful image installation, system will reboot automatically","spans":[]},{"type":"paragraph","text":"With the old process, it would take a full day of work to build 50 switches. 
With the new process, it takes 5 minutes, and the longest part is just waiting for the switch to reboot for its software update.","spans":[]},{"type":"paragraph","text":"Instead of manually logging into each device, we now set up a ZTP server, upload the configuration templates, then sit back and watch the network build itself.","spans":[]},{"type":"paragraph","text":"by Luca Salvatore","spans":[{"start":3,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/_LucaNet"}}]}],"blog_post_date":"2015-10-21","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"zero-touch-provisioning-how-to-build-a-network-without-touching-anything"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Luca Salvatore","author_image":{"dimensions":{"width":250,"height":250},"alt":"Luca Salvatore","copyright":null,"url":"https://images.prismic.io/www-static/fd8fb2a54e8e54d882c33bccac82b22a684d920e_9bb2be860884302b74920173da25866a.jpg?auto=compress,format"},"_meta":{"uid":"luca_salvatore"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"Floating IPs illustration letters","copyright":null,"url":"https://images.prismic.io/www-static/3f964a6a976e625cdda27221a7b22d764fbdcacd_hero.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Floating IPs: Start Architecting Your Applications for High Availability","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"High Availability is key to any production environment. It grants developers peace of mind knowing their application is architected to withstand failure scenarios.","spans":[]},{"type":"paragraph","text":"Today, we are excited to announce Floating IPs. A Floating IP is an IP address that can be instantly moved from one Droplet to another Droplet in the same datacenter.","spans":[{"start":34,"end":46,"type":"strong"}]},{"type":"paragraph","text":"Part of a highly available infrastructure is being able to immediately point an IP address to a redundant server. This is now possible with the addition of Floating IPs.","spans":[]},{"type":"heading2","text":"How It Works","spans":[]},{"type":"paragraph","text":"Single points of failure can be the downfall of any application. With Floating IPs, customers can associate an IP address with a different Droplet, with minimal downtime. This makes it possible to set up a standby Droplet, ready to receive your production traffic at a moment's notice.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/MDliOWQwNTgtMjFkMy00YzJkLTkxOWQtZjYyMDg4Y2NlMjk3_ha-diagram-animated.gif?auto=compress,format","alt":"Traffic is switched to another load balancer using the floating IP","copyright":null,"dimensions":{"width":1200,"height":577}},{"type":"paragraph","text":"Floating IPs are free to use. However, due to the shortage of IPv4 addresses available, if you reserve an address but don't assign it to a Droplet,  we charge $0.006 per hour for each unassigned, reserved IP. (You can relinquish unused IPs from the control panel.) To keep billing simple, you will not be charged unless you accrue $1 or more.","spans":[{"start":118,"end":146,"type":"em"}]},{"type":"paragraph","text":"Automatic Failover","spans":[{"start":0,"end":18,"type":"strong"}]},{"type":"paragraph","text":"With a bit of scripting, you're able to set up redundant load balancers that automatically fail over. 
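","spans":[]},{"type":"paragraph","text":"(For instance, a failover script could reassign the Floating IP through our API. This is an illustrative sketch: the IP and Droplet ID are placeholders.)","spans":[]},{"type":"preformatted","text":"    curl -X POST \"https://api.digitalocean.com/v2/floating_ips/203.0.113.45/actions\" \\\n      -H \"Authorization: Bearer $DO_TOKEN\" \\\n      -H \"Content-Type: application/json\" \\\n      -d '{\"type\":\"assign\",\"droplet_id\":12345}'","spans":[]},{"type":"paragraph","text":"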
If the primary load balancer goes offline, your traffic can be redirected to the secondary one with minimal application downtime.","spans":[{"start":14,"end":23,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-set-up-highly-available-web-servers-with-keepalived-and-floating-ips-on-ubuntu-14-04"}}]},{"type":"paragraph","text":"Smooth Upgrades","spans":[{"start":0,"end":15,"type":"strong"}]},{"type":"paragraph","text":"Floating IPs aren't just for failover situations. You can also use them for application upgrades. For example, you can spin up a new Droplet, run the upgrades on the new Droplet, and then switch the flow of traffic to the new Droplet.","spans":[]},{"type":"heading2","text":"Getting Started","spans":[]},{"type":"paragraph","text":"Our Ruby and Go wrappers have been updated to support Floating IPs. You can also check out our API documentation.","spans":[{"start":4,"end":8,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/droplet_kit"}},{"start":13,"end":15,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/godo"}},{"start":95,"end":112,"type":"hyperlink","data":{"link_type":"Web","url":"https://developers.digitalocean.com/documentation/v2/"}}]},{"type":"paragraph","text":"The easiest way to start using Floating IPs is to read our Floating IPs on DigitalOcean tutorial. It covers everything you need to know about Floating IPs, and includes links to further guides that will step you through creating your own high availability setup.","spans":[{"start":59,"end":87,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-floating-ips-on-digitalocean"}}]},{"type":"paragraph","text":"Floating IPs is our first step in addressing high availability, and you can expect more in the near future.","spans":[]},{"type":"paragraph","text":"by Brooke McKim","spans":[{"start":3,"end":15,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/brookemckim"}}]}],"blog_post_date":"2015-10-19","tags":[{"tag1":{"tag":"News","_linkType":"Link.document","_meta":{"uid":"news"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"floating-ips-start-architecting-your-applications-for-high-availability"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Jay Gordon","author_image":null,"_meta":{"uid":"jay_gordon"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"Banishing your sysadmin fears illustration with developer on computer","copyright":null,"url":"https://images.prismic.io/www-static/94f6484de6d04ed9fc8489ce1603727070dc40e0_hero-1.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Inside DO: Banishing Your Sysadmin Fears","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Jay Gordon, TechOps Engineer at DigitalOcean, shares his theory of the sysadmin mindset.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/ce03ec5f2b014ddec5b7526847260515fdfeae59_jay_gordon.png?auto=compress,format","alt":"Jay Gordon","copyright":null,"dimensions":{"width":188,"height":188}},{"type":"heading2","text":"My life as a sysadmin used to be filled with fear.","spans":[]},{"type":"paragraph","text":"I was afraid I would do something terribly wrong with the systems I was running; something that would take a long time and a lot of money to fix.","spans":[]},{"type":"paragraph","text":"For 
example, if I upgraded my Linux kernel, would my services fail to boot? Would it take intense troubleshooting to correct while our production system was out of action for hours? Getting a new server up meant spending hours with my host: waiting for credentials, waiting for restoration from a traditional backup service, and then finally working with engineering staff to bring the application back online.","spans":[]},{"type":"paragraph","text":"Even worse, the longer I took to restore a failed system, the more my coworkers subjected me to increasing blame and shame. I was afraid, my team experienced internal strife, and my company lost money. These are just not cool things to have directed at you day after day, simply for trying to get work done. Fear is not a motivator, and fear is not a way to run a business.","spans":[]},{"type":"heading2","text":"You can fail. It's okay.","spans":[]},{"type":"paragraph","text":"Fear of failure should not stop you using Linux. The answer is not to stop failing. It's to encourage it.","spans":[]},{"type":"preformatted","text":"\"I must not fear. Fear is the mind-killer.\"—Frank Herbert","spans":[]},{"type":"paragraph","text":"Encourage failure? What kind of crazy are you talking about?","spans":[]},{"type":"paragraph","text":"When failure is cheap, it's not a problem if your new code doesn't deploy the way you thought.","spans":[]},{"type":"paragraph","text":"It's not a problem if your new custom-compiled version of MySQL doesn't start the way you thought.","spans":[]},{"type":"paragraph","text":"It's not a problem if your WordPress upgrade failed.","spans":[]},{"type":"heading2","text":"Snapshots Are Your Friends","spans":[]},{"type":"paragraph","text":"I want to share a development workflow that makes failure cheap in terms of both time and money.","spans":[{"start":81,"end":85,"type":"em"},{"start":90,"end":95,"type":"em"}]},{"type":"paragraph","text":"Let's say you're about to test an upgrade to your application.","spans":[]},{"type":"o-list-item","text":"Snapshot the server running your old, working version of the app","spans":[]},{"type":"o-list-item","text":"Create a staging environment from the snapshot","spans":[]},{"type":"o-list-item","text":"Test your new app on the staging server","spans":[]},{"type":"o-list-item","text":"Deploy to your live production server","spans":[]},{"type":"o-list-item","text":"Did it fail? Restore quickly to your working snapshot","spans":[]},{"type":"o-list-item","text":"Continue troubleshooting on the staging server at your leisure","spans":[]},{"type":"paragraph","text":"With this workflow, you can fail often and still return to operation quickly without having to rebuild everything. This workflow requires on-demand snapshots, which become possible with a responsive cloud host.","spans":[]},{"type":"paragraph","text":"A quick failure/restoration cycle also lets you meet a low Recovery Point Objective (RPO) — i.e., a recent restoration point for your data — and Recovery Time Objective (RTO) — i.e., the amount of time to restore service.","spans":[{"start":59,"end":89,"type":"em"},{"start":145,"end":174,"type":"em"}]},{"type":"paragraph","text":"A workflow that uses lots of snapshots becomes cost effective in the cloud, where typical server costs are minimal (pennies for hours of service). You can iterate through software troubleshooting without the costs of the hardware and people traditionally needed to set up new staging or restored server environments. 
With cloud hosting, the only sysadmin you need to get from zero to 88 MPH on your application is you.","spans":[]},{"type":"paragraph","text":"Failure is okay, as long as you have a plan. Using tools to help you move, fail, and recover should be part of your planning. If you fail often but cheaply, you'll have the time, money, and most importantly, the confidence to fail, learn, and eventually prosper as a developer and sysadmin.","spans":[]},{"type":"paragraph","text":"Here are a few resources to learn more about snapshot-level backups.","spans":[]},{"type":"list-item","text":"VPS-Level Backups","spans":[{"start":0,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-choose-an-effective-backup-strategy-for-your-vps#vps-level-backups"}}]},{"type":"list-item","text":"Whether you run MySQL or PostgreSQL, back up your relational databases before taking the snapshot","spans":[{"start":16,"end":21,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-backup-mysql-databases-on-an-ubuntu-vps"}},{"start":25,"end":35,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-backup-postgresql-databases-on-an-ubuntu-vps"}}]},{"type":"paragraph","text":"by Jay Gordon","spans":[{"start":3,"end":13,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/jaydestro"}}]}],"blog_post_date":"2015-09-08","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"inside-do-banishing-your-sysadmin-fears"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Luca Salvatore","author_image":{"dimensions":{"width":250,"height":250},"alt":"Luca Salvatore","copyright":null,"url":"https://images.prismic.io/www-static/fd8fb2a54e8e54d882c33bccac82b22a684d920e_9bb2be860884302b74920173da25866a.jpg?auto=compress,format"},"_meta":{"uid":"luca_salvatore"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"next generation of digital networking","copyright":null,"url":"https://images.prismic.io/www-static/76d24fe4-9a7c-4cfc-aed1-57e3956539a1_hero.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Building the Next Generation of DigitalOcean Networking","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"On April 15th, we opened our newest datacenter in Frankfurt. The blog post announcing FRA1 mentioned that the new datacenter was built using 40G networking and faster SSDs. What that blog didn't mention was that the entire network was actually redesigned from the ground up to allow us to grow and scale to some pretty impressive numbers. On top of that, we have also started retrofitting some of our older locations with this new design, which includes 40G networking and newer, faster hypervisors.","spans":[{"start":61,"end":90,"type":"hyperlink","data":{"link_type":"Web","url":"https://assets.digitalocean.com/blog/static/introducing-our-new-european-region-frankfurt/"}}]},{"type":"paragraph","text":"In this blog, we'll go into more detail about how we've been upgrading live DCs to the new architecture, and also take a look into how we are building out future datacenters.","spans":[]},{"type":"heading2","text":"Frankfurt's Design","spans":[]},{"type":"paragraph","text":"First, let's look at the new design that we used in Frankfurt. 
It is based on the widely used Clos topology, which utilizes spine and leaf switches to build a highly scalable and redundant network. In this case, leaf switches are top of rack (TOR) switches, and spine switches act as an aggregation layer before heading into the core of the network.","spans":[]},{"type":"paragraph","text":"Here's how it all looks:","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/2661f8383196470a0ea2537161de1ba5256b1d84_fig1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1058,"height":290}},{"type":"paragraph","text":"The leaf switches in this diagram represent TOR switches. Each hypervisor in the rack has a 40G connection to both leaf switches. From there, every leaf switch connects to every spine (again, with 40G connections). This allows for huge amounts of bandwidth and provides a high level of redundancy.","spans":[]},{"type":"heading2","text":"Scaling the Network","spans":[]},{"type":"paragraph","text":"The diagram above only shows 4 pairs of leaf switches, which means 4 racks in this case. In reality, the number of racks each pod supports is limited to the number of ports in the spine. Each spine switch can have a maximum of 32 40G ports. We reserve some uplinks for the connections to the cores, so the number we work with is 24 ports per spine; since each rack contains two leaf switches, that works out to 12 racks per pod. Once the spine is full, we scale out horizontally.","spans":[]},{"type":"paragraph","text":"This diagram shows another pod added, along with the connections up to the core:","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/41d67a2c6d708745b0e8db6f5e82a4217582bac0_fig2.jpg?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1178,"height":505}},{"type":"paragraph","text":"Scaling the network out in this manner presents an interesting problem. All switches have limits on the amount of information they can hold. These are things like MAC addresses, ARP entries, and routing information. In the core of our network, we are mainly concerned with the number of ARP entries that the core switches can store. The maximum number of ARP entries we support in the core is 256,000. While this may seem like a big number (and it is), only the biggest switches can support ARP tables that size.","spans":[]},{"type":"heading2","text":"Using Zones","spans":[]},{"type":"paragraph","text":"The solution to this problem is to split the network into zones. Just as each pod has its own set of spine switches, each zone has its own set of core switches. Expanding on the diagram above, we can now have a network which looks like the diagram below, which also includes edge routers to show how two (or more) zones would connect to each other.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/93d21a5e1b9f22378788e3e4065eb13a0ed14684_fig3.jpg?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1182,"height":1102}},{"type":"paragraph","text":"With this design, we are now able to scale out the network horizontally. The limiting factor becomes the availability of ports on the edge routers. Because that count is quite large, this design will last us for some time to come. Each different level of the network is connected using 40G interfaces, including the hypervisors. 
In most cases, these are configured using aggregation (802.3ad), so the bandwidth could be 80G or even 160G at some points.","spans":[]},{"type":"heading2","text":"Current and Future DCs","spans":[]},{"type":"paragraph","text":"Using this design, it's very easy to retrofit our existing locations. We simply build out the network in new racks, and when it's all working, we connect the new cores to the edge routers, make some changes in our backend code, and the new zone is live!","spans":[]},{"type":"paragraph","text":"This new 40G design is what we will be using in all future DCs, and has already been deployed in FRA1, NYC1, and NYC3. We will also be adding this design into two more locations in the next few months. With this type of architecture, the DigitalOcean network will be able to continue to scale and serve our customers for many years to come.","spans":[]},{"type":"paragraph","text":"by Luca Salvatore","spans":[{"start":3,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/_LucaNet"}}]}],"blog_post_date":"2015-08-25","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"building-the-next-generation-of-digitalocean-networking"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Erika Heidi","author_image":null,"_meta":{"uid":"erika_heidi"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"elephant super heros in masks flying with the words 'Get ready for PHP 7'","copyright":null,"url":"https://images.prismic.io/www-static/eda8fb19-1f01-4d9e-b729-59107e7fecf0_get_ready.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Getting Ready for PHP 7","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"2015 has been an important year for PHP. Eleven years after its 5.0 release, a new major version is finally coming our way! PHP 7 is scheduled for release before the end of the year, bringing many new language features and an impressive performance boost.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"But how will this impact your current PHP codebase? What really changed? How safe is it to update? This post will answer these questions and give you a taste of what's to come with PHP 7.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Performance Improvements","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Performance is undoubtedly the biggest reason why you should upgrade your servers as soon as a stable version is released. The core refactoring introduced by the [phpng RFC](https://wiki.php.net/rfc/phpng) makes PHP 7 as fast as (or faster than) HHVM. The official benchmarks are impressive: most real world applications running on PHP 5.6 will run at least twice as fast on PHP 7. ","spans":[{"start":358,"end":371,"type":"strong"}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"For detailed performance benchmarks, have a look at Rasmus Lerdorf's presentation at PHP Australia. (You can use the arrow keys to navigate through the slides.) 
Here are some WordPress benchmarks from that presentation:","spans":[{"start":52,"end":98,"type":"hyperlink","data":{"link_type":"Web","url":"http://talks.php.net/oz15#/wpbench"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/fa4737dd-0d2a-4389-8809-fccf2128af79_php7_graph.jpg?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":784,"height":490}},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"PHP 7 handles more than twice as many requests per second, which in practical terms will represent a 100% improvement in performance for WordPress websites.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Backwards Compatibility Pitfalls","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Let's talk about the few things that could potentially break a legacy application running on older versions of PHP.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Deprecated Items Removed","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"A number of deprecated items have been removed. Because they've been deprecated for some time now, hopefully you aren't using them! This might, however, have an impact on legacy applications.","spans":[{"start":12,"end":46,"type":"hyperlink","data":{"link_type":"Web","url":"https://wiki.php.net/rfc/remove_deprecated_functionality_in_php7"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"In particular, ASP-style tags ( `<%`, `<%=` and `%>` ) were removed along with script tags ( `<script language=\"php\">` ). Make sure you are using the recommended `<?php` tag instead. Other functions that were previously deprecated, like [split](http://php.net/manual/en/function.split.php), have also been removed in PHP 7.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The ereg extension (and all `ereg_*` functions) has been deprecated since PHP 5.3. It should be replaced with the PCRE extension (`preg_*` functions), which offers many more features. The mysql extension (and the `mysql_*` functions) has been deprecated since PHP 5.5. For a direct migration, you can use the mysqli extension and the `mysqli_*` functions instead.","spans":[{"start":4,"end":8,"type":"strong"},{"start":115,"end":119,"type":"strong"},{"start":189,"end":194,"type":"strong"},{"start":245,"end":269,"type":"hyperlink","data":{"link_type":"Web","url":"https://wiki.php.net/rfc/mysql_deprecation"}},{"start":312,"end":318,"type":"strong"}]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Uniform Variable Syntax","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The [uniform variable syntax](https://wiki.php.net/rfc/uniform_variable_syntax) is meant to solve a series of inconsistencies when evaluating variable-variable expressions. 
Consider the following code:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    <?php  ","spans":[]},{"type":"paragraph","text":"    class Person  ","spans":[]},{"type":"paragraph","text":"    {","spans":[]},{"type":"paragraph","text":"       public $name = 'Erika';","spans":[]},{"type":"paragraph","text":"       public $job = 'Developer Advocate';","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    $person = new Person();","spans":[]},{"type":"paragraph","text":"    $property = [ 'first' => 'name', 'second' => 'info' ];","spans":[]},{"type":"paragraph","text":"    echo \"\\nMy name is \" . $person->$property['first'] . \"\\n\\n\";  ","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"In PHP 5, the expression `$person->$property['first']` is evaluated as `$person->{$property['first']}`. In practical terms, this will be interpreted as `$person->name`, giving you the result \"My name is Erika\". Even though this is an edge case, it shows clear inconsistencies with the normal expression evaluation order, which is left to right.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"In PHP 7, the expression `$person->$property['first']` is evaluated as `{$person->$property}['first']`. The interpreter will evaluate `$person->$property` first; consequently, the previous code example won't work in PHP 7 because `$property` is an array and cannot be converted to a string.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"A quick and easy way to fix this problem is by explicitly defining the evaluation order with the help of curly braces (e.g. `$person->{$property['first']}`), which will guarantee the same behavior on both PHP 5 and PHP 7.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Thanks to the new uniform left-to-right variable syntax, many expressions previously treated as invalid will now become valid. 
To illustrate this new behavior, consider the following class:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    <?php  ","spans":[]},{"type":"paragraph","text":"    class Person  ","spans":[]},{"type":"paragraph","text":"    {","spans":[]},{"type":"paragraph","text":"       public static $company = 'DigitalOcean';","spans":[]},{"type":"paragraph","text":"       public function getFriends()","spans":[]},{"type":"paragraph","text":"       {","spans":[]},{"type":"paragraph","text":"           return [","spans":[]},{"type":"paragraph","text":"               'erika' => function () {","spans":[]},{"type":"paragraph","text":"                   return 'Elephpants and Cats';","spans":[]},{"type":"paragraph","text":"               },","spans":[]},{"type":"paragraph","text":"               'sammy' => function () {","spans":[]},{"type":"paragraph","text":"                   return 'Sharks and Penguins';","spans":[]},{"type":"paragraph","text":"               }","spans":[]},{"type":"paragraph","text":"           ];","spans":[]},{"type":"paragraph","text":"       }","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"       public function getFriendsOf($someone)","spans":[]},{"type":"paragraph","text":"       {","spans":[]},{"type":"paragraph","text":"           return $this->getFriends()[$someone];","spans":[]},{"type":"paragraph","text":"       }","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"       public static function getNewPerson()","spans":[]},{"type":"paragraph","text":"       {","spans":[]},{"type":"paragraph","text":"           return new Person();","spans":[]},{"type":"paragraph","text":"       }","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"With PHP 7, we can create nested associations and different combinations between operators:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    $person = new Person();","spans":[]},{"type":"paragraph","text":"    echo \"\\n\" . $person->getFriends()['erika']() . \"\\n\\n\";  ","spans":[]},{"type":"paragraph","text":"    echo \"\\n\" . $person->getFriendsOf('sammy')() . \"\\n\\n\";  ","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"This snippet would give us a parse error on PHP 5, but works as expected on PHP 7.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Similarly, nested static access is also possible:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    echo \"\\n\" . $person::getNewPerson()::$company . \"\\n\\n\";  ","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"In PHP 5, this would give us the classic `T_PAAMAYIM_NEKUDOTAYIM` syntax error.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Fatal Error with multiple \"default\" clauses","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"This is, again, an edge case and it's more related to logic errors in your code. There's no use for multiple default clauses in a switch, but because it never caused any trouble (e.g. no warnings), it can be difficult to detect the mistake. 
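Consider this sketch, with a hypothetical `$fruit` variable:","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    <?php  ","spans":[]},{"type":"paragraph","text":"    // Hypothetical example: two default clauses in one switch","spans":[]},{"type":"paragraph","text":"    switch ($fruit) {","spans":[]},{"type":"paragraph","text":"       case 'apple':","spans":[]},{"type":"paragraph","text":"           echo 'An apple a day...';","spans":[]},{"type":"paragraph","text":"           break;","spans":[]},{"type":"paragraph","text":"       default:","spans":[]},{"type":"paragraph","text":"           echo 'First default';","spans":[]},{"type":"paragraph","text":"           break;","spans":[]},{"type":"paragraph","text":"       default:","spans":[]},{"type":"paragraph","text":"           echo 'Second default';","spans":[]},{"type":"paragraph","text":"           break;","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"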
In PHP 5, the last *default* would be used, but in PHP 7 you will now get a *Fatal Error: Switch statements may only contain one default clause*.","spans":[{"start":100,"end":136,"type":"hyperlink","data":{"link_type":"Web","url":"https://wiki.php.net/rfc/switch.default.multiple"}},{"start":261,"end":268,"type":"em"},{"start":318,"end":384,"type":"em"}]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Engine Exceptions","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Engine exceptions are meant to facilitate handling errors in your application. Existing fatal and recoverable fatal errors were replaced by exceptions, making it possible for us to catch said errors and take action — like displaying them in a nicer way, logging them, or performing recovery procedures.","spans":[{"start":0,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://wiki.php.net/rfc/engine_exceptions_for_php7"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The implementation of engine exceptions was done in such a way as to keep backwards compatibility, but there is an edge case that could impact legacy applications when they have a custom error handling function in place. Consider the following code:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    <?php  ","spans":[]},{"type":"paragraph","text":"    set_error_handler(function ($code, $message) {  ","spans":[]},{"type":"paragraph","text":"       echo \"ERROR $code: \" . $message . \"\\n\\n\";","spans":[]},{"type":"paragraph","text":"    });","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    function a(ArrayObject $b) {  ","spans":[]},{"type":"paragraph","text":"       return $b;","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    a(\"test\");","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    echo \"Hello World\";  ","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"This code generates a recoverable error caused by the type mismatch when calling the function `a()` using a string as a parameter. In PHP 5, it generates an `E_RECOVERABLE_ERROR` that is caught by the custom error handler, so this is the output you get:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    ERROR 4096: Argument 1 passed to a() must be an instance of ArrayObject, string given, called in /data/Projects/php7dev/tests/test04.php on line 12 and defined(...)","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    Hello World  ","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Notice that the execution continues because the error was handled. In PHP 7, this code generates a `TypeError` exception (not an error!), so the custom error handler won't be called. 
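If you want execution to continue as it did in PHP 5, you can catch the new exception explicitly; here is a minimal sketch reusing the same function:","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    <?php  ","spans":[]},{"type":"paragraph","text":"    function a(ArrayObject $b) {  ","spans":[]},{"type":"paragraph","text":"       return $b;","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    try {","spans":[]},{"type":"paragraph","text":"       a(\"test\");","spans":[]},{"type":"paragraph","text":"    } catch (TypeError $e) {","spans":[]},{"type":"paragraph","text":"       // The engine exception is caught, so execution continues","spans":[]},{"type":"paragraph","text":"       echo \"ERROR: \" . $e->getMessage() . \"\\n\\n\";","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    echo \"Hello World\";  ","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Without such a try/catch block, 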
this is the output you get:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    Fatal error: Uncaught TypeError: Argument 1 passed to a() must be an instance of ArrayObject, string given, called in /vagrant/tests/test04.php on line 12 and defined in /vagrant/tests/test04.php:7  ","spans":[]},{"type":"paragraph","text":"    Stack trace:  ","spans":[]},{"type":"paragraph","text":"    #0 /vagrant/tests/test04.php(12): a('test')","spans":[]},{"type":"paragraph","text":"    #1 {main}","spans":[]},{"type":"paragraph","text":"      thrown in /vagrant/tests/test04.php on line 7","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The execution is stopped because the exception was not caught. To solve this problem, you should catch the exceptions using try/catch blocks. It's important to note that the [Exception hierarchy](https://wiki.php.net/rfc/throwable-interface) had to change to accommodate the new engine exceptions with minimal impact on legacy code:","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"list-item","text":"Throwable interface","spans":[]},{"type":"list-item","text":"Exception implements Throwable","spans":[]},{"type":"list-item","text":"ErrorException extends Exception","spans":[]},{"type":"list-item","text":"RuntimeException extends Exception","spans":[]},{"type":"list-item","text":"Error implements Throwable","spans":[]},{"type":"list-item","text":"TypeError extends Error","spans":[]},{"type":"list-item","text":"ParseError extends Error","spans":[]},{"type":"list-item","text":"AssertionError extends Error","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"This basically means that the new catch-all type is now `Throwable` instead of `Exception`. This should not impact any legacy code, but keep it in mind when handling the new engine exceptions in PHP 7.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"New Language Features","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Now the fun part — let's talk about the most exciting new features that will be available when you upgrade to PHP 7.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"New Operators","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"PHP 7 comes with two shiny new operators: the spaceship (or combined comparison) operator and the null coalesce operator.","spans":[{"start":46,"end":55,"type":"strong"},{"start":98,"end":120,"type":"strong"}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The spaceship operator ( `<=>` ), also known as the combined comparison operator, can be used to make chained comparisons more concise. Consider the following expression:","spans":[{"start":4,"end":22,"type":"hyperlink","data":{"link_type":"Web","url":"https://wiki.php.net/rfc/combined-comparison-operator"}}]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    $a <=> $b","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"This expression will evaluate to **-1** if `$a` is smaller than `$b`, **0** if `$a` equals `$b`, and **1** if `$a` is greater than `$b`. 
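That makes it a natural fit for comparison callbacks; here is a sketch with hypothetical data:","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    <?php  ","spans":[]},{"type":"paragraph","text":"    // Hypothetical data: sort team members by age, keeping keys","spans":[]},{"type":"paragraph","text":"    $ages = [ 'sammy' => 42, 'erika' => 35, 'brian' => 58 ];","spans":[]},{"type":"paragraph","text":"    uasort($ages, function ($a, $b) {","spans":[]},{"type":"paragraph","text":"       return $a <=> $b;","spans":[]},{"type":"paragraph","text":"    });","spans":[]},{"type":"paragraph","text":"    print_r($ages);  ","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"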
It's basically a shortcut for the following expression:","spans":[{"start":35,"end":37,"type":"strong"},{"start":72,"end":73,"type":"strong"},{"start":103,"end":104,"type":"strong"}]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    ($a < $b) ? -1 : (($a > $b) ? 1 : 0)","spans":[]},{"type":"paragraph","text":"    `}```   ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The null coalesce operator ( `??` ) also works as a shortcut for a common use case: a conditional assignment that checks if a value is set before using it. In PHP 5, you would usually do something like this:","spans":[{"start":4,"end":26,"type":"hyperlink","data":{"link_type":"Web","url":"https://wiki.php.net/rfc/isset_ternary"}}]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    $a = isset($b) ? $b : \"default\";","spans":[]},{"type":"paragraph","text":"`}```       ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"With the null coalesce operator in PHP 7, we can simply use:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    $a = $b ?? \"default\";","spans":[]},{"type":"paragraph","text":"    `}```   ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Scalar Type Hints","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"One of the most debated new features coming with PHP 7, scalar type hints, will finally make it possible to use integers, floats, strings, and booleans as type hints for functions and methods. By default, scalar type hints are non-restrictive, which means that if you pass a float value to an integer parameter, PHP will just coerce it to an int without generating any errors or warnings.","spans":[{"start":56,"end":73,"type":"hyperlink","data":{"link_type":"Web","url":"https://wiki.php.net/rfc/scalar_type_hints_v5","target":"_blank"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"It is possible, however, to enable a strict mode that will throw errors when the wrong type is passed as an argument. Consider the following code:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    <?php  ","spans":[]},{"type":"paragraph","text":"    function double(int $value)  ","spans":[]},{"type":"paragraph","text":"    {","spans":[]},{"type":"paragraph","text":"       return 2 * $value;","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    $a = double(\"5\");","spans":[]},{"type":"paragraph","text":"    var_dump($a);  ","spans":[]},{"type":"paragraph","text":"   `}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"This code won't generate any errors because we are not using strict mode. 
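Running it simply prints the coerced result:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    int(10)","spans":[]},{"type":"paragraph","text":"    `}```   ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"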
The only thing that will happen is a type conversion, so the string \"5\" passed as an argument will be coerced into an integer inside the double function.","spans":[{"start":208,"end":214,"type":"em"}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"If we want to make sure only integers are allowed to be passed to the double function, we can enable strict mode by including the directive `declare(strict_types = 1)` as the very first line in our script:","spans":[{"start":70,"end":76,"type":"em"}]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    <?php  ","spans":[]},{"type":"paragraph","text":"    declare(strict_types = 1);  ","spans":[]},{"type":"paragraph","text":"    function double(int $value)  ","spans":[]},{"type":"paragraph","text":"    {","spans":[]},{"type":"paragraph","text":"       return 2 * $value;","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    $a = double(\"5\");","spans":[]},{"type":"paragraph","text":"    var_dump($a);  ","spans":[]},{"type":"paragraph","text":"   `}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"This code will generate a Fatal error: Uncaught TypeError: Argument 1 passed to double() must be of the type integer, string given.","spans":[{"start":26,"end":130,"type":"em"}]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Return Type Hints","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Another important new feature coming with PHP 7 is the ability to define the return type of methods and functions, and it behaves in the same fashion as scalar type hints with regard to coercion and strict mode:","spans":[{"start":77,"end":88,"type":"hyperlink","data":{"link_type":"Web","url":"https://wiki.php.net/rfc/return_types"}}]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    <?php  ","spans":[]},{"type":"paragraph","text":"    function a() : bool  ","spans":[]},{"type":"paragraph","text":"    {","spans":[]},{"type":"paragraph","text":"       return 1;","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    var_dump(a());  ","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"This snippet will run without warnings and the returned value will be converted to bool automatically. If you enable strict mode (just the same as with scalar type hints), you will get a fatal error instead:","spans":[]},{"type":"paragraph","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"`Fatal error: Uncaught TypeError: Return value of a() must be of the type boolean, integer returned`","spans":[]},{"type":"paragraph","text":"`}```","spans":[]},{"type":"paragraph","text":"Once again, notice that these errors are actually Exceptions that can be caught and handled using try/catch blocks. It's also important to highlight that you can use any valid type hint, not only scalar types.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"What's Next","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The PHP 7 timeline indicates a stable release in mid-October, subject to quality. The release process is currently in its beta cycles, and a beta version is already available for tests. 
Check out the RFC listing all changes coming in PHP 7 for more information.","spans":[{"start":3,"end":18,"type":"hyperlink","data":{"link_type":"Web","url":"https://wiki.php.net/rfc/php7timeline"}},{"start":134,"end":146,"type":"hyperlink","data":{"link_type":"Web","url":"http://php.net"}},{"start":193,"end":231,"type":"hyperlink","data":{"link_type":"Web","url":"https://wiki.php.net/rfc#php_70"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Note that even though no new features will be included, some changes might still happen before the final release, so please don't use PHP 7 in production just yet! There's a Vagrant VM created and shared by Rasmus Lerdorf that you can use to test your current code on PHP 7. You are strongly encouraged to test your applications and report any problems you might find.","spans":[{"start":174,"end":184,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/rlerdorf/php7dev"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"by Erika Heidi","spans":[{"start":3,"end":14,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/erikaheidi"}}]}],"blog_post_date":"2015-07-15","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"getting-ready-for-php-7"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Brian Knox","author_image":null,"_meta":{"uid":"brian_knox"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"open source doors","copyright":null,"url":"https://images.prismic.io/www-static/34c2992c-c506-4f5c-976a-714b8b9907b0_open_source.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Community, Collaboration, and Problem Solving: The Value of Open Source","spans":[]}],"blog_post_content":[{"type":"heading2","text":"Open Source is a Tool","spans":[]},{"type":"paragraph","text":"When managing a large computing infrastructure, it's paramount to understand what's happening within it; as your infrastructure scales, this becomes increasingly challenging. Fortunately, while this can be a difficult problem, there is a powerful tool available to help solve it: open source software.","spans":[]},{"type":"paragraph","text":"When describing open source software as a tool, I'm referring to the process of open source itself, not a specific piece of open source software. Any open source tool with a history in a particular problem space can be thought of, at a higher level, as a collection of solved problems within that space. This is a perspective on open source software I gained from Pieter Hintjens while working on various ZeroMQ projects, and I find it a highly valuable framing of what open source provides.","spans":[]},{"type":"paragraph","text":"The community that forms around a particular successful piece of software over time evolves into a community of experts about the problems that software solves. Being a contributing member of such communities yields great benefits — not only better tools, but strong and lasting relationships with other people working on the same issues.","spans":[]},{"type":"heading2","text":"A Real World Example","spans":[]},{"type":"paragraph","text":"As a specific example, the metrics team at DO was recently trying to figure out how to safely and securely tail logs in real time from remote servers as conveniently as if they were local. 
To solve this, we first looked at Rsyslog, which is a logging daemon common on many Linux distributions.","spans":[{"start":224,"end":231,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.rsyslog.com/"}}]},{"type":"paragraph","text":"At its heart, Rsyslog provides the ability to receive, parse, filter, and forward log messages. Over the eleven years of its life, it has become a collection of solutions to a wide range of problems, like how to handle logs as efficiently as possible, how to deal with devices logging messages that aren't standards-compliant, how to insert logs into databases for indexing and storage, how to transform logs into a common structured format, and many more.","spans":[]},{"type":"paragraph","text":"Our problem fell within the scope of Rsyslog, but we couldn't quite solve it with that alone. Our solution was to integrate Rsyslog with CZMQ, a high-level C binding for ZeroMQ. CZMQ provides certificate-based authentication, libsodium-based encryption, and support for publisher-filtered publish-subscribe buses, among other things. This makes it a natural fit for our particular problem when combined with Rsyslog's parser and message templates.","spans":[{"start":147,"end":151,"type":"hyperlink","data":{"link_type":"Web","url":"http://czmq.zeromq.org/"}},{"start":180,"end":186,"type":"hyperlink","data":{"link_type":"Web","url":"http://zeromq.org/"}}]},{"type":"paragraph","text":"With this combination, we can create dynamic topic streams for log messages and provide them over secure, encrypted streams without additional infrastructure. We've contributed the input and output plugins back to Rsyslog, and they'll soon be part of the official packages, so anyone else with a similar problem can use them. That said, we've tried only one of many possible approaches to solving this problem, and we're looking forward to iterating on it, making improvements, and releasing more software around this idea.","spans":[{"start":181,"end":186,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/rsyslog/rsyslog/tree/master/contrib/imczmq"}},{"start":191,"end":197,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/rsyslog/rsyslog/tree/master/contrib/omczmq"}}]},{"type":"paragraph","text":"For example, by using these plugins with LogTalez, a minimal API and command line client built on top of the GoCZMQ Go bindings for CZMQ, logs from any Rsyslog aggregator can be requested by host and program name combinations, piped to standard command line tools, and/or used with useful Go libraries. We're currently working on the ability to extract metrics from log streams using these plugins with Prometheus, a monitoring system and time series database released by SoundCloud.","spans":[{"start":41,"end":49,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/logtalez"}},{"start":109,"end":115,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/zeromq/goczmq"}},{"start":405,"end":415,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/prometheus/prometheus"}}]},{"type":"heading2","text":"DO + OSS","spans":[]},{"type":"paragraph","text":"Actively taking part in open source development allows us to learn from each other's perspectives and have a conversation about different approaches to the same problems. 
The tools built through these conversations become shared repositories of knowledge gained through direct experience, freeing all of us to work on the core problems our individual organizations are addressing.","spans":[]},{"type":"paragraph","text":"I'm overjoyed to work at an organization that believes in this approach and puts resources behind it. Our Community team has created a site to highlight some of the open source projects that DO folks release and contribute to. Keep an eye on it for more to come!","spans":[{"start":125,"end":139,"type":"hyperlink","data":{"link_type":"Web","url":"https://developers.digitalocean.com/opensource/"}}]},{"type":"paragraph","text":"by Brian Knox","spans":[{"start":3,"end":13,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/taotetek"}}]}],"blog_post_date":"2015-06-29","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}}],"_meta":{"uid":"community-collaboration-and-problem-solving"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Sam Kottler","author_image":null,"_meta":{"uid":"sam_kottler"}},"blog_header_image":{"dimensions":{"width":784,"height":392},"alt":"Transparent huge pages and memory usage illustration of paper ","copyright":null,"url":"https://images.prismic.io/www-static/2e69f132-7800-41e7-992c-25deb946f219_image.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Transparent Huge Pages and Alternative Memory Allocators: A Cautionary Tale","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Recently, our site reliability engineering team started getting alerted about memory pressure on some of our Redis instances which have very small working sets *1. As we started digging into the issue, it became clear that there were problems with freeing memory after initial allocation because there were a relatively small number of keys but a comparatively large amount of memory allocated by `redis-server` processes. Despite initially looking like a leak, the problem was actually a bad interaction between an alternative memory allocator and transparent huge pages.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Background","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"If you already know what transparent huge pages are and how `madvise(2)` works, you can skim this section. For those who don't, read on.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Hold on — what are pages?","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"A page is a chunk of memory that a processor allocates for use, typically 4KB in size. When an application has to access virtual memory, it has to resolve the virtual memory address to the physical address of the page. The intermediary between physical addresses and mapped virtual memory is called the page table. For every 1GB of memory allocated in 4KB pages, there are 262,144 entries in the page table — and, of course, the more entries in the page table, the longer it takes to translate addresses.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Virtual memory makes page management even more complicated by allowing applications to address pages which don't actually exist in main memory. 
When this happens, it causes a fault, but the kernel knows how to handle faults in virtual memory and will pull pages off of secondary storage (e.g. local spinning rust or flash, NAS, etc.) without the application knowing the fault happened.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Okay, what are (transparent) huge pages?","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Huge pages are exactly what they sound like — pages that are much larger than 4KB in size. They cut down on the number of entries in the page table, thereby reducing the number of table lookups needed to find where a specific range of virtual memory is mapped.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Linux implements support for huge pages *2, which requires changes in software running in user space to take advantage of these potential performance benefits. They come in two varieties (2MB and 1GB — the available sizes depend upon the CPU in use) and have to be configured at boot time via parameters passed to the kernel.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The implementation of huge pages itself is pretty boring, so let's talk about transparent huge pages. This is where the fun begins.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"User space software has traditionally had to implement its own support for huge pages, but it's difficult to do and requires lots of testing to be utilized effectively. Rather than having these user space applications manage their interactions with huge pages, transparent huge pages allow applications to use huge pages… well, transparently. This manifests itself as the kernel doing some additional management of memory being allocated, marked, and subsequently freed with (in our case) 2MB underlying pages.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"This all sounds useful, but it turns out that some alternative memory allocators don't play nicely with transparent huge pages.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"madvise(2)","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"`madvise(2)` is not part of POSIX, but it is inspired by the POSIX function `posix_fadvise(2)` *3. It gives advice to the kernel about what it should do with a specific range of memory when it comes time to evict pages. The advice must be given for a specific range of memory starting at an address and extending `n` bytes past it.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The advice itself is passed as a parameter, like \"free this address and `n` bytes after it whenever you're ready\" (`MADV_DONTNEED`) or \"this address and `n` bytes after it are going to be used soon, so you should probably read some pages ahead\" (`MADV_WILLNEED`).","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Some memory allocators, like the one included as part of glibc, don't deal with marking pages using `madvise(2)`. 
`jemalloc(3)`, however, *does* mark ranges with `madvise(..., MADV_DONTNEED)`; it's important to note that the advice applies to an arbitrary range rather than to the \"left\" and \"right\" edges of a specific page or group of pages.","spans":[{"start":137,"end":141,"type":"em"}]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"So what happened?","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"This rabbit hole began when a `redis-server` process, which had recently been moved over to `jemalloc.so` via `LD_PRELOAD`, began using significant amounts of memory. Initial signs pointed to the fact that using an alternative allocator might be part of the issue, so that's where we started digging.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"It turns out that `jemalloc(3)` uses `madvise(2)` extensively to notify the operating system that it's done with a range of memory which it had previously `malloc`'ed. Because the machine used transparent huge pages, the page size was 2MB. As such, a lot of the memory which was being marked with `madvise(..., MADV_DONTNEED)` was within ranges substantially smaller than 2MB. This meant that the operating system was never able to evict pages which had ranges marked as `MADV_DONTNEED` because the entire page would have to be unneeded to allow it to be reused.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"So what initially looked like a leak was actually the operating system being unable to free memory because of the interaction between `madvise(2)` and transparent huge pages. *4 This led to sustained memory pressure on the machine and `redis-server` eventually getting OOM killed.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"by Sam Kottler","spans":[{"start":3,"end":14,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/samkottler"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"1. ","spans":[]},{"type":"paragraph","text":"Bugs around memory allocation often become more apparent with data stores because they tend to allocate and free memory at a relatively rapid pace. We use Redis as a cache and queue for ephemeral jobs, meaning that it allocates and frees substantial amounts of memory given the types of operations we are doing.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"2. ","spans":[]},{"type":"paragraph","text":"Huge pages are also incorporated into some other widely used Unix kernels, like FreeBSD, as superpages; the same concept is available on Windows as large pages. Despite the different names, the functionality is fundamentally the same. ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"3. ","spans":[]},{"type":"paragraph","text":"`posix_fadvise(2)` is specifically targeted towards file access rather than direct memory management, and takes a file descriptor as the first argument rather than a pointer to an address.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"4. ","spans":[]},{"type":"paragraph","text":"Note that disabling transparent huge pages isn't possible via `sysctl(8)`. Rather, it requires manually echoing settings into `/sys/kernel/mm/transparent_hugepage/*` at or after boot. 
In an init script or by hand: ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"```[bash]{`","spans":[]},{"type":"paragraph","text":"        if test -f /sys/kernel/mm/transparent_hugepage/enabled; then","spans":[]},{"type":"paragraph","text":"          echo never > /sys/kernel/mm/transparent_hugepage/enabled","spans":[]},{"type":"paragraph","text":"        fi","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"        if test -f /sys/kernel/mm/transparent_hugepage/defrag; then","spans":[]},{"type":"paragraph","text":"          echo never > /sys/kernel/mm/transparent_hugepage/defrag","spans":[]},{"type":"paragraph","text":"        fi","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]}],"blog_post_date":"2015-06-15","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"transparent-huge-pages-and-alternative-memory-allocators"}}}]}}}