{"componentChunkName":"component---src-templates-tag-jsx","path":"/blog/tag/engineering/2/","result":{"data":{"prismic":{"allFeaturedblogs":{"edges":[{"node":{"featured_blogs_enabled":true,"heading":[{"type":"paragraph","text":"Featured posts","spans":[]}],"featured_blog_1":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":395},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/6d8d81b1-971a-4313-b033-b4e125cb14a0_MondoDB-blog-header-790x395.PNG?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Introducing DigitalOcean Managed MongoDB – a fully managed, database as a service for modern apps","spans":[]}],"blog_post_date":"2021-06-29","blog_post_content":[{"type":"paragraph","text":"MongoDB is one of the most popular databases, and it’s ideal for apps that evolve rapidly and need to handle huge volumes of data and traffic. It offers advantages like flexible document schemas, code-native data access, change-friendly design, and easy horizontal scale-out.","spans":[{"start":22,"end":44,"type":"hyperlink","data":{"link_type":"Web","url":"https://db-engines.com/en/ranking","target":"_blank"}}]},{"type":"paragraph","text":"However, building and maintaining MongoDB clusters from the ground up can be a huge undertaking. Developers often complain that they have to spend their valuable time and resources on database management. Well, we’ve been listening and have some great news: accessing and managing MongoDB on DigitalOcean just got a lot simpler!","spans":[]},{"type":"paragraph","text":"We are excited to announce that DigitalOcean Managed MongoDB is now in General Availability. Managed MongoDB is a fully managed, database as a service (DBaaS) offering from DigitalOcean, built in partnership with and certified by MongoDB Inc. It provides you all the technical capabilities that make MongoDB so beloved in the developer community. 
Together we have ensured that you will get access to all the latest releases of the MongoDB document database as they become available.","spans":[{"start":32,"end":91,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/managed-databases-mongodb/"}},{"start":230,"end":241,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mongodb.com/","target":"_blank"}}]},{"type":"paragraph","text":"Managed MongoDB simplifies MongoDB administration. Developers of all skill levels, even those who do not have prior experience in databases, can spin up MongoDB clusters in just a few minutes. We handle the provisioning, managing, scaling, updates, backups, and security of your MongoDB clusters, allowing you to offload the complex, time-consuming, yet critical, database administration tasks to us. This empowers you to focus on what really matters: building awesome apps.","spans":[]},{"type":"embed","oembed":{"height":113,"width":200,"embed_url":"https://www.youtube.com/watch?v=NvHQSV7jnKA","type":"video","version":"1.0","title":"Create a MongoDB Database on DigitalOcean","author_name":"DigitalOcean","author_url":"https://www.youtube.com/c/Digitalocean","provider_name":"YouTube","provider_url":"https://www.youtube.com/","cache_age":null,"thumbnail_url":"https://i.ytimg.com/vi/NvHQSV7jnKA/hqdefault.jpg","thumbnail_width":480,"thumbnail_height":360,"html":"<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/NvHQSV7jnKA?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"}},{"type":"heading2","text":"Benefits of Managed MongoDB","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"list-item","text":"Easy set up and maintenance: We create the database clusters for you. 
Simply choose the cluster configuration (memory, disk size, number of nodes, etc.), and the data center in which you want to host the database. Follow a few simple steps, and your database cluster will be up and running in a matter of minutes. You can spin up clusters using the cloud control panel, CLI, or API.\n\n","spans":[{"start":0,"end":28,"type":"strong"}]},{"type":"list-item","text":"Automatic daily backups with point-in-time recovery: Data is one of the most important assets of an app, so it’s critical to back up your database. We take backups of your entire clusters automatically on a daily basis, for free. We also provide point-in-time recovery for 7 days, so that if things go wrong due to human error, machine error, or some combination of both, you can easily restore the database as it was at any point in the previous 7 days. \n\n","spans":[{"start":0,"end":52,"type":"strong"}]},{"type":"list-item","text":"Automatic updates and access to latest MongoDB releases: You get access to MongoDB 4.4. This is the latest release of MongoDB and comes packed with numerous enhancements like hedged reads, Rust, and Swift drivers. Since we have developed Managed MongoDB in partnership with MongoDB Inc., you will always get access to new releases as they become available. With Managed MongoDB, the updates happen automatically. Just select a date and time for the updates and we take care of the rest. This makes it easy to stay up to date with MongoDB releases without disrupting your business.\n\n","spans":[{"start":0,"end":56,"type":"strong"},{"start":148,"end":169,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mongodb.com/new","target":"_blank"}}]},{"type":"list-item","text":"High availability with automated failover: If your database goes down, it can take down the entire app, leading to bad customer experiences. With Managed MongoDB, you can easily minimize the downtime for your database and make it highly available with standby nodes. 
Standby nodes add redundancy, so if, for example, the primary node fails, the standby node is immediately promoted to primary and begins serving requests while we provision a replacement standby node in the background.\n\n","spans":[{"start":0,"end":42,"type":"strong"}]},{"type":"list-item","text":"Scale up easily to handle traffic spikes: As your app gains traction and usage grows, it’s important to have a database that can keep up with the increased demand. With Managed MongoDB, you can easily scale up the size of database nodes when needed.\n\n","spans":[{"start":0,"end":41,"type":"strong"}]},{"type":"list-item","text":"Secure by default: Since data is critical, it also needs to be secure. We encrypt data at rest with LUKS and in transit with SSL. When you create a new cluster, it’s placed in a VPC network by default, which provides a more secure connection between resources. You can also restrict access to your nodes to prevent brute-force password and denial-of-service attacks.","spans":[{"start":0,"end":18,"type":"strong"},{"start":178,"end":189,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/networking/vpc/"}}]},{"type":"heading2","text":"The need for Managed Databases","spans":[]},{"type":"paragraph","text":"DigitalOcean’s mission is to simplify cloud computing so developers, startups, and SMBs can spend more time building software that changes the world. While databases are a critical component of any application, building, maintaining, and scaling them can be complex and time-consuming. For developers who are building apps for their business, database administration is often not a core focus area. But it’s quite common to find developers who write the code and then also roll up their sleeves to maintain databases. Such users would rather offload the tedious database administration and focus their limited time and energy on building and enhancing their apps. 
","spans":[]},{"type":"paragraph","text":"With this in mind, we introduced Managed Databases a couple of years ago and are excited to add Managed MongoDB to our portfolio. With this release, DigitalOcean Managed Databases now supports the following engines:","spans":[{"start":33,"end":50,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/managed-databases/"}}]},{"type":"image","url":"https://images.prismic.io/www-static/87745cc1-1c5f-4463-b104-104b7fc30dc7_managed-databases-logos.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":849,"height":104}},{"type":"paragraph","text":"Managed MongoDB launch comes on the heels of DigitalOcean App Platform, a modern, reimagined PaaS (Platform as a Service) that we released a few months ago. App Platform makes it very easy to build, deploy, and scale apps and static sites. You can deploy code by simply pointing to your GitHub and GitLab repos, and App Platform will do all the heavy lifting of managing infrastructure, app runtimes, and dependencies. 
App Platform, along with Managed Databases, helps fulfill DigitalOcean’s mission by empowering developers, startups, and SMBs to focus more on their apps, and less on the underlying infrastructure and databases.","spans":[{"start":45,"end":70,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/app-platform/"}}]},{"type":"heading2","text":"How Managed MongoDB works","spans":[]},{"type":"paragraph","text":"DigitalOcean provides you with various compute options to build your apps like:","spans":[]},{"type":"list-item","text":"Droplets: On-demand, Linux virtual machines suitable for production business applications and personal passion projects.","spans":[{"start":0,"end":8,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/droplets/"}}]},{"type":"list-item","text":"DigitalOcean Kubernetes: Managed Kubernetes with automatic scaling, upgrades, and a free control plane.","spans":[{"start":0,"end":23,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/kubernetes/"}}]},{"type":"list-item","text":"DigitalOcean App Platform: A fully managed Platform as a Service.","spans":[{"start":0,"end":25,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/app-platform/"}}]},{"type":"paragraph","text":"No matter which compute option you choose to build your apps, you can easily add Managed MongoDB to it. 
In addition to this, Managed MongoDB also integrates with the Node.js 1-Click App from DigitalOcean Marketplace, making it a lot easier to build Node.js apps.","spans":[{"start":166,"end":215,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/nodejs"}}]},{"type":"heading2","text":"Simple, predictable pricing","spans":[]},{"type":"paragraph","text":"Just like all DigitalOcean products, Managed MongoDB provides simple, predictable pricing that allows you to control costs and prevent any surprise bills. You can spin up a database cluster for just $15/month, or a highly available three-node replica set for $45/month. Click here for more information.","spans":[{"start":270,"end":301,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/pricing/#managed-databases"}}]},{"type":"heading2","text":"Regional availability","spans":[]},{"type":"paragraph","text":"Managed MongoDB is currently available in the following regions:","spans":[]},{"type":"list-item","text":"NYC3 (New York, USA)","spans":[]},{"type":"list-item","text":"FRA1 (Frankfurt, Germany)","spans":[]},{"type":"list-item","text":"AMS3 (Amsterdam, Netherlands)","spans":[]},{"type":"paragraph","text":"We will be making Managed MongoDB available in other regions soon. 
Please check out the release notes for the most up-to-date information on regional availability.","spans":[{"start":88,"end":101,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/release-notes/"}}]},{"type":"heading2","text":"Join us at deploy, DigitalOcean’s virtual user conference","spans":[]},{"type":"paragraph","text":"Today we have deploy, DigitalOcean’s signature user conference, which focuses on celebrating, educating, and connecting awesome builders from all over the world.","spans":[{"start":14,"end":20,"type":"hyperlink","data":{"link_type":"Web","url":"https://deploy.digitalocean.com/home"}}]},{"type":"paragraph","text":"Check out the keynote session from DigitalOcean's CEO, Yancey Spruill, in which he talks about where we're headed as a company and shares some exciting product updates. His keynote will be followed by sessions from community members, engineers, customers, and other experts that are building technologies and businesses powered by the cloud. With live Q&A and an active Discord server, there’s ample opportunity to engage and learn something new. Click here to attend the deploy conference.","spans":[{"start":14,"end":69,"type":"hyperlink","data":{"link_type":"Web","url":"https://deploy.digitalocean.com/agenda/session/552806"}},{"start":347,"end":384,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/deploy-discord"}},{"start":461,"end":489,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/deploy"}}]},{"type":"paragraph","text":"We are also launching a hackathon for DigitalOcean Managed MongoDB. Learn how you can participate, submit an app, and get a t-shirt.","spans":[{"start":24,"end":66,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/mongodb-hackathon"}}]},{"type":"paragraph","text":"We hope you will give Managed MongoDB a try. Here are some sample datasets and sample apps that you can use to kick the tires. 
Check out the docs and let us know what you think!","spans":[{"start":22,"end":43,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/databases/new?engine=mongodb"}},{"start":59,"end":90,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/do-community/mongodb-resources","target":"_blank"}},{"start":141,"end":145,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.digitalocean.com/products/databases/mongodb/"}}]},{"type":"paragraph","text":"If you’d like to have a conversation about using DigitalOcean and Managed MongoDB in your business, please feel free to contact our sales team.","spans":[{"start":120,"end":142,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/company/contact/sales/"}}]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"André Bearfield","spans":[]},{"type":"paragraph","text":"Director of Product Management","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"André Bearfield","author_image":{"dimensions":{"width":553,"height":547},"alt":"André Bearfield","copyright":null,"url":"https://images.prismic.io/www-static/fdc7c85186f0a850b04083e1d4306bd1c19772e8_andre-bearfield.png?auto=compress,format"},"_meta":{"uid":"andre-bearfield"}},"_meta":{"uid":"introducing-digitalocean-managed-mongodb"}},"featured_blog_2":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":400},"alt":"Droplet Console","copyright":null,"url":"https://images.prismic.io/www-static/710499ae-78cc-4179-afc1-15793637b200_DODX3727-790x400-logo-2.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Securely connect to Droplets with SSH key pairs using a new Droplet 
Console","spans":[]}],"blog_post_date":"2021-08-10","blog_post_content":[{"type":"paragraph","text":"The famous author Ken Blanchard once said, “Feedback is the breakfast of champions.” This is something we truly believe at DigitalOcean, and we always strive to enhance our products based on customer feedback.","spans":[]},{"type":"paragraph","text":"With this goal in mind, we are excited to introduce a new Droplet Console that will make it much easier to connect to your Droplets securely. The new Droplet Console provides one-click SSH access to your Droplets through a native-like SSH/Terminal experience. It also eliminates the need for a password or manual configuration of SSH keys. Starting today, we’re pleased to announce that the new Droplet Console is now available to all Droplet users.","spans":[]},{"type":"heading2","text":"Why you should be using Secure Shell (SSH)","spans":[]},{"type":"paragraph","text":"Password-based security is notoriously insecure due to password fatigue and the overuse of passwords such as ‘123456’. Secure Shell (SSH) is a network protocol that solves this by replacing passwords with cryptographic authentication, enabling two computers to communicate and share data securely. At a high level, SSH works by creating cryptographic key pairs consisting of a public key and a private key, which are computer-generated and stored separately to ensure their security. ","spans":[{"start":80,"end":117,"type":"hyperlink","data":{"link_type":"Web","url":"https://cybernews.com/best-password-managers/most-common-passwords/"}}]},{"type":"paragraph","text":"SSH has become the default protocol for secure remote access in many industries, but it was difficult to use SSH keys with DigitalOcean’s current Recovery (VNC) console, which is why we developed our new Droplet Console. The new Droplet Console is backed by an agent that securely manages the key pair, while also providing one-click SSH access to our users. 
You can see the full list of features below.","spans":[]},{"type":"heading2","text":"The new Droplet Console: More time saving, less time wasting","spans":[]},{"type":"paragraph","text":"The new Droplet Console is for everyone who is looking to build fast, secure apps and avoid hassles with SSH access and usability issues.","spans":[]},{"type":"paragraph","text":"In addition to easier SSH access, the new Droplet Console comes with:","spans":[]},{"type":"list-item","text":"Copy/paste text: Instead of typing lengthy key pairs and text manually, you can use copy/paste to save time. ","spans":[{"start":0,"end":17,"type":"strong"}]},{"type":"list-item","text":"Multi-color support: Multi-color output makes the console more useful and intuitive, breaking with the conventional appearance of black text on a white background. ","spans":[{"start":0,"end":21,"type":"strong"}]},{"type":"list-item","text":"Multi-language support: DigitalOcean’s new Droplet Console supports multiple languages, meaning you can now type and view any content in any language that is supported by UTF-8.","spans":[{"start":0,"end":24,"type":"strong"}]},{"type":"list-item","text":"OS/images supported: Linux distributions (Ubuntu 16.04-20.04, Fedora 32 & 33, Debian 9, CentOS 7.6 & 8.3, CentOS 8 Stream, and Rocky Linux) and Marketplace images.","spans":[{"start":0,"end":20,"type":"strong"},{"start":144,"end":155,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/"}}]},{"type":"paragraph","text":"The new Droplet Console is available by default on any new Droplets you spin up. You can also enable it manually on older Droplets. 
Click here to learn more!","spans":[{"start":132,"end":157,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.digitalocean.com/products/droplets/how-to/connect-with-console/"}}]},{"type":"paragraph","text":"Check out this short walkthrough video that shows the new Droplet Console in action: ","spans":[]},{"type":"embed","oembed":{"type":"video","embed_url":"https://www.youtube.com/watch?v=Qt7QihVuxiE","title":"Access Your Droplet Terminal Through the Web Console","provider_name":"YouTube","thumbnail_url":"https://i.ytimg.com/vi/Qt7QihVuxiE/hqdefault.jpg","provider_url":"https://www.youtube.com/","author_name":"DigitalOcean","author_url":"https://www.youtube.com/c/Digitalocean","height":113,"width":200,"version":"1.0","thumbnail_height":360,"thumbnail_width":480,"html":"<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/Qt7QihVuxiE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"}},{"type":"paragraph","text":"We hope you’re excited about the new Droplet Console. 
You’re welcome to spin some Droplets up right now, and try out the new Droplet Console – why wait?","spans":[{"start":72,"end":103,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/droplets/new"}}]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"Harsh Banwait, Senior Product Manager","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"Harsh Banwait","author_image":{"dimensions":{"width":600,"height":399},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/e83ff690-b20c-4d88-a2b6-57e562558cd6_download.png?auto=compress,format"},"_meta":{"uid":"harsh-banwait"}},"_meta":{"uid":"new-droplet-console-ssh-support"}},"featured_blog_3":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":400},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/588e28d3-d41e-480b-937b-8c3b19201f6e_DODX3568-790x400-Blog.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"How to scale your SaaS product without breaking the bank","spans":[]}],"blog_post_date":"2021-06-22","blog_post_content":[{"type":"paragraph","text":"These days, if you are in the business of software, chances are you are delivering or plan to deliver your services using a Software-as-a-Service (SaaS) model. 
A combination of internet-based delivery, subscription-based pricing, and low-friction product experiences has made SaaS solutions valuable tools for their users, and an excellent vehicle for software builders looking to distribute their products.","spans":[]},{"type":"paragraph","text":"These factors have made SaaS solutions ubiquitous; SaaS is the largest segment in the public cloud market, and is used to provide functionality ranging from personal finance apps for consumers, to productivity software for businesses, and even tools and services for software developers themselves to compose their applications and simplify their workflows. It is also not uncommon to find micro-SaaS applications being built for specific industries such as retail, job functions such as accounting or marketing, or tasks such as event management. ","spans":[]},{"type":"paragraph","text":"The best thing about this SaaS wave is that it has allowed a new generation of software builders to build and monetize applications and participate in the digital economy. Previously, you had to be a big company with lots of resources, name recognition, and distribution networks to successfully sell software products. Now, irrespective of whether you are a single person working on a passion project, a small team of developers in a startup, or a small and medium-sized business (SMB), the SaaS model enables you to express your ideas in the form of software and deliver them to customers anywhere in the world.","spans":[]},{"type":"heading2","text":"The unique challenges of building SaaS solutions","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Despite the opportunities that come with the widespread adoption of SaaS products, software builders still have to answer key questions in their journey to building successful SaaS products. 
Understanding what customers to target, features to prioritize, how to price your product, and how to acquire customers are all critical questions to figure out while you are also doing the important job of actually building and operating the product. ","spans":[]},{"type":"paragraph","text":"Writing the code, testing, deployment, monitoring the usage in production, and ensuring that your apps are able to handle the additional demand when customer base and usage grows are all essential and time-consuming tasks.","spans":[]},{"type":"paragraph","text":"Additionally, being able to test multiple ideas, pivot, and double down on the ideas that actually work is critical in early stages of SaaS development. Once growth comes, it is equally important to scale up without compromising on performance or reliability. Needless to say, all of this needs to be economically viable as well, since not everyone has the resources of large SaaS providers like Salesforce or Adobe.","spans":[]},{"type":"heading2","text":"Cloud Computing enables builders but also poses challenges","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Fortunately, for the act of building and operating your apps, cloud computing can help take some load off your shoulders. Unless you have the scale and resources of Facebook, chances are you are not going to set up your own data centers to host the computing infrastructure that powers your SaaS company. Public cloud infrastructure providers can bring great value to SaaS builders by providing on-demand computing services with usage-based pricing. However, just like how the legacy software companies weren't built for the SaaS model, the early (and big) cloud computing services were not optimized for the unique needs of small SaaS building teams. 
","spans":[]},{"type":"paragraph","text":"Smaller SaaS teams face challenges with large cloud computing providers, including:","spans":[]},{"type":"heading4","text":"Too many technology options","spans":[]},{"type":"paragraph","text":"There are just too many options for tech stacks on which to build your SaaS: programming languages, application development frameworks, libraries, runtime environments, architectural patterns, and deployment models. The list is growing by the day.","spans":[]},{"type":"heading4","text":"Complexity of cloud computing services","spans":[]},{"type":"paragraph","text":"Even when you have decided on a technology stack, there is a lot of cloud vendor-specific terminology you need to learn and heavy lifting you need to do to build on the cloud, not all of which contributes to making your SaaS applications successful.","spans":[]},{"type":"heading4","text":"Unpredictable costs","spans":[]},{"type":"paragraph","text":"The experimentation necessary in early stages of SaaS development, as well as the scaling of applications required during the growth phase, calls for affordable and predictable pricing from your cloud provider. The last thing SaaS teams want is surprising and indecipherable bills from their cloud provider. Unfortunately, smaller businesses often experience unpredictable costs with cloud providers who are busy serving only large enterprises.","spans":[]},{"type":"heading2","text":"DigitalOcean provides a simple, cost-effective solution for SaaS builders","spans":[]},{"type":"paragraph","text":"Fortunately, at DigitalOcean we have a laser focus on small software development teams, who are trying to build the next generation of applications. 
Today, DigitalOcean customers are already building SaaS applications which serve all kinds of customers.","spans":[{"start":191,"end":217,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/solutions/saas/"}}]},{"type":"paragraph","text":"We believe SaaS builders should focus on building apps that power their business, and not spend their valuable time on managing infrastructure. That is exactly what we have been able to enable through our intuitive products that are built for scale and reliability.","spans":[{"start":205,"end":223,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/"}}]},{"type":"list-item","text":"Vidazoo is an advertising technology company specializing in video streaming and serving. It serves video ads to thousands of websites and handles close to 10 billion requests per day. \n\n“We are as much a data company as an adtech company. Our business relies on speedy and accurate data processing at massive scale. DigitalOcean provides us the perfect set of tools to operate our SaaS business profitably, while not making us feel the need to become full time system administrators. We plan to move a lot of our apps to DigitalOcean App Platform and other fully managed products.” - Roman Svichar, CTO of Vidazoo","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://vidazoo.com/"}},{"start":187,"end":583,"type":"em"}]},{"type":"paragraph","text":"We believe in meeting customers where they are. If they already have an understanding of cloud infrastructure technologies, they should be able to leverage that knowledge and get started with our products without any further ramp up.","spans":[]},{"type":"list-item","text":"Whatfix is an enterprise SaaS provider that offers a digital adoption platform to businesses. 
The company helps enterprises gain the full value of their investments in enterprise applications by providing real-time, interactive, and contextual guidance to users of those applications. \n\n“What we really love about the DigitalOcean platform is the ease of use. We feel like we know infrastructure and can handle most of the configuration and management. What we needed from a cloud was not bells and whistles but efficiency and reliability. DigitalOcean provides us a platform to build our apps and then gets out of the way. Just how we like it.” - Achyuth Krishna, Director of Engineering of Whatfix","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://whatfix.com/blog/driving-the-future-now-were-excited-to-announce-our-90-million-series-d-funding/"}},{"start":287,"end":648,"type":"em"}]},{"type":"paragraph","text":"We understand that scaling while maintaining reliability of applications and profitability of business is important, so we provide robust solutions which minimize downtime.","spans":[]},{"type":"list-item","text":"Centra is a SaaS-based e-commerce platform for global direct-to-consumer and wholesale e-commerce brands. Centra provides a powerful e-commerce backend that lets brands build pixel-perfect, custom designed, online flagship stores. \n\n“How do we enable our customers to create differentiated online experiences? How do we ensure their e-commerce apps stay up and running at all times? How do we scale on-demand when traffic grows or new customers come in? These are the questions that we ask ourselves every day. 
Thankfully, we have a partner in DigitalOcean that provides just the platform to answer those questions enabling us to guarantee 99.9% uptime for our clients.” - Martin Jensen, CEO of Centra","spans":[{"start":0,"end":6,"type":"hyperlink","data":{"link_type":"Web","url":"https://centra.com/"}},{"start":233,"end":673,"type":"em"}]},{"type":"paragraph","text":"These are just a few examples of SaaS businesses finding success on DigitalOcean. We are constantly amazed by the creativity and innovation that software builders are utilizing our platform for. If you are interested in learning more about product updates, technical deep-dives and best practices for building SaaS products and businesses, please contact us to learn how we can help you get started. ","spans":[{"start":340,"end":357,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/migrate/?utmmedium=blog","target":"_blank"}}]},{"type":"paragraph","text":"Come build with DigitalOcean!","spans":[]},{"type":"paragraph","text":"Looking to migrate your SaaS to DigitalOcean? 
Leverage free infrastructure credits, robust training, and technical support to ensure a worry-free migration.","spans":[{"start":0,"end":156,"type":"strong"},{"start":0,"end":156,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/migrate/?utmmedium=blog","target":"_blank"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Raman Sharma","spans":[]},{"type":"paragraph","text":"Vice President, Product & Programs Marketing","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Developer Relations","_linkType":"Link.document","_meta":{"uid":"developer-relations"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"Raman Sharma","author_image":{"dimensions":{"width":512,"height":512},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/497b4b14-d192-493a-8b66-7ae176ba99f3_raman.png?auto=compress,format"},"_meta":{"uid":"raman-sharma"}},"_meta":{"uid":"how-to-scale-your-saas-product-without-breaking-the-bank"}}}}]}}},"pageContext":{"limit":12,"skip":12,"numTagPages":5,"currentPage":2,"uid":"engineering","data":[{"node":{"author":{"_linkType":"Link.document","author_name":"Phil Dougherty","author_image":{"dimensions":{"width":573,"height":557},"alt":"Phil Dougherty","copyright":null,"url":"https://images.prismic.io/www-static/ef89c36114b5e1872e8de0b79eb679b9be5b3765_phil.png?auto=compress,format"},"_meta":{"uid":"phil_dougherty"}},"blog_header_image":{"dimensions":{"width":1200,"height":640},"alt":"Kubernetes illustration","copyright":null,"url":"https://images.prismic.io/www-static/f0ae65520153925bcf7961cce341d2b1a61a293b_image8-1.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"DigitalOcean Kubernetes Is Now Generally Available and Getting Even Better","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Today, to coincide with the first day of CNCF’s KubeCon event, we are delighted to announce that DigitalOcean’s Managed Kubernetes services is now 
production ready and Generally Available.","spans":[{"start":41,"end":61,"type":"hyperlink","data":{"link_type":"Web","url":"https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/"}}]},{"type":"paragraph","text":"When we introduced DigitalOcean Kubernetes last year, we made it possible for you to spin up Kubernetes in minutes. With our simple and scalable Kubernetes service, all you need to do is define the size and location of your worker nodes, while DigitalOcean provisions, manages, and optimizes the services needed to run your Kubernetes cluster.","spans":[{"start":8,"end":42,"type":"hyperlink","data":{"link_type":"Web","url":"https://blog.digitalocean.com/digitalocean-releases-k8s-as-a-service/"}},{"start":125,"end":163,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/kubernetes/"}}]},{"type":"paragraph","text":"Hundreds of businesses and thousands of developers are running their apps using DigitalOcean Kubernetes, and we are grateful for the encouraging feedback we’ve received.","spans":[]},{"type":"preformatted","text":"At Grammofy, our goal is to build exciting digital music experiences for classical music listeners. Since we started using DigitalOcean Kubernetes, we need not spend nearly as much time on IT administration, and even developers without an IT background can control our infrastructure. We are a small company and this frees valuable resources for product development.– Matthias Kümmerer, CTO, Grammofy","spans":[{"start":366,"end":400,"type":"strong"}]},{"type":"preformatted","text":"We are a leading developer & operator of mobile casual games with offices in San Francisco and Singapore. We adopted DigitalOcean's Managed Kubernetes to deploy one of our analytics systems. We chose DigitalOcean because of its developer-friendly dashboards, clear pricing schema, and excellent documentation. 
These things made it possible for Super Lucky to create a Kubernetes cluster, stateful deployments, load balancers and services in a matter of days instead of weeks.– Alan Morales, Senior Software Engineer, Super Lucky","spans":[{"start":475,"end":528,"type":"strong"}]},{"type":"paragraph","text":"With the help of our customers, we’ve been working hard on enhancements to our Kubernetes service. Most notably, we’re pleased to introduce a free, integrated monitoring service that automatically provides insights and alerts for your clusters. In addition, DigitalOcean Kubernetes now supports the latest Kubernetes release, 1.14, which introduced 31 enhancements to the container orchestration platform. Now you can also schedule automatic patch version upgrades, e.g. 1.14.1 to 1.14.2, for your clusters.","spans":[{"start":299,"end":330,"type":"hyperlink","data":{"link_type":"Web","url":"https://kubernetes.io/blog/2019/03/25/kubernetes-1-14-release-announcement/"}}]},{"type":"paragraph","text":"Finally, because the service is now Generally Available, you can now spin up clusters in each city where we have a data center: New York, San Francisco, Amsterdam, London, Frankfurt, Bangalore, and Toronto.","spans":[]},{"type":"heading2","text":"Monitor resources and manage your Kubernetes cluster, all in one place","spans":[]},{"type":"paragraph","text":"DigitalOcean allows you to run your Kubernetes cluster on top of Standard, General Purpose, and CPU Optimized Droplets, which offer numerous combinations of CPU, RAM, and SSD. In order to right-size your infrastructure for your applications and services, you need visibility into your cluster’s resource utilization. 
Now, when you visit the Kubernetes section in your dashboard, you’ll see average resource usage for each of your Kubernetes clusters.","spans":[{"start":75,"end":90,"type":"hyperlink","data":{"link_type":"Web","url":"https://blog.digitalocean.com/general-purpose-droplets-let-you-do-more/"}}]},{"type":"paragraph","text":"From there, you can drill in to view time series graphs for your overall cluster, its node pools, and individual worker nodes. DigitalOcean currently provides these metrics:","spans":[]},{"type":"list-item","text":"CPU usage","spans":[]},{"type":"list-item","text":"Load average (1, 5, and 15 minute)","spans":[]},{"type":"list-item","text":"Memory usage","spans":[]},{"type":"list-item","text":"Disk usage","spans":[]},{"type":"list-item","text":"Disk I/O","spans":[]},{"type":"list-item","text":"Private bandwidth","spans":[]},{"type":"list-item","text":"Public bandwidth","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/3561f3203ff1203c1eda499dee5acd39457bb770_image4-1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":820,"height":320}},{"type":"image","url":"https://images.prismic.io/www-static/29845c99a3393858f5f7265cdd45cbc733e10393_image9-2.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":820,"height":320}},{"type":"paragraph","text":"To stay on top of potential issues for individual worker nodes, you can also set alerting thresholds for CPU usage, memory usage, disk usage, disk I/O, incoming bandwidth, and outgoing bandwidth. 
DigitalOcean can alert you via Slack or email.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/3dd70e2185faeeaeb0c001963125fd2a7390c62e_image1-1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1999,"height":708}},{"type":"heading2","text":"Advanced metrics simplify monitoring of your Kubernetes deployment","spans":[]},{"type":"paragraph","text":"In addition, DigitalOcean also provides an option for advanced health metrics. To activate these additional metrics, you’ll need to deploy the kube-state-metrics agent to your cluster.","spans":[{"start":143,"end":161,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kube-state-metrics"}}]},{"type":"paragraph","text":"kube-state-metrics listens to the Kubernetes API server and generates metrics about the state of your cluster deployment and resource allocation, including:","spans":[]},{"type":"list-item","text":"Pod deployment status","spans":[]},{"type":"list-item","text":"DaemonSet deployment status","spans":[]},{"type":"list-item","text":"StatefulSet pod deployment status","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/a50082c1fad4e92ab0029fbc2ef0b1f552150cdf_image6-1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":820,"height":320}},{"type":"paragraph","text":"If you’re interested in obtaining additional insight into the performance of your Kubernetes cluster, you may want to consider deploying a service mesh such as Linkerd.","spans":[{"start":139,"end":151,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/an-introduction-to-service-meshes"}},{"start":160,"end":167,"type":"hyperlink","data":{"link_type":"Web","url":"https://linkerd.io/"}}]},{"type":"preformatted","text":"TEN7 is a full-service agency that creates and cares for Drupal-powered websites. 
When we were looking for a Kubernetes provider, we first tried Google Kubernetes Engine, but weren't impressed with its pricing or their service. With DigitalOcean, we get strong API support, clear pricing, fast and friendly customer support. The difference is night-and-day. We chose DigitalOcean.– Ivan Stegic, President, TEN7","spans":[{"start":380,"end":410,"type":"strong"}]},{"type":"heading2","text":"Come see us at Kubecon","spans":[]},{"type":"paragraph","text":"If you’re in Barcelona for this week’s Kubecon, we hope that you’ll come to see us at our booth (located at P6). You might also want to check out tomorrow’s talk by our developer advocate, Eddie Zaneski, in which he’ll share his wisdom about monitoring and logging for Kubernetes. We look forward to meeting many of you there!","spans":[{"start":189,"end":202,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/eddiezane"}},{"start":225,"end":279,"type":"hyperlink","data":{"link_type":"Web","url":"https://kccnceu19.sched.com/event/MPba/from-new-cluster-to-insight-deploying-monitoring-and-logging-to-kubernetes-eddie-zaneski-digitalocean"}}]},{"type":"heading2","text":"Coming soon: Marketplace 1-Click Apps for Kubernetes","spans":[]},{"type":"paragraph","text":"Now that DigitalOcean Kubernetes is Generally Available, we’re turning our focus to additional features that will help you do even more with the platform. One high priority: 1-Click Apps for Kubernetes. Over the past few years, a CNCF project called Helm has emerged as the de facto package manager for Kubernetes. With Helm, you can deploy software packages called Charts to your Kubernetes clusters, often to facilitate monitoring, logging, service discovery, and more. While you can deploy Helm charts to your DigitalOcean Kubernetes clusters today, we’re improving DigitalOcean Marketplace so that it includes Kubernetes-ready applications. 
Once released, you'll be able to deploy 1-Click Apps and Helm charts to your clusters.","spans":[{"start":250,"end":254,"type":"hyperlink","data":{"link_type":"Web","url":"https://helm.sh/"}},{"start":486,"end":551,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-install-software-on-kubernetes-clusters-with-the-helm-package-manager"}}]},{"type":"paragraph","text":"If you’re a software vendor interested in including your applications in the DigitalOcean Marketplace, we’d love to hear from you.","spans":[{"start":103,"end":129,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/vendors"}}]},{"type":"heading2","text":"Stay tuned","spans":[]},{"type":"paragraph","text":"We’ve got much more in store for DigitalOcean Kubernetes, including improvements like auto-scaling and a Container Registry. But what will not change is that you can get started with DigitalOcean Kubernetes without breaking the bank, since your master node is free. 
If you haven’t yet, we encourage you to spin up a DigitalOcean Kubernetes cluster!","spans":[{"start":306,"end":347,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/kubernetes/clusters/new"}}]},{"type":"paragraph","text":"Happy coding,","spans":[]},{"type":"paragraph","text":"Phil Dougherty","spans":[]},{"type":"paragraph","text":"Senior Product Manager","spans":[]}],"blog_post_date":"2019-05-21","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"News","_linkType":"Link.document","_meta":{"uid":"news"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"doks-in-ga"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":null,"blog_headline":[{"type":"heading1","text":"A Message About Intel’s Microarchitectural Data Sampling (MDS) Vulnerability","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Update: June 6, 2019","spans":[{"start":0,"end":20,"type":"em"}]},{"type":"paragraph","text":"Today, we’re happy to share that we have completed Microarchitectural Data Sampling (MDS) mitigations across our fleet. While we applied microcode to mitigate the potential impact of the vulnerability to a majority of our platform several weeks ago, we were awaiting a microcode to apply to a small percentage of servers. 
Earlier this week, we received the updated microcode from Intel; our team then worked to apply it across the remaining servers as quickly as possible and completed those efforts today.","spans":[]},{"type":"paragraph","text":"MDS vulnerability mitigations have been deployed across our entire platform, but we do strongly recommend that all users take steps to ensure your Droplets are up to date and secure, if you have not done so already. If you have already updated your Droplets, no additional action is required.","spans":[{"start":135,"end":181,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/droplets/how-to/kernel/upgrade"}}]},{"type":"paragraph","text":"Original Post: May 14, 2019","spans":[{"start":0,"end":27,"type":"em"}]},{"type":"paragraph","text":"Today, Intel released a statement regarding Microarchitectural Data Sampling (MDS) – also referred to as ZombieLoad – a significant security vulnerability that affects cloud providers with multi-tenant environments, including DigitalOcean. Left unmitigated, this vulnerability could allow sophisticated attackers to gain access to sensitive data, secrets, and credentials that could enable privilege escalation and unauthorized access to user data.","spans":[]},{"type":"paragraph","text":"We have been working closely with Intel to understand the impact of these vulnerabilities and the best courses of action to protect our platform and our users. We have received updated microcode from Intel and developed a set of kernel updates to mitigate the vulnerability, and we are rapidly rolling out these mitigations with no downtime to our users.","spans":[]},{"type":"paragraph","text":"We also recommend taking steps to ensure your Droplet is up to date and secure. 
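Independent of the fleet-level mitigations described above, you can ask the running kernel on a Droplet what it reports for MDS and related issues via sysfs. A small sketch, assuming a Linux kernel recent enough to expose the `vulnerabilities` directory:

```shell
# Print each entry in a kernel "vulnerabilities" directory as "name: status".
# On a Droplet you would point this at /sys/devices/system/cpu/vulnerabilities,
# where an "mds" entry reports e.g. "Mitigation: Clear CPU buffers; SMT vulnerable".
show_mitigations() {
  for f in "$1"/*; do
    [ -e "$f" ] || continue
    printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
  done
}

show_mitigations /sys/devices/system/cpu/vulnerabilities
```

If the directory is absent (older kernels, some virtualized environments), the function simply prints nothing.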
This is especially important if you are running multi-tenant applications or untrusted code inside your Droplet.","spans":[{"start":34,"end":78,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/droplets/how-to/kernel/upgrade"}}]},{"type":"paragraph","text":"In addition to sharing this blog post, we’re reaching out to all users via email. We’ll continue to post informational updates here, and we will reach out directly to users should any additional action be required.","spans":[]},{"type":"paragraph","text":"The security of our platform and our users’ data is our top priority, and we’re taking every measure to ensure our customers remain secure. For more information about MDS, you can read Intel’s initial statement.","spans":[{"start":185,"end":210,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.intel.com/content/www/us/en/architecture-and-technology/mds.html"}}]}],"blog_post_date":"2019-05-14","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"News","_linkType":"Link.document","_meta":{"uid":"news"}}},{"tag1":{"tag":"Trust & Security","_linkType":"Link.document","_meta":{"uid":"trust-security"}}}],"_meta":{"uid":"may-2019-intel-vulnerability"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Community Team","author_image":null,"_meta":{"uid":"community_team"}},"blog_header_image":{"dimensions":{"width":1024,"height":512},"alt":"Machine Learning book illustration","copyright":null,"url":"https://images.prismic.io/www-static/f19477fbeea318fdb15d057e9ccc8ee570ae2da3_machine-learning-book.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Celebrate PyCon 2019 With Our Free Python Machine Learning Projects eBook","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"To commemorate the 2019 PyCon conference and the worldwide Python community, we have put together a free eBook of Python Machine Learning 
Projects!","spans":[{"start":114,"end":146,"type":"hyperlink","data":{"link_type":"Web","url":"https://do.co/py-ml-book"}}]},{"type":"paragraph","text":"Project-based learning offers the opportunity to gain hands-on experience by digging into complex, real-world challenges. You can download this book and read it offline, allowing you to work at your own pace as you go through machine learning Python projects. If you are a teacher or workshop leader, you may also use this resource with students or community members.","spans":[]},{"type":"paragraph","text":"The book is Creative Commons licensed, so feel free to redistribute and remix the tutorials (with attribution) for your noncommercial educational needs!","spans":[{"start":12,"end":37,"type":"hyperlink","data":{"link_type":"Web","url":"https://creativecommons.org/licenses/by-nc-sa/4.0/"}}]},{"type":"paragraph","text":"You can download the book in the following formats:","spans":[]},{"type":"list-item","text":"ePub","spans":[{"start":0,"end":4,"type":"hyperlink","data":{"link_type":"Web","url":"https://do.co/py-ml-book-epub"}}]},{"type":"list-item","text":"PDF","spans":[{"start":0,"end":3,"type":"hyperlink","data":{"link_type":"Web","url":"https://do.co/py-ml-book-pdf"}}]},{"type":"list-item","text":"Mobi (compatible with Kindle).","spans":[{"start":0,"end":4,"type":"hyperlink","data":{"link_type":"Web","url":"https://do.co/py-ml-book-mobi"}}]},{"type":"heading2","text":"Why Machine Learning?","spans":[]},{"type":"paragraph","text":"Machine learning is increasingly being used to find patterns, conduct analysis, and make decisions – sometimes without final input from humans who may be impacted by these findings. 
We created this book to equip developers with tools they can use to better understand, evaluate, and shape machine learning, in order to help ensure that it serves everyone fairly.","spans":[]},{"type":"paragraph","text":"This book will set you up with a Python programming environment if you don’t have one already, then provide you with a conceptual understanding of machine learning. It includes three Python machine learning tutorials that will help you create a machine learning classifier, build a neural network to recognize handwritten digits, and give you a background in deep reinforcement learning through building a bot for Atari.","spans":[]},{"type":"paragraph","text":"If you need Python support or would like reference material, check out our free How To Code in Python 3 eBook!","spans":[{"start":80,"end":103,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/digitalocean-ebook-how-to-code-in-python"}}]},{"type":"heading2","text":"By the Community for the Community 🐍","spans":[]},{"type":"paragraph","text":"These chapters originally appeared as articles on DigitalOcean's Community site, written and edited by members of the international software developer community. If you are interested in contributing to this knowledge base, consider participating in our Write for DOnations program. DigitalOcean offers payment to authors and provides a matching donation to tech-focused nonprofits.","spans":[{"start":50,"end":79,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community"}},{"start":254,"end":281,"type":"hyperlink","data":{"link_type":"Web","url":"https://do.co/w4do"}}]},{"type":"paragraph","text":"This eBook was put together by members of the DigitalOcean Developer Education team. 
To learn more about our eBook creation process,  read the blog post we wrote announcing our How To Code in Python 3 eBook.","spans":[{"start":143,"end":206,"type":"hyperlink","data":{"link_type":"Web","url":"https://blog.digitalocean.com/how-to-code-in-python-ebook/"}}]},{"type":"heading2","text":"Find Us at PyCon","spans":[]},{"type":"paragraph","text":"This year we are happy to be sponsoring PyCon 2019 Sprints, which offer developers the opportunity to collaborate in person on open source projects. Members of the DigitalOcean Community team will be at the conference, so if you are in Cleveland come find us for some great Sammy swag! We also proudly support the Python Software Foundation as a Bronze Sponsor.","spans":[{"start":40,"end":58,"type":"hyperlink","data":{"link_type":"Web","url":"https://us.pycon.org/2019/community/sprints/"}},{"start":314,"end":340,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.python.org/psf/"}}]}],"blog_post_date":"2019-05-03","tags":[{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}},{"tag1":{"tag":"Developer Relations","_linkType":"Link.document","_meta":{"uid":"developer-relations"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"gear-up-for-pycon-2019-with-digitaloceans-free-python-machine-learning-projects-ebook"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Andrew Starr-Bochicchio","author_image":null,"_meta":{"uid":"asb"}},"blog_header_image":{"dimensions":{"width":1200,"height":600},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/587ccdbe-88e0-4278-a9c0-8c18408e764a_image5.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"How to Deploy to DigitalOcean Kubernetes with GitHub Actions","spans":[]}],"blog_post_content":[{"type":"heading4","text":"Update - September 9, 2019:","spans":[]},{"type":"paragraph","text":"The examples in this blog post use the HCL 
syntax used in the initial version of GitHub Actions. GitHub Actions v2 now uses a new YAML syntax. You can find an updated workflow using the new syntax in this example repository.","spans":[{"start":0,"end":224,"type":"strong"},{"start":97,"end":114,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.blog/2019-08-08-github-actions-now-supports-ci-cd/"}},{"start":200,"end":223,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/andrewsomething/example-doctl-action/blob/master/.github/workflows/workflow.yaml"}}]},{"type":"heading2","text":"Introduction","spans":[]},{"type":"paragraph","text":"GitHub Actions were one of the most exciting things launched by our friends at GitHub last year. Now that they're in public beta, people are using them to build awesome stuff, from running tests and linters to more lighthearted use cases. With the DigitalOcean doctl Action, you can interact with all of your DigitalOcean resources.","spans":[{"start":0,"end":14,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/action-doctl"}},{"start":161,"end":174,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/sdras/awesome-actions"}},{"start":215,"end":237,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/jessfraz/shaking-finger-action"}},{"start":248,"end":273,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/action-doctl"}}]},{"type":"paragraph","text":"One of the most powerful aspects of GitHub Actions is the ability to compose workflows using multiple Actions to accomplish complicated tasks. In this post, we’ll show what that looks like in practice.","spans":[]},{"type":"paragraph","text":"Using multiple Actions, including ones for DigitalOcean and Docker, we’ll build a simple continuous delivery pipeline that deploys an application to a DigitalOcean Kubernetes cluster on push to the master branch of a GitHub repository. 
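As the update above notes, GitHub Actions v2 replaced the HCL format with YAML. A rough sketch of the same pipeline in the newer syntax (the `actions/checkout@v2` and `digitalocean/action-doctl@v2` usages and step layout here are illustrative assumptions, not the exact workflow from the example repository):

```yaml
name: deploy
on:
  push:
    branches: [master]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Tag the image with the short commit SHA, as in the HCL version below
      - name: Build Docker image
        run: docker build -t andrewsomething/static-example:$(echo $GITHUB_SHA | head -c7) .

      - name: Docker Hub login
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin

      - name: Push image to Docker Hub
        run: docker push andrewsomething/static-example

      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

      - name: Save DigitalOcean kubeconfig
        run: doctl kubernetes cluster kubeconfig save actions-example

      - name: Deploy to DigitalOcean Kubernetes
        run: |
          TAG=$(echo $GITHUB_SHA | head -c7)
          sed -i "s|<IMAGE>|andrewsomething/static-example:${TAG}|" config/deployment.yml
          kubectl apply -f config/deployment.yml
          kubectl rollout status deployment/static-example
```

Note that the HCL `needs` lines have no direct equivalent here: steps within a single YAML job already run sequentially.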
Along the way, we’ll dig into some of the details of working with GitHub Actions.","spans":[{"start":151,"end":182,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/kubernetes/"}}]},{"type":"heading2","text":"Creating Your Workflow","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/65f9f2b0-2464-4e5d-ac9e-b94693e8f464_image2.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1849,"height":858}},{"type":"paragraph","text":"The first step in using GitHub Actions is to create a workflow. You can do this from the Actions tab of your GitHub repository. This is where you define what will trigger a run of your workflow. ","spans":[]},{"type":"paragraph","text":"Nearly any GitHub event can be used from a new PR being opened to a new release being tagged. In our example, we’ll be using the “push” event so that our workflow is executed when a new commit is pushed to the master branch.","spans":[{"start":0,"end":23,"type":"hyperlink","data":{"link_type":"Web","url":"https://developer.github.com/actions/managing-workflows/workflow-configuration-options/#events-supported-in-workflow-files"}}]},{"type":"image","url":"https://images.prismic.io/www-static/77c1578a-03c6-4b3e-9fe5-532c99fc0bcf_image1-1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":289,"height":153}},{"type":"paragraph","text":"This will create a new file in your repository at .github/main.workflow with the following contents:","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    workflow \"New workflow\" {  ","spans":[]},{"type":"paragraph","text":"      on = \"push\"","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"This highlights an important aspect of GitHub Actions. 
While workflows can be created and edited using the GitHub GUI, they are configured in code using HCL – the same format used by tools like HashiCorp’s Terraform (https://www.terraform.io/). Each change made in the GUI is mirrored in the file and will be committed to the repository. This allows you to edit your workflows offline and collaborate on them via pull requests. For the rest of this post, we’ll mostly be showing the examples as code so that it is easier to see the details of how all the pieces fit together.","spans":[]},{"type":"heading2","text":"Defining Your First Action","spans":[]},{"type":"paragraph","text":"Our repository contains a Dockerfile in its root directory that defines how to build and run our application. In order to keep our example simple and focused on the workflow rather than the details of the application, our “application” is just a static site served by NGINX. The first Action block that we’ll define will build a container image from this Dockerfile:","spans":[{"start":26,"end":36,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/andrewsomething/example-doctl-action/blob/master/Dockerfile"}}]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    action \"Build Docker image\" {  ","spans":[]},{"type":"paragraph","text":"      uses = \"actions/docker/cli@master\"","spans":[]},{"type":"paragraph","text":"      args = [\"build\", \"-t\", \"andrewsomething/static-example:$(echo $GITHUB_SHA | head -c7)\", \".\"]","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"The first line is just a label for the block; the interesting bits are inside. The ```[php]{`uses`}``` line specifies the Action that will be run. The path used to reference the Action matches its location on GitHub. 
For instance, here we are using the Docker CLI Action which can be found in the cli/ directory of the github.com/actions/docker repository. This Action is a wrapper around the same Docker CLI tool that you would use on the command line locally. ","spans":[{"start":319,"end":355,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/actions/docker/tree/master/cli"}}]},{"type":"paragraph","text":"If you have ever built a Docker image, the next line should look familiar. The ```[php]{`args`}``` line is just what it sounds like. Here we can pass arguments to the Docker command needed to build the image.","spans":[]},{"type":"paragraph","text":"When we build the image, we are tagging it so that we can push it to Docker Hub. If you are following along, make sure to replace \"andrewsomething\" with your own username. You probably noticed that we are using the $GITHUB_SHA environment variable as part of the tag. Its value is the SHA of the commit that triggered the workflow. It is one of a number of variables made available in the Action’s runtime environment. ","spans":[{"start":347,"end":366,"type":"hyperlink","data":{"link_type":"Web","url":"https://developer.github.com/actions/creating-github-actions/accessing-the-runtime-environment/#environment-variables"}}]},{"type":"heading2","text":"Using Secrets","spans":[]},{"type":"paragraph","text":"Often you will need to store secrets that your Action will require in order to run. Our next Action block demonstrates this. To push the image we built to Docker Hub, we will first need to log in. 
Using the `secrets` line of an Action block, we can securely pass the needed information as environment variables:  ","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    action \"Docker Login\" {  ","spans":[]},{"type":"paragraph","text":"      uses = \"actions/docker/login@master\"","spans":[]},{"type":"paragraph","text":"      secrets = [\"DOCKER_USERNAME\", \"DOCKER_PASSWORD\"]","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"The contents of these secrets can be configured in the GitHub GUI:","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/7298f06d-eb92-40f5-b859-0cc15eb6d928_image3.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":340,"height":311}},{"type":"paragraph","text":"While we’re here, we will also specify a ```[php]{`DIGITALOCEAN_ACCESS_TOKEN`}``` secret using a personal access token generated from the API section of the DigitalOcean Control Panel. We’ll be using this in a later step.","spans":[{"start":138,"end":183,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/api/create-personal-access-token/"}}]},{"type":"heading2","text":"Specifying Dependencies","spans":[]},{"type":"paragraph","text":"In the next step of our workflow, we’ll push the Docker image to Docker Hub. 
This looks similar to the previous Action blocks, but this time we have a new line:","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    action \"Push image to Docker Hub\" {  ","spans":[]},{"type":"paragraph","text":"      needs = [\"Docker Login\", \"Build Docker image\"]","spans":[]},{"type":"paragraph","text":"      uses = \"actions/docker/cli@master\"","spans":[]},{"type":"paragraph","text":"      args = [\"push\", \"andrewsomething/static-example\"]","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"Multiple Action blocks may run in parallel. In this case, we need to ensure that the Docker image has been built and that we have logged into Docker Hub before we can push it there. So we’ve specified a `needs` line referencing the labels for those two Action blocks so that they will be executed in the correct order.","spans":[]},{"type":"heading2","text":"Accessing Your Workspace","spans":[]},{"type":"paragraph","text":"The config directory of our repository contains a Kubernetes YAML file specifying our deployment. As committed in git, there is only a placeholder for the Docker image that we want to deploy. It will need to be updated to point to the image we’ve tagged and pushed to Docker Hub. To do this, we’ll use the Shell Action provided by GitHub. Based on Debian, it includes all the standard UNIX tools you’d expect. 
Here we’re using ```[php]{`sed`}``` to update the contents of our deployment file:","spans":[{"start":61,"end":96,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/andrewsomething/example-doctl-action/blob/master/config/deployment.yml"}},{"start":306,"end":318,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/actions/bin/tree/master/sh"}}]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    action \"Update deployment file\" {  ","spans":[]},{"type":"paragraph","text":"      needs = [\"Push image to Docker Hub\"]","spans":[]},{"type":"paragraph","text":"      uses = \"actions/bin/sh@master\"","spans":[]},{"type":"paragraph","text":"      args = [\"TAG=$(echo $GITHUB_SHA | head -c7) && sed -i 's|<IMAGE>|andrewsomething/static-example:'${TAG}'|' $GITHUB_WORKSPACE/config/deployment.yml\"]","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"This demonstrates another important environment variable available to you, ```[php]{`$GITHUB_WORKSPACE`}```. This directory contains a copy of the repository that triggered the workflow. Changes made here will persist from one step to the next.","spans":[]},{"type":"heading2","text":"Deploying to DigitalOcean Kubernetes","spans":[]},{"type":"paragraph","text":"In our next step, we’ll retrieve the credentials needed to access our Kubernetes cluster using the DigitalOcean doctl Action. This Action enables you to use any doctl sub-command just like from the command line, giving you access to all of your DigitalOcean resources. 
Using the ```[php]{`DIGITALOCEAN_ACCESS_TOKEN`}``` secret we configured earlier, we will save the kubeconfig file for our cluster:","spans":[{"start":112,"end":124,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/action-doctl"}}]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    action \"Save DigitalOcean kubeconfig\" {  ","spans":[]},{"type":"paragraph","text":"      uses = \"digitalocean/action-doctl@master\"","spans":[]},{"type":"paragraph","text":"      secrets = [\"DIGITALOCEAN_ACCESS_TOKEN\"]","spans":[]},{"type":"paragraph","text":"      args = [\"kubernetes cluster kubeconfig show actions-example > $HOME/.kubeconfig\"]","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"Next, we’ll configure an Action block using ```[php]{`kubectl`}``` to apply the actual deployment:","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    action \"Deploy to DigitalOcean Kubernetes\" {  ","spans":[]},{"type":"paragraph","text":"      needs = [\"Save DigitalOcean kubeconfig\", \"Update deployment file\"]","spans":[]},{"type":"paragraph","text":"      uses = \"docker://lachlanevenson/k8s-kubectl\"","spans":[]},{"type":"paragraph","text":"      runs = \"sh -l -c\"","spans":[]},{"type":"paragraph","text":"      args = [\"kubectl --kubeconfig=$HOME/.kubeconfig apply -f $GITHUB_WORKSPACE/config/deployment.yml\"]","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"You’ll notice something new in this block demonstrating just how flexible GitHub Actions can be. In this case, the `uses` line is not specifying an Action on GitHub like our previous steps. Instead, it is referencing a container image hosted on Docker Hub. 
This opens up a whole world of tools not packaged as Actions for use in your workflow.","spans":[{"start":205,"end":234,"type":"hyperlink","data":{"link_type":"Web","url":"https://developer.github.com/actions/managing-workflows/workflow-configuration-options/#using-a-dockerfile-image-in-an-action"}}]},{"type":"heading2","text":"Verifying the Deployment","spans":[]},{"type":"paragraph","text":"In the final step of our workflow, using the same kubectl Docker image, we will check on the status of our deployment. The kubectl rollout status command returns a zero exit code when a deployment was successful:","spans":[{"start":123,"end":153,"type":"hyperlink","data":{"link_type":"Web","url":"https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#complete-deployment"}}]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    action \"Verify deployment\" {  ","spans":[]},{"type":"paragraph","text":"      needs = [\"Deploy to DigitalOcean Kubernetes\"]","spans":[]},{"type":"paragraph","text":"      uses = \"docker://lachlanevenson/k8s-kubectl\"","spans":[]},{"type":"paragraph","text":"      runs = \"sh -l -c\"","spans":[]},{"type":"paragraph","text":"      args = [\"kubectl --kubeconfig=$HOME/.kubeconfig rollout status deployment/static-example\"]","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"If the deployment fails, it returns a non-zero exit code. 
So that the status of our workflow will correctly reflect whether or not our application was successfully deployed, we will return to our workflow block from the first step and add a new ```[php]{`resolves`}``` line:","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    workflow \"New workflow\" {  ","spans":[]},{"type":"paragraph","text":"      on = \"push\"","spans":[]},{"type":"paragraph","text":"      resolves = [\"Verify deployment\"]","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"Since our \"Verify deployment\" Action depends on all of our other Actions, we can specify it here alone. If our workflow contained completely independent Actions, we’d want to include each of them here.","spans":[]},{"type":"heading2","text":"Bringing It All Together","spans":[]},{"type":"paragraph","text":"Now that we’ve successfully configured our workflow, each time a commit is pushed to the master branch of our repository it will be triggered. Each step will run in the order that we specified. 
The GitHub GUI will display the progress:","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/ca1f3803-92dc-4ba1-9f54-e2d2ce1202d1_image4.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":701,"height":898}},{"type":"paragraph","text":"With everything green, our site is now live: https://doctl-action.do-api.dev/","spans":[{"start":45,"end":77,"type":"hyperlink","data":{"link_type":"Web","url":"https://doctl-action.do-api.dev/"}}]},{"type":"paragraph","text":"You can find the complete workflow file with the full end-to-end example on GitHub.","spans":[{"start":49,"end":72,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/andrewsomething/example-doctl-action/blob/master/.github/main.workflow"}}]},{"type":"heading2","text":"Next Steps","spans":[]},{"type":"paragraph","text":"GitHub Actions allow you to craft powerful workflows integrating multiple Actions to accomplish complicated tasks. In this post, we’ve only scratched the surface of what they can do. With the doctl Action, you can incorporate your DigitalOcean resources into your workflows. Here are a few resources to help you get started building your own:","spans":[]},{"type":"list-item","text":"Check out the source for the DigitalOcean doctl Action on GitHub.","spans":[{"start":29,"end":54,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/action-doctl"}}]},{"type":"list-item","text":"Dig into Actions in more detail with GitHub's Actions Documentation.","spans":[{"start":37,"end":67,"type":"hyperlink","data":{"link_type":"Web","url":"https://developer.github.com/actions/managing-workflows/"}}]},{"type":"list-item","text":"Run your GitHub Actions locally with act. Great for debugging!","spans":[{"start":0,"end":41,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/nektos/act"}}]},{"type":"paragraph","text":"In this post we mostly focused on the GitHub Actions side of the equation. 
If you’re looking for more info on working with Kubernetes, the DigitalOcean Kubernetes Resource Center is a great place to start.","spans":[{"start":139,"end":178,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/resources/kubernetes/"}}]},{"type":"paragraph","text":"We’d love to know how you are using GitHub Actions. So let us know in the comments below! Are there other Actions for DigitalOcean that you’d like to see? Share your feedback and requests by opening an issue on GitHub.","spans":[{"start":191,"end":217,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/action-doctl/issues"}}]}],"blog_post_date":"2019-04-24","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"how-to-deploy-to-digitalocean-kubernetes-with-github-actions"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"TC Currie","author_image":{"dimensions":{"width":1372,"height":1352},"alt":"TC Currie","copyright":null,"url":"https://images.prismic.io/www-static/c97b5e9a80062bc03c460bbd59e8aa8aa45428f6_tc-dangerous-nite1.jpg?auto=compress,format"},"_meta":{"uid":"tc_currie"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"Illustration of Male developer on computer","copyright":null,"url":"https://images.prismic.io/www-static/b5fe7883969b168ed4dd40ce8260539595a7d2ab_outlinevpn_social_blog--1-.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"With DigitalOcean, Jigsaw's Private VPN Gives a Line Out to Journalists","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Imagine you’re a journalist covering an uprising against a military regime.  You film a riot on your phone, then quickly send it to your server over the virtual private network (VPN) you found in the Android app store that promised high security.  
That night, when you finally make it back to your hotel room and boot up your laptop to write the story, you realize the video is nowhere to be found.","spans":[]},{"type":"paragraph","text":"Unbeknownst to you, this government forced your VPN provider to give them access to all the data streaming through their VPN as a condition for operating in their country. Censors grabbed your video and the pictures worth a thousand words never make it to your server. But that fact was never mentioned anywhere in the Android store’s description of the product.","spans":[]},{"type":"paragraph","text":"This type of scenario isn’t hypothetical. “Journalists should be aware that their online activities might be subject to surveillance either by government agencies, their internet service providers or a hacker with malicious intent,” said Laura Tich, technical evangelist for Code for Africa, a resource for African journalists. This is exactly the problem that the new private VPN Outline was created to solve.","spans":[{"start":275,"end":290,"type":"hyperlink","data":{"link_type":"Web","url":"http://codeforafrica.org"}},{"start":381,"end":388,"type":"hyperlink","data":{"link_type":"Web","url":"https://getoutline.org/en/home"}}]},{"type":"paragraph","text":"Alphabet’s cybersecurity division Jigsaw designed the product for ease of use and maximum data security. Outline, which is open source and audited by the Radically Open Security, is targeted to journalists and activists working for change on a large scale. 
Those who are disproportionately more valuable to society because they are carriers of societal change, said Santiago Andrigo, Jigsaw’s product manager, who manages Outline.","spans":[{"start":34,"end":40,"type":"hyperlink","data":{"link_type":"Web","url":"https://jigsaw.google.com/"}},{"start":366,"end":382,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.linkedin.com/in/santiagoandrigo/"}}]},{"type":"paragraph","text":"“Their work makes them more vulnerable to attack,” he said.  “It can get really scary when they're outed and you're passing over information.”","spans":[]},{"type":"heading3","text":"The Danger is Real","spans":[]},{"type":"paragraph","text":"Laura Tich, the technical evangelist, is only too aware of this danger. It’s why Code For Africa recommends the use of Outline. The jeopardy is not just for journalists, but for whistleblowers, sources, and the data they provide as proof of corruption.","spans":[]},{"type":"paragraph","text":"“As surveillance becomes ubiquitous in today’s world,” she said, “journalists face an increasing challenge in establishing secure communication in the digital space,” she said. This, along with other online attacks “pose serious threats to journalists who would like to protect not only themselves, but also their sources.”","spans":[]},{"type":"paragraph","text":"One example, said Tich, is the arrest of Nigerian journalist Tony Ezimakor for writing a story about alleged ransom money kickbacks. 
The State Security Service demanded he disclose his sources.","spans":[]},{"type":"paragraph","text":"Another example she cited is the report from the South African campaign Right2Know, whose mission is centered on freedom of expression and access to information.","spans":[{"start":72,"end":82,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.r2k.org.za"}}]},{"type":"paragraph","text":"Right2Know’s recently-released report \"Spooked: Surveillance of Journalists in South Africa\" [PDF] has 10 specific examples of targeted surveillance by security agencies towards journalists and whistleblowers, especially those who have uncovered government scandals and corruption cases, she said. And that’s just from one country.","spans":[{"start":93,"end":98,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.r2k.org.za/wp-content/uploads/R2K-Surveillance-of-Journalists-Report-2018-web.pdf"}}]},{"type":"paragraph","text":"These are far from isolated incidents. The 2018 World Press Freedom Index report is proof that the world has become a more dangerous place for journalists.","spans":[{"start":43,"end":80,"type":"hyperlink","data":{"link_type":"Web","url":"https://rsf.org/en/ranking"}}]},{"type":"paragraph","text":"“You’re only as safe as your weakest link,” said Dan Keyserling, Head of Communications, Public Affairs, and Operations at Jigsaw.  Data security is always critical, he said, but that is especially true for journalists and activists.","spans":[{"start":49,"end":63,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.linkedin.com/in/dan-keyserling-6b42229/"}}]},{"type":"heading3","text":"How VPNs Really Work","spans":[]},{"type":"paragraph","text":"I was really surprised to find out that companies can reach in and grab data out of a VPN.  I’ve been using them since my early days as a consultant back in the ‘90s.  
At every job, I’d VPN into the company network to send over timecards and documentation from Racine, WI; Bentonville, AR; or whatever exotic locale I was flying to that week.  Until researching this article, I thought of VPNs like a transit tube where the data is put into the tube, then pulled out on the other end—like the Chunnel.  I assumed the data was secure and invisible during transit, which was, after all, the whole point of a VPN.","spans":[]},{"type":"paragraph","text":"It turns out, they’re more like a river, where the stream of data flowing by can be seen and fished out.","spans":[]},{"type":"paragraph","text":"Unscrupulous VPN providers can peek in on your data, inject their own ads on non-secure pages, analyze your browsing habits, and sell that information to advertisers, said Keyserling. Or even steal your identity. And you can’t know for sure if you can trust them, regardless of what they say in the app store.","spans":[]},{"type":"paragraph","text":"While it’s true that so much data flows through VPNs that it’s not practical to monitor all the data, the fact remains that it is possible. As seen above, journalists and others working to expose corruption are particularly vulnerable.  This is exactly why companies build their own VPNs.","spans":[]},{"type":"paragraph","text":"But what is a non-technical journalist or social justice activist to do?","spans":[]},{"type":"paragraph","text":"[Related: Check out our Community Tutorials on VPNs]","spans":[{"start":0,"end":52,"type":"strong"},{"start":10,"end":51,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tags/vpn?type=tutorials"}}]},{"type":"heading3","text":"Enter Outline","spans":[]},{"type":"paragraph","text":"The private VPN focuses on security and simplicity. This tech is really innovative and took several years to build, said Keyserling. An innovative layer of security comes under the hood.  
“It’s a clever product and very technically advanced, and puts security in the hands of the small innovator.”","spans":[]},{"type":"paragraph","text":"They named the product Outline because it “gives them a line out, from a place where the internet is restricted,” said Keyserling.","spans":[]},{"type":"paragraph","text":"Outline is specifically designed to be resistant to censorship.  Because of the protocols used, Outline is harder to detect as a VPN, and therefore is less likely to be blocked by countries that take measures to block the flow of content out of their country.","spans":[]},{"type":"paragraph","text":"With Outline, said Keyserling, each account uses its own DigitalOcean servers, so you get complete control over your data. In addition, Jigsaw brings that power into the hands of anyone with a phone. Now users can create their own personal VPN to their own personal server, said Keyserling: “It is super simple and very affordable.  They don’t need to trust a third-party VPN company.”","spans":[]},{"type":"heading3","text":"We Found Your Server","spans":[]},{"type":"paragraph","text":"Outline is insanely easy to spin up, which is a critical part of the design.  And because ease of use was the most important feature, DigitalOcean was the obvious choice when Jigsaw started looking for partners.","spans":[]},{"type":"paragraph","text":"While you can build an Outline VPN on a different server, the UI was designed to work with DigitalOcean. “DigitalOcean is the default and what we recommend,” said Keyserling, “because the UI we built with DigitalOcean is nicer, slicker than the rest, and a little bit easier for our users.”","spans":[]},{"type":"paragraph","text":"Users can create their own private VPN in three easy, self-explanatory steps following the prompts at GetOutline.org. Sign up, pick a server location, and add users, and boom! You have your own secure VPN feeding into your own server in five to seven minutes.  
If you can create an email account, you can set up an Outline VPN.","spans":[{"start":102,"end":116,"type":"hyperlink","data":{"link_type":"Web","url":"http://GetOutline.org"}}]},{"type":"paragraph","text":"It’s just as simple to add users.  For example, a journalist has found a whistleblower source and wants to add them to her VPN to transfer the incriminating files.  The journalist adds the whistleblower to her VPN, then sends them an email from Outline that contains an access code as a link, along with simple instructions. When the whistleblower copies the access code into their browser, an “Add Server” button pops up.  They click the button and the application connects them, and then shows the message, “We found your server.” They’re off and running.","spans":[]},{"type":"paragraph","text":"“It knows which server because they just copied it to the clipboard,” said Andrigo.  “It leads me to installing the right client and upon opening that client, it already knows which server I was invited to so it just automatically adds in.”","spans":[]},{"type":"heading3","text":"Behind the Curtain","spans":[]},{"type":"paragraph","text":"It’s not that simple, of course.  That five-minute magic is hiding a lot of complexity.","spans":[]},{"type":"paragraph","text":"Which was the goal, said Andrigo.  “Outline is about taking something that is very complex and making it simple, making meaningful choices for the user, and hiding the complexity.”","spans":[]},{"type":"paragraph","text":"Once the user chooses a server location, Outline spins up a DigitalOcean server on Ubuntu, installs Docker, and imports an image that has the actual server itself.  
Then it installs a component of Watchtower, which makes sure that the server is always up to date so the user doesn’t have to worry about installing a steady stream of security updates.","spans":[]},{"type":"paragraph","text":"Outline relies on the Shadowsocks protocol, which is an open-source project to create an encrypted socks5 proxy to redirect internet traffic.","spans":[{"start":22,"end":33,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/Shadowsocks"}}]},{"type":"paragraph","text":"Unlike conventional VPN traffic, a socks5 proxy looks like normal internet traffic. What this means is that your new Outline VPN doesn’t look like a VPN, so your data doesn’t get flagged or monitored by countries that regulate data in and out of their borders.  Which is crazy helpful to journalists and activists who are working in dangerous parts of the world.","spans":[]},{"type":"paragraph","text":"Outline’s ease of use did not come easily. “We did a lot of usability studies,” said Andrigo, “because we are lucky enough to have a very strong design and usability team, and we went through a lot of iterations to figure out what models of user interaction are clear.”","spans":[]},{"type":"paragraph","text":"One surprising result from their usability studies led to actually adding a step in the process.  “Sometimes things happened so fast that some of the users got startled,\" he noted.  They actually slowed the install process to make it easier to use.","spans":[]},{"type":"paragraph","text":"The end result?  A super simple, super safe way to transfer data for people with limited technical ability.","spans":[]},{"type":"paragraph","text":"For Andrigo, that’s what makes it all worthwhile.  
“Those moments,” he said, “where you take something that is very complex and you make it simple and remove all that complexity and you hopefully make wise choices for the user about the things that they don't need to know and that stand in the way of them getting their job done.”","spans":[]},{"type":"paragraph","text":"[Read more TC Currie: How 2,000 Droplets Broke the Enigma Code in 13 Minutes]","spans":[{"start":0,"end":77,"type":"strong"},{"start":22,"end":76,"type":"hyperlink","data":{"link_type":"Web","url":"https://blog.digitalocean.com/how-2000-droplets-broke-the-enigma-code-in-13-minutes/"}}]},{"type":"paragraph","text":"TC Currie is a journalist, storyteller, data geek, poet, body positive activist and occasional lingerie model. After spending 25 years in software development working with data movement and accessibility, she wrote her first novel during National Novel Writing Month and fell in love with writing.","spans":[{"start":0,"end":297,"type":"em"}]}],"blog_post_date":"2018-11-23","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"digitalocean-outline-jigsaw-vpn"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Brett Jones","author_image":null,"_meta":{"uid":"brett_jones"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/bb6388a8-b308-4840-9ebb-21636b4f19e8_ComparingStrings_blog.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"How to Efficiently Compare Strings in Go","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Comparing strings might not be something you think about when optimizing software. Typically, optimization includes tasks like splitting loops across goroutines, finding a faster hashing algorithm, or something that sounds more scientific. There is a sense of accomplishment we get when making changes like this. 
However, string comparison can often be the top bottleneck in a pipeline. For example, the snippet below is often used, but it is the worst solution (benchmarks below) and it has caused real problems.","spans":[]},{"type":"paragraph","text":"```[php]{`strings.ToLower(name) == strings.ToLower(othername)`}```","spans":[]},{"type":"paragraph","text":"This appears to be pretty straightforward. Convert each string to lowercase and then compare. To understand why this is a bad solution, you have to know what a string represents and how `ToLower` works. ","spans":[]},{"type":"paragraph","text":"But first, let's talk about the primary use-cases for string comparisons. When comparing using the normal `==` operator, we get the quickest and most optimized solution. However, APIs and similar software usually take case into consideration. This is when we drop in `ToLower` and call it feature-complete.","spans":[]},{"type":"paragraph","text":"In Go, a string is an immutable sequence of bytes. Rune is a term Go uses to represent a code point. You can read more about strings, bytes, runes and characters at the Go blog. `ToLower` is a standard library function that loops over each rune in a string, converts it to lowercase, and returns the newly formed string. So the above code traverses each string entirely before the comparison. It is tightly bound to the length of the strings. Here is some pseudo code that roughly represents the complexity of the above snippet.","spans":[{"start":89,"end":99,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/Code_point"}},{"start":169,"end":176,"type":"hyperlink","data":{"link_type":"Web","url":"https://blog.golang.org/strings"}}]},{"type":"paragraph","text":"Note: Because strings are immutable, ```[php]{`strings.ToLower`}``` allocates new memory space for two new strings. This contributes to the time complexity, but this is not the focus now. 
For brevity, the pseudo code below assumes that strings are mutable.","spans":[{"start":0,"end":37,"type":"em"},{"start":47,"end":62,"type":"em"},{"start":67,"end":256,"type":"em"}]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    // Pseudo code","spans":[]},{"type":"paragraph","text":"    func CompareInsensitive(a, b string) bool {  ","spans":[]},{"type":"paragraph","text":"        // loop over string a and convert every rune to lowercase","spans":[]},{"type":"paragraph","text":"        for i := 0; i < len(a); i++ {  a[i] = unicode.ToLower(a[i])  }","spans":[]},{"type":"paragraph","text":"        // loop over string b and convert every rune to lowercase","spans":[]},{"type":"paragraph","text":"        for i := 0; i < len(b); i++ {  b[i] = unicode.ToLower(b[i])  }","spans":[]},{"type":"paragraph","text":"        // loop over both a and b and return false if there is a mismatch","spans":[]},{"type":"paragraph","text":"        for i := 0; i < len(a); i++ {","spans":[]},{"type":"paragraph","text":"            if a[i] != b[i] { ","spans":[]},{"type":"paragraph","text":"                return false","spans":[]},{"type":"paragraph","text":"            }","spans":[]},{"type":"paragraph","text":"        }","spans":[]},{"type":"paragraph","text":"        return true","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"The time complexity is O(n) where n is len(a) + len(b) + len(a). Here's an example: ","spans":[]},{"type":"paragraph","text":"```[php]{`CompareInsensitive(\"fizzbuzz\", \"buzzfizz\")`}```","spans":[]},{"type":"paragraph","text":"That means we will loop up to 24 times to discover that two *completely distinct strings* do not match. This is highly inefficient. We could tell these strings were distinct by comparing ```[php]{`unicode.ToLower(a[0])`}``` and ```[php]{`unicode.ToLower(b[0])`}``` (pseudo code). 
So let’s take that into consideration.","spans":[]},{"type":"paragraph","text":"To optimize, we can *remove* the first two loops in `CompareInsensitive` and compare each character in each position. If runes don’t match, we would then convert the runes to lowercase and then compare again. If they still don’t match then we break the loop and consider the two strings a mismatch. If they match we can continue to the next rune until the end is reached or until a mismatch is found. Let’s rewrite this code.","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    // Pseudo code","spans":[]},{"type":"paragraph","text":"    func CompareInsensitive(a, b string) bool {  ","spans":[]},{"type":"paragraph","text":"        // a quick optimization. If the two strings have a different","spans":[]},{"type":"paragraph","text":"        // length then they certainly are not the same","spans":[]},{"type":"paragraph","text":"        if len(a) != len(b) {","spans":[]},{"type":"paragraph","text":"            return false","spans":[]},{"type":"paragraph","text":"        }","spans":[]},{"type":"paragraph","text":"        for i := 0; i < len(a); i++ {","spans":[]},{"type":"paragraph","text":"            // if the characters already match then we don't need to ","spans":[]},{"type":"paragraph","text":"            // alter their case. 
We can continue to the next rune","spans":[]},{"type":"paragraph","text":"            if a[i] == b[i] { ","spans":[]},{"type":"paragraph","text":"                continue","spans":[]},{"type":"paragraph","text":"            }","spans":[]},{"type":"paragraph","text":"            if unicode.ToLower(a[i]) != unicode.ToLower(b[i]) {","spans":[]},{"type":"paragraph","text":"                // the lowercase characters do not match so these","spans":[]},{"type":"paragraph","text":"                // are considered a mismatch, break and return false","spans":[]},{"type":"paragraph","text":"                return false","spans":[]},{"type":"paragraph","text":"            }","spans":[]},{"type":"paragraph","text":"        }","spans":[]},{"type":"paragraph","text":"        // The string length has been traversed without a mismatch","spans":[]},{"type":"paragraph","text":"        // therefore the two match","spans":[]},{"type":"paragraph","text":"        return true","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"The new function is much more efficient. The upper bound is the length of one string rather than the sum of the length of both strings. What does this look like with our comparison above? Well, the loop will only ever run eight times at most. However, since the first two runes are not the same, this loop only runs once. We have optimized our comparison more than twentyfold!","spans":[]},{"type":"paragraph","text":"Fortunately, there is a function in the strings package for this. 
It’s called ```[php]{`strings.EqualFold`}```.","spans":[]},{"type":"heading3","text":"Benchmarks","spans":[]},{"type":"paragraph","text":"When both strings are equal","spans":[{"start":0,"end":27,"type":"strong"}]},{"type":"paragraph","text":"BenchmarkEqualFoldBothEqual-8: 20,000,000 operations at 124 ns/op. BenchmarkToLowerBothEqual-8: 10,000,000 operations at 339 ns/op.","spans":[]},{"type":"paragraph","text":"When both strings are equal until the last rune","spans":[{"start":0,"end":47,"type":"strong"}]},{"type":"paragraph","text":"BenchmarkEqualFoldLastRuneNotEqual-8: 20,000,000 operations at 129 ns/op. BenchmarkToLowerLastRuneNotEqual-8: 10,000,000 operations at 346 ns/op.","spans":[]},{"type":"paragraph","text":"When both strings are distinct","spans":[{"start":0,"end":30,"type":"strong"}]},{"type":"paragraph","text":"BenchmarkEqualFoldFirstRuneNotEqual-8: 300,000,000 operations at 11.2 ns/op. BenchmarkToLowerFirstRuneNotEqual-8: 10,000,000 operations at 333 ns/op.","spans":[]},{"type":"paragraph","text":"When both strings have a different case at rune 0","spans":[{"start":0,"end":49,"type":"strong"}]},{"type":"paragraph","text":"BenchmarkEqualFoldFirstRuneDifferentCase-8: 20,000,000 operations at 125 ns/op. BenchmarkToLowerFirstRuneDifferentCase-8: 10,000,000 operations at 433 ns/op.","spans":[]},{"type":"paragraph","text":"When both strings have a different case in the middle","spans":[{"start":0,"end":53,"type":"strong"}]},{"type":"paragraph","text":"BenchmarkEqualFoldMiddleRuneDifferentCase-8: 20,000,000 operations at 123 ns/op. BenchmarkToLowerMiddleRuneDifferentCase-8: 10,000,000 operations at 428 ns/op.","spans":[]},{"type":"paragraph","text":"There is a staggering difference (by 30 times!) when the first rune of each string does not match. This is because instead of looping over both strings and then comparing, the loop only runs one time and immediately returns false. 
In every case, `EqualFold` outperforms our initial comparison: roughly threefold when the strings must be fully traversed, and roughly thirtyfold when they differ at the first rune.","spans":[]},{"type":"heading3","text":"Does it Matter?","spans":[]},{"type":"paragraph","text":"You might be thinking that 400 nanoseconds does not matter. In most cases you might be right. However, some micro-optimizations are as simple as any other solution. In this case it is easier than the original solution. ","spans":[]},{"type":"paragraph","text":"Quality engineers have simple optimizations in their normal workflow. They don't wait to optimize software once it becomes an issue; they write optimized software from the beginning. Even for the best engineers, it is unlikely and unrealistic to write the most efficient software from the ground up. It is virtually impossible to think of every edge-case and optimize for it. After all, we rarely know the wild behaviors of our users until we set them loose on our software. ","spans":[]},{"type":"paragraph","text":"However, embedding these simple solutions into your normal workflow will improve the lifespan of applications and prevent unnecessary bottlenecks in the future. Even if that bottleneck never matters, you didn't waste any effort.","spans":[]},{"type":"paragraph","text":"Brett Jones is a Senior Software Engineer and Tech Lead for the Insights team at DigitalOcean. He is happily married with two amazing kids. He loves philosophy, history, fantasy books, and the Dallas Stars. 
You can find his work at github.com/blockloop.","spans":[{"start":0,"end":253,"type":"em"},{"start":232,"end":252,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/blockloop"}}]}],"blog_post_date":"2018-11-07","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"how-to-efficiently-compare-strings-in-go"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"Mentorship illustration","copyright":null,"url":"https://images.prismic.io/www-static/c46ee9ed2681cc2f82facb789cb95fd7f05de1b0_mentoringengineers_blog-1.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Diving into Düsseldorf for SREcon EMEA","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"SREcon EMEA is on now in Düsseldorf, Germany. If you're attending, make sure to check out our talks.","spans":[{"start":0,"end":11,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.usenix.org/conference/srecon18europe/"}}]},{"type":"paragraph","text":"Tom Spiegelman will share how he fosters mentorship at DigitalOcean, and Jaime Woo will talk about post-incident care. In addition, Emil Stolarsky co-chairs the lightning talks.","spans":[]},{"type":"paragraph","text":"On Wednesday, August 29, from 4:00 PM-4:30 PM, Jaime Woo presents \"Your System Has Recovered from an Incident, but Have Your Developers?\" in Rheinlandsaal Ballroom A.","spans":[]},{"type":"paragraph","text":"Mistakes are inevitable, and happen to the best of us. 
Our industry adopts a blame-free culture, but that doesn't negate the sting that occurs when we're at the heart of a mess-up.","spans":[{"start":0,"end":180,"type":"em"}]},{"type":"paragraph","text":"Developers continually raise the bar on how to prevent errors, mitigate damage for ones that arise, and wring out as many learnings as possible after the damage is done. But much of this work is focused on the products, and not the people. And given the high-stakes in SRE, the range of how a mistake psychologically impacts people can run the gamut from minor to the near-traumatic.","spans":[{"start":0,"end":383,"type":"em"}]},{"type":"paragraph","text":"Where are the game day exercises that simulate how to support a coworker who just caused 3 am pings and 20 hour work days? What resources should we share to help people understand the stages of emotions they'll feel after a major incident?","spans":[{"start":0,"end":239,"type":"em"}]},{"type":"paragraph","text":"The concept of psychological safety is well understood as a key predictor for high-performing teams, but what does that entail? Drawing from original research, and lessons from fields like sports, medicine, and even stand-up comedy, attendees will leave with a series of tangible actions and exercises to help restore team trust and rebuild a developer's confidence.","spans":[{"start":0,"end":366,"type":"em"}]},{"type":"paragraph","text":"On Wednesday, August 29, from 6:00 PM-7:00 PM, lightning talks, co-chaired by Emil Stolarsky, happen in Rheinlandsaal Ballroom A, with nine speakers sharing energetic presentations on a variety of SRE-related topics.","spans":[]},{"type":"paragraph","text":"On Thursday, August 30, from 12:00 PM-12:30 PM, Tom Spiegelman presents \"Building a Fellowship Program to Mentor and Grow Your SRE Team\" in Rheinlandsaal Ballroom A.","spans":[]},{"type":"paragraph","text":"Mentorship is invaluable at any point in your career. 
At DigitalOcean, we introduced an internal two-week fellowship program pairing any developer interested in learning more about what infrastructure does with a senior engineer. We followed Tuckman's four stages of group development: forming, storming, norming, and performing. We believe we create the best-performing team when mentors and mentees go through the four stages together as a team. Two weeks may seem brief, but we were able to iterate quickly, and it also meant we could focus our energies on mentoring just one person at a time to limit straining the team’s bandwidth.","spans":[{"start":0,"end":636,"type":"em"}]},{"type":"paragraph","text":"The benefits were manifold: our infrastructure team gained a better perspective on what other teams go through and work on a daily basis, which helps us build better tools and workflows to support them. Not only did participants strengthen their skills, but some joined infrastructure, realizing it was right for them. And for those who didn't join, it was an excellent way to cross-pollinate ideas and build the infrastructure team's relationships with other teams. In this talk, attendees will hear about the theory, lessons learned, and how to create their own fellowship program.","spans":[{"start":0,"end":583,"type":"em"}]},{"type":"paragraph","text":"Also, in addition to being a bronze sponsor for SREcon EMEA, we're proud sponsors of the Diversity Grant. Visibility and representation matter, and congratulations to all of the successful applicants. 
See everyone at the conference!","spans":[{"start":67,"end":104,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.usenix.org/conference/srecon18europe/diversity-grant-application"}}]}],"blog_post_date":"2018-08-29","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"diving-into-dusseldorf-for-srecon-emea"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":5000,"height":2500},"alt":"OSCon letters with a dolphin, narwhal, and jellyfish popping out of the letters with the words 'See you there!' underneath illustration","copyright":null,"url":"https://images.prismic.io/www-static/f3efcf4db49e107ee21d7edf28add4c7c9c2fe18_oscon_seeyouthere.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Prepped for Portland and OSCON 2018","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"It's the 20th year of OSCON, held this week in Portland, Oregon, and we will be in attendance!","spans":[{"start":22,"end":27,"type":"hyperlink","data":{"link_type":"Web","url":"https://conferences.oreilly.com/oscon/oscon-or"}}]},{"type":"paragraph","text":"We have two great presentations lined up:","spans":[]},{"type":"list-item","text":"Lauren McCarthy and Tom Spiegelman will share DigitalOcean's approach to tackling the Spectre and Meltdown vulnerabilities, covering what the company chose to move forward with and why, and","spans":[]},{"type":"list-item","text":"Andrew Kim will be sharing a technical deep dive into how DigitalOcean uses anycast IPs, BGP, and Kubernetes to run globally distributed services on containers","spans":[]},{"type":"paragraph","text":"On Wednesday, 
July 18, from 11:50 AM-12:30 PM, Lauren McCarthy and Tom Spiegelman present \"DigitalOcean’s approach to Spectre and Meltdown\" in E143/144.","spans":[{"start":91,"end":138,"type":"hyperlink","data":{"link_type":"Web","url":"https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/71101"}},{"start":91,"end":138,"type":"strong"}]},{"type":"paragraph","text":"News of the security vulnerabilities Spectre and Meltdown gripped headlines earlier this year, and for good reason: the bugs affected an estimated three billion chips in use. The impact to cloud providers was substantial, and DigitalOcean was no exception.","spans":[{"start":0,"end":256,"type":"em"}]},{"type":"paragraph","text":"Lauren McCarthy and Tom Spiegelman share DigitalOcean’s approach to tackling the Spectre and Meltdown vulnerabilities—dubbed \"Smeltdown”—covering what the company chose to move forward with and why. This was one of the biggest challenges the company has dealt with in terms of complexity and scale. One of the key issues was timeliness: while the big cloud companies received advanced notice, DigitalOcean didn’t have that luxury. But it couldn’t use that as an excuse: it just meant working smarter and harder. 
Lauren and Tom discuss the hardships faced and how the chosen solution left the company with a more secure cloud infrastructure and ready to move forward to work toward new offerings so that developers and their teams can focus on what matters: building software that changes the world.","spans":[{"start":0,"end":798,"type":"em"}]},{"type":"paragraph","text":"On Thursday, July 19, from 4:15 PM-4:55 PM, Andrew Kim presents \"Containers and anycast IPs at DigitalOcean\" in D139/140.","spans":[{"start":65,"end":107,"type":"hyperlink","data":{"link_type":"Web","url":"https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/67422"}},{"start":65,"end":107,"type":"strong"}]},{"type":"paragraph","text":"Today’s container networking technology has made it significantly easier to build distributed systems on top of container orchestrators such as Kubernetes, Mesosphere, and Docker Swarm. Container networking technologies use Linux primitives such as iptables and IPVS to provide load-balancing capabilities for network traffic across containers in a cluster. These simple yet powerful tools are a cornerstone to the success of containerized systems, as they provide highly available environments with little to no effort.","spans":[{"start":0,"end":520,"type":"em"}]},{"type":"paragraph","text":"Despite the many benefits of container networking, running containerized applications that must be latency sensitive and globally distributed is an extremely challenging task. Container networking is mainly scoped for in-cluster traffic, leaving little room to globally distribute an application across multiple clusters. 
Moreover, extending a container network for external traffic requires many additional layers of abstraction, usually introducing points of failure in a cluster and increasing end-to-end latency.","spans":[{"start":0,"end":516,"type":"em"}]},{"type":"paragraph","text":"Andrew Kim leads a technical deep dive into how DigitalOcean uses anycast IPs, BGP, and Kubernetes to run globally distributed services on containers. Along the way, Andrew discusses design considerations for scalability, architectural trade-offs, data center networking, lessons learned in production, and challenges to adopting containers for latency sensitive applications.","spans":[{"start":0,"end":376,"type":"em"}]},{"type":"paragraph","text":"You can also catch us at booth #101 at the following times:","spans":[{"start":25,"end":35,"type":"strong"}]},{"type":"list-item","text":"Wednesday, July 18 from 10:20 AM to 7:00 PM, and","spans":[]},{"type":"list-item","text":"Thursday, July 19 from 10:20 AM to 4:15 PM","spans":[]}],"blog_post_date":"2018-07-16","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"Developer Relations","_linkType":"Link.document","_meta":{"uid":"developer-relations"}}}],"_meta":{"uid":"oscon-2018"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"TC Currie","author_image":{"dimensions":{"width":1372,"height":1352},"alt":"TC Currie","copyright":null,"url":"https://images.prismic.io/www-static/c97b5e9a80062bc03c460bbd59e8aa8aa45428f6_tc-dangerous-nite1.jpg?auto=compress,format"},"_meta":{"uid":"tc_currie"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"pipes with letters ENIGMA on them as keys illustration","copyright":null,"url":"https://images.prismic.io/www-static/b805d650985c40095317a5edf80625f243902058_do_enigma_blog.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"How 2,000 Droplets Broke the Enigma Code in 13 
Minutes","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"In late 2017, at the Imperial War Museum in London, developers applied modern artificial intelligence (AI) techniques to break the “unbreakable” Enigma machine used by the Nazis to encrypt their correspondences in World War II.  Using AI processes across 2,000 DigitalOcean servers, engineers at Enigma Pattern accomplished in 13 minutes what took Alan Turing years to do—and at a cost of just $7.","spans":[{"start":145,"end":159,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/Enigma_Machine"}}]},{"type":"paragraph","text":"I have long been fascinated by the Enigma machine and its impact on World War II.  Aside from being a huge history geek, my father-in-law went over to Normandy on D+3 (three days after the Omaha beachhead was established). He served in an advance corps, finding ways for the army to move across the country, and as such, they were the first to come across one of the concentration camps and liberate it.  None of that would have been possible without Enigma.","spans":[]},{"type":"heading2","text":"The Enigma Machine","spans":[]},{"type":"paragraph","text":"The Enigma machine is a complicated apparatus consisting of a keyboard, a set of rotors, an alphabet ring, and plug connections, all configurable by the operator. For the message to be both encrypted and decrypted, both operators had to know two sets of codes. A daily base code, changed every 24 hours, was published monthly by the Germans. Then, each operator created an individual setting used only for that message.  The key to the individual code was sent in the first characters of the message, coded in the base code.  This created over 53 billion possible combinations, changing every 24 hours.  
Because of this, the machine was widely considered unbreakable.","spans":[]},{"type":"paragraph","text":"Marian Rejewski, working with other mathematicians at the Polish Cipher Bureau, cracked an early version of the Enigma machine in 1932 by the tried-and-true method of stealing a few machines and reverse engineering the mechanism. It took him just under a year to figure out the general principle of the German military’s double message setting and the wiring of the rotors, and another year to catalog the settings. After all of that, daily keys could be obtained in under 20 minutes.","spans":[{"start":0,"end":15,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/Marian_Rejewski"}}]},{"type":"paragraph","text":"But as Germany revved up its war machine, the Nazi navy made the machine more complex with the addition of plugs and more rotors, making it impossible for humans to work through the billions of possible combinations. Enter Bletchley Park in rural England, where Alan Turing, a brilliant English mathematician, gathered a team of cryptographers, puzzle solvers, linguists, and mathematicians in 1939 with the mission of breaking the German codes.","spans":[{"start":262,"end":273,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/Alan_Turing"}}]},{"type":"paragraph","text":"“Enigma gave the foundation to Alan Turing to develop the computer,” explained Rafal Janczyk, a Polish mathematician and CEO and co-founder of Enigma Pattern.","spans":[{"start":143,"end":157,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.enigmapattern.com/"}}]},{"type":"paragraph","text":"Rejewski and his team smuggled their cracked Enigma machines out of Poland, and worked their way to Bletchley Park where they donated the machines and their expertise to Turing. Building on Rejewski’s work, Turing was able to automate the cryptography that could crack the daily code. 
It took the better part of a year to decrypt their first message. They called their work the Bombe, and it’s widely considered to be the first computer.","spans":[]},{"type":"paragraph","text":"But it was more elaborate than simply breaking the code. Because the Nazis changed the rotor settings every 24 hours, each new day brought a new set of 15,354,393,600 password variants that had to be decrypted.  Many times they worked through the night only to fail to break the code and have to start over the next day.","spans":[]},{"type":"paragraph","text":"It was an exhausting, near-impossible task. And, seven decades later, Enigma Pattern wondered how modern technology like AI could change things, and if they could break the code in a fraction of the time.","spans":[]},{"type":"heading2","text":"Geeking out: Breaking Enigma with Modern AI","spans":[]},{"type":"paragraph","text":"“The project started from the question, ‘What would Alan Turing be able to do nowadays if he had the current computing power and all the development around AI,’” said Janczyk. Since AI is still such a new discipline, the company allows their employees to spend 20 percent of their time on side projects of their choice that encourage out-of-the-box uses of AI.","spans":[]},{"type":"paragraph","text":"Retracing Turing’s footsteps was a pet project of Lukasz Kuncewicz, Enigma’s Head of Data Science (and another Polish mathematician co-founder). Kuncewicz chose this project to refer to the common history of Brits and Poles using human intelligence to overcome the biggest obstacles of the Second World War. (Their third co-founder, Mike Gibbons, is British).","spans":[]},{"type":"paragraph","text":"Kuncewicz decided to recreate the Nazi navy’s version of the machine, which was the most sophisticated. His team started by recreating the machine, rotors, and plugs in Python. Initially, they tried to teach their AI to decode the Enigma code itself, but it didn’t work. 
Neither did Lambda functions from Amazon.","spans":[]},{"type":"paragraph","text":"The problem, he said, was with the amount of computations. “Since the Lambda function from AWS is not very quick, and has some limits regarding execution time, the number of concurrent Lambda calculations was very high. So high that we actually spent more than a week going from one AWS department to another, trying to squeeze a decision from them regarding extending our limit.”","spans":[]},{"type":"paragraph","text":"Enter DigitalOcean. “We only use [DigitalOcean] for quick ‘bish bash bosh’ needs—they are very good when we need to have a bigger server run for a few hours,” he said. Enigma Pattern uses DigitalOcean for a variety of things—from learning environments, to quick compute tasks where results will be stored on their internal computers, to prototyping projects when they're not sure yet how many machines will be needed.","spans":[]},{"type":"paragraph","text":"When Enigma mentioned the project, DigitalOcean quickly agreed to provide the ML 1-Click Droplets. It fit the company’s developer focus, said Mark Mims, the R&D Engineer who designed the ML 1-Click that launched last year, and demonstrated the ease of use, as an ML 1-Click Droplet can be spun up in a few minutes with (you guessed it) one click. “But if you’re looking to spin up 2,000 servers, you won’t be using the web UI,” said Mims.  “That takes a call to the help desk.” Within half a day, DigitalOcean had hydrated the 1,000 droplets used in the testing phase.","spans":[{"start":203,"end":221,"type":"hyperlink","data":{"link_type":"Web","url":"https://thenewstack.io/digitalocean-adds-object-storage-machine-learning/"}}]},{"type":"paragraph","text":"The next step for Kuncewicz and his team was training an algorithm to recognize German, which they did by using Grimms Fairy Tales, including Hansel & Gretel, Rapunzel, Cinderella, and Rumpelstiltskin; 200 tales in all. Why children’s stories? 
Well, it’s not like the AI had to decrypt German philosophy, but instead military telegraphs, which use as few words as possible.  Fairy tales are also written in simple language, so it makes sense. And it worked. Interestingly, in the end the AI could not understand German.  But it did what machine learning does best: recognize patterns.","spans":[{"start":202,"end":218,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.gasl.org/refbib/Grimm__Maerchen.pdf"}}]},{"type":"image","url":"https://images.prismic.io/www-static/fd5a6b465db6d23bea5c2f741bb4ab28ac672431_enigmacodesocial_grimms_blog.png?auto=compress,format","alt":"Fairytales","copyright":null,"dimensions":{"width":784,"height":418}},{"type":"paragraph","text":"It took two weeks for the team to train the machines and create the Python code, and another two weeks for the first successful attempt to decrypt a message.  But in order to copy Turing’s success, a successful decryption had to be done in less than 24 hours.","spans":[]},{"type":"paragraph","text":"Then they decided to try to break it by using sheer computing power, adding another 1,000 Droplets. I’ll let Kuncewicz explain the details:","spans":[]},{"type":"paragraph","text":"“First,” he said, “one has to accept the fact, that even if you have 2,000 Droplets, you still have billions of combinations to be checked. And the neural network that we used, however good at spotting the German language, is not a speed demon.","spans":[]},{"type":"paragraph","text":"“It's because it uses recurrence, which gives you this boost when dealing with languages, but you pay with the calculation time. So the idea is, you need to separate the wheat from the chaff, and use the network only to check the best possible candidates.","spans":[]},{"type":"paragraph","text":"“So for the AI to shine, we actually use 2,000 minions that do the tedious work. Everybody praises AI, but it's actually the minions that do the 99% of work. 
Life, right?”","spans":[]},{"type":"paragraph","text":"“We wrote one minion in Python, and DigitalOcean has this very nice API for storing images. So you create one minion, say ‘DigitalOcean, please save it as an image,’ and then you say ‘DigitalOcean, please create 2,000 copies of it and make them run,’ and you have them.","spans":[]},{"type":"paragraph","text":"“The code is really simple. It connects to the bus and gets a first not-yet-taken assignment. The assignment is a package of the gibberish text (the encoded message) and combinations of passwords to run on it. It checks the gibberish against every password, checks if the decoded message sounds like German, and if so, sends it through the same bus for more detailed inspection by the AI.","spans":[]},{"type":"paragraph","text":"“And this is exactly what the Droplets do. They get their share of password combinations from RabbitMQ, they take a few letters of the gibberish they need to decode, they decode it using the given passwords, and apply a very crude (but very quick) check if at the end of this pipeline we have something that resembles German.”","spans":[]},{"type":"paragraph","text":"If the code looks like German, it’s pushed back to the main server where the AI works its magic.","spans":[]},{"type":"paragraph","text":"“The job is not coordinated in any way, each minion doesn't know anything about others—they are fully autonomic. This is great, because it means that we can have 200, 2,000, or 20,000 of them if we like (and if DigitalOcean allows). The more we have, the less time will pass before breaking the Enigma code.”","spans":[]},{"type":"paragraph","text":"The 2,000 virtual servers ran through 41 million combinations per second.  After 13 minutes of minion work, boom! 
The new Bombe had broken the code.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/d04e2526f87a9aa495a319e654086e6d51491aca_enigmacodesocial_mostov_v1_blog.png?auto=compress,format","alt":"AI uses","copyright":null,"dimensions":{"width":784,"height":418}},{"type":"heading2","text":"Enigma Pattern: Who are these People?","spans":[]},{"type":"paragraph","text":"“AI is being called the new electricity,” said Janczyk, “because it will be in everything.” Enigma Pattern works with companies that already collect big data but are unsure of the ways to harness its power.  “You would be surprised at how many companies store big data but don’t know how to put it to use,” he said. “For example, a coffee chain would rather throw up a new store than delve through the data to determine how to optimize the stores they already have, because they know how to open a new store and don’t know how to dig through the data.”","spans":[]},{"type":"paragraph","text":"One of their clients has a fleet of over 10,000 cars on which they collect a variety of raw data. Janczyk and his team sat down with the client to discuss the pain points of the business, how they might use the data they already had to help ease the pain, and how AI could help.","spans":[]},{"type":"paragraph","text":"Tires are a significant business cost. In addition to the price of the tires is the cost of maintenance and driver downtime.  If you don’t change the tires in time, you’re endangering the life of your drivers.  Change them too often, and you lose money. It turns out, you can teach a machine to hear the level of wear on a tire.","spans":[]},{"type":"paragraph","text":"“Out of the sound of the spinning tire, we were able to teach the machine the level of wear of the tire,” Janczyk said. 
“Now the company is able to change tires based on the sound of the wear and automatically schedule downtime, which saves lives and money.”","spans":[]},{"type":"paragraph","text":"“With AI and ML, there is such an unlimited amount of possibility, which is what makes it so exciting,” said Janczyk.  “That’s what makes my work fascinating,” he said, \"finding new uses for AI.”","spans":[]},{"type":"paragraph","text":"Who knows what mysteries AI will solve in the future? By appreciating the problems that Enigma presented to previous generations and applying modern techniques, we can expand our vision for what AI can accomplish in today’s world.","spans":[]},{"type":"paragraph","text":"To see how Enigma functioned, check out this link or watch it in action on YouTube.","spans":[{"start":0,"end":83,"type":"strong"},{"start":40,"end":49,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/Enigma_machine"}},{"start":53,"end":82,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.youtube.com/watch?v=mcX7iO_XCFA"}}]},{"type":"paragraph","text":"To learn more about Alan Turing and the work done at Bletchley Park, check out Andrew Hodges’ acclaimed biography of the computing legend, titled “Alan Turing: The Enigma.”","spans":[{"start":0,"end":172,"type":"strong"},{"start":147,"end":170,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.turing.org.uk/book/"}}]},{"type":"paragraph","text":"You can check out Enigma Pattern's code on GitHub, with a warning from Kuncewicz that it’s a bit messy.","spans":[{"start":0,"end":103,"type":"strong"},{"start":18,"end":49,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/EnigmaPatternInc/EnigmaCode"}}]},{"type":"paragraph","text":"TC Currie is a journalist, storyteller, data geek, poet, body positive activist and occasional lingerie model. 
After spending 25 years in software development working with data movement and accessibility, she wrote her first novel during National Novel Writing Month and fell in love with writing.","spans":[{"start":0,"end":9,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.tccurrie.com/"}},{"start":0,"end":297,"type":"em"}]}],"blog_post_date":"2018-06-22","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"how-2000-droplets-broke-the-enigma-code-in-13-minutes"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Jeff Zellner","author_image":null,"_meta":{"uid":"jeff_zellner"}},"blog_header_image":{"dimensions":{"width":1024,"height":512},"alt":"multi-region docker registry","copyright":null,"url":"https://images.prismic.io/www-static/082fce91-94c8-41a3-b7fa-883177bdfed0_multi-region-Docker-Registry_social.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Deploying a Multi-region Docker Registry to Improve Performance","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Over the past several years, containers in general, and Docker specifically, have become quite prevalent across industry. Containerization offers isolated and reproducible build and runtime environments in a simple and developer-friendly form. They make the entire software development process run a bit smoother, from initial development to deploying services in production. Orchestration frameworks like Kubernetes and Mesos offer robust abstractions of service components, which simplifies deployment and management.","spans":[]},{"type":"paragraph","text":"Like many other tech companies, DigitalOcean uses containers internally to run production services. Quite a few of our services run inside Kubernetes, and a large slice of those run on an internal platform that we've built to abstract away some of the pain points for developers new to Kubernetes. 
We also use containers for CI/CD in our build systems, and locally for development. In this post, I’ll describe how we redesigned our Docker registry architecture for better performance.  (You can find out more about how DigitalOcean used both containers and Kubernetes in a talk by Joonas Bergius, and more about our internal platform, DOCC, in this talk by Mac Browning.)","spans":[{"start":571,"end":577,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.youtube.com/watch?v=Jhfd5FjYimU"}},{"start":644,"end":653,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.youtube.com/watch?v=K5WRJvMx4us"}}]},{"type":"heading2","text":"Simple beginnings and growing pains","spans":[]},{"type":"paragraph","text":"Initially, to host our private Docker images, we set up a single server running the official Docker registry, backed by object storage. This is a common, simple pattern for private registries, and it worked well early on. By relying on a consistent object store for backing storage, the registry itself doesn’t have to worry about consistency. However, with a single registry instance, there are still performance and availability bottlenecks, as well as a dependency on being able to reach the region running the registry.","spans":[]},{"type":"paragraph","text":"As our use of containers grew, we started to experience general performance issues such as slow or failing image pushes. A simple solution for this would be to increase the number of registry instances running, but we’d still have a dependency on the single region being available and reachable from every server.","spans":[]},{"type":"paragraph","text":"Additionally, the default behavior of the official Docker registry is to serve the actual image data via a redirect to the backing store. This means a request from a client arrives at the registry server, which returns an HTTP redirect to object storage (or whatever remote backend you have configured the registry to use). 
One unique issue that we encountered was a large deployment of large Docker images (~10GB) spiking bandwidth to our storage backend. Hundreds of clients requested a new, large image at the same time, saturating our connection to storage from our data center. Running multiple instances of the registry wouldn’t solve this issue—all the data would still come from the backing store.","spans":[]},{"type":"heading2","text":"Design goals","spans":[]},{"type":"paragraph","text":"We decided it was time to overhaul our Docker registry architecture, with a few primary goals in mind:","spans":[]},{"type":"list-item","text":"Presence in every region","spans":[]},{"type":"list-item","text":"Regional caching to reduce the overall bandwidth egress from any region","spans":[]},{"type":"list-item","text":"Reduction or elimination of single points of failure","spans":[]},{"type":"heading2","text":"Architecture choices","spans":[]},{"type":"paragraph","text":"We operate relatively large Kubernetes clusters in every DigitalOcean region, so using the fundamental building blocks that Kubernetes and our customizations offer was a logical choice. Kubernetes provided us with great primitives like scaling deployments and simple rolling deploys. Additionally, we have lots of internal tooling for running, monitoring, and managing services running inside Kubernetes.","spans":[]},{"type":"paragraph","text":"For caching, we decided to take advantage of the Docker registry’s ability to disable redirects. Disabling redirection causes the registry server to retrieve image data, and then send it directly to the client, instead of redirecting the request to the backend store. 
This adds a bit of latency to the initial response, but enables us to put a caching proxy like Squid in front of the registry and serve cached data without transiting to the backing store on subsequent requests.","spans":[{"start":78,"end":95,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.docker.com/registry/configuration/#redirect"}},{"start":363,"end":368,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.squid-cache.org"}}]},{"type":"paragraph","text":"At this point, we had a good idea of how to run multiple caching registries in every region, but we still needed a way to direct clients to request Docker images from the registry in their region, instead of a single global one. To accomplish this, we created a new DNS zone that was not shared between regions, so that clients in each region could resolve the DNS address of our registry to the local region's registry deployment, instead of to a single registry located in a different region.","spans":[]},{"type":"heading2","text":"Implementation details","spans":[]},{"type":"paragraph","text":"The registry configuration we ended up using was rather standard, using a storage backend configured with access key and secret key. The one important bit, as previously mentioned, was disabling `redirect`:","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    storage:  ","spans":[]},{"type":"paragraph","text":"      redirect:","spans":[]},{"type":"paragraph","text":"        disable: true","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"For caching image data locally with the registry, we chose to use Squid. Each instance of the registry would be deployed with its own Squid instance, with its own cache storage. This approach was simple to set up and configure, but does have drawbacks: notably, that each instance of the registry has its own independent cache. 
This means that in a deployment of multiple instances, multiple identical requests directed to different backing instances could result in several cache misses, one for each instance of the registry and cache. There's room for future improvement here, setting up a larger, shared cache that all registry instances in a region sit behind. Any local caching at all was a big improvement over our original setup, so it was an okay tradeoff to make in our initial work.","spans":[{"start":66,"end":71,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.squid-cache.org"}}]},{"type":"paragraph","text":"To configure Squid, we wrote a simple configuration to listen for HTTPS connections and to send all cache misses to the local registry:","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    https_port 443 accel defaultsite=dockerregistry no-vhost cert=cert.pem key=key.pem  ","spans":[]},{"type":"paragraph","text":"    ...","spans":[]},{"type":"paragraph","text":"    cache_peer 127.0.0.1 parent 5000 0 no-query originserver no-digest forceddomain=dockerregistry name=upstream login=PASSTHRU ssl  ","spans":[]},{"type":"paragraph","text":"    acl site dstdomain dockerregistry  ","spans":[]},{"type":"paragraph","text":"    http_access allow site  ","spans":[]},{"type":"paragraph","text":"    cache_peer_access upstream allow site  ","spans":[]},{"type":"paragraph","text":"    cache allow site ","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"Once we had written the registry and Squid configuration, we combined the two pieces of software to run together in a Kubernetes deployment. Each pod would run an instance of the registry and an instance of Squid, with its own temporary disk storage. 
Deploying this across our regional Kubernetes clusters was straightforward.","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    apiVersion: extensions/v1beta1  ","spans":[]},{"type":"paragraph","text":"    kind: Deployment  ","spans":[]},{"type":"paragraph","text":"    metadata:  ","spans":[]},{"type":"paragraph","text":"      name: registry","spans":[]},{"type":"paragraph","text":"    spec:  ","spans":[]},{"type":"paragraph","text":"      replicas: 3","spans":[]},{"type":"paragraph","text":"      template:","spans":[]},{"type":"paragraph","text":"        spec:","spans":[]},{"type":"paragraph","text":"          volumes:","spans":[]},{"type":"paragraph","text":"            - name: registry-config","spans":[]},{"type":"paragraph","text":"              configMap:","spans":[]},{"type":"paragraph","text":"                name: registry-config","spans":[]},{"type":"paragraph","text":"            - name: squid-config","spans":[]},{"type":"paragraph","text":"              configMap:","spans":[]},{"type":"paragraph","text":"                name: squid-config","spans":[]},{"type":"paragraph","text":"            - name: cache","spans":[]},{"type":"paragraph","text":"              emptyDir: {}","spans":[]},{"type":"paragraph","text":"          containers:","spans":[]},{"type":"paragraph","text":"            - name: registry","spans":[]},{"type":"paragraph","text":"              image: registry:2.6.2","spans":[]},{"type":"paragraph","text":"              volumeMounts:","spans":[]},{"type":"paragraph","text":"                - name: registry-config","spans":[]},{"type":"paragraph","text":"                  mountPath: /etc/docker/registry/config.yml","spans":[]},{"type":"paragraph","text":"                  subPath: config.yml","spans":[]},{"type":"paragraph","text":"            - name: squid","spans":[]},{"type":"paragraph","text":"              image: squid:3.5.12","spans":[]},{"type":"paragraph","text":"              
ports:","spans":[]},{"type":"paragraph","text":"                - containerPort: 443","spans":[]},{"type":"paragraph","text":"              volumeMounts:","spans":[]},{"type":"paragraph","text":"                - name: squid-config","spans":[]},{"type":"paragraph","text":"                  mountPath: /etc/squid/squid.conf","spans":[]},{"type":"paragraph","text":"                  subPath: squid.conf","spans":[]},{"type":"paragraph","text":"                - name: cache","spans":[]},{"type":"paragraph","text":"                  mountPath: /cache","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"paragraph","text":"The last bit of remaining work was enabling ingress to our new registry, which we did using our existing HAProxy ingress controllers. We terminate TLS with Squid, so HAProxy is only responsible for forwarding TCP traffic to our deployment.","spans":[]},{"type":"preformatted","text":"```[php]{`","spans":[]},{"type":"paragraph","text":"    apiVersion: extensions/v1beta1  ","spans":[]},{"type":"paragraph","text":"    kind: Ingress  ","spans":[]},{"type":"paragraph","text":"    metadata:  ","spans":[]},{"type":"paragraph","text":"      name: docker","spans":[]},{"type":"paragraph","text":"    spec:  ","spans":[]},{"type":"paragraph","text":"      rules:","spans":[]},{"type":"paragraph","text":"        - host: dockerregistry","spans":[]},{"type":"paragraph","text":"          http:","spans":[]},{"type":"paragraph","text":"            paths:","spans":[]},{"type":"paragraph","text":"              - path: /","spans":[]},{"type":"paragraph","text":"                backend:","spans":[]},{"type":"paragraph","text":"                  serviceName: docker","spans":[]},{"type":"paragraph","text":"                  servicePort: 443","spans":[]},{"type":"paragraph","text":"      tls:","spans":[]},{"type":"paragraph","text":"        - hosts:","spans":[]},{"type":"paragraph","text":"            - dockerregistry","spans":[]},{"type":"paragraph","text":"    
      secretName: not_needed","spans":[]},{"type":"preformatted","text":"`}```","spans":[]},{"type":"heading2","text":"Conclusion","spans":[]},{"type":"paragraph","text":"In conclusion, this registry architecture has been working well, providing much quicker pulls and pushes across all of our data centers. With this setup, we now have Docker registries running in all of our regions, and no region depends on reaching another region to serve data. Each registry instance is now backed by a Squid caching proxy, allowing us to keep many requests for the same data entirely in cache, and entirely local to the region. This has enabled larger deploys and much higher pull performance.","spans":[]},{"type":"paragraph","text":"Future improvements will be made around metrics instrumentation and monitoring. While we currently compute metrics by scraping the registry logs, we're looking forward to the Docker registry including Prometheus metrics natively. Additionally, creating a shared regional cache for our registry deployments should provide a nice performance boost and reduce the number of cache misses we see in operation. ","spans":[{"start":191,"end":219,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/docker/distribution/pull/2466"}}]},{"type":"paragraph","text":"Jeff Zellner is a Senior Software Engineer on the Delivery team, where he works on providing infrastructure and automation around Kubernetes to the DigitalOcean engineering organization at large. 
He's a long-time remote worker, startup-o-phile, and incredibly good skier.","spans":[{"start":0,"end":271,"type":"em"}]}],"blog_post_date":"2018-06-12","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"deploying-a-multi-region-docker-registry-to-improve-performance"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Anthony D'Atri","author_image":{"dimensions":{"width":400,"height":400},"alt":"Anthony D'Atri","copyright":null,"url":"https://images.prismic.io/www-static/8aa99d000b4c6f257ec399164007b65493e9912f_anthony.jpg?auto=compress,format"},"_meta":{"uid":"anthony_d_atri"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"square abstract illustration","copyright":null,"url":"https://images.prismic.io/www-static/2df56a860ce0ec38090d432b8521ab59512df8a4_ceph-blockstorage_v1.2_twitter---facebook.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Why We Chose Ceph to Build Block Storage","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"In January 2013, DigitalOcean became one of the first cloud providers to offer SSD storage. For several years, a slice of the virtualization hypervisor's local drives provided this storage available to Droplets. This approach worked great but had its limitations, such as:","spans":[{"start":70,"end":90,"type":"hyperlink","data":{"link_type":"Web","url":"https://blog.digitalocean.com/now-offering-double-the-memory-solid-state-drives-for-all-plans/"}}]},{"type":"list-item","text":"Volume size and growth were limited by the hypervisor's complement of drives, which was shared with other Droplets.","spans":[]},{"type":"list-item","text":"Storage was released once a Droplet was destroyed. 
The term “ephemeral” is sometimes used to describe this virtualization strategy.","spans":[]},{"type":"list-item","text":"Storage volumes could not be easily moved or reattached to different Droplets.","spans":[]},{"type":"paragraph","text":"For these and other reasons, we introduced Block Storage in July 2016. Since then, we’ve steadily increased capacity and have deployed into all service regions. In this post, we'll explore the underlying technology behind our Block Storage offering.","spans":[{"start":43,"end":56,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/storage/"}}]},{"type":"heading3","text":"Creating Block Storage That Can Scale","spans":[]},{"type":"paragraph","text":"In the past, a portable, scalable block storage service was usually provided by a traditional SAN (Storage Area Network). These tended to be expensive; scaling, upgrading, and day-to-day management could be difficult, and the architecture was susceptible to considerable vendor lock-in.","spans":[]},{"type":"paragraph","text":"At DigitalOcean, we love and support open-source software. 
So when the time came to architect our Block Storage service, we used these guiding criteria:","spans":[{"start":20,"end":57,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean"}}]},{"type":"list-item","text":"Open-source software, available to a wide community of users, testers, and developers","spans":[]},{"type":"list-item","text":"Widespread deployment in production at scale","spans":[]},{"type":"list-item","text":"Ease of scaling up and out","spans":[]},{"type":"list-item","text":"Freedom from scalability barriers","spans":[]},{"type":"list-item","text":"Freedom from vendor lock-in and product obsolescence","spans":[]},{"type":"list-item","text":"Fault tolerance","spans":[]},{"type":"list-item","text":"RAS: Redundancy, Availability, Serviceability","spans":[]},{"type":"list-item","text":"Transparent maintenance and upgrade operations","spans":[]},{"type":"list-item","text":"Strong protection of customer data integrity","spans":[]},{"type":"paragraph","text":"The best-of-breed solution for all of these criteria is the leader in open and widely-adopted distributed storage: Ceph.","spans":[{"start":115,"end":119,"type":"hyperlink","data":{"link_type":"Web","url":"https://ceph.com/"}}]},{"type":"heading3","text":"Ceph in Production","spans":[]},{"type":"paragraph","text":"In the 15 years since Ceph began, it has steadily grown in popularity, performance, stability, scalability, and features. As GNU Lesser General Public License (LGPL) open-source software, Ceph enjoys a rich community of users and developers, including multiple DigitalOcean engineers who've contributed upstream code to the core Ceph project.","spans":[]},{"type":"paragraph","text":"The RBD (RADOS Block Device) service provided by Ceph slots right into the popular KVM/QEMU virtualization technology we employ. 
Droplets enjoy flexible block storage that is presented just like a local drive.","spans":[]},{"type":"paragraph","text":"Our Ceph-backed Block Storage service is also 100% SSD-based. Ceph is built for redundancy, and we carefully ensure that the loss of a single drive, server, or even an entire data center rack does not compromise data integrity or availability.","spans":[]},{"type":"paragraph","text":"Ceph gracefully heals itself when individual components fail, ensuring continuity of service with uncompromised data protection. Additionally, we use sophisticated monitoring systems built around tools including Icinga, Prometheus, and our own open-source ceph_exporter. These help us respond immediately to any issues with our Ceph infrastructure to ensure continuous availability.","spans":[{"start":256,"end":269,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/ceph_exporter"}}]},{"type":"paragraph","text":"Our Block Storage deployment into each new Droplet region brings hundreds of enterprise-class SSDs managed by the Luminous release of Ceph. We keep three copies of your data to ensure the highest data durability and availability. These replicas are carefully distributed across separate servers and racks to eliminate any single point of failure.","spans":[]},{"type":"paragraph","text":"Each Ceph cluster's performance and utilization are carefully monitored so that we can add additional resources as needed. Ceph's flexibility allows us to expand existing storage clusters or even add new ones to a region completely transparently. We are also able to upgrade Ceph and complete other types of fleet-wide maintenance in a rolling fashion, without downtime or other impacts to our valued customers.","spans":[]},{"type":"paragraph","text":"It is important to note, however, that this replication is entirely behind-the-scenes. 
It prevents us from losing your Block Storage volume data, but does not protect your Droplet itself, nor does it allow recovery from accidental deletion on your end. Thus, backups of critical data are still important. See these articles for help on Block Storage volume snapshots and data backups:","spans":[]},{"type":"list-item","text":"Introduction to DigitalOcean Backups","spans":[{"start":0,"end":36,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/an-introduction-to-digitalocean-backups"}}]},{"type":"list-item","text":"Understanding DigitalOcean Droplet Backups","spans":[{"start":0,"end":42,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/understanding-digitalocean-droplet-backups"}}]},{"type":"list-item","text":"Creating a Snapshot from a Block Storage Volume","spans":[{"start":0,"end":47,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/an-introduction-to-digitalocean-snapshots#creating-a-snapshot-from-a-block-storage-volume"}}]},{"type":"paragraph","text":"And if you haven’t already, create your own Block Storage volume on DigitalOcean.","spans":[{"start":28,"end":64,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/storage/"}}]},{"type":"paragraph","text":"Anthony D’Atri is a veteran sysadmin who's been working with Ceph for four years, starting with the Dumpling release. He is the co-author, along with Vaibhav Bhembre, of Learning Ceph, which outlines architecting, deploying, and managing Ceph at scale. He enjoys photography and a never-ending quest for exotic fruit. 
He lives in Portland, Oregon with his wife and son.","spans":[{"start":0,"end":369,"type":"em"},{"start":170,"end":183,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.amazon.com/Learning-Ceph-Second-Anthony-DAtri-ebook/dp/B01NBP2D9I"}}]}],"blog_post_date":"2018-05-31","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"why-we-chose-ceph-to-build-block-storage"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":788,"height":425},"alt":"boat with fish and a jellyfish on it in the ocean illustration","copyright":null,"url":"https://images.prismic.io/www-static/75564900077400aa98d37b52387720140c8256b4_kubecon-1.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Catch Us in Copenhagen for KubeCon EU","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"UPDATE: Catch the talks, now embedded below!","spans":[{"start":0,"end":44,"type":"strong"}]},{"type":"paragraph","text":"Next week is KubeCon EU in Copenhagen, Denmark. 
We're already drooling at the idea of diving into smørrebrød, perhaps near the famed Little Mermaid statue.","spans":[{"start":13,"end":23,"type":"hyperlink","data":{"link_type":"Web","url":"https://events.linuxfoundation.org/kubecon-eu-2018/"}}]},{"type":"paragraph","text":"DigitalOcean will have two speakers and a booth at KubeCon EU:","spans":[]},{"type":"paragraph","text":"On Wednesday, May 2, from 2:45 PM-3:20 PM, Matt Layher presents \"How To Export Prometheus Metrics From Just About Anything.\"","spans":[{"start":43,"end":54,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/mdlayher"}},{"start":65,"end":122,"type":"hyperlink","data":{"link_type":"Web","url":"https://kccnceu18.sched.com/event/DquG/how-to-export-prometheus-metrics-from-just-about-anything-matt-layher-digitalocean-intermediate-skill-level"}}]},{"type":"paragraph","text":"Prometheus exporters bridge the gap between Prometheus and systems which cannot export metrics in the Prometheus format. During this talk, you will learn how to gather metrics from a wide variety of data sources, including files, network services, hardware devices, and system calls to the Linux kernel. You will also learn how to build a reliable Prometheus exporter using the Go programming language. 
This talk is intended for developers who are interested in bridging the gap between Prometheus and other hardware or software.","spans":[{"start":0,"end":529,"type":"em"}]},{"type":"paragraph","text":"Then, on Thursday, May 3, Andrew Kim speaks from 2:45PM-3:20PM on \"Global Container Networks on Kubernetes at DigitalOcean.\"","spans":[{"start":26,"end":36,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/kimandrewsy"}},{"start":67,"end":122,"type":"hyperlink","data":{"link_type":"Web","url":"https://kccnceu18.sched.com/event/Dqv8/global-container-networks-on-kubernetes-at-digitalocean-andrew-sy-kim-digitalocean-intermediate-skill-level"}}]},{"type":"paragraph","text":"Building a container network that is reliable, fast and easy to operate has become increasingly important in DigitalOcean’s distributed systems running on Kubernetes. Today’s container networking technologies can be restrictive as Pod and Service IPs are not reachable externally which forces cluster administrators to operate load balancers. The addition of load balancers introduces new points of failure in a cluster and hinders observability since source IPs are either NAT’d or masqueraded.","spans":[{"start":0,"end":495,"type":"em"}]},{"type":"paragraph","text":"This talk will be a deep dive of how DigitalOcean uses BGP, Anycast and a variety of open source technologies (kube-router, CNI, etc) to achieve a fast and reliable container network where Pod and Service IPs are reachable from anywhere on DigitalOcean’s global network. Design considerations for scalability, lessons learned in production and advanced use cases will also be discussed.","spans":[{"start":0,"end":386,"type":"em"}]},{"type":"paragraph","text":"You can also catch us in Hall C, at booth number G-C06. 
We’ll be tending the booth, where we'll be giving demos and answering questions:","spans":[]},{"type":"list-item","text":"Wednesday, May 2 from 10:30 AM-8:30 PM","spans":[]},{"type":"list-item","text":"Thursday, May 3 from 10:30 AM-5:30 PM, and","spans":[]},{"type":"list-item","text":"Friday, May 4 from 10:30 AM-4:00 PM","spans":[]},{"type":"paragraph","text":"Vi snakkes ved!","spans":[]}],"blog_post_date":"2018-04-27","tags":[{"tag1":{"tag":"Developer Relations","_linkType":"Link.document","_meta":{"uid":"developer-relations"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"kubecon-eu-2018"}}}]}}}