{"componentChunkName":"component---src-templates-tag-jsx","path":"/blog/tag/engineering/","result":{"data":{"prismic":{"allFeaturedblogs":{"edges":[{"node":{"featured_blogs_enabled":true,"heading":[{"type":"paragraph","text":"Featured posts","spans":[]}],"featured_blog_1":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":395},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/6d8d81b1-971a-4313-b033-b4e125cb14a0_MondoDB-blog-header-790x395.PNG?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Introducing DigitalOcean Managed MongoDB – a fully managed, database as a service for modern apps","spans":[]}],"blog_post_date":"2021-06-29","blog_post_content":[{"type":"paragraph","text":"MongoDB is one of the most popular databases, and it’s ideal for apps that evolve rapidly and need to handle huge volumes of data and traffic. It offers advantages like flexible document schemas, code-native data access, change-friendly design, and easy horizontal scale-out.","spans":[{"start":22,"end":44,"type":"hyperlink","data":{"link_type":"Web","url":"https://db-engines.com/en/ranking","target":"_blank"}}]},{"type":"paragraph","text":"However, building and maintaining MongoDB clusters from the ground up can be a huge undertaking. Developers often complain that they have to spend their valuable time and resources on database management. Well, we’ve been listening and have some great news: accessing and managing MongoDB on DigitalOcean just got a lot simpler!","spans":[]},{"type":"paragraph","text":"We are excited to announce that DigitalOcean Managed MongoDB is now in General Availability. Managed MongoDB is a fully managed, database as a service (DBaaS) offering from DigitalOcean, built in partnership with and certified by MongoDB Inc. It provides you all the technical capabilities that make MongoDB so beloved in the developer community. 
Together we have ensured that you will get access to all the latest releases of the MongoDB document database as they become available.","spans":[{"start":32,"end":91,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/managed-databases-mongodb/"}},{"start":230,"end":241,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mongodb.com/","target":"_blank"}}]},{"type":"paragraph","text":"Managed MongoDB simplifies MongoDB administration. Developers of all skill levels, even those who do not have prior experience in databases, can spin up MongoDB clusters in just a few minutes. We handle the provisioning, managing, scaling, updates, backups, and security of your MongoDB clusters, allowing you to offload the complex, time-consuming – yet critical – database administration tasks to us. This empowers you to focus on what really matters: building awesome apps.","spans":[]},{"type":"embed","oembed":{"height":113,"width":200,"embed_url":"https://www.youtube.com/watch?v=NvHQSV7jnKA","type":"video","version":"1.0","title":"Create a MongoDB Database on DigitalOcean","author_name":"DigitalOcean","author_url":"https://www.youtube.com/c/Digitalocean","provider_name":"YouTube","provider_url":"https://www.youtube.com/","cache_age":null,"thumbnail_url":"https://i.ytimg.com/vi/NvHQSV7jnKA/hqdefault.jpg","thumbnail_width":480,"thumbnail_height":360,"html":"<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/NvHQSV7jnKA?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"}},{"type":"heading2","text":"Benefits of Managed MongoDB","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"list-item","text":"Easy set up and maintenance: We create the database clusters for you. 
Simply choose the cluster configuration (e.g., memory, disk size, and number of nodes), and the data center in which you want to host the database. Follow a few simple steps and your database cluster will be up and running in a matter of minutes. You can spin up clusters using the cloud control panel, CLI, or API.\n\n","spans":[{"start":0,"end":28,"type":"strong"}]},{"type":"list-item","text":"Automatic daily backups with point-in-time recovery: Data is one of the most important assets of an app, so it’s critical to back up your database. We take backups of your entire clusters automatically on a daily basis, for free. We also provide point-in-time recovery for 7 days; that way, if things go wrong due to human error, machine error, or some combination of both, you can easily restore the database as it was at any point in the previous 7 days. \n\n","spans":[{"start":0,"end":52,"type":"strong"}]},{"type":"list-item","text":"Automatic updates and access to the latest MongoDB releases: You get access to MongoDB 4.4. This is the latest release of MongoDB and comes packed with numerous enhancements like hedged reads, Rust, and Swift drivers. Since we have developed Managed MongoDB in partnership with MongoDB Inc., you will always get access to new releases as they become available. With Managed MongoDB, the updates happen automatically. Just select a date and time for the updates and we take care of the rest. This makes it easy to stay up to date with MongoDB releases without disrupting your business.\n\n","spans":[{"start":0,"end":60,"type":"strong"},{"start":152,"end":173,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mongodb.com/new","target":"_blank"}}]},{"type":"list-item","text":"High availability with automated failover: If your database goes down, it can take down the entire app, leading to bad customer experiences. With Managed MongoDB, you can easily minimize the downtime for your database and make it highly available with standby nodes. 
Standby nodes add redundancy, so if, for example, the primary node fails, the standby node is immediately promoted to primary and begins serving requests while we provision a replacement standby node in the background.\n\n","spans":[{"start":0,"end":42,"type":"strong"}]},{"type":"list-item","text":"Scale up easily to handle traffic spikes: As your app gains traction and usage grows, it’s important to have a database that can keep up with the increased demand. With Managed MongoDB, you can easily scale up the size of database nodes when needed.\n\n","spans":[{"start":0,"end":41,"type":"strong"}]},{"type":"list-item","text":"Secure by default: Since data is critical, it also needs to be secure. We encrypt data at rest with LUKS and in transit with SSL. When you create a new cluster, it’s placed in a VPC network by default, which provides a more secure connection between resources. You can also restrict access to your nodes to prevent brute-force password and denial-of-service attacks.","spans":[{"start":0,"end":18,"type":"strong"},{"start":178,"end":189,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/networking/vpc/"}}]},{"type":"heading2","text":"The need for Managed Databases","spans":[]},{"type":"paragraph","text":"DigitalOcean’s mission is to simplify cloud computing so developers, startups, and SMBs can spend more time building software that changes the world. While databases are a critical component of any application, building, maintaining, and scaling them can be complex and time-consuming. For developers who are building apps for their business, database administration is often not a core focus area. But it’s quite common to find developers who write the code and then also roll up their sleeves to maintain databases. Such users would rather offload the tedious database administration and focus their limited time and energy on building and enhancing their apps. 
","spans":[]},{"type":"paragraph","text":"With this in mind, we introduced Managed Databases a couple of years ago and are excited to add Managed MongoDB to our portfolio. With this release, DigitalOcean Managed Databases now supports the following engines:","spans":[{"start":33,"end":50,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/managed-databases/"}}]},{"type":"image","url":"https://images.prismic.io/www-static/87745cc1-1c5f-4463-b104-104b7fc30dc7_managed-databases-logos.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":849,"height":104}},{"type":"paragraph","text":"Managed MongoDB launch comes on the heels of DigitalOcean App Platform, a modern, reimagined PaaS (Platform as a Service) that we released a few months ago. App Platform makes it very easy to build, deploy, and scale apps and static sites. You can deploy code by simply pointing to your GitHub and GitLab repos, and App Platform will do all the heavy lifting of managing infrastructure, app runtimes, and dependencies. 
App Platform, along with Managed Databases, helps fulfill DigitalOcean’s mission by empowering developers, startups, and SMBs to focus more on their apps, and less on the underlying infrastructure and databases.","spans":[{"start":45,"end":70,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/app-platform/"}}]},{"type":"heading2","text":"How Managed MongoDB works","spans":[]},{"type":"paragraph","text":"DigitalOcean provides you with various compute options to build your apps like:","spans":[]},{"type":"list-item","text":"Droplets: On-demand, Linux virtual machines suitable for production business applications and personal passion projects.","spans":[{"start":0,"end":8,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/droplets/"}}]},{"type":"list-item","text":"DigitalOcean Kubernetes: Managed Kubernetes with automatic scaling, upgrades, and a free control plane.","spans":[{"start":0,"end":23,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/kubernetes/"}}]},{"type":"list-item","text":"DigitalOcean App Platform: A fully managed Platform as a Service.","spans":[{"start":0,"end":25,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/app-platform/"}}]},{"type":"paragraph","text":"No matter which compute option you choose to build your apps, you can easily add Managed MongoDB to it. 
In addition to this, Managed MongoDB also integrates with the Node.js 1-Click App from DigitalOcean Marketplace, making it a lot easier to build Node.js apps.","spans":[{"start":166,"end":215,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/nodejs"}}]},{"type":"heading2","text":"Simple, predictable pricing","spans":[]},{"type":"paragraph","text":"Just like all DigitalOcean products, Managed MongoDB provides simple, predictable pricing that allows you to control costs and prevent any surprise bills. You can spin up a database cluster for just $15/month, or a highly available three-node replica set for $45/month. Click here for more information.","spans":[{"start":270,"end":301,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/pricing/#managed-databases"}}]},{"type":"heading2","text":"Regional availability","spans":[]},{"type":"paragraph","text":"Managed MongoDB is currently available in the following regions:","spans":[]},{"type":"list-item","text":"NYC3 (New York, USA)","spans":[]},{"type":"list-item","text":"FRA1 (Frankfurt, Germany)","spans":[]},{"type":"list-item","text":"AMS3 (Amsterdam, Netherlands)","spans":[]},{"type":"paragraph","text":"We will be making Managed MongoDB available in other regions soon. 
Please check out the release notes for the most up-to-date information on regional availability.","spans":[{"start":88,"end":101,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/release-notes/"}}]},{"type":"heading2","text":"Join us at deploy, DigitalOcean’s virtual user conference","spans":[]},{"type":"paragraph","text":"Today we have deploy, DigitalOcean’s signature user conference, which focuses on celebrating, educating, and connecting awesome builders from all over the world.","spans":[{"start":14,"end":20,"type":"hyperlink","data":{"link_type":"Web","url":"https://deploy.digitalocean.com/home"}}]},{"type":"paragraph","text":"Check out the keynote session from DigitalOcean's CEO, Yancey Spruill, in which he talks about where we're headed as a company and shares some exciting product updates. His keynote will be followed by sessions from community members, engineers, customers, and other experts that are building technologies and businesses powered by the cloud. With live Q&A and an active Discord server, there’s ample opportunity to engage and learn something new. Click here to attend the deploy conference.","spans":[{"start":14,"end":69,"type":"hyperlink","data":{"link_type":"Web","url":"https://deploy.digitalocean.com/agenda/session/552806"}},{"start":347,"end":384,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/deploy-discord"}},{"start":461,"end":489,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/deploy"}}]},{"type":"paragraph","text":"We are also launching a hackathon for DigitalOcean Managed MongoDB. Learn how you can participate, submit an app, and get a t-shirt.","spans":[{"start":24,"end":66,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/mongodb-hackathon"}}]},{"type":"paragraph","text":"We hope you will give Managed MongoDB a try. Here are some sample datasets and sample apps that you can use to kick the tires. 
Check out the docs and let us know what you think!","spans":[{"start":22,"end":43,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/databases/new?engine=mongodb"}},{"start":59,"end":90,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/do-community/mongodb-resources","target":"_blank"}},{"start":141,"end":145,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.digitalocean.com/products/databases/mongodb/"}}]},{"type":"paragraph","text":"If you’d like to have a conversation about using DigitalOcean and Managed MongoDB in your business, please feel free to contact our sales team.","spans":[{"start":120,"end":142,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/company/contact/sales/"}}]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"André Bearfield","spans":[]},{"type":"paragraph","text":"Director of Product Management","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"André Bearfield","author_image":{"dimensions":{"width":553,"height":547},"alt":"André Bearfield","copyright":null,"url":"https://images.prismic.io/www-static/fdc7c85186f0a850b04083e1d4306bd1c19772e8_andre-bearfield.png?auto=compress,format"},"_meta":{"uid":"andre-bearfield"}},"_meta":{"uid":"introducing-digitalocean-managed-mongodb"}},"featured_blog_2":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":400},"alt":"Droplet Console","copyright":null,"url":"https://images.prismic.io/www-static/710499ae-78cc-4179-afc1-15793637b200_DODX3727-790x400-logo-2.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Securely connect to Droplets with SSH key pairs using a new Droplet 
Console","spans":[]}],"blog_post_date":"2021-08-10","blog_post_content":[{"type":"paragraph","text":"The famous author Ken Blanchard once said, “Feedback is the breakfast of champions.” This is something we truly believe at DigitalOcean, and we always strive to enhance our products based on customer feedback.","spans":[]},{"type":"paragraph","text":"With this goal in mind, we are excited to introduce a new Droplet Console that will make it much easier to connect to your Droplets securely. The new Droplet Console provides one-click SSH access to your Droplets through a native-like SSH/Terminal experience. It also eliminates the need for a password or manual configuration of SSH keys. Starting today, we’re pleased to announce that the new Droplet Console is now available to all Droplet users.","spans":[]},{"type":"heading2","text":"Why you should be using Secure Shell (SSH) ","spans":[]},{"type":"paragraph","text":"Password-based security is notoriously insecure due to password fatigue and the overuse of passwords such as ‘123456’. Secure Shell, or SSH, is a network communication protocol that solves this by using passwordless solutions for encryption, enabling two computers to communicate and securely share data. At a high level, SSH works by creating cryptographic key pairs consisting of a public and private key, which are computer-generated and stored separately to ensure their security. ","spans":[{"start":80,"end":117,"type":"hyperlink","data":{"link_type":"Web","url":"https://cybernews.com/best-password-managers/most-common-passwords/"}}]},{"type":"paragraph","text":"SSH has become the default encryption protocol for many industries, but it was difficult to use SSH keys with DigitalOcean’s current Recovery (VNC) console, which is why we developed our new Droplet Console. The new Droplet Console is backed by an agent that securely supervises the key pair, while also providing one-click SSH access to our users. 
You can see the full list of features below.","spans":[]},{"type":"heading2","text":"The new Droplet Console: More time saving, less time wasting ","spans":[]},{"type":"paragraph","text":"The new Droplet Console is for everyone who is looking to build fast, secure apps and avoid hassles with SSH access & usability issues.","spans":[]},{"type":"paragraph","text":"In addition to easier SSH access, the new Droplet Console comes with:","spans":[]},{"type":"list-item","text":"Copy/paste text: Instead of typing lengthy key pairs and text manually, you can use copy/paste to save time. ","spans":[{"start":0,"end":17,"type":"strong"}]},{"type":"list-item","text":"Multi-color support: Multi-color support makes the console more useful and intuitive, and breaks the conventional standard appearance of black text on a white background. ","spans":[{"start":0,"end":41,"type":"strong"}]},{"type":"list-item","text":"Multi-language support: DigitalOcean’s new Droplet Console supports multiple languages, meaning you can now type and view any content in any language that is supported by UTF-8.","spans":[{"start":0,"end":24,"type":"strong"}]},{"type":"list-item","text":"OS/images supported: Linux distributions (Ubuntu (16.04 - 20.04), Fedora (32 & 33), Debian (9), CentOS (7.6 & 8.3), CentOS 8 Stream, Rocky Linux) and Marketplace images.","spans":[{"start":0,"end":20,"type":"strong"},{"start":150,"end":161,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/"}}]},{"type":"paragraph","text":"The new Droplet Console is available by default on any new Droplets you spin up. You can also enable it manually on older Droplets. 
Click here to learn more!","spans":[{"start":132,"end":157,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.digitalocean.com/products/droplets/how-to/connect-with-console/"}}]},{"type":"paragraph","text":"Check out this short walkthrough video that shows the new Droplet Console in action: ","spans":[]},{"type":"embed","oembed":{"type":"video","embed_url":"https://www.youtube.com/watch?v=Qt7QihVuxiE","title":"Access Your Droplet Terminal Through the Web Console","provider_name":"YouTube","thumbnail_url":"https://i.ytimg.com/vi/Qt7QihVuxiE/hqdefault.jpg","provider_url":"https://www.youtube.com/","author_name":"DigitalOcean","author_url":"https://www.youtube.com/c/Digitalocean","height":113,"width":200,"version":"1.0","thumbnail_height":360,"thumbnail_width":480,"html":"<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/Qt7QihVuxiE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"}},{"type":"paragraph","text":"We hope you’re excited about the new Droplet Console. 
You’re welcome to spin some Droplets up right now, and try out the new Droplet Console – why wait?","spans":[{"start":72,"end":103,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/droplets/new"}}]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"Harsh Banwait, Senior Product Manager","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"Harsh Banwait","author_image":{"dimensions":{"width":600,"height":399},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/e83ff690-b20c-4d88-a2b6-57e562558cd6_download.png?auto=compress,format"},"_meta":{"uid":"harsh-banwait"}},"_meta":{"uid":"new-droplet-console-ssh-support"}},"featured_blog_3":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":400},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/588e28d3-d41e-480b-937b-8c3b19201f6e_DODX3568-790x400-Blog.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"How to scale your SaaS product without breaking the bank","spans":[]}],"blog_post_date":"2021-06-22","blog_post_content":[{"type":"paragraph","text":"These days, if you are in the business of software, chances are you are delivering or plan to deliver your services using a Software-as-a-Service (SaaS) model. 
A combination of internet-based delivery, subscription-based pricing, and low-friction product experiences has made SaaS solutions valuable tools for their users, and an excellent vehicle for software builders looking to distribute their products.","spans":[]},{"type":"paragraph","text":"These factors have made SaaS solutions ubiquitous; SaaS is the largest segment in the public cloud market, and is used to provide functionality ranging from personal finance apps for consumers, to productivity software for businesses, and even tools and services for software developers themselves to compose their applications and simplify their workflows. It is also not uncommon to find micro-SaaS applications being built for specific industries such as retail, job functions such as accounting or marketing, or tasks such as event management. ","spans":[]},{"type":"paragraph","text":"The best thing about this SaaS wave has been that it has allowed a new generation of software builders to build and monetize applications and participate in the digital economy. Previously, you had to be a big company with lots of resources, name recognition, and distribution networks to successfully sell software products. Now, irrespective of whether you are a single person working on a passion project, a small team of developers in a startup, or a small and medium-sized business (SMB), the SaaS model enables you to express your ideas in the form of software and deliver them to customers anywhere in the world.","spans":[]},{"type":"heading2","text":"The unique challenges of building SaaS solutions","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Despite the opportunities that come with the widespread adoption of SaaS products, software builders still have to answer key questions in their journey to building successful SaaS products. 
Understanding which customers to target, which features to prioritize, how to price your product, and how to acquire customers are all critical questions to figure out while you are also doing the important job of actually building and operating the product. ","spans":[]},{"type":"paragraph","text":"Writing the code, testing, deployment, monitoring the usage in production, and ensuring that your apps are able to handle the additional demand when your customer base and usage grow are all essential and time-consuming tasks.","spans":[]},{"type":"paragraph","text":"Additionally, being able to test multiple ideas, pivot, and double down on the ideas that actually work is critical in early stages of SaaS development. Once growth comes, it is equally important to scale up without compromising on performance or reliability. Needless to say, all of this needs to be economically viable as well, since not everyone has the resources of large SaaS providers like Salesforce or Adobe.","spans":[]},{"type":"heading2","text":"Cloud Computing enables builders but also poses challenges","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Fortunately, for the act of building and operating your apps, cloud computing can help take some load off your shoulders. Unless you have the scale and resources of Facebook, chances are you are not going to set up your own data centers to host the computing infrastructure that powers your SaaS company. Public cloud infrastructure providers can bring great value to SaaS builders by providing on-demand computing services with usage-based pricing. However, just as legacy software companies weren't built for the SaaS model, the early (and big) cloud computing services were not optimized for the unique needs of small SaaS building teams. 
","spans":[]},{"type":"paragraph","text":"Smaller SaaS teams face challenges with large cloud computing providers, including:","spans":[]},{"type":"heading4","text":"Too many technology options","spans":[]},{"type":"paragraph","text":"There are just too many options for tech stacks on which to build your SaaS - programming languages, application development frameworks, libraries, runtime environments, architectural patterns, and deployment models - and the list is growing by the day.","spans":[]},{"type":"heading4","text":"Complexity of cloud computing services","spans":[]},{"type":"paragraph","text":"Even when you have decided on a technology stack, there is a lot of cloud vendor-specific terminology you need to learn and heavy lifting you need to do to build on the cloud, not all of which contributes to making your SaaS applications successful.","spans":[]},{"type":"heading4","text":"Unpredictable costs","spans":[]},{"type":"paragraph","text":"The experimentation necessary in early stages of SaaS development, as well as the scaling of applications required during the growth phase, call for affordable and predictable pricing from your cloud provider. The last thing SaaS teams want is surprising and indecipherable bills from your cloud provider. Unfortunately, smaller businesses often experience unpredictable costs with cloud providers who are busy serving only the large enterprises.","spans":[]},{"type":"heading2","text":"DigitalOcean provides a simple, cost effective solution for SaaS builders","spans":[]},{"type":"paragraph","text":"Fortunately, at DigitalOcean we have a laser focus on small software development teams, who are trying to build the next generation of applications. 
Today, DigitalOcean customers are already building SaaS applications which serve all kinds of customers.","spans":[{"start":191,"end":217,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/solutions/saas/"}}]},{"type":"paragraph","text":"We believe SaaS builders should focus on building apps that power their business, and not spend their valuable time on managing infrastructure. That is exactly what we have been able to enable through our intuitive products that are built for scale and reliability.","spans":[{"start":205,"end":223,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/"}}]},{"type":"list-item","text":"Vidazoo is an advertising technology company specializing in video streaming and serving. It serves video ads to thousands of websites and handles close to 10 billion requests per day. \n\n“We are as much a data company as an adtech company. Our business relies on speedy and accurate data processing at massive scale. DigitalOcean provides us the perfect set of tools to operate our SaaS business profitably, while not making us feel the need to become full time system administrators. We plan to move a lot of our apps to DigitalOcean App Platform and other fully managed products.” - Roman Svichar, CTO of Vidazoo","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://vidazoo.com/"}},{"start":187,"end":583,"type":"em"}]},{"type":"paragraph","text":"We believe in meeting customers where they are. If they already have an understanding of cloud infrastructure technologies, they should be able to leverage that knowledge and get started with our products without any further ramp up.","spans":[]},{"type":"list-item","text":"Whatfix is an enterprise SaaS provider that offers a digital adoption platform to businesses. 
The company helps enterprises gain the full value of their investments in enterprise applications by providing real-time, interactive, and contextual guidance to users of those applications. \n\n“What we really love about the DigitalOcean platform is the ease of use. We feel like we know infrastructure and can handle most of the configuration and management. What we needed from a cloud was not bells and whistles but efficiency and reliability. DigitalOcean provides us a platform to build our apps and then gets out of the way. Just how we like it.” - Achyuth Krishna, Director of Engineering of Whatfix","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://whatfix.com/blog/driving-the-future-now-were-excited-to-announce-our-90-million-series-d-funding/"}},{"start":287,"end":648,"type":"em"}]},{"type":"paragraph","text":"We understand that scaling while maintaining reliability of applications and profitability of business is important, so we provide robust solutions which minimize downtime.","spans":[]},{"type":"list-item","text":"Centra is a SaaS-based e-commerce platform for global direct-to-consumer and wholesale e-commerce brands. Centra provides a powerful e-commerce backend that lets brands build pixel-perfect, custom designed, online flagship stores. \n\n“How do we enable our customers to create differentiated online experiences? How do we ensure their e-commerce apps stay up and running at all times? How do we scale on-demand when traffic grows or new customers come in? These are the questions that we ask ourselves every day. 
Thankfully, we have a partner in DigitalOcean that provides just the platform to answer those questions enabling us to guarantee 99.9% uptime for our clients.” - Martin Jensen, CEO of Centra","spans":[{"start":0,"end":6,"type":"hyperlink","data":{"link_type":"Web","url":"https://centra.com/"}},{"start":233,"end":673,"type":"em"}]},{"type":"paragraph","text":"These are just a few examples of SaaS businesses finding success on DigitalOcean. We are constantly amazed by the creativity and innovation that software builders are utilizing our platform for. If you are interested in learning more about product updates, technical deep-dives and best practices for building SaaS products and businesses, please contact us to learn how we can help you get started. ","spans":[{"start":340,"end":357,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/migrate/?utmmedium=blog","target":"_blank"}}]},{"type":"paragraph","text":"Come build with DigitalOcean!","spans":[]},{"type":"paragraph","text":"Looking to migrate your SaaS to DigitalOcean? 
Leverage free infrastructure credits, robust training, and technical support to ensure a worry-free migration.","spans":[{"start":0,"end":156,"type":"strong"},{"start":0,"end":156,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/migrate/?utmmedium=blog","target":"_blank"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Raman Sharma","spans":[]},{"type":"paragraph","text":"Vice President, Product & Programs Marketing","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Developer Relations","_linkType":"Link.document","_meta":{"uid":"developer-relations"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"Raman Sharma","author_image":{"dimensions":{"width":512,"height":512},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/497b4b14-d192-493a-8b66-7ae176ba99f3_raman.png?auto=compress,format"},"_meta":{"uid":"raman-sharma"}},"_meta":{"uid":"how-to-scale-your-saas-product-without-breaking-the-bank"}}}}]}}},"pageContext":{"limit":12,"skip":0,"numTagPages":5,"currentPage":1,"uid":"engineering","data":[{"node":{"author":{"_linkType":"Link.document","author_name":"Jeremy Morris","author_image":{"dimensions":{"width":512,"height":512},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/5427feac-4d20-4ad9-b006-de1b6ba56b70_jeremy+morris.jpeg?auto=compress,format"},"_meta":{"uid":"jeremy-morris"}},"blog_header_image":{"dimensions":{"width":1200,"height":600},"alt":"contributing to kubernetes beginner","copyright":null,"url":"https://images.prismic.io/www-static/722b4cb0-1550-403a-841d-34c650886001_83603309-5cc1-4ac8-b282-020370af345d_kubernetes-made-for-you-hero-bg.jpeg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Contributing to open source software: Kubernetes","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Are you interested in getting involved with the Kubernetes community, but aren't sure where to start? 
This blog post aims to remove the ambiguity associated with contributing to an open source project as big as Kubernetes, while sharing some anecdotal experience of what that can look like for a beginner. ","spans":[]},{"type":"paragraph","text":"By detailing my experience as a contributor, I hope to inspire you to take that first step to begin your path as an open source contributor. You can contribute to Kubernetes regardless of your background or years of experience. Everyone's contributions are an important and valued part of the open source community. Below, I outline key steps anyone can take to become involved.","spans":[]},{"type":"heading2","text":"Why I started contributing to Kubernetes ","spans":[]},{"type":"paragraph","text":"Contributing to an open source project such as Kubernetes takes many forms: submitting code PRs, updating documentation, triaging issues, reporting bugs, improving tests, reviewing code, reviewing Kubernetes Enhancement Proposals (KEPs), and participating in Kubernetes release management. Kubernetes exists and thrives thanks to the countless hours spent by current contributors and future contributors like you. ","spans":[]},{"type":"paragraph","text":"I was first introduced to Kubernetes when I was asked to write a trade study on solutions a company I worked for could use to containerize our services and manage them with a container orchestrator. With the knowledge of what Kubernetes could do, at my next job I was able to start actually using Kubernetes when I noticed inconsistencies in the way some apps were maintained and operated, and suggested containerization as a solution. This allowed me to explore the Kubernetes repository, and while doing so I came across an issue that seemed like a good first contribution. 
","spans":[]},{"type":"paragraph","text":"I had always been interested in contributing to open source projects, and felt that if I started contributing to Kubernetes I’d get more knowledgeable about distributed systems. Nothing gets you more experience with something than writing the code for it, which is how I became involved in Kubernetes and why it’s so valuable for others to do the same. ","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/7553cf1c-a221-42e9-a6dd-7d0bdcf52d8d_kubecon2017LinuxFoundationScholarshipGroup.jpg?auto=compress,format","alt":"KubeCon ","copyright":null,"dimensions":{"width":1024,"height":684}},{"type":"heading2","text":"How to contribute to Kubernetes as a beginner","spans":[]},{"type":"paragraph","text":"Here are some of the first steps to take to start contributing to Kubernetes. ","spans":[]},{"type":"heading3","text":"1. Look for relevant documentation available for contributors","spans":[]},{"type":"paragraph","text":"Typically, when you are a new contributor to any open source project, you should look for any relevant documentation for contributors. Usually, this is in the form of a CONTRIBUTING.md file or something similar. The README at the root of a repo is also a good place to start. Any project looking to foster a community of contributors should have this information easily accessible to new contributors. Another thing to consider is the means of communication that the developers on that particular project or sub-project use. 
For example, Kubernetes relies heavily on Slack and mailing lists: subscribe to the Slack channels and email lists that interest you most, especially for the areas of Kubernetes you plan on contributing to.","spans":[{"start":556,"end":590,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/community/blob/master/sig-list.md"}}]},{"type":"paragraph","text":"As a complete beginner to Kubernetes in general, as well as a person with no experience contributing to the Kubernetes codebase, I jumped right to the CONTRIBUTING.md file. It’s well documented and pointed me right to the necessary documentation to set up my environment to begin development.","spans":[]},{"type":"heading3","text":"2. Search and filter for issues that interest you","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"list-item","text":"Finding issues to work on: Once you have your dev environment setup you’ll want to find something to work on. When I first started, after I got my environment set up, I proceeded to look at the issues in the Kubernetes repo. In GitHub, when you search for issues for a given project, there is a filtering label that can be applied to filter for “good first issue” which indicates the issue can be worked on by a beginner. For example, is:open is:issue label:\"good first issue\" in the GitHub issue search bar would provide you with a list of all open issues labeled as “good first issue”. To filter even further on for a specific Special Interest Group (SIG) such as sig/network, you’d search for is:open is:issue label:\"good first issue\" label:sig/network. 
From here I was able to find my first issue.\n\n","spans":[{"start":0,"end":27,"type":"strong"},{"start":194,"end":200,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes/issues"}},{"start":208,"end":218,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes"}},{"start":435,"end":475,"type":"strong"},{"start":435,"end":476,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22+label"}},{"start":696,"end":755,"type":"strong"},{"start":696,"end":755,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22+label%3Asig%2Fnetwork"}},{"start":786,"end":800,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes/issues/57102"}}]},{"type":"list-item","text":"Making fixes for the issues you find: After finding an issue, you should let the maintainers know your intentions of working on it. The first thing I did was ask on the GitHub issue if I could work on the issue I found, which was a PR to remove all redundant new lines being passed into Logf() functions. This involved going through code in both test/e2e and test/e2e_node, finding the newline redundancies in calls to Logf() and removing them. This task was a good exercise in learning how to make a contribution to the Kubernetes codebase as it involved me making the changes locally, making a PR up against the main branch, and addressing review feedback. \n\nThroughout the process, it was important to ask for clarification on the feedback that I didn’t understand. For example, the phrase “find and fix offenders” was confusing because I didn’t know what an offender was. But once I asked, I got a simple answer telling me that it means to remove all trailing lines throughout the e2e code. 
\n\n","spans":[{"start":0,"end":37,"type":"strong"},{"start":158,"end":181,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes/issues/57102#issuecomment-351133997"}},{"start":232,"end":234,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes/pull/57583"}},{"start":521,"end":531,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes"}},{"start":773,"end":780,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes/pull/57583#discussion_r159582267"}}]},{"type":"list-item","text":"Communication is key: Communication throughout the contribution process is extremely important and is critical to open source development. If you are stuck on a PR, not sure how to address feedback, or don’t understand the logic, ask questions. It’s best to over-communicate when it comes to being a contributor, including communicating when you need to step back from the project. Taking breaks is expected and accounted for in open source development, and communication allows for transparency and faster iteration on the work being done. By communicating clearly, you ensure everyone working on a project around the world is informed, saving time and minimizing items that get lost in translation.\n\nAs a new contributor, I felt unsure and confused when working on my first task. I find that becoming comfortable with not knowing the solution immediately is valuable in addressing issues you’ve been assigned and driving the solution forward. This doesn’t mean sitting in a dark room and solving it yourself, it means asking for help when you need clarification of the problem, when you aren’t familiar with terminology as I pointed out in my example earlier, or even when you just want some eyes on your proposed solution. 
This attitude of relying on teamwork and collaboration when needed will take you a long way in open source contribution and in tech in general. Keep in mind, as a driver of a solution you’re expected to be the collection point for all of the debugging context and information you’ve gained in attempting to solve the problem while collaborating with others. There is typically no real deadline (unless communicated otherwise) associated with these “good first issues”, so feel free to take as long as you need, making sure to constantly communicate progress on the GitHub issue and keep people in the loop, pulling others in to help as needed. If you end up discovering you no longer have the bandwidth to work on it, communicate that on the related issue and someone else will pick it up.","spans":[{"start":0,"end":21,"type":"strong"},{"start":1145,"end":1152,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes/pull/57583#discussion_r159582267"}}]},{"type":"heading3","text":"3. Stuck finding issues? Try becoming a Kubernetes release shadow","spans":[]},{"type":"paragraph","text":"Another way to contribute is through documentation updates, which is a common way for new contributors to get involved in an open source project other than contributing code changes. One interesting path that I’ve participated in recently is the Kubernetes release shadow program, a program in which those new to Kubernetes release management can take part, working on one of many different sections of the release. I worked on the Kubernetes Enhancements for 1.20. The task I was given was to review and track all Kubernetes Enhancement Proposals (KEPs), with the help of a few other shadows and a lead. This gave me a lot of insight into the KEP process and allowed me to work with quite a few contributors in the process. 
I highly recommend this path to anyone looking to jumpstart their network and impact within the Kubernetes community.","spans":[]},{"type":"heading2","text":"The benefits of becoming a Kubernetes member","spans":[]},{"type":"paragraph","text":"Becoming a Kubernetes member is a consequence of contributing frequently and working closely with at least 2 different existing members with reviewing capabilities for a particular project within the Kubernetes repo. In other words, contribute a lot to 2 different areas within the Kubernetes project, and find a person you can work with closely from each. Over time, as you gain experience and have some PRs under your belt, ask these people if they could sponsor you to become a member.","spans":[{"start":200,"end":210,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes/kubernetes"}}]},{"type":"paragraph","text":"A major benefit of becoming a member is being able to assign yourself issues and have more influence over certain areas of the code you’re working on. Another tangible benefit of being a member is receiving Kubernetes Common Vulnerabilities and Exposures (CVEs) as soon as they are recognized by the community. This was valuable to the DigitalOcean Kubernetes team as we receive information on these security vulnerabilities before the general public, allowing us to thwart undesired attempts at compromising our platform and ensuring our customers stay protected while using DOKS and other Kubernetes based products on the DigitalOcean platform.","spans":[]},{"type":"paragraph","text":"The membership I possess also presents many opportunities, like being able to co-maintain the kubernetes-sigs/cluster-api-provider-digitalocean project and being able to sponsor a coworker of mine for Kubernetes membership. 
The value of Kubernetes membership has benefited not only me, but also my team, DigitalOcean, and the DigitalOcean community at large.","spans":[{"start":94,"end":151,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean"}}]},{"type":"heading2","text":"Common obstacles on your path to becoming a Kubernetes contributor","spans":[]},{"type":"paragraph","text":"There are many obstacles that can prevent progress in contributing to Kubernetes or slow you down. The main hurdle is getting your first code contribution in. Due to the tremendous number of contributions on a daily basis and the limited number of reviewers, some PRs can sit for months or longer. The way to deal with this is to make sure you are contributing code that is actually needed. You can ensure this by tying your change to an existing issue (or creating one to get consensus from the owners of the code that it is a real issue), and by keeping the approvers/reviewers of that area of the code involved in what you plan to contribute.","spans":[]},{"type":"heading3","text":"1. Contributing code without collaborating with the community","spans":[]},{"type":"paragraph","text":"Remember, drive-by commits may not get reviewed as quickly as you expect. Maintainers of a project can only do so much and hold so much context. It’s your job as a contributor to make your PR as reviewable as possible. Provide well-written, thoughtful descriptions for your PRs. If it’s a huge change, make sure there’s agreement on the change and break it out into multiple PRs as needed. Respectfully ping the maintainers in the appropriate Slack channel if time has elapsed since you made the PR and respond to comments on your PR in a timely fashion.","spans":[]},{"type":"heading3","text":"2. Not being humble or respectful as a contributor","spans":[]},{"type":"paragraph","text":"Another obstacle I see in new contributors is ego. 
A lot of the time, an issue that is new to you isn’t new to others, so it is important to hear them out and engage in civil discourse. Don’t go into any situation thinking you have all of the necessary information to proceed with a PR; listen to others' input and take it into consideration when providing/updating your solution for whatever issue you’re working on. If people ask for updates and you don’t agree with them, ask for clarification on the suggestions until you both are satisfied with the outcome. The beauty of open source is the ability to collaborate with others and iterate on a product, leading to a net positive for the project being worked on. Remember we’re all on the same team, and don’t take things personally!
Kubernetes is a very accessible way of getting to be a part of this amazing ecosystem built on love, respect, and collaboration with one another!","spans":[]}],"blog_post_date":"2021-06-15","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"open-source-contributing-kubernetes-beginners"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Andrew Starr-Bochicchio","author_image":null,"_meta":{"uid":"asb"}},"blog_header_image":{"dimensions":{"width":1200,"height":628},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/5054199c-0f75-4879-8a8b-d845ff634d96_OpenAPI-v2-01.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Introducing the DigitalOcean OpenAPI Specification","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"When v2 of our API first entered general availability in April of 2015, it consisted mainly of features supporting Droplets and domains. Since then, DigitalOcean’s product portfolio has grown, and the surface area of our API has greatly expanded along with it. Today our API supports App Platform, databases, firewalls, Kubernetes, load balancers, and more. Providing over 200 operations, our API enables you to do just about anything you can do in our control panel programmatically.","spans":[{"start":19,"end":53,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/blog/apiv2-officially-leaves-beta/"}}]},{"type":"paragraph","text":"Keeping up with all these changes can be challenging. That’s why we’re excited to announce the release of a new tool to give you confidence when developing against our API: the DigitalOcean OpenAPI Specification.","spans":[]},{"type":"heading2","text":"What Is OpenAPI?","spans":[]},{"type":"paragraph","text":"OpenAPI is an open standard for describing APIs led by the OpenAPI Initiative. 
As the specification itself reads:","spans":[{"start":59,"end":77,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.openapis.org/"}},{"start":82,"end":106,"type":"hyperlink","data":{"link_type":"Web","url":"http://spec.openapis.org/oas/v3.0.3#introduction"}}]},{"type":"list-item","text":"The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic.","spans":[]},{"type":"list-item","text":"An OpenAPI definition can then be used by documentation generation tools to display the API, code generation tools to generate servers and clients in various programming languages, testing tools, and many other use cases.","spans":[]},{"type":"paragraph","text":"Internally, an OpenAPI specification provides engineering teams at DigitalOcean a common language to define and collaborate on API design. It also defines a formal contract that can be tested and monitored, ensuring that our API remains stable. By publicly releasing the specification, it provides customers with new ways to interact with our API.","spans":[]},{"type":"heading2","text":"Open Source","spans":[]},{"type":"paragraph","text":"The source files for our specification are now available on GitHub. The repository also includes tooling to work with the files. 
For example, to check out the repository and compile the specification into a single file, run:","spans":[{"start":47,"end":66,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/openapi"}}]},{"type":"preformatted","text":"git clone https://github.com/digitalocean/openapi.git","spans":[]},{"type":"preformatted","text":"cd openapi/","spans":[]},{"type":"preformatted","text":"make bundle","spans":[]},{"type":"paragraph","text":"You can use the specification to generate Postman Collections, mock servers, and API clients in languages we do not yet officially support.","spans":[]},{"type":"heading2","text":"Feedback","spans":[]},{"type":"paragraph","text":"The specification is currently in Early Availability. While the specification is accurate, it is still under active development. The structure of this repository may continue to evolve. If you encounter any inaccuracies or have feedback on how it can better suit your use case, please let us know by opening a GitHub issue.","spans":[{"start":300,"end":322,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/apiv2-openapi/issues/new"}}]},{"type":"paragraph","text":"How do you hope to use the specification? What kind of tools would you like to see for working with the DigitalOcean API? 
Let us know in the comments below!","spans":[]}],"blog_post_date":"2021-03-30","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"introducing-the-digitalocean-openapi-specification"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Armando Migliaccio","author_image":null,"_meta":{"uid":"armando_migliaccio"}},"blog_header_image":{"dimensions":{"width":1200,"height":600},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/ebe1bdb1-55ad-4b26-a4ca-b3429aa2855b_DODX-1941-header-option-4.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"A glimpse into network availability","spans":[]}],"blog_post_content":[{"type":"heading2","text":"A simple yet effective approach to network monitoring","spans":[]},{"type":"paragraph","text":"As a Cloud Service provider, DigitalOcean takes a lot of care in designing and implementing infrastructure and services that are both fault tolerant and highly available. We make sure that services are well monitored so that when failures do occur, we can anticipate and minimize the impact on our customers. ","spans":[{"start":207,"end":216,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/blog/observability-and-metrics/"}}]},{"type":"paragraph","text":"The same guiding principles apply to monitoring Droplet networking: while we pride ourselves on the simplicity of the solutions we offer to our customers, simplicity is a core design principle we take seriously, especially when looking at the state of networking infrastructure (which is known to be complex and multi-dimensional). 
","spans":[]},{"type":"paragraph","text":"In this blog post, we share the journey that took us from realization to revelation: we will go through the steps that have taken us from an incomplete picture of the network state experienced by customer Droplets to a near real-time EKG-like signal for each and every single Droplet that runs on our infrastructure.","spans":[{"start":233,"end":234,"type":"em"}]},{"type":"heading2","text":"The premise","spans":[]},{"type":"paragraph","text":"When it comes to our ability to look into the state of our global network, we realized that a preliminary step towards a more scalable and manageable architecture was a necessary prerequisite to having a solid strategy in place for monitoring the state of our network. It is common knowledge that layer-2 topologies, especially large ones, are inherently hard to monitor, and as we recently transformed our data center networking to more closely resemble layer-3 fabrics, it suddenly became easier to understand packets as they flow through our physical and virtual pipes to and from their targets. ","spans":[{"start":391,"end":402,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/blog/scaling-droplet-public-networking/"}}]},{"type":"paragraph","text":"When we say easier, we do not necessarily mean that there were no challenges left for us to tackle; network traffic takes place at many layers of the well-known ISO/OSI stack, involving multiple application protocols and distributed endpoints. There are literally hundreds (if not more) of companies out there whose core business is to provide network monitoring solutions for small and large enterprises, so why did we believe that none of them could help us form a crisp picture of the state of our network? Because, as cloud providers, we typically have additional challenges given the scale, and the level of customization employed to achieve such scale. 
","spans":[]},{"type":"paragraph","text":"It is noteworthy that we do leverage a number of such solutions already, but the cost of acquiring and operating a monitoring solution to achieve high fidelity is just as important to us. As a cloud provider, we have an intimate knowledge of how our network operates: we are the ones in charge of deploying and maintaining the hardware, software, and the automation required to literally stitch the logical path to and from our droplets as they come to life in our infrastructure. That puts us in an incredibly compelling vantage point when it comes to instrumenting the network.","spans":[]},{"type":"paragraph","text":"To understand what we mean by that, let us consider our attempt at capturing the cloud networking universe the way we see it. One could say that when it comes to delivering packets in such a world, there can be an awful lot of things that can go wrong. As each dimension is not independent of the others, this only makes matters more complicated. ","spans":[{"start":81,"end":106,"type":"em"},{"start":81,"end":106,"type":"strong"}]},{"type":"image","url":"https://images.prismic.io/www-static/31cbe501-729f-4e6b-96f5-a65bc1075c7c_The+Universe+of+Cloud+Networking.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1048,"height":1193}},{"type":"paragraph","text":"We initially focus on how Droplets, one of our core product offerings, connect to the internet via their public address (it being IPv4, IPv6 or floating IP). That means that packets have to traverse a number of stacks: the Droplet virtualization stack, the Droplet’s OS networking stack itself, and so on. As packets flow through a layer-3 fabric, there are routing decisions involved at each step, and the forwarding plane must be programmed in advance for these decisions to be taken correctly. 
","spans":[{"start":136,"end":140,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/networking/ipv6/"}},{"start":144,"end":155,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/networking/floating-ips/"}}]},{"type":"paragraph","text":"If we manage to efficiently introduce instrumentation points in each of these steps, and we can do that without overhead and without perturbing the path under instrumentation, we can then coalesce the collected data into something succinct that verifies our customers’ expectations of best-in-class cloud infrastructure are being met.","spans":[]},{"type":"heading2","text":"The journey toward our solution","spans":[]},{"type":"paragraph","text":"To explain how we progressed on our journey towards a solution, let us take a step back and attempt to define what we mean by the status of the network. Also, as you go deeper in this section, you may see it gets heavy on the math side: do not let that scare you! All we have attempted to do is break the complexity down into smaller, more tractable problems that are easier to reason about.","spans":[]},{"type":"heading3","text":"Reliability and Availability","spans":[]},{"type":"paragraph","text":"Reliability, according to the ANSI Standard Glossary of Software Engineering Terminology, is defined as the ability of a system or component to perform its required functions under stated conditions for a specified period of time. Availability is defined as the degree to which a system or component is operational and accessible when required for use. 
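As a toy illustration of the availability definition (our own example, not part of the ANSI glossary), availability over an observation window can be estimated as the fraction of sampled instants at which the status function is 1:

```python
# Toy availability estimate (illustrative only): the status function X(t)
# sampled once per second over a ten-second window; 0 means unavailable.
samples = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
availability = sum(samples) / len(samples)
print(availability)  # 0.9
```

By the same arithmetic, a Droplet observed as up for 999 of every 1,000 sampled seconds would measure 99.9% available over that window. 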
","spans":[{"start":0,"end":12,"type":"em"},{"start":103,"end":104,"type":"em"},{"start":104,"end":229,"type":"strong"},{"start":230,"end":243,"type":"em"},{"start":257,"end":258,"type":"em"},{"start":258,"end":351,"type":"strong"}]},{"type":"paragraph","text":"While both indicators can be expressed in the form of a probability function, the subtle difference between the two is that reliability factors in the aspect of specification, while availability does not. In other words, one can say that:","spans":[]},{"type":"paragraph","text":"A reliable system is also available, but an available system is not necessarily reliable. ","spans":[{"start":0,"end":90,"type":"em"},{"start":0,"end":90,"type":"strong"}]},{"type":"paragraph","text":"Now, if we look at these definitions in the context of networking, and in particular in the context of cloud networking at DigitalOcean, we could say that in order to measure network reliability we first need a specification of what we consider the correct conditions under which the network is deemed reliable. 
As that implies the aspect of performance (latency, throughput, and jitter), that is a much bigger problem in itself and best left for another blog post.","spans":[]},{"type":"heading3","text":"Mastering the meaning of Availability","spans":[]},{"type":"paragraph","text":"As it made sense for us to focus on measuring network availability first, we know that availability is commonly defined as the probability that a status function X(t) is 1 at time t > 0:","spans":[{"start":46,"end":66,"type":"em"},{"start":46,"end":66,"type":"strong"},{"start":112,"end":119,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/Availability"}},{"start":161,"end":166,"type":"em"}]},{"type":"image","url":"https://images.prismic.io/www-static/7625df20-6352-4b7d-aa19-4f07eb390b86_Image1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":592,"height":202}},{"type":"paragraph","text":"The evaluation that “the system functions at time t” is the result of the execution of a number of finite and deterministic steps performed on the system under observation at time t. X(t) is therefore a boolean function, and boolean functions are easy to compute, right? The hard part then is to measure such a status function, taking into account the complexity presented in the cloud universe shown before. In abstract terms, one attempt at defining such a status function can be the following:","spans":[{"start":21,"end":51,"type":"em"}]},{"type":"image","url":"https://images.prismic.io/www-static/cf8cc118-59ad-47b7-b1d3-c46c6887e5a8_CodeCogsEqn.gif?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":103,"height":39}},{"type":"paragraph","text":"𝞪 ranges over the subsystems into which the overall cloud networking universe can be broken down. 
But what does the above formula mean, exactly?","spans":[]},{"type":"paragraph","text":"The next paragraph will make that clear.","spans":[]},{"type":"heading3","text":"Plain English","spans":[]},{"type":"paragraph","text":"The status function X is the combination of status functions for each of the elements 𝞪 that make up the cloud networking universe.","spans":[{"start":0,"end":131,"type":"em"},{"start":0,"end":131,"type":"strong"}]},{"type":"paragraph","text":"This product formula can be more or less accurate depending on how many independent elements of the cloud networking universe are known and efficiently computable in near real-time. For instance, when focusing on DigitalOcean’s Droplet network connectivity, there are a number of status functions that we looked to implement:","spans":[]},{"type":"list-item","text":"𝚪(𝞪) = Software Networking, namely the active presence of OpenFlow rules (Open vSwitch is a foundational open source component in use at DigitalOcean) that are the result of the combination of services aimed at providing connectivity to the droplet public interface, as well as the operation of service daemons involved in the processing of flow rules. For instance, these may entail flows that enable all the use cases associated with public networking, namely v4 connectivity, v6 connectivity and FLIP connectivity (optionally), and all the access-level services that make v4 and v6 connectivity functional like DHCP, ARP, NDP, ICMP, or metadata access. If this status function is 0, then there is no way that anything can pass through the droplet OS networking stack correctly.","spans":[{"start":8,"end":28,"type":"strong"},{"start":76,"end":88,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.openvswitch.org/"}}]},{"type":"list-item","text":"𝚪(𝞪) = Hypervisor vSwitch, namely that the datapath is operational, i.e. the vSwitch kernel module is indeed passing packets to and from userspace. 
For example, this may require that periodic probing be performed to validate, above all else, that well-known traffic is processed correctly (e.g. ARP, NDP, DHCP, ICMP).","spans":[{"start":9,"end":27,"type":"strong"}]},{"type":"list-item","text":"𝚪(𝞪) = Hypervisor OS networking stack, namely that the hypervisor is connected to the networking fabric: for Layer-3 enabled data centers, this means that the HV-as-a-router is reachable for both the IPv4 and IPv6 protocol families on the respective data center VLANs.","spans":[{"start":9,"end":39,"type":"strong"}]},{"type":"list-item","text":"𝚪(𝞪) = Host Route advertisement, namely the existence of host route advertisement(s) for the droplet in the region’s RIB (routing information base), with the next-hop pointing to the hypervisor where the droplet is running, which implies that packets are meant to be routed to the HV (barring other networking misconfigurations or failures).","spans":[{"start":9,"end":42,"type":"strong"}]},{"type":"paragraph","text":"As we found ways to measure, record, and export telemetry data associated with each of these functions, we were then able to distill that into a simple indicator that plots, over time, the level of availability experienced by each customer Droplet. The snapshot below shows the network availability for the public IPv4 path of a real customer droplet that experienced some downtime due to a failed software upgrade on our hypervisors. As the Droplet was evacuated, its network availability was promptly restored. 
The monitoring solution we put in place was able to catch the failure in the act and supply our support team with near real-time data to assess and mitigate the outage.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/1fff9f2f-6d2b-4754-8348-1d9ca1818c41_Droplet+availability.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1600,"height":795}},{"type":"paragraph","text":"The versatility of this tool, built on standard and open source technologies, not only gives our support team better visibility during an outage, but also helps across the entire organization as we become more aware of the true potential of the underlying data, which can be drilled down by region, hypervisor, and Droplet, as well as rolled up globally.","spans":[]},{"type":"heading2","text":"Final considerations","spans":[]},{"type":"paragraph","text":"The proposed formalization can be seen as an attempted factorization of the complex problem of network availability in the cloud: rather than looking at networking end-to-end, the proposed approach aims at breaking down the various elements that affect network availability into smaller, more tractable problems that are addressed individually. 
This has a number of positive implications: i) it helped limit the engineering effort to deliver a minimum viable solution, ii) it allowed us to iterate through consecutive levels of refinement, iii) it helped us handle the scale at which our infrastructure operates, and last but not least, iv) it helped us deliver value to our customers fast!","spans":[]}],"blog_post_date":"2021-02-11","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"a-glimpse-into-network-availability"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Billie Cleek","author_image":null,"_meta":{"uid":"billie_cleek"}},"blog_header_image":{"dimensions":{"width":790,"height":395},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/31b4724c-9cfe-4d0c-ad21-ca8e10bb7925_GTA+Blog+image.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"GTA: Detecting affected dependent Go packages","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Today we are announcing the open sourcing of gta, which we use to understand the downstream dependencies of Go packages changed in pull requests to our monorepo, cthulhu. Technically, gta stands for Go Test Auto, but a more proper name might be Go Transitive Analysis. ","spans":[{"start":28,"end":48,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitalocean/gta"}}]},{"type":"paragraph","text":"In this article, we'll go through the primary use case for gta, its options, and how it can improve build times on feature branches by targeting only packages impacted by the changes on the feature branch.","spans":[]},{"type":"paragraph","text":"Matt Layher first introduced gta in his blog article about cthulhu, where he discussed the motivation and positive impact that gta had on DigitalOcean's build times of monorepo pull requests. 
In short, gta compares the current branch against its merge base with the destination branch to determine what has changed in the branch. It then calculates every package that transitively depends on those changes and outputs the import paths of all the affected packages.","spans":[{"start":40,"end":66,"type":"hyperlink","data":{"link_type":"Web","url":"https://blog.digitalocean.com/cthulhu-organizing-go-code-in-a-scalable-repo/"}}]},{"type":"paragraph","text":"After experiencing slow and unreliable builds, one of our engineers, Justin Hines, set out to solve the problem once and for all. After a few hours of work, he authored a build tool called gta, designed to inspect the git history to determine which files changed between the merge base of the destination branch and a feature branch. It uses this information to determine which packages must be tested for a given build, including packages that import the changed package.","spans":[]},{"type":"paragraph","text":"As an example, suppose a change is committed which modifies a package, such as: do/teams/example/droplet. Suppose this package is imported by another package: do/teams/example/hypervisor. Gta is used to inspect the git history and determine that both of these packages must be tested, although only the first package was changed.","spans":[{"start":80,"end":104,"type":"em"},{"start":80,"end":104,"type":"strong"},{"start":159,"end":186,"type":"em"},{"start":159,"end":186,"type":"strong"}]},{"type":"paragraph","text":"The introduction of gta into our CI build process dramatically reduced the amount of time taken by builds. When gta was introduced in early 2016, the average build time dropped from 20 minutes to 2-3 minutes! 
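The affected-package computation can be sketched as a reverse transitive closure over the import graph. This is a simplified illustration of the idea, not gta's actual implementation:

```python
# Simplified sketch: given a map from each package to the packages that
# import it, compute every package affected by a set of changed packages.
from collections import deque

def affected_packages(importers, changed):
    """Return the changed packages plus all of their transitive dependents."""
    affected = set(changed)
    queue = deque(changed)
    while queue:
        pkg = queue.popleft()
        for dependent in importers.get(pkg, ()):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# Mirrors the example later in the post: a change to .../droplet also
# requires testing .../hypervisor, which imports it.
importers = {"do/teams/example/droplet": ["do/teams/example/hypervisor"]}
```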
This tool is now used almost everywhere in our build pipeline, including static analysis checks, code compilation and testing, and artifact builds and deployment.","spans":[{"start":182,"end":208,"type":"em"}]},{"type":"paragraph","text":"There are cases where building everything is still useful regardless of which files have actually changed. To support that use case, our build pipelines will bypass gta when either the branch being tested has -force-test anywhere in its name or the pull request has a force-test label, restoring the old default behavior of “build everything for every change.”","spans":[{"start":221,"end":232,"type":"em"},{"start":221,"end":232,"type":"strong"},{"start":280,"end":290,"type":"em"},{"start":280,"end":290,"type":"strong"}]},{"type":"heading3","text":"input","spans":[]},{"type":"paragraph","text":"Gta needs a list of changed files. In the usual case, gta uses git to determine which files have changed by running git diff --name-only --no-renames. It also supports a flag, --changed-files, to provide a file that contains a newline-separated list of absolute paths of changed files for cases where the file list needs to be prefiltered.","spans":[{"start":116,"end":149,"type":"em"},{"start":116,"end":149,"type":"strong"}]},{"type":"heading3","text":"analysis","spans":[]},{"type":"paragraph","text":"The --tags flag is a comma-separated list of build tags to consider satisfied while analyzing the changes. You will be familiar with this if you rely on build constraints to build your packages.","spans":[{"start":4,"end":10,"type":"em"},{"start":4,"end":10,"type":"strong"}]},{"type":"heading3","text":"output","spans":[]},{"type":"paragraph","text":"The --include flag is used as a filter to control which packages are output. Its value is expected to be a comma-separated list of package path prefixes that must match an affected package's import path. 
Go developers will be familiar with this concept; gta essentially appends ... to each of the entries in the comma-separated list. A value of net/ would cause gta to output any affected package whose import path begins with net/ (e.g. net/http, net/httputil, or net/url).","spans":[{"start":4,"end":13,"type":"em"},{"start":4,"end":13,"type":"strong"},{"start":431,"end":435,"type":"em"},{"start":431,"end":435,"type":"strong"},{"start":442,"end":450,"type":"em"},{"start":442,"end":450,"type":"strong"},{"start":452,"end":464,"type":"em"},{"start":452,"end":464,"type":"strong"},{"start":469,"end":476,"type":"em"},{"start":469,"end":476,"type":"strong"}]},{"type":"paragraph","text":"Two flags, --buildable-only and --json, are used to control the output. The former, --buildable-only, is a boolean flag that cannot be enabled together with --json. Because --buildable-only is on by default, it must be explicitly set to false when --json is used.","spans":[{"start":11,"end":27,"type":"em"},{"start":11,"end":27,"type":"strong"},{"start":32,"end":38,"type":"em"},{"start":32,"end":38,"type":"strong"},{"start":84,"end":100,"type":"em"},{"start":84,"end":100,"type":"strong"},{"start":143,"end":149,"type":"em"},{"start":143,"end":149,"type":"strong"},{"start":165,"end":181,"type":"em"},{"start":165,"end":181,"type":"strong"},{"start":240,"end":245,"type":"em"},{"start":240,"end":245,"type":"strong"}]},{"type":"paragraph","text":"The --buildable-only flag causes gta to output a newline-separated list of buildable packages that were affected. This flag will elide any packages that were fully deleted or that were fully excluded by build constraints (i.e. --tags). 
The latter, --json, outputs a JSON object that fully describes the changes, including deleted packages.","spans":[{"start":4,"end":20,"type":"em"},{"start":4,"end":20,"type":"strong"},{"start":248,"end":254,"type":"em"},{"start":248,"end":254,"type":"strong"}]},{"type":"paragraph","text":"When --json is used, the output will be a JSON object with three properties: dependencies, changes, and all_changes. When piped to jq, gta's JSON output can be transformed as needed.","spans":[{"start":5,"end":11,"type":"em"},{"start":5,"end":11,"type":"strong"},{"start":77,"end":89,"type":"em"},{"start":77,"end":89,"type":"strong"},{"start":91,"end":98,"type":"em"},{"start":91,"end":98,"type":"strong"},{"start":104,"end":115,"type":"em"},{"start":104,"end":115,"type":"strong"},{"start":131,"end":133,"type":"hyperlink","data":{"link_type":"Web","url":"https://stedolan.github.io/jq/"}}]},{"type":"paragraph","text":"The dependencies property is a JSON object whose keys are the import paths of packages that have changed. Each key's value is a JSON array of strings whose values are import paths of the packages dependent on the package identified in the key. The changes property is a JSON array of strings whose values are the import paths of the packages that have changed. The final property, all_changes, is a JSON array of strings whose values are the import paths of all packages affected by changes.","spans":[{"start":4,"end":16,"type":"em"},{"start":4,"end":16,"type":"strong"},{"start":248,"end":255,"type":"em"},{"start":248,"end":255,"type":"strong"},{"start":381,"end":392,"type":"em"},{"start":381,"end":392,"type":"strong"}]},{"type":"paragraph","text":"The --merge and -base flags are used to control the left-hand side of the git diff operation. The former, --merge, will cause gta to use the most recent merge commit on the current branch as the left-hand side. 
The latter, -base, will cause gta to use the provided git revision as the left-hand side.","spans":[{"start":4,"end":11,"type":"em"},{"start":4,"end":11,"type":"strong"},{"start":16,"end":21,"type":"em"},{"start":16,"end":21,"type":"strong"},{"start":74,"end":82,"type":"em"},{"start":74,"end":82,"type":"strong"},{"start":106,"end":113,"type":"em"},{"start":106,"end":113,"type":"strong"},{"start":223,"end":228,"type":"em"},{"start":223,"end":228,"type":"strong"}]},{"type":"heading3","text":"gotchas","spans":[]},{"type":"paragraph","text":"Gta assumes that the source control system is git. It is unlikely that other systems will be supported. The --changed-files flag can be used to provide a list of files to inspect and completely skip the git operations in gta.","spans":[{"start":108,"end":123,"type":"em"},{"start":108,"end":123,"type":"strong"}]},{"type":"paragraph","text":"Gta will consider a package to have changed even when none of the changed files in the directory are Go files; as long as there is a valid Go package in the directory, gta will consider that package to have been changed. This is intentional: it helps ensure that if tests use those files or go generate needs to be run, the tests or build scripts can be informed of the package change. We believe the tradeoff of sometimes being overly aggressive is worth the practical guarantee that it provides.","spans":[{"start":291,"end":302,"type":"em"},{"start":291,"end":302,"type":"strong"}]},{"type":"paragraph","text":"Gta does not report a package as having changed if files in its testdata directory have changed. 
For consistency with how non-Go files in a package directory are handled, we are reconsidering how changed files in a testdata directory should affect gta's output.","spans":[{"start":64,"end":72,"type":"em"},{"start":64,"end":72,"type":"strong"},{"start":215,"end":223,"type":"em"},{"start":215,"end":223,"type":"strong"}]},{"type":"paragraph","text":"To get the full benefit of using gta to reduce build times, it is important to structure your Go packages efficiently. When possible, put interface definitions in a separate package from implementations, program against the interfaces, and reference the implementations of those interfaces in main packages. This is not always practical or desirable; the important thing is to design your package layout thoughtfully and be aware that some package changes will necessarily affect a large number of dependents.","spans":[]},{"type":"heading3","text":"Conclusion","spans":[]},{"type":"paragraph","text":"DigitalOcean was able to dramatically reduce the time required to build and test a pull request while still ensuring complete analysis and testing by focusing only on the packages that are affected by the changes in the pull request. Thanks to Go's excellent support for static analysis, gta is able to determine which packages are affected by those changed packages with a high degree of confidence. We hope gta will be able to streamline your builds, too.","spans":[]},{"type":"paragraph","text":"Billie Cleek is a Staff Engineer in the PaaS group where he supports teams building DigitalOcean's PaaS product line and  internal tools to provide a consistent deployment surface for DigitalOcean's microservices. In his spare time, Billie is the maintainer of vim-go, infrequent contributor to other open source projects, and can be found working on his 100 year old house, sailing, or in the forests of the Pacific Northwest regardless of the weather. 
You may also find Billie on GitHub and Twitter.","spans":[{"start":0,"end":501,"type":"em"},{"start":482,"end":488,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/bhcleek"}},{"start":493,"end":500,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/bhcleek"}}]}],"blog_post_date":"2021-01-12","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"gta-detecting-affected-dependent-go-packages"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Phil Dougherty","author_image":{"dimensions":{"width":573,"height":557},"alt":"Phil Dougherty","copyright":null,"url":"https://images.prismic.io/www-static/ef89c36114b5e1872e8de0b79eb679b9be5b3765_phil.png?auto=compress,format"},"_meta":{"uid":"phil_dougherty"}},"blog_header_image":{"dimensions":{"width":2880,"height":1200},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/499858ce-04e8-4c5b-9329-a735976b3cf5_App-Platform-bg.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Build component-based apps with DigitalOcean App Platform","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Software development and deployment best practices continue to evolve at a rapid pace. It can be challenging to understand whether you are making the right choices to ensure that you’re going to deliver a great experience for your end users while maintaining a workflow that keeps your team unblocked and productive. Traditional Platform as a Service (PaaS) offerings make it easy and cost effective to get started, but as your application begins to grow in complexity and scale, and your needs from the offering become more diverse, it can become difficult to manage. 
Taking a component-based approach to your architecture can go a long way to ease this burden.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"The Existing Approaches","spans":[]},{"type":"paragraph","text":"As a developer working on a new project, you’ll face the decision of picking an application architecture and deciding on the frameworks, tooling, and services you’ll utilize to be successful. It can often feel overwhelming to make sense of all of the options and decide on the right approach. Should you be building a 12 factor app? Should you be focused on building with the JAM stack? Are you backing yourself into a corner if you decide on one over the other? What if your needs change in six months, or business takes you in a new direction with a new set of requirements? ","spans":[]},{"type":"paragraph","text":"With component-based applications you can layer on the building blocks of all of these approaches in a succinct specification that is easy to understand, manage, and expand upon. ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"The Power of Component-Based Apps","spans":[]},{"type":"paragraph","text":"As your applications evolve, their structure often needs to as well. Component-based apps encourage the building of modular and loosely coupled parts that enable independent scaling, management, and maintenance of the various pieces of software needed to deliver your application. Modern apps are typically made up of a static single page application  (SPA) hosted via a content delivery network (CDN), a backing set of dynamic APIs, and a database. 
Adopting a component-based design lets members of your team focus on the parts of the app that are most important at that moment, while retaining flexibility and accelerating how quickly they can iterate.","spans":[]},{"type":"paragraph","text":"An example of a platform that enables the creation and management of component-based apps is DigitalOcean App Platform. The app specification allows for adding component building blocks as they are needed. ","spans":[]},{"type":"paragraph","text":"An example of a simple application that only contains a CDN-backed static site. ","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/3dda9a43-e1a7-4d2f-a6c5-6a1dc012b07c_Phil_blog1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":934,"height":288}},{"type":"paragraph","text":"Perhaps I also want to deploy my golang based API that my static site can utilize to serve up dynamic content. If I was hosting on most platforms, I would be forced to take a look at potentially adding another hosting platform in order to run my API. In a component-based platform, I can simply expand on my app specification declaration to define my dynamic service. ","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/c9a425fc-e16a-4274-a78f-22d06d99b290_Phil_blog2.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":936,"height":615}},{"type":"paragraph","text":"Let’s take a look at some of the various components that make up a modern app and how you might leverage them as your application evolves. ","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/42453250-63ca-498a-8077-42b00eb5f543_Phil_blog3.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":919,"height":448}},{"type":"heading2","text":"Static Sites","spans":[]},{"type":"paragraph","text":"As you’re just getting started building your application, it often makes sense to deploy a static website. 
Perhaps this is a landing page or marketing website that you are using to gauge interest in your idea before you fully commit to building it out, or maybe you’re already well underway when it comes to writing code, and you’re looking for a cost effective way to deploy your SPA. This type of component is typically delivered to end users via a CDN to ensure the fastest response time no matter where the user is located globally. ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Services","spans":[]},{"type":"paragraph","text":"As your needs evolve and you need to move from a strictly static website to something more dynamic, you will want to deploy an API that your static site can interact with. With services, you are able to deploy long running internet facing web services. Services can be written in many different programming languages and frameworks. Once your service is built and running, resizing the service vertically to add more resources, or scaling out horizontally is a simple way to increase the amount of horsepower and capacity available to serve your end users. Services are a great way to deploy an API or any other supporting service that your SPA running as a static site can leverage.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Databases","spans":[]},{"type":"paragraph","text":"Services are meant to handle the task of acting as a backing API that your static SPA can consume to provide dynamic functionality. In order to provide that dynamic content, services will need to be hooked up to a database where all of that data is held. There are many options available for databases, some of the most commonly used being MongoDB, PostgreSQL, and MySQL. 
Being able to scale out your database as your application grows is important to ensure that your services can quickly get access to the data needed to support your end users.","spans":[]},{"type":"paragraph","text":"Just as we saw in the previous sections, if I need a database to back my API, I do not need to go hunting for a managed database provider to host it for me; I can simply update my app specification. Environment variables will be automatically injected into my other components so that they can easily access it.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/b1891cf7-2bac-46be-a62b-68899188b3e0_Phil_blog4.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":934,"height":261}},{"type":"heading2","text":"Workers","spans":[]},{"type":"paragraph","text":"Now that you have a SPA running, dynamic services powering an API, and a database backing it, it often becomes a necessity to have a type of service that is not internet accessible, but runs in the background doing processing or handling interactions with the database. This is where workers come in very handy. Workers can be scaled vertically and horizontally just like a regular service, but cannot be accessed by the public internet and run internally to your app. 
Workers can be used to process various records in your database, or to handle populating your database with data fetched from third party APIs or from records that have been queued for batch processing.","spans":[]},{"type":"paragraph","text":"Workers can be added to your app specification as well quite easily!","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/71be639b-fbad-41cc-abe2-d13d39bdd793_Phil_blog5.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":930,"height":313}},{"type":"heading2","text":"Jobs","spans":[]},{"type":"paragraph","text":"Sometimes it is necessary to kick off a one-off task, or a scheduled task, to run a script or make a change to some part of an application. Jobs are designed for just that purpose. Jobs can typically support both pre and post-deploy hooks, which are great for making a change before or after a new deployment of your code is rolled out. For example, these types of jobs can be used to handle a database migration during your software deployment. Scheduled jobs are similar to a Linux cron job and are configured to run on a schedule you set to handle housekeeping or other common repeatable tasks.","spans":[]},{"type":"paragraph","text":"Jobs are defined as follows in the app specification.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/b65ad262-d43f-43d5-9d14-4ef7670c69b4_Phil_blog6.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":933,"height":346}},{"type":"paragraph","text":"With these sets of components it’s possible to go from an extremely simple static site, to a highly complex and scalable environment designed to help you bring your ideas to market faster and delight your users. ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Component-Based Apps with App Platform","spans":[]},{"type":"paragraph","text":"App Platform was developed from the ground up to be as open as possible. 
That is why when we were designing the product, we made sure to leverage as many industry standard cloud-native technologies as possible. The core of App Platform is built on top of a fleet of multi-tenant DigitalOcean Kubernetes clusters with gVisor for isolation, utilizes DigitalOcean Container Registry, and layers on tools such as Kaniko for image builds, Fluent Bit for logging, Prometheus for metrics and alerting, and so much more. App Platform has been designed to be simple to start with but able to grow with you as your business scales by fully adhering to building modern component based apps.","spans":[{"start":0,"end":12,"type":"hyperlink","data":{"link_type":"Web","url":"https://try.digitalocean.com/app-platform/?utm_medium=sponsorship&utm_source=stackshare&utm_campaign=global_app-platform_featured-post_en&utm_content=fto_100"}},{"start":279,"end":302,"type":"hyperlink","data":{"link_type":"Web","url":"https://try.digitalocean.com/kubernetes-in-minutes/?utm_medium=sponsorship&utm_source=stackshare&utm_campaign=global_app-platform_featured-post_en&utm_content=fto_100"}},{"start":348,"end":379,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/container-registry/?utm_medium=sponsorship&utm_source=stackshare&utm_campaign=global_app-platform_featured-post_en&utm_content=product"}}]},{"type":"paragraph","text":"Everything in the App Platform is built around a declarative spec that defines the applications’ desired state. This makes it simple to clone, reuse, and share apps easily, and also makes it easy to maintain parity between staging and production environments. The application spec can be either created from scratch, or can be downloaded from the Setting tab within your app. Once you have your app spec available, you have all of the features that are present in the UI available to you from the command line interface, as well as the API. 
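As a hedged sketch of what such a spec can look like (component names and values here are illustrative; consult the current app spec reference for the exact schema), a single spec can combine the component types discussed above:

```yaml
# Illustrative app spec sketch; the top-level sections mirror the
# component types discussed above, but all names and repos are hypothetical.
name: sample-component-app
region: nyc
static_sites:
  - name: frontend           # CDN-backed SPA
    github:
      repo: example/frontend
      branch: main
services:
  - name: api                # long-running, internet-facing web service
    github:
      repo: example/api
      branch: main
    http_port: 8080
    instance_count: 2
workers:
  - name: background-worker  # internal only, no public ingress
    github:
      repo: example/worker
      branch: main
jobs:
  - name: migrate            # pre-deploy hook, e.g. a database migration
    kind: PRE_DEPLOY
databases:
  - name: db
    engine: PG
```

Starting from a spec like this, components can be added or removed incrementally as the app evolves, rather than re-platforming.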
","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"App Platform is live now","spans":[]},{"type":"paragraph","text":"App Platform is now generally available, and we cannot wait to see the awesome things that users create with the building blocks that we have made available. Make sure to let us know what you think!","spans":[{"start":0,"end":12,"type":"hyperlink","data":{"link_type":"Web","url":"https://try.digitalocean.com/app-platform/?utm_medium=sponsorship&utm_source=stackshare&utm_campaign=global_app-platform_featured-post_en&utm_content=fto_100"}},{"start":171,"end":197,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/questions/new?tags=Digitalocean%20App%20Platform"}}]},{"type":"paragraph","text":"We are also running an App Platform Hackathon in partnership with DEV.to. Build an app using App Platform and get an opportunity to win some seriously sweet prizes (e.g. $2,000 USD gift card or equivalent, a Zoom meet-and-greet with our CEO, Yancey Spruill, and of course some cool swag!) The hackathon ends on Jan 10th and we hope you participate in it. 
","spans":[{"start":23,"end":72,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/hackathon"}},{"start":233,"end":256,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/yanceyspruill"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"Phil Dougherty,","spans":[]},{"type":"paragraph","text":"Senior Product Manager","spans":[]}],"blog_post_date":"2020-12-15","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"build-component-based-apps-with-digitalocean-app-platform"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Kevin Wei","author_image":null,"_meta":{"uid":"kevin-wei"}},"blog_header_image":{"dimensions":{"width":790,"height":395},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/4e011bdf-a783-4241-9a73-76a2b3d8996d_DODX-1690-Blog-image.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"How startups can overcome obstacles in their cloud journey","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Today’s startups are confronting a host of unique technical and business challenges, which have only been exacerbated by the COVID-19 crisis. Over the past few months, we’ve set out to learn more about the barriers that startups are currently facing. And, because we know how hard it is to build a business from scratch – after all, we did it ourselves – we want to share some of our findings in an effort to make building your business easier.","spans":[]},{"type":"paragraph","text":"In August, we released our Currents report, which surveyed 500 small- and medium-sized businesses (SMBs). 
Through Currents and other conversations we’ve had with company founders, we’ve identified technology challenges facing new and growing businesses, including the high cost of cloud infrastructure, a lack of technical expertise, and cloud security concerns.","spans":[{"start":27,"end":42,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/currents/august-2020/"}}]},{"type":"paragraph","text":"New businesses have fewer resources. The cost of maintaining IT infrastructure presents a formidable obstacle to founders. According to Currents, startups with fewer than 100 employees spend an average of 52% of their budgets on infrastructure. Bootstrapped startups face additional financial barriers and may not be eligible for startup accelerator programs like DigitalOcean Hatch. Transparent cloud service pricing, specifically regarding bandwidth costs, is critical to supporting startups’ success.","spans":[{"start":377,"end":382,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/hatch/"}},{"start":410,"end":417,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/pricing/calculator/"}},{"start":442,"end":451,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/pricing/bandwidth/"}}]},{"type":"paragraph","text":"In addition to billing, founders struggle with the depth of technical expertise required to manage cloud infrastructure. To quantify exactly how big the expertise problem has become, we asked more than 280 participants in DigitalOcean’s Hatch startup program about their experiences in the cloud in a separate survey this past April. Seventy-two percent of these users were startups with fewer than 10 employees. 
","spans":[{"start":237,"end":242,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/hatch/"}}]},{"type":"paragraph","text":"We also found that early-stage startups are more likely to lack the technical knowledge needed to maintain cloud infrastructure. Almost 20% of startup founders and engineers had less than one year of experience managing cloud infrastructure, and 63% had less than five years of experience. In spite of this skills gap, our results indicated that early-stage startups are still aggressively expanding their cloud usage during the COVID-19 crisis. ","spans":[]},{"type":"paragraph","text":"This expertise gap is why our team has invested even more in our tutorials and forums in the DigitalOcean Community, in addition to championing open source software. It’s also one reason why we’ve built App Platform, DigitalOcean’s new offering that helps minimize the amount of infrastructure management required to launch applications in the cloud. We know that moving forward startups will find it even more critical to choose plug-and-play cloud service providers that are cost-effective, secure, and easy to use for those with limited cloud development experience. ","spans":[{"start":93,"end":115,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community"}},{"start":144,"end":164,"type":"hyperlink","data":{"link_type":"Web","url":"https://hacktoberfest.digitalocean.com/"}},{"start":203,"end":215,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/app-platform/"}}]},{"type":"paragraph","text":"Finally, no discussion of the startup business environment would be complete without mentioning the importance of security. Although 59% of leaders in our Currents survey indicated that IT security was their number one priority, startups predictably lagged behind other SMBs in implementing security protocols. 
Of companies with fewer than 100 employees, a full third deploy no security for their cloud infrastructure.","spans":[{"start":375,"end":378,"type":"em"}]},{"type":"paragraph","text":"Startups’ needs for security are why we’ve been dedicated to transparency and educating cloud users about best practices. Within the DigitalOcean Community, we currently have over 1,200 tutorials on improving your cloud security – and that number continues to grow. Not only are these resources platform agnostic, they cover a wide range of topics from server setup and building VPNs to mitigating DDoS attacks and monitoring Linux system logs. We’ve even incorporated free security offerings into our Droplet offerings like cloud firewalls and VPC.","spans":[{"start":61,"end":73,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/trust/"}},{"start":186,"end":195,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tags/security"}},{"start":525,"end":540,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/networking/firewalls/"}},{"start":545,"end":548,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/networking/vpc/"}}]},{"type":"paragraph","text":"We expect these challenges to continue to impact today’s startups. Startups can overcome many of these obstacles with the correct approach to cloud infrastructure. Our team is committed to helping the next generation of startups build, scale, and enable their dreams with DigitalOcean. If you have any questions or need help getting started, we’re here to help.","spans":[]},{"type":"paragraph","text":"Ready to learn more about startups at DigitalOcean? 
Click here","spans":[{"start":0,"end":62,"type":"strong"},{"start":52,"end":62,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/solutions/startups/"}}]}],"blog_post_date":"2020-10-19","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"how-startups-can-overcome-obstacles-in-their-cloud"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Sunny Beatteay","author_image":null,"_meta":{"uid":"sunny_beatteay"}},"blog_header_image":null,"blog_headline":[{"type":"heading1","text":"From 15,000 database connections to under 100: DigitalOcean's tale of tech debt","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"A new hire recently asked me over lunch, “What does DigitalOcean’s tech debt look like?”","spans":[]},{"type":"paragraph","text":"I could not help but smile when I heard the question. Software engineers asking about a company’s tech debt is the equivalent of asking about a credit score. It’s their way of gauging a company’s questionable past and what baggage they’re carrying. And DigitalOcean is no stranger to technical baggage.","spans":[]},{"type":"paragraph","text":"As a cloud provider that manages our own servers and hardware, we have faced complications that many other startups have not encountered in this new era of cloud computing. These tough situations ultimately led to tradeoffs we had to make early in our existence. And as any quickly growing company knows, the technical decisions you make early on tend to catch up with you later.","spans":[]},{"type":"paragraph","text":"Staring at the new hire from across the table, I took a deep breath and began. 
“Let me tell you about the time we had 15,000 direct connections to our database….”","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/41779f66-60db-4208-912e-f9d13c0e5e5e_tale-of-tech-debt-1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":666,"height":500}},{"type":"paragraph","text":"The story I told our new recruit is the story of DigitalOcean’s largest technical rearchitecture to date. It was a companywide effort that extended over multiple years and taught us many lessons. I hope that telling it will be helpful for future DigitalOcean developers – or any developers who find themselves in a tricky tech-debt conundrum.","spans":[]},{"type":"heading3","text":"Where it all started","spans":[]},{"type":"paragraph","text":"DigitalOcean has been obsessed with simplicity from its inception. It’s one of our core values: Strive for simple and elegant solutions. This applies not only to our products, but to our technical decisions as well. Nowhere is that more visible than in our initial system design.","spans":[{"start":96,"end":135,"type":"em"}]},{"type":"paragraph","text":"Like GitHub, Shopify, and Airbnb, DigitalOcean began as a Rails application in 2011. The Rails application, internally known as Cloud, managed all user interactions in both the UI and public API. Aiding the Rails service were two Perl services: Scheduler and DOBE (DigitalOcean BackEnd). Scheduler scheduled and assigned Droplets to hypervisors, while DOBE was in charge of creating the actual Droplet virtual machines. While the Cloud and Scheduler ran as stand-alone services, DOBE ran on every server in the fleet.","spans":[{"start":128,"end":133,"type":"strong"},{"start":245,"end":254,"type":"strong"},{"start":259,"end":263,"type":"strong"}]},{"type":"paragraph","text":"Neither Cloud, Scheduler, nor DOBE talked directly to one another. They communicated via a MySQL database. This database served two roles: storing data and brokering communication. 
All three services used a single database table as a message queue to relay information.","spans":[]},{"type":"paragraph","text":"Whenever a user created a new Droplet, Cloud inserted a new event record into the queue. Scheduler continuously polled the database every second for new Droplet events and scheduled their creation on an available hypervisor. Finally, each DOBE instance would wait for new scheduled Droplets to be created and fulfill the task. In order for these servers to detect any new changes, they would each need to poll the database for new records in the table.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/ef14910e-7804-415d-8ede-7aed315fdd4c_tale-of-tech-debt-2.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1056,"height":562}},{"type":"paragraph","text":"While infinite loops and giving each server a direct connection to the database may have been rudimentary in terms of system design, it was simple and it worked – especially for a short-staffed technical team facing tight deadlines and a rapidly increasing user base.","spans":[]},{"type":"paragraph","text":"For four years, the database message queue formed the backbone of DigitalOcean’s technology stack. During this period, we adopted a microservice architecture, replaced HTTPS with gRPC for internal traffic, and ousted Perl in favor of Golang for the backend services. However, all roads still led to that MySQL database.","spans":[]},{"type":"paragraph","text":"It’s important to note that simply because something is “legacy” does not mean it is dysfunctional and should be replaced. Bloomberg and IBM have legacy services written in Fortran and COBOL that generate more revenue than entire companies. On the other hand, every system has a scaling limit. And we were about to hit ours.","spans":[]},{"type":"paragraph","text":"From 2012 to 2016, DigitalOcean’s user traffic grew over 10,000%. 
We added more products to our catalog and services to our infrastructure. This increased the ingress of events on the database message queue. More demand for Droplets meant that Scheduler was working overtime to assign them all to servers. And unfortunately for Scheduler, the number of available servers was not static.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/ec3fcc36-df90-4a63-9bdf-0a0bfd602e12_tale-of-tech-debt-3.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":800,"height":534}},{"type":"paragraph","text":"To keep up with the increased Droplet demand, we were adding more and more servers to handle the traffic. Each new hypervisor meant another persistent connection to the database. By the start of 2016, the database had over 15,000 direct connections, each one querying for new events every one to five seconds. If that was not bad enough, the SQL query that each hypervisor used to fetch new Droplet events had also grown in complexity. It had become a colossus over 150 lines long and JOINed across 18 tables. It was as impressive as it was precarious and difficult to maintain.","spans":[]},{"type":"paragraph","text":"Unsurprisingly, it was around this period that the cracks began to show. A single point of failure with thousands of dependencies grappling over shared resources inevitably led to periods of chaos. Table locks and query backlogs led to outages and performance drops.","spans":[]},{"type":"paragraph","text":"And due to the tight coupling in the system, there was not a clear or simple solution to resolving the issues. Cloud, Scheduler, and DOBE all served as bottlenecks. Patching only one or two components would only shift the load to the remaining bottlenecks. 
So after a lot of deliberation, the engineering staff came up with a three-pronged plan for rectifying the situation:","spans":[]},{"type":"o-list-item","text":"Decrease the number of direct connections on the database","spans":[]},{"type":"o-list-item","text":"Refactor Scheduler’s ranking algorithm to improve availability","spans":[]},{"type":"o-list-item","text":"Absolve the database of its message queue responsibilities","spans":[]},{"type":"heading3","text":"The refactoring begins","spans":[]},{"type":"paragraph","text":"To tackle the database dependencies, DigitalOcean engineers created Event Router. Event Router served as a regional proxy that polled the database on behalf of each DOBE instance in each data center. Instead of thousands of servers each querying the database, there would only be a handful of proxies doing the querying. Each Event Router proxy would fetch all the active events in a specific region and delegate each event to the appropriate hypervisor. Event Router also broke up the mammoth polling query into ones that were smaller and easier to maintain.","spans":[{"start":68,"end":80,"type":"strong"}]},{"type":"image","url":"https://images.prismic.io/www-static/2e31d55f-803b-4a39-a904-a216e627bd51_tale-of-tech-debt-4.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":985,"height":655}},{"type":"paragraph","text":"When Event Router went live, it slashed the number of database connections from over 15,000 to less than 100.","spans":[]},{"type":"paragraph","text":"Next, the engineers set their sights on the next target: Scheduler. As mentioned before, Scheduler was a Perl script that determined which hypervisor would host a created Droplet. It did this by using a series of queries to rank and sort the servers. Whenever a user created a Droplet, Scheduler updated the table row with the best machine.","spans":[]},{"type":"paragraph","text":"While it sounds simple enough, Scheduler had a few flaws. 
Its logic was complex and challenging to work with. It was single threaded and its performance suffered during peak traffic. Finally, there was only one instance of Scheduler – and it had to serve the entire fleet. It was an unavoidable bottleneck. To tackle these issues, the engineering team created Scheduler V2.","spans":[{"start":360,"end":372,"type":"strong"}]},{"type":"paragraph","text":"The updated Scheduler completely revamped the ranking system. Instead of querying the database for the server metrics, it aggregated them from the hypervisors and stored them in its own database. Additionally, the Scheduler team used concurrency and replication to make their new service performant under load.","spans":[]},{"type":"paragraph","text":"Event Router and Scheduler V2 were both great achievements that addressed many of the architectural weaknesses. Even so, there was a glaring obstacle. The centralized MySQL message queue was still in use – bustling even – by early 2017. It was handling up to 400,000 new records per day, and 20 updates per second.","spans":[]},{"type":"paragraph","text":"Unfortunately, removing the database's message queue was not an easy feat. The first step was preventing services from having direct access to it. The database needed an abstraction layer. And it needed an API to aggregate requests and perform queries on its behalf. If any service wanted to create a new event, it would need to do so through the API. And so, Harpoon was born.","spans":[{"start":360,"end":367,"type":"strong"}]},{"type":"paragraph","text":"However, building the interface for the event queue was the easy part. Getting buy-in from the other teams proved more difficult. Integrating with Harpoon meant teams would have to give up their database access, rewrite portions of their codebase, and ultimately change how they had always done things. 
That wasn’t an easy sell.","spans":[]},{"type":"paragraph","text":"Team by team and service by service, the Harpoon engineers were able to migrate the entire codebase onto their new platform. It took the better part of a year, but by the end of 2017, Harpoon became the sole publisher to the database message queue.","spans":[]},{"type":"paragraph","text":"Now the real work began. Having complete control of the event system meant that Harpoon had the freedom to reinvent the Droplet workflow.","spans":[]},{"type":"paragraph","text":"Harpoon's first task was to extract the message queue responsibilities from the database into itself. To do this, Harpoon created an internal messaging queue of its own that was made up of RabbitMQ and asynchronous workers. As Harpoon pushed new events to the queue on one side, the workers pulled them from the other. And since RabbitMQ replaced the database's queue, the workers were free to communicate directly with Scheduler and Event Router. Thus, instead of Scheduler V2 and Event Router polling for new changes from the database, Harpoon pushed the updates to them directly. As of this writing in 2019, this is where the Droplet event architecture stands.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/de90af9e-09a6-490a-a6fd-7dad2093f54f_tale-of-tech-debt-5.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1120,"height":572}},{"type":"heading3","text":"Onward","spans":[]},{"type":"paragraph","text":"In the past seven years, DigitalOcean has grown from garage-band roots into the established cloud provider it is today. Like other transitioning tech companies, DigitalOcean deals with legacy code and tech debt on a regular basis. 
Whether breaking apart monoliths, creating multiregional services, or removing single points of failure, we DigitalOcean engineers are always working to craft elegant and simple solutions.","spans":[]},{"type":"paragraph","text":"I hope this story of how our infrastructure scaled with our user base has been interesting and illuminating. I'd love to hear your thoughts in the comments below!","spans":[]}],"blog_post_date":"2020-01-08","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"from-15-000-database-connections-to-under-100-digitaloceans-tale-of-tech-debt"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Jamon Camisso","author_image":{"dimensions":{"width":512,"height":512},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/8269b2a0-30d3-44c4-be2b-c6bc9307a671_jamon.jpeg?auto=compress,format"},"_meta":{"uid":"jamon_camisso"}},"blog_header_image":{"dimensions":{"width":1200,"height":628},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/bd5f2a4c-71ff-45bc-b0be-560210369fe3_MLH_Header_PR.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"App Deployment & Security with DigitalOcean & Major League Hacking","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Self-sufficiency. Participation. Collaboration. On their own, these terms may seem simple. But when used together, they become something more. These core values are the principles of digital inclusion that promote technical literacy and empower developers to build software that can change the world.","spans":[{"start":169,"end":200,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalinclusion.org/definitions/"}}]},{"type":"paragraph","text":"Whether it’s in a classroom, at home, at work, or with friends, learning about technology with a supportive community is not only engaging and interesting, it can be highly rewarding. 
That’s why we are very excited to announce our new partnership with MLH Localhost – a program that helps students self-organize, while teaching them how to lead workshops with their peers on a variety of technical topics that will help them level-up professionally.","spans":[{"start":252,"end":265,"type":"hyperlink","data":{"link_type":"Web","url":"https://localhost.mlh.io/"}}]},{"type":"paragraph","text":"We’ve worked closely with MLH to design a two-hour workshop called App Deployment & Security with DigitalOcean that covers the fundamentals of DevOps, as well as how to run apps and code on a DigitalOcean Droplet. After all, if you’re at a hackathon and build something that only runs locally, how can you encourage others to try it, provide feedback, and collaborate with you?","spans":[{"start":67,"end":110,"type":"hyperlink","data":{"link_type":"Web","url":"https://localhost.mlh.io/activities/digitalocean/"}}]},{"type":"heading2","text":"The Challenge & Lessons: Using Technology to Clean Up Polluted Beaches","spans":[]},{"type":"paragraph","text":"Workshop hosts and participants will collaborate and deploy a sample application that helps find beach cleanup days in a given area. 
By deploying the app, participants will learn how to:","spans":[]},{"type":"list-item","text":"Create a DigitalOcean Droplet (a virtual server) to host an application","spans":[]},{"type":"list-item","text":"Install Node.js and npm remotely on a Droplet","spans":[]},{"type":"list-item","text":"Deploy an app to a Droplet using Git","spans":[]},{"type":"list-item","text":"Restrict access to an app using a firewall","spans":[]},{"type":"list-item","text":"Monitor application health and create alerts for app downtime","spans":[]},{"type":"list-item","text":"Set up load balancing to ensure app availability and help with scaling","spans":[]},{"type":"paragraph","text":"By the end of the workshop, attendees will be familiar with fundamental DevOps principles, and will know how to create, monitor, and deploy highly available servers to host their hackathon apps. Students who are new to DigitalOcean will also receive a $50 credit to keep their app running (or to build something new on their own Droplets).","spans":[]},{"type":"paragraph","text":"We are really looking forward to hearing about the amazing things that participants and organizers will create after the workshop. We hope you’ll organize or join an MLH Localhost workshop in your area.","spans":[]},{"type":"heading2","text":"Ready to host a workshop of your own?","spans":[]},{"type":"paragraph","text":"Visit the DigitalOcean MLH page and fill out the pre-registration form. 
An MLH Localhost representative will contact you with presentation material and send you assorted stickers and swag from Major League Hacking for your event.","spans":[{"start":0,"end":31,"type":"hyperlink","data":{"link_type":"Web","url":"https://localhost.mlh.io/activities/digitalocean/"}}]},{"type":"paragraph","text":"You can register for the workshop here, and share a link to your app or code repo with us on Instagram or Twitter!","spans":[{"start":34,"end":38,"type":"hyperlink","data":{"link_type":"Web","url":"https://localhost.mlh.io/activities/digitalocean/"}},{"start":93,"end":102,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.instagram.com/thedigitalocean/"}},{"start":106,"end":113,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/digitalocean"}}]}],"blog_post_date":"2019-11-07","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"digitalocean-partners-with-major-league-hacking"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Phil Dougherty","author_image":{"dimensions":{"width":573,"height":557},"alt":"Phil Dougherty","copyright":null,"url":"https://images.prismic.io/www-static/ef89c36114b5e1872e8de0b79eb679b9be5b3765_phil.png?auto=compress,format"},"_meta":{"uid":"phil_dougherty"}},"blog_header_image":{"dimensions":{"width":1200,"height":640},"alt":"Kubernetes illustration","copyright":null,"url":"https://images.prismic.io/www-static/2eab4b7f7d2151828cb671bca7a9fb03d683b7cc_image7.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"New on DigitalOcean Kubernetes: Fresh Features & 1-Click Apps","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"It’s our privilege to help you run your containerized apps with DigitalOcean Kubernetes, and we’re always eager to hear your feedback about the product. 
To that end, we thought we’d provide an update on some of our projects that address common customer comments.","spans":[{"start":64,"end":87,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/kubernetes/"}}]},{"type":"paragraph","text":"As of today, DigitalOcean Kubernetes, which we affectionately call \"DOKS,\" now supports cluster autoscaling, tokenized authentication, minor version upgrades, and the latest Kubernetes release (version 1.15). In addition, you can now install the first Kubernetes 1-Click Apps from the DigitalOcean Marketplace.","spans":[{"start":252,"end":275,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/category/kubernetes"}},{"start":285,"end":309,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/"}}]},{"type":"heading2","text":"Automatically scale your cluster to ensure fast performance while controlling costs","spans":[]},{"type":"paragraph","text":"It’s common to use Kubernetes to run your app as a collection of loosely coupled services, with each service being scalable independently of others. Each service typically corresponds to a pool of identically sized nodes (Droplets on DOKS), with each node executing an instance of the same containerized service. One challenge, then, becomes provisioning and deprovisioning nodes so that you have an appropriate number – enough that your service runs quickly, but not so many that you’re wasting lots of money.","spans":[]},{"type":"paragraph","text":"That’s why we’ve enhanced DOKS to support automatic horizontal scaling based on CPU and memory usage triggers. When you enable autoscaling, DOKS continuously monitors CPU and memory usage within your node pools. The service then automatically adds nodes when your application requires more resources. 
DOKS will, conversely, deactivate nodes when your application’s load declines, saving you money.","spans":[{"start":120,"end":138,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/kubernetes/how-to/configure-autoscaling/"}}]},{"type":"paragraph","text":"At present, you can enable autoscaling through the CLI and API. The UI is coming soon.","spans":[{"start":20,"end":38,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/kubernetes/how-to/configure-autoscaling/"}},{"start":51,"end":54,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/tools-and-integrations/#tools-and-integrations-cli/"}},{"start":59,"end":62,"type":"hyperlink","data":{"link_type":"Web","url":"https://developers.digitalocean.com/documentation/"}}]},{"type":"heading2","text":"Connect to your Kubernetes clusters with an access token (or with certificates)","spans":[]},{"type":"paragraph","text":"We know that many of you who have used DOKS have felt the pinprick of disappointment each time you’ve had to download a new certificate to connect to your clusters.","spans":[]},{"type":"paragraph","text":"With today’s release, you can now connect to your DigitalOcean Kubernetes clusters using your DigitalOcean API access token, in addition to the previously supported certificates. Unlike certificates that expire weekly and cannot be revoked by project administrators, access tokens are owned by individual users, do not expire, and can be revoked instantly by admins. 
We hope that you enjoy this easier, more manageable method of connecting to your clusters.","spans":[{"start":34,"end":123,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/kubernetes/how-to/connect-to-cluster/"}}]},{"type":"heading2","text":"Seamlessly upgrade your Kubernetes clusters to new minor versions, including 1.15","spans":[]},{"type":"paragraph","text":"The Kubernetes project continues to evolve quickly with the recent release of 1.15, introducing 25 new enhancements focused on continuous improvement and extensibility.","spans":[{"start":60,"end":82,"type":"hyperlink","data":{"link_type":"Web","url":"https://kubernetes.io/blog/2019/06/19/kubernetes-1-15-release-announcement/"}}]},{"type":"paragraph","text":"We enhanced DigitalOcean Kubernetes to support 1.15 a few weeks ago. As of today, you can upgrade your cluster to the latest minor version via the DigitalOcean control panel or API. Note that in order to upgrade minor releases (eg 1.14 to 1.15), you must first apply the latest patches to your cluster.","spans":[]},{"type":"heading2","text":"Easily deploy software to your cluster with the first of our Kubernetes 1-Click Apps","spans":[]},{"type":"paragraph","text":"Manually setting up software on Kubernetes clusters can be a time-consuming and tricky process as you need to install and configure your application across several nodes.","spans":[]},{"type":"paragraph","text":"That’s why we’re pleased to introduce the first of our Kubernetes 1-Click Apps in DigitalOcean Marketplace. 
With Kubernetes 1-Click Apps, you can easily create clusters that run preconfigured container images, as specified by a kubectl configuration or a Helm chart – all in a single click.","spans":[{"start":55,"end":106,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/category/kubernetes"}},{"start":228,"end":235,"type":"em"},{"start":255,"end":265,"type":"hyperlink","data":{"link_type":"Web","url":"https://helm.sh/"}}]},{"type":"paragraph","text":"DigitalOcean Marketplace now includes seven Kubernetes 1-Click Apps specifically built for deployment in Kubernetes clusters:","spans":[]},{"type":"paragraph","text":"table, tr, th, td {\n\n  border: none!important;\n  word-break: break-word!important;\n}\nLinkerd","spans":[{"start":85,"end":92,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/linkerd"}}]},{"type":"image","url":"https://images.prismic.io/www-static/d92c2bb435f076ae498db5197948afd7e0e925a8_image6.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":90,"height":90}},{"type":"paragraph","text":"An ultralight service mesh for Kubernetes that gives you observability, metrics, reliability, and security without requiring any code changes.\n\nMonitoring Stack","spans":[{"start":144,"end":160,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/kubernetes-monitoring-stack"}}]},{"type":"image","url":"https://images.prismic.io/www-static/ada91de7ad50db5ef2416aa015d5eafb031bcac6_image1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":90,"height":90}},{"type":"paragraph","text":"An integrated stack – composed of Prometheus, Grafana, and metrics-server – for Kubernetes cluster 
monitoring.\n\nOpenFaaS","spans":[{"start":112,"end":120,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/openfaas-kubernetes"}}]},{"type":"image","url":"https://images.prismic.io/www-static/728808551edf8fa6c744d5df5cc9f5efeaf6a089_image5.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":120,"height":120}},{"type":"paragraph","text":"A Functions as a Service framework for building serverless functions with Docker and Kubernetes.\n\nMetrics Server","spans":[{"start":98,"end":112,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/kubernetes-metrics-server"}}]},{"type":"image","url":"https://images.prismic.io/www-static/ee3c23d6a643b577e2b612b381fb3f0727e35722_image1-1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":90,"height":90}},{"type":"paragraph","text":"An open source stack that gives you fast, simple access to cluster resource usage data, such as CPU and memory usage.\n\nMoon","spans":[{"start":119,"end":123,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/moon"}}]},{"type":"image","url":"https://images.prismic.io/www-static/c410ed3fe7e77689c01c6b3a8804b6c512b540a4_image3.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":90,"height":90}},{"type":"paragraph","text":"An enterprise Selenium WebDriver browser automation solution for Kubernetes.\n\n1Password SCIM Bridge","spans":[{"start":14,"end":40,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.w3.org/TR/webdriver/"}},{"start":78,"end":99,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/1password-scim-bridge"}}]},{"type":"image","url":"https://images.prismic.io/www-static/d6de951f229722bd2c4cbf303a02c1ed5fc93b4a_image2.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":46,"height":46}},{"type":"paragraph","text":"A 
service that automates common administrative tasks using the 1Password SCIM protocol to connect with existing identity providers\n\nNetdata","spans":[{"start":132,"end":139,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/netdata"}}]},{"type":"image","url":"https://images.prismic.io/www-static/cc2a8775ac3ab1d46cc25d2f18f543a089ce0465_image4.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":150,"height":150}},{"type":"paragraph","text":"A highly optimized monitoring agent that provides real-time insights using highly interactive web dashboards.","spans":[]},{"type":"paragraph","text":"If you’re a software vendor interested in listing your application in the DigitalOcean Marketplace, see our instructions for submitting your app.","spans":[{"start":104,"end":144,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/vendors"}}]},{"type":"heading2","text":"Spin up your Kubernetes clusters today","spans":[]},{"type":"paragraph","text":"We hope that you’re excited by the enhancements we’re announcing today, and we promise we’ve got much more in store.","spans":[]},{"type":"paragraph","text":"In the meantime, we hope you’ll give DOKS a try. 
Or, if you’re a business interested in learning more about how DOKS can help you achieve your goals, we invite you to contact our sales team.","spans":[{"start":32,"end":47,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/kubernetes/clusters/new"}},{"start":167,"end":189,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/company/contact/sales/"}}]}],"blog_post_date":"2019-10-03","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"new-on-digitalocean-kubernetes"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"André Bearfield","author_image":{"dimensions":{"width":553,"height":547},"alt":"André Bearfield","copyright":null,"url":"https://images.prismic.io/www-static/fdc7c85186f0a850b04083e1d4306bd1c19772e8_andre-bearfield.png?auto=compress,format"},"_meta":{"uid":"andre-bearfield"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"Managed Databases illustration with dolphin and cool octopus ","copyright":null,"url":"https://images.prismic.io/www-static/9397e9af87dcfd94b12cb315f03dc525621df4bd_image3.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Metrics for Managed Redis are now available","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"We recently launched Managed Databases for MySQL and Redis to further give developers the ability to focus on building apps while spending less time on managing their infrastructure. 
Our Managed Databases allow you to spin up clusters with just a few clicks without having to worry about configuring, managing, scaling, updating, and securing your databases.","spans":[{"start":21,"end":58,"type":"hyperlink","data":{"link_type":"Web","url":"https://blog.digitalocean.com/take-the-worry-out-of-managing-your-mysql-redis-databases/"}}]},{"type":"paragraph","text":"There’s been a lot of excitement for Managed MySQL and Redis in our community, and we’re really thankful to our users who have shared positive feedback for these offerings.","spans":[]},{"type":"paragraph","text":"Here are some of our favorite responses:","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/4bebf7a2c3bcf0e42a514beccdb01e35ad017c57_image10.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":886,"height":292}},{"type":"image","url":"https://images.prismic.io/www-static/de2392aac2251dd3795feecc0bc6a39330ab9667_image7.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":892,"height":295}},{"type":"image","url":"https://images.prismic.io/www-static/7db4db356d09c9acb1cd31f67a3dd975ff68df92_image4.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":889,"height":388}},{"type":"paragraph","text":"With this response in mind, we're very excited to announce that Managed Redis is now generally available and provides metrics to monitor performance and the health of your clusters. The following metrics are available for Managed Redis clusters:","spans":[]},{"type":"list-item","text":"CPU usage: Shows the minimum, maximum, and average percentage of processing power being used across all cores\n","spans":[{"start":0,"end":9,"type":"strong"}]},{"type":"list-item","text":"Load average: Displays 1-, 5-, and 15-minute load averages, averaged across all primary and standby nodes. 
It measures the processes that are either being handled by the processor or are waiting for processor time.\n","spans":[{"start":0,"end":12,"type":"strong"}]},{"type":"list-item","text":"Memory usage: Presents the minimum, maximum, and average percentage of memory consumption across all nodes\n","spans":[{"start":0,"end":12,"type":"strong"}]},{"type":"list-item","text":"Disk usage: Shows the minimum, maximum, and average percentage of disk space consumed across all primary and standby nodes. It's best practice to maintain disk usage below 90%.\n","spans":[{"start":0,"end":10,"type":"strong"}]},{"type":"image","url":"https://images.prismic.io/www-static/00ceddbddbe3120ca7eea697939076075748801c_image8.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1678,"height":643}},{"type":"image","url":"https://images.prismic.io/www-static/133ddc926a360fcfba4270f6da01cd9f4aa6900b_image12.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1689,"height":646}},{"type":"image","url":"https://images.prismic.io/www-static/4a413d4b3f48c71107c4e60d59ba50898b6b87e7_image5.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1672,"height":639}},{"type":"image","url":"https://images.prismic.io/www-static/cf35941993712747484c169cc36ef9ba6e0d9615_image1-1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1698,"height":654}},{"type":"paragraph","text":"In addition, we also provide metrics to monitor the performance of the database itself. 
This data can help assess the health of the database, pinpoint performance bottlenecks, and identify unusual use patterns that may indicate an application bug or security breach.","spans":[]},{"type":"list-item","text":"Connection status: The number of successful and rejected client connections in relation to the connection limit\n","spans":[{"start":0,"end":17,"type":"strong"}]},{"type":"list-item","text":"Throughput: The rate of commands processed per second\n","spans":[{"start":0,"end":10,"type":"strong"}]},{"type":"list-item","text":"Key evictions: The number of keys removed by Redis due to memory constraints\n","spans":[{"start":0,"end":13,"type":"strong"}]},{"type":"list-item","text":"Memory fragmentation: The ratio of the memory allocated by the operating system to Redis to the memory used by Redis\n","spans":[{"start":0,"end":20,"type":"strong"}]},{"type":"list-item","text":"Cache hit ratio: The ratio of keyspace hits to the number of keyspace hits and misses, which is a measure of cache usage efficiency\n","spans":[{"start":0,"end":15,"type":"strong"}]},{"type":"list-item","text":"Replication status: The number of connected standby 
nodes\n","spans":[{"start":0,"end":18,"type":"strong"}]},{"type":"image","url":"https://images.prismic.io/www-static/6a5cdd0e55f5aa496f0b3aee3da0ddb664f4dc29_image2-1.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1684,"height":643}},{"type":"image","url":"https://images.prismic.io/www-static/728cf6404d4a571125c9c8f4ed21d9748c7e8724_image15.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1642,"height":631}},{"type":"image","url":"https://images.prismic.io/www-static/22b812c17d923286ef80372dec69b119eb74b750_image11.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1711,"height":658}},{"type":"image","url":"https://images.prismic.io/www-static/da4ce527831d6c571174e497ca09c62a47c92a8c_image13.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1636,"height":627}},{"type":"image","url":"https://images.prismic.io/www-static/f44c2ba1b87b7769fc711b69320d8e38b9bab04c_image9.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1696,"height":652}},{"type":"image","url":"https://images.prismic.io/www-static/68504c7f04c4bf3d6d72ba0f82690f4a4c0b5587_replication_status_new.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1620,"height":551}},{"type":"heading3","text":"Availability in all regions","spans":[]},{"type":"paragraph","text":"There is huge demand for Managed MySQL and Redis among developers. In order to provide the best user experience, we did a phased rollout of these engines. At the time of launch, only three data centers were supported. Today, all nine data centers now support Managed MySQL and Redis. 
\n","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/86baf2caa76729dd1834b7fa22bb28774b084b24_image6.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1891,"height":484}},{"type":"paragraph","text":"We hope you are excited about Managed Databases and will give the service a try. If you’re ready to get started, spin up your first database cluster! If you have any questions about using DigitalOcean and Managed Databases in your business, please feel free to contact our sales team.","spans":[{"start":113,"end":148,"type":"hyperlink","data":{"link_type":"Web","url":"http://cloud.digitalocean.com/databases"}},{"start":261,"end":283,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/company/contact/sales/"}}]}],"blog_post_date":"2019-09-24","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"News","_linkType":"Link.document","_meta":{"uid":"news"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"metrics-for-managed-redis-are-now-available"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":null,"blog_headline":[{"type":"heading1","text":"Helping Remote Developers Avoid Burnout","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"This is a guest post from Debbie Chew of Arc.","spans":[{"start":0,"end":45,"type":"em"},{"start":41,"end":44,"type":"hyperlink","data":{"link_type":"Web","url":"https://arc.dev/"}}]},{"type":"paragraph","text":"Ever feel like the code you write is never good enough? 
Or that you’re constantly tired from working, but your workload doesn’t seem to ever decrease?","spans":[]},{"type":"paragraph","text":"You’re not alone! Being a developer can be exhausting. To help rebuild your willpower and rediscover your sense of identity, there are lots of ways you can manage, overcome, and avoid burnout.","spans":[]},{"type":"paragraph","text":"Burnout is a reality for thousands of developers, and it also affects those working remotely. In fact, DigitalOcean's recently published report, Currents: A Seasonal Report on Developer Trends in the Cloud – Remote Work Edition, revealed that 66% of remote developers suffer from burnout symptoms. And the percentage is even higher (82%) for developers in the United States.","spans":[{"start":0,"end":48,"type":"hyperlink","data":{"link_type":"Web","url":"https://hn.algolia.com/?query=burnout&amp;sort=byPopularity&amp;prefix&amp;page=1&amp;dateRange=all&amp;type=story"}},{"start":145,"end":227,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/currents/july-2019/"}}]},{"type":"heading2","text":"The burnout problem is real","spans":[]},{"type":"paragraph","text":"The report is worrying. It reveals that burnout is a slightly higher risk (+2%) for remote developers than for in-house developers. Fortunately, working remotely can improve work-life balance, with remote developers rating their work-life balance at 7.02 out of 10 on average (as opposed to on-site developers, who score lower at 6.95).","spans":[]},{"type":"paragraph","text":"So what are the biggest contributors to burnout?","spans":[]},{"type":"list-item","text":"Working longer hours than expected","spans":[]},{"type":"list-item","text":"Feeling like management expects you to contribute more than in-house developers","spans":[]},{"type":"list-item","text":"Increased levels of stress and anxiety","spans":[]},{"type":"paragraph","text":"The most significant danger is that burnout can creep in slowly and unannounced. 
You find yourself working longer hours, spending more time on work, feeling more stressed, and not knowing when (or how) to stop.","spans":[]},{"type":"paragraph","text":"If you think this might be happening to you or someone you know, there is help available. This guide will help you understand burnout and give you practical tips that will allow you to prevent or overcome it.","spans":[]},{"type":"heading2","text":"Solving the problem of burnout","spans":[]},{"type":"paragraph","text":"These tips will help you protect your passion for coding, be more productive, and avoid burnout:","spans":[]},{"type":"heading4","text":"1. Assume responsibility for your time","spans":[]},{"type":"paragraph","text":"When working in-house, someone else is often responsible for directly managing you. But when you work remotely, this responsibility falls on you. If you don't manage your time, no one will (at least until it’s time for your performance review). Don't be your own worst enemy!","spans":[]},{"type":"paragraph","text":"Everything your manager previously did for you, you must now do for yourself. This includes setting your schedule – deciding when you work and for how long, when you take breaks, and more. What's most important is sticking to the decisions you make: without being disciplined, you will create additional stress for yourself.","spans":[]},{"type":"heading4","text":"2. Set clear boundaries","spans":[]},{"type":"paragraph","text":"You must understand your nonnegotiables. What are the things that, as a remote developer, you would not be happy doing? Maybe working in the middle of the night is one of them. Or perhaps you're not happy with your employer demanding that you work during specific hours.","spans":[]},{"type":"paragraph","text":"In \"A Programmer Burnout Story,\" Lorenzo Pasqualis recommends active communication to help remote developers remain on the same page as the rest of their team. 
This will help combat any potential expectation that you have to contribute more than you physically can.","spans":[{"start":3,"end":32,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.coderhood.com/a-programmer-burnout-story-how-to-recognize-it-and-avoid-it/"}}]},{"type":"paragraph","text":"Communicating nonnegotiables with your team will help set boundaries in regards to your availability and what you're willing to do.","spans":[]},{"type":"heading4","text":"3. Set a fixed working schedule & stick to it","spans":[]},{"type":"paragraph","text":"DigitalOcean's report also reveals that 52% of remote developers find themselves working longer hours than they thought they would. One of the reasons may be a lack of time management skills.","spans":[]},{"type":"paragraph","text":"The best thing to do to start learning how to manage your time better is to begin setting a fixed schedule. After you communicate your working times to your remote team, diligently stick to them. By doing so, you will avoid straying from what you need to do.","spans":[]},{"type":"paragraph","text":"Also resist the urge to check email or lurk on Slack outside your working schedule. You may feel that doing so means you're contributing more, but usually this isn't the case.","spans":[]},{"type":"heading4","text":"4. Create a routine","spans":[]},{"type":"paragraph","text":"If you don't establish a routine to help reduce the amount of information you have to process, your stress levels may increase. A routine helps you always know what you need to be doing next.","spans":[]},{"type":"paragraph","text":"It's good to have a routine in the early morning when you wake up, and also before going to bed. 
This helps your mind separate work from other activities, while helping you maintain work-life balance.","spans":[{"start":13,"end":27,"type":"hyperlink","data":{"link_type":"Web","url":"https://zapier.com/blog/daily-routines/"}}]},{"type":"paragraph","text":"Another good practice is not checking your email first thing in the morning. It's better to wake up and prepare your breakfast, and only then check email. You can also use this time to prioritize your tasks for the day.","spans":[]},{"type":"heading4","text":"5. Take multiple breaks","spans":[]},{"type":"paragraph","text":"Making time for multiple scheduled breaks from coding during the day is essential to increase productivity and reduce stress levels. Planning these breaks will help you develop the discipline to actually sign off when the time comes. Even going for a short walk around the block or doing a small task in the home can help.","spans":[]},{"type":"heading4","text":"6. Exercise daily","spans":[]},{"type":"paragraph","text":"Daily exercise is phenomenal for your health. You should set aside 30 minutes to an hour every day for exercise. It's a great way to de-stress and unplug – 61% of developers find that physical activity lowers their stress levels. Science backs this up. So take advantage of those endorphins!","spans":[{"start":230,"end":252,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mayoclinic.org/healthy-lifestyle/stress-management/in-depth/exercise-and-stress/art-20044469"}}]},{"type":"heading4","text":"7. Don't eat while you're working","spans":[]},{"type":"paragraph","text":"Cooking and eating are great activities to save for your breaks. Taking the time to eat will help your mind unplug from work and relax. Enjoy the process of making your food, and take a moment to savor it. 
You'll find that you return to your monitor more refreshed and ready to take on the challenges that await.","spans":[]},{"type":"paragraph","text":"It's also a good idea to prepare healthy food, which gives you an energy boost and keeps your mind sharp.","spans":[]},{"type":"heading4","text":"8. Don't forget about friends & family","spans":[]},{"type":"paragraph","text":"When you're in work mode, it's easy to forget to set time aside for friends and family. To avoid this, try to schedule social events ahead of time. They will help you disconnect from work and make your life about more than just what pays the bills.","spans":[]},{"type":"paragraph","text":"It’s important to remember that keeping in touch with loved ones will make you more fulfilled, help prevent stress, and ultimately make you happier and more productive at work. According to the DigitalOcean Currents report, 67% of developers say spending time with friends and family is the best way for developers to de-stress. (And let’s not forget pets too!)","spans":[]},{"type":"heading4","text":"9. Make time for yourself","spans":[]},{"type":"paragraph","text":"Don’t forget to have some \"me time\" too. Leaving some time in the day for yourself will help you do other things that you enjoy. Playing video games, reading, or listening to music are all great de-stressors.","spans":[]},{"type":"paragraph","text":"Another thing to consider is pursuing a hobby or other creative endeavor and learn more in this time. You can even study a different tech stack and improve your skills – even if you’re an experienced developer.","spans":[{"start":170,"end":209,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.codementor.io/blog/updating-your-best-practices-7gzzfh3vrx"}}]},{"type":"heading4","text":"10. 
Take vacations","spans":[]},{"type":"paragraph","text":"Lastly, taking a vacation is a very effective way to disconnect from work and recharge your batteries.","spans":[]},{"type":"paragraph","text":"Unfortunately, most remote workers take limited vacation, often out of fear that they are not working enough compared to their counterparts.","spans":[]},{"type":"paragraph","text":"You should take a vacation if you feel like you need one. Your productivity will actually increase after taking necessary time off, making it a win-win for you and your employer.","spans":[]},{"type":"heading2","text":"Remote work should be enjoyable","spans":[]},{"type":"paragraph","text":"By establishing healthy routines and boundaries, along with prioritizing your wellness, health, and both personal and professional relationships, you’ll learn to manage and overcome burnout – which will help you become a happier, more productive developer. You’ll get to truly enjoy remote work and all its benefits (flexible schedule, no commuting, ability to work from anywhere, and more) without the downside.","spans":[]},{"type":"paragraph","text":"Arc (formerly CodementorX) is a platform that connects developers with top companies hiring great developer talent. 
If you're a remote developer looking for your next opportunity, consider joining the Arc network.","spans":[{"start":0,"end":213,"type":"em"},{"start":180,"end":212,"type":"hyperlink","data":{"link_type":"Web","url":"http://bit.ly/2lYPQUQ"}}]}],"blog_post_date":"2019-09-18","tags":[{"tag1":{"tag":"Culture","_linkType":"Link.document","_meta":{"uid":"culture"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}}],"_meta":{"uid":"avoiding-burnout"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"André Bearfield","author_image":{"dimensions":{"width":553,"height":547},"alt":"André Bearfield","copyright":null,"url":"https://images.prismic.io/www-static/fdc7c85186f0a850b04083e1d4306bd1c19772e8_andre-bearfield.png?auto=compress,format"},"_meta":{"uid":"andre-bearfield"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"Dolphin and cool sunglasses octopus illustration ","copyright":null,"url":"https://images.prismic.io/www-static/7c9f7b73a465dfe2513468f11776703999b00736_mysql_redis_blogheader_lockup.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Take the worry out of managing your MySQL & Redis databases","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Our mission at DigitalOcean is to simplify the cloud so you can focus more on building apps and less on managing the underlying infrastructure. To that end, we introduced Managed Databases for PostgreSQL earlier this year, which removes many of the hassles in maintaining PostgreSQL databases. Our team has been hard at work these past few months, and we are so excited to finally launch Managed Databases for MySQL and Redis! 
You can now spin up MySQL and Redis database clusters with just a few clicks, without having to worry about configuring, managing, scaling, updating, and securing your databases.","spans":[{"start":171,"end":203,"type":"hyperlink","data":{"link_type":"Web","url":"https://blog.digitalocean.com/announcing-managed-databases-for-postgresql/"}},{"start":410,"end":415,"type":"hyperlink","data":{"link_type":"Web","url":"https://do.co/mysql"}},{"start":420,"end":425,"type":"hyperlink","data":{"link_type":"Web","url":"https://do.co/redis"}}]},{"type":"paragraph","text":"Managed Databases for MySQL & Redis now available. (PostgreSQL support launched February 2019.)","spans":[{"start":0,"end":95,"type":"em"}]},{"type":"image","url":"https://images.prismic.io/www-static/7dd6999b71da769c6f500aa56fe468107932ab43_3_engines--1-.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":859,"height":183}},{"type":"heading2","text":"Why you need Managed Databases","spans":[]},{"type":"paragraph","text":"If you are building a modern app or website, it’s very likely you will need a database. Databases are one of the most critical components of an application. They should provide terabytes of storage, be able to process thousands of I/O operations per second, and allow data access with minimum latency. If your app usage grows, the database needs to scale easily and quickly to support millions of users.","spans":[]},{"type":"paragraph","text":"Relational databases such as MySQL and PostgreSQL are widely used in the market. 
Typical use cases include traditional CRUD websites that need persistent storage and the ability to retrieve data quickly from the database.","spans":[{"start":0,"end":20,"type":"hyperlink","data":{"link_type":"Web","url":"https://en.wikipedia.org/wiki/Relational_database"}},{"start":29,"end":34,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mysql.com/"}},{"start":39,"end":49,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.postgresql.org/"}}]},{"type":"paragraph","text":"Redis has gained a lot of momentum in the last few years as an open source, in-memory data structure store, used as a database, cache, and message broker. Typical use cases include apps with real-time analytics, high-speed transactions, and machine learning.","spans":[{"start":0,"end":5,"type":"hyperlink","data":{"link_type":"Web","url":"https://redis.io/"}}]},{"type":"paragraph","text":"Whether you are using MySQL, Redis, or PostgreSQL, building and managing database clusters from the ground up is a herculean task. 
Developers often spend valuable time and resources on database management, which prevents them from focusing on building and enhancing apps.","spans":[]},{"type":"paragraph","text":"We introduced Managed Databases to simplify the lives of developers by addressing these common challenges:","spans":[]},{"type":"list-item","text":"Determining the optimal infrastructure needed to host your databases is time-intensive","spans":[]},{"type":"list-item","text":"Scaling the infrastructure that supports your database is often a slow and expensive task","spans":[]},{"type":"list-item","text":"Implementing reliable failover processes is difficult","spans":[]},{"type":"list-item","text":"Over-provisioning of underlying infrastructure leads to increased costs","spans":[]},{"type":"list-item","text":"Setting up a complete and reliable backup and recovery process requires a lot of effort","spans":[]},{"type":"list-item","text":"Maintaining and updating databases often needs dedicated personnel","spans":[]},{"type":"heading3","text":"How Managed Databases work","spans":[]},{"type":"paragraph","text":"We're proud to extend the simplicity that DigitalOcean is known for to Managed Databases. Developers of all skill levels, even those with no prior experience in databases, can spin up database clusters with just a few clicks. Select the database engine, storage, vCPU, memory, and standby nodes and we take care of the rest. The following database engines are currently supported:","spans":[]},{"type":"list-item","text":"MySQL (version 8)NEW","spans":[]},{"type":"list-item","text":"Redis (version 5)NEW","spans":[]},{"type":"list-item","text":"PostgreSQL (version 10 and 11)","spans":[]},{"type":"paragraph","text":"Managed Databases are built on top of our core compute platform and use local SSD storage, which makes them lightning fast. 
In addition to a simple dashboard, you can manage your database clusters programmatically with the DigitalOcean API.","spans":[{"start":223,"end":239,"type":"hyperlink","data":{"link_type":"Web","url":"https://developers.digitalocean.com/documentation/v2/#databases"}}]},{"type":"image","url":"https://images.prismic.io/www-static/12a097782406d270de32d27f5236859e5d6a3dd0_swatch?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":100,"height":54}},{"type":"heading2","text":"Simple, predictable pricing","spans":[]},{"type":"paragraph","text":"Just like all DigitalOcean products, Managed Databases provide simple, predictable pricing that allows you to control costs and prevent any surprise bills. You can spin up a database cluster for just $15/month, or a high-availability cluster with a standby node for $50/month. Pricing is the same for MySQL, PostgreSQL, and Redis engines. Backups are free and included as part of the service. Ingress bandwidth is always free, and egress fees ($0.01/GB per month) will be waived for 2019.","spans":[{"start":63,"end":90,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/pricing/#anchor--Databases"}}]},{"type":"heading2","text":"Benefits of Managed Databases","spans":[]},{"type":"paragraph","text":"Worry-free setup & maintenance: Save time by launching a database cluster with just a few clicks. Never worry again about security patches to the OS or database engine – once a new version or patch is available, simply click a button to enable it.","spans":[{"start":0,"end":30,"type":"strong"}]},{"type":"paragraph","text":"High scalability to support your growth: You can scale up at any time with no impact on your application. You have flexibility, so you can spin up read-only nodes to scale read operations or remove compute overhead from reporting requirements. 
This also keeps expenses in check as you reduce overprovisioning of infrastructure.","spans":[{"start":0,"end":39,"type":"strong"}]},{"type":"paragraph","text":"Free daily backups with point-in-time recovery: We automatically back up your databases every day. If things go wrong, you can easily restore data to any point within the past seven days.","spans":[{"start":0,"end":46,"type":"strong"}]},{"type":"paragraph","text":"Automated failover to maximize availability: In the event of a failure, Managed Databases will automatically fail over to a standby node and minimize downtime for your customers.","spans":[{"start":0,"end":43,"type":"strong"}]},{"type":"paragraph","text":"End-to-end security: Databases run in your account’s private network, which isolates communication at the account or team level. You can restrict requests to your database from the public internet by whitelisting specific inbound sources. Data is encrypted at rest and in transit to prevent cyberattacks.","spans":[{"start":0,"end":19,"type":"strong"}]},{"type":"heading2","text":"Regional availability","spans":[]},{"type":"paragraph","text":"There is a huge demand for Managed MySQL and Redis among customers. In order to provide the best user experience, we plan to do a phased rollout of these engines. The list below provides a tentative timeline for the data center availability. Please refer to our release notes for the most up-to-date information.","spans":[{"start":258,"end":275,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/platform/release-notes/"}}]},{"type":"list-item","text":"August 20: NYC1, FRA1, SFO2","spans":[]},{"type":"list-item","text":"August 27: AMS3, LON1, NYC3","spans":[]},{"type":"list-item","text":"September 4: SGP1, BLR1, TOR1","spans":[]},{"type":"heading2","text":"What’s next","spans":[]},{"type":"paragraph","text":"We hope that you are excited about this release and will give the service a try. Managed Databases for MySQL and Redis are currently in Limited Availability (LA) and will move to General Availability (GA) in a few weeks. 
Managed Redis will include database-level metrics to monitor performance, usage, and errors after it moves to GA.","spans":[]},{"type":"paragraph","text":"Ready to create a database? Try Managed Databases now.","spans":[{"start":0,"end":54,"type":"hyperlink","data":{"link_type":"Web","url":"http://cloud.digitalocean.com/databases"}}]},{"type":"paragraph","text":"If you’d like to have a conversation about using DigitalOcean and Managed Databases in your business, please feel free to contact our sales team.","spans":[{"start":122,"end":144,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/company/contact/sales/"}}]}],"blog_post_date":"2019-08-20","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"News","_linkType":"Link.document","_meta":{"uid":"news"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"Developer Relations","_linkType":"Link.document","_meta":{"uid":"developer-relations"}}}],"_meta":{"uid":"take-the-worry-out-of-managing-your-mysql-redis-databases"}}}]}}}