{"componentChunkName":"component---src-templates-tag-jsx","path":"/blog/tag/engineering/5/","result":{"data":{"prismic":{"allFeaturedblogs":{"edges":[{"node":{"featured_blogs_enabled":true,"heading":[{"type":"paragraph","text":"Featured posts","spans":[]}],"featured_blog_1":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":395},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/6d8d81b1-971a-4313-b033-b4e125cb14a0_MondoDB-blog-header-790x395.PNG?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Introducing DigitalOcean Managed MongoDB – a fully managed, database as a service for modern apps","spans":[]}],"blog_post_date":"2021-06-29","blog_post_content":[{"type":"paragraph","text":"MongoDB is one of the most popular databases, and it’s ideal for apps that evolve rapidly and need to handle huge volumes of data and traffic. It offers advantages like flexible document schemas, code-native data access, change-friendly design, and easy horizontal scale-out.","spans":[{"start":22,"end":44,"type":"hyperlink","data":{"link_type":"Web","url":"https://db-engines.com/en/ranking","target":"_blank"}}]},{"type":"paragraph","text":"However, building and maintaining MongoDB clusters from the ground up can be a huge undertaking. Developers often complain that they have to spend their valuable time and resources on database management. Well, we’ve been listening and have some great news: accessing and managing MongoDB on DigitalOcean just got a lot simpler!","spans":[]},{"type":"paragraph","text":"We are excited to announce that DigitalOcean Managed MongoDB is now in General Availability. Managed MongoDB is a fully managed, database as a service (DBaaS) offering from DigitalOcean, built in partnership with and certified by MongoDB Inc. It provides you all the technical capabilities that make MongoDB so beloved in the developer community. 
Together we have ensured that you will get access to all the latest releases of the MongoDB document database as they become available.","spans":[{"start":32,"end":91,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/managed-databases-mongodb/"}},{"start":230,"end":241,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mongodb.com/","target":"_blank"}}]},{"type":"paragraph","text":"Managed MongoDB simplifies the MongoDB administration. Developers of all skill levels, even those who do not have prior experience in databases, can spin up MongoDB clusters in just a few minutes. We handle the provisioning, managing, scaling, updates, backups, and security of your MongoDB clusters, allowing you to offload the complex, time consuming –yet critical – database administration tasks to us. This empowers you to focus on what really matters: building awesome apps.","spans":[]},{"type":"embed","oembed":{"height":113,"width":200,"embed_url":"https://www.youtube.com/watch?v=NvHQSV7jnKA","type":"video","version":"1.0","title":"Create a MongoDB Database on DigitalOcean","author_name":"DigitalOcean","author_url":"https://www.youtube.com/c/Digitalocean","provider_name":"YouTube","provider_url":"https://www.youtube.com/","cache_age":null,"thumbnail_url":"https://i.ytimg.com/vi/NvHQSV7jnKA/hqdefault.jpg","thumbnail_width":480,"thumbnail_height":360,"html":"<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/NvHQSV7jnKA?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"}},{"type":"heading2","text":"Benefits of Managed MongoDB","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"list-item","text":"Easy set up and maintenance: We create the database clusters for you. 
Simply choose the cluster configuration (e.g., memory, disk size, number of nodes, etc.), and the data center in which you want to host the database. Follow a few simple steps and your database cluster will be up and running in a matter of minutes. You can spin up clusters using the cloud control panel, CLI, or API.\n\n","spans":[{"start":0,"end":28,"type":"strong"}]},{"type":"list-item","text":"Automatic daily backups with point in time recovery: Data is one of the most important assets of an app, so it’s critical to backup your database. We take backups of your entire clusters automatically on a daily basis, for free. We also provide a point in time recovery for 7 days, that way if things go wrong due to human error, machine error, or some combination of both, you can easily restore the database as it was at any point in the previous 7 days. \n\n","spans":[{"start":0,"end":52,"type":"strong"}]},{"type":"list-item","text":"Automatic updates and access to latest MongoDB releases: You get access to MongoDB 4.4. This is the latest release of MongoDB and comes packed with numerous enhancements like hedged reads, rust, and swift drivers. Since we have developed Managed MongoDB in partnership with MongoDB Inc, you will always get access to new releases as they become available. With Managed MongoDB, the updates happen automatically. Just select a date and time for the updates and we take care of the rest. This makes it easy to stay up to date with MongoDB releases without disrupting your business.\n\n","spans":[{"start":0,"end":56,"type":"strong"},{"start":148,"end":169,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mongodb.com/new","target":"_blank"}}]},{"type":"list-item","text":"High availability with automated failover: If your database goes down, it can take down the entire app, leading to bad customer experiences. With Managed MongoDB, you can easily minimize the downtime for your database and make it highly available with standby nodes. 
Standby nodes add redundancy, so if for example the primary node fails, the standby node is immediately promoted to primary and begins serving requests while we provision a replacement standby node in the background.\n\n","spans":[{"start":0,"end":42,"type":"strong"}]},{"type":"list-item","text":"Scale up easily to handle traffic spikes: As your app gains traction and the usage grows, it’s important to have a database that can keep up with the increased demand. With Managed MongoDB, you can easily scale up the size of database nodes when needed.\n\n","spans":[{"start":0,"end":41,"type":"strong"}]},{"type":"list-item","text":"Secure by default: Since data is critical, it also needs to be secure. We encrypt data at rest with LUKS and in transit with SSL. When you create a new cluster, it’s placed in a VPC network by default that provides a more secure connection between resources. You can also restrict access to your nodes to prevent brute-force password and denial-of-service attacks.","spans":[{"start":0,"end":18,"type":"strong"},{"start":178,"end":189,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/networking/vpc/"}}]},{"type":"heading2","text":"The need for Managed Databases","spans":[]},{"type":"paragraph","text":"DigitalOcean’s mission is to simplify cloud computing so developers, startups, and SMBs can spend more time building software that changes the world. While databases are a critical component to any application, building, maintaining, and scaling them can be complex and time consuming. For developers that are building apps for their business, database administration is often not a core focus area. But it’s quite common to find developers that write the code and then also roll up their sleeves to maintain databases. Such users would rather offload the tedious database administration and focus their limited time and energy on building and enhancing their apps. 
","spans":[]},{"type":"paragraph","text":"With this in mind, we introduced Managed Databases a couple of years ago and are excited to add Managed MongoDB to our portfolio. With this release, DigitalOcean Managed Databases now supports the following engines:","spans":[{"start":33,"end":50,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/managed-databases/"}}]},{"type":"image","url":"https://images.prismic.io/www-static/87745cc1-1c5f-4463-b104-104b7fc30dc7_managed-databases-logos.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":849,"height":104}},{"type":"paragraph","text":"Managed MongoDB launch comes on the heels of DigitalOcean App Platform, a modern, reimagined PaaS (Platform as a Service) that we released a few months ago. App Platform makes it very easy to build, deploy, and scale apps and static sites. You can deploy code by simply pointing to your GitHub and GitLab repos, and App Platform will do all the heavy lifting of managing infrastructure, app runtimes, and dependencies. 
App Platform, along with Managed Databases, helps fulfill DigitalOcean’s mission by empowering developers, startups, and SMBs to focus more on their apps, and less on the underlying infrastructure and databases.","spans":[{"start":45,"end":70,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/app-platform/"}}]},{"type":"heading2","text":"How Managed MongoDB works","spans":[]},{"type":"paragraph","text":"DigitalOcean provides you with various compute options to build your apps like:","spans":[]},{"type":"list-item","text":"Droplets: On-demand, Linux virtual machines suitable for production business applications and personal passion projects.","spans":[{"start":0,"end":8,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/droplets/"}}]},{"type":"list-item","text":"DigitalOcean Kubernetes: Managed Kubernetes with automatic scaling, upgrades, and a free control plane.","spans":[{"start":0,"end":23,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/kubernetes/"}}]},{"type":"list-item","text":"DigitalOcean App Platform: A fully managed Platform as a Service.","spans":[{"start":0,"end":25,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/app-platform/"}}]},{"type":"paragraph","text":"No matter which compute option you choose to build your apps, you can easily add Managed MongoDB to it. 
In addition to this, Managed MongoDB also integrates with the Node.js 1-Click App from DigitalOcean Marketplace making it a lot easier to build Node.js apps.","spans":[{"start":166,"end":215,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/nodejs"}}]},{"type":"heading2","text":"Simple, predictable pricing","spans":[]},{"type":"paragraph","text":"Just like all DigitalOcean products, Managed MongoDB provides simple, predictable pricing that allows you to control costs and prevent any surprise bills. You can spin up a database cluster for just $15/month, or a highly available three-node replica set for $45/month. Click here for more information.","spans":[{"start":270,"end":301,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/pricing/#managed-databases"}}]},{"type":"heading2","text":"Regional availability","spans":[]},{"type":"paragraph","text":"Managed MongoDB is currently available in the following regions:","spans":[]},{"type":"list-item","text":"NYC3 (New York, USA)","spans":[]},{"type":"list-item","text":"FRA1 (Frankfurt, Germany)","spans":[]},{"type":"list-item","text":"AMS3 (Amsterdam, Netherlands)","spans":[]},{"type":"paragraph","text":"We will be making Managed Mongo available in other regions soon. 
Please check out the release notes for most up to date information on regional availability.","spans":[{"start":86,"end":99,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/release-notes/"}}]},{"type":"heading2","text":"Join us at deploy, DigitalOcean’s virtual user conference","spans":[]},{"type":"paragraph","text":"Today we have deploy, DigitalOcean’s signature user conference, which focuses on celebrating, educating, and connecting awesome builders from all over the world.","spans":[{"start":14,"end":20,"type":"hyperlink","data":{"link_type":"Web","url":"https://deploy.digitalocean.com/home"}}]},{"type":"paragraph","text":"Check out the keynote session from DigitalOcean's CEO, Yancey Spruill, in which he talks about where we're headed as a company and shares some exciting product updates. His keynote will be followed by sessions from community members, engineers, customers, and other experts that are building technologies and businesses powered by the cloud. With live Q&A and an active Discord server, there’s ample opportunity to engage and learn something new. Click here to attend the deploy conference.","spans":[{"start":14,"end":69,"type":"hyperlink","data":{"link_type":"Web","url":"https://deploy.digitalocean.com/agenda/session/552806"}},{"start":347,"end":384,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/deploy-discord"}},{"start":461,"end":489,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/deploy"}}]},{"type":"paragraph","text":"We are also launching a hackathon for DigitalOcean Managed MongoDB. Learn how you can participate, submit an app and get a t-shirt.","spans":[{"start":24,"end":66,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/mongodb-hackathon"}}]},{"type":"paragraph","text":"We hope you will give Managed MongoDB a try. Here are some sample datasets and sample apps that you can use to kick the tires. 
Check out the docs and let us know what you think!","spans":[{"start":22,"end":43,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/databases/new?engine=mongodb"}},{"start":59,"end":90,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/do-community/mongodb-resources","target":"_blank"}},{"start":141,"end":145,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.digitalocean.com/products/databases/mongodb/"}}]},{"type":"paragraph","text":"If you’d like to have a conversation about using DigitalOcean and Managed MongoDB in your business, please feel free to contact our sales team.","spans":[{"start":120,"end":142,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/company/contact/sales/"}}]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"André Bearfield","spans":[]},{"type":"paragraph","text":"Director of Product Management","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"André Bearfield","author_image":{"dimensions":{"width":553,"height":547},"alt":"André Bearfield","copyright":null,"url":"https://images.prismic.io/www-static/fdc7c85186f0a850b04083e1d4306bd1c19772e8_andre-bearfield.png?auto=compress,format"},"_meta":{"uid":"andre-bearfield"}},"_meta":{"uid":"introducing-digitalocean-managed-mongodb"}},"featured_blog_2":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":400},"alt":"Droplet Console","copyright":null,"url":"https://images.prismic.io/www-static/710499ae-78cc-4179-afc1-15793637b200_DODX3727-790x400-logo-2.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Securely connect to Droplets with SSH key pairs using a new Droplet 
Console","spans":[]}],"blog_post_date":"2021-08-10","blog_post_content":[{"type":"paragraph","text":"The famous author Ken Blanchard once said, “Feedback is the breakfast of champions.\" This is something we truly believe at DigitalOcean, and we always strive to enhance our products based on customer feedback.","spans":[]},{"type":"paragraph","text":"With this goal in mind, we are excited to introduce a new Droplet Console that will make it much easier to connect to your Droplets securely. The new Droplet Console provides one-click SSH access to your Droplets through a native-like SSH/Terminal experience. It also eliminates the need for a password or manual configuration of SSH keys. Starting today, we’re pleased to announce that the new Droplet Console is now available to all Droplet users.","spans":[]},{"type":"heading2","text":"Why you should be using Secure Shell (SSH) ","spans":[]},{"type":"paragraph","text":"Password-based security is notoriously insecure due to password fatigue and the overuse of passwords such as ‘123456’. Secure Shell or SSH is a network communication protocol that solves this by using passwordless solutions for encryption, enabling two computers to communicate and securely share data. At a high level, SSH works by creating cryptographic key pairs consisting of a public and private key, which are computer generated and stored separately to ensure their security. ","spans":[{"start":80,"end":117,"type":"hyperlink","data":{"link_type":"Web","url":"https://cybernews.com/best-password-managers/most-common-passwords/"}}]},{"type":"paragraph","text":"SSH has become the default encryption protocol for many industries, but it was difficult to use SSH keys with DigitalOcean’s current Recovery (VNC) console, which is why we developed our new Droplet Console. The new Droplet Console is backed by an agent that security supervises the key pair, while also providing one-click SSH access to our users. 
You can see the full list of features below.","spans":[]},{"type":"heading2","text":"The new Droplet Console: More time saving, less time wasting ","spans":[]},{"type":"paragraph","text":"The new Droplet Console is for everyone who is looking to build fast, secure apps and avoid hassles with SSH access & usability issues.","spans":[]},{"type":"paragraph","text":"In addition to easier SSH access, the new Droplet Console comes with:","spans":[]},{"type":"list-item","text":"Copy/paste text: Instead of typing lengthy key pairs and text manually, you can use copy/paste to save time. ","spans":[{"start":0,"end":17,"type":"strong"}]},{"type":"list-item","text":"Multi-color support: Multi-color support makes the console more useful and intuitive, and breaks the conventional standard appearance which is black text on a white background. ","spans":[{"start":0,"end":41,"type":"strong"}]},{"type":"list-item","text":"Multi-language support: DigitalOcean’s new Droplet Console supports multiple languages, meaning you can now type and view any content in any language that is supported by UTF-8","spans":[{"start":0,"end":24,"type":"strong"}]},{"type":"list-item","text":"OS/images supported: Linux distributions (Ubuntu(16.04 - 20.04), Fedora (32 & 33), Debian (9), CentOS (7.6 & 8.3), CentOS 8 Stream, Rocky Linux and Marketplace images.","spans":[{"start":0,"end":20,"type":"strong"},{"start":148,"end":159,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/"}}]},{"type":"paragraph","text":"The new Droplet Console is available by default on any new Droplets you spin up. You can also enable it manually on older Droplets. 
Click here to learn more!","spans":[{"start":132,"end":157,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.digitalocean.com/products/droplets/how-to/connect-with-console/"}}]},{"type":"paragraph","text":"Check out this short walkthrough video that shows the new Droplet Console in action: ","spans":[]},{"type":"embed","oembed":{"type":"video","embed_url":"https://www.youtube.com/watch?v=Qt7QihVuxiE","title":"Access Your Droplet Terminal Through the Web Console","provider_name":"YouTube","thumbnail_url":"https://i.ytimg.com/vi/Qt7QihVuxiE/hqdefault.jpg","provider_url":"https://www.youtube.com/","author_name":"DigitalOcean","author_url":"https://www.youtube.com/c/Digitalocean","height":113,"width":200,"version":"1.0","thumbnail_height":360,"thumbnail_width":480,"html":"<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/Qt7QihVuxiE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"}},{"type":"paragraph","text":"We hope you’re excited about the new Droplet Console. 
You’re welcome to spin some Droplets up right now, and try out the new Droplet Console – why wait?","spans":[{"start":72,"end":103,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/droplets/new"}}]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"Harsh Banwait, Senior Product Manager","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"Harsh Banwait","author_image":{"dimensions":{"width":600,"height":399},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/e83ff690-b20c-4d88-a2b6-57e562558cd6_download.png?auto=compress,format"},"_meta":{"uid":"harsh-banwait"}},"_meta":{"uid":"new-droplet-console-ssh-support"}},"featured_blog_3":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":400},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/588e28d3-d41e-480b-937b-8c3b19201f6e_DODX3568-790x400-Blog.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"How to scale your SaaS product without breaking the bank","spans":[]}],"blog_post_date":"2021-06-22","blog_post_content":[{"type":"paragraph","text":"These days, if you are in the business of software, chances are you are delivering or plan to deliver your services using a Software-as-a-Service (SaaS) model. 
A combination of internet-based delivery, subscription-based pricing, and low-friction product experiences have made SaaS solutions valuable tools for their users, and an excellent vehicle for software builders looking to distribute their products.","spans":[]},{"type":"paragraph","text":"These factors have made SaaS solutions ubiquitous; SaaS is the largest segment in the public cloud market, and is used to provide functionality ranging from personal finance apps for consumers, to productivity software for businesses, and even tools and services for software developers themselves to compose their applications and simplify their workflows. It is also not uncommon to find micro-SaaS applications being built for specific industries such as retail, job functions such as accounting or marketing, or tasks such as event management. ","spans":[]},{"type":"paragraph","text":"The best thing about this SaaS wave has been that it has allowed a new generation of software builders to build and monetize applications and participate in the digital economy. Previously, you had to be a big company with lots of resources, name recognition and distribution networks to successfully sell software products. Now, irrespective of whether you are a single person working on a passion project, a small team of developers in a startup, or a small and medium-sized business (SMB), the SaaS model enables you to express your ideas in the form of software and deliver them to customers anywhere in the world.","spans":[]},{"type":"heading2","text":"The unique challenges of building SaaS solutions","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Despite the opportunities that come with the widespread adoption of SaaS products, software builders still have to answer key questions in their journey to building successful SaaS products. 
Understanding what customers to target, features to prioritize, how to price your product, and how to acquire customers are all critical questions to figure out while you are also doing the important job of actually building and operating the product. ","spans":[]},{"type":"paragraph","text":"Writing the code, testing, deployment, monitoring the usage in production, and ensuring that your apps are able to handle the additional demand when customer base and usage grows are all essential and time-consuming tasks.","spans":[]},{"type":"paragraph","text":"Additionally, being able to test multiple ideas, pivot, and double down on the ideas that actually work is critical in early stages of SaaS development. Once growth comes, it is equally important to scale up without compromising on performance or reliability. Needless to say, all of this needs to be economically viable as well, since not everyone has the resources of large SaaS providers like Salesforce or Adobe.","spans":[]},{"type":"heading2","text":"Cloud Computing enables builders but also poses challenges","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Fortunately, for the act of building and operating your apps, cloud computing can help take some load off your shoulders. Unless you have the scale and resources of Facebook, chances are you are not going to set up your own data centers to host the computing infrastructure that powers your SaaS company. Public cloud infrastructure providers can bring great value to SaaS builders by providing on-demand computing services with usage-based pricing. However, just like how the legacy software companies weren't built for the SaaS model, the early (and big) cloud computing services were not optimized for the unique needs of small SaaS building teams. 
","spans":[]},{"type":"paragraph","text":"Smaller SaaS teams face challenges with large cloud computing providers, including:","spans":[]},{"type":"heading4","text":"Too many technology options","spans":[]},{"type":"paragraph","text":"There are just too many options for tech stacks on which to build your SaaS - programming languages, application development frameworks, libraries, runtime environments, architectural patterns, and deployment models - and the list is growing by the day.","spans":[]},{"type":"heading4","text":"Complexity of cloud computing services","spans":[]},{"type":"paragraph","text":"Even when you have decided on a technology stack, there is a lot of cloud vendor-specific terminology you need to learn and heavy lifting you need to do to build on the cloud, not all of which contributes to making your SaaS applications successful.","spans":[]},{"type":"heading4","text":"Unpredictable costs","spans":[]},{"type":"paragraph","text":"The experimentation necessary in early stages of SaaS development, as well as the scaling of applications required during the growth phase, call for affordable and predictable pricing from your cloud provider. The last thing SaaS teams want is surprising and indecipherable bills from your cloud provider. Unfortunately, smaller businesses often experience unpredictable costs with cloud providers who are busy serving only the large enterprises.","spans":[]},{"type":"heading2","text":"DigitalOcean provides a simple, cost effective solution for SaaS builders","spans":[]},{"type":"paragraph","text":"Fortunately, at DigitalOcean we have a laser focus on small software development teams, who are trying to build the next generation of applications. 
Today, DigitalOcean customers are already building SaaS applications which serve all kinds of customers.","spans":[{"start":191,"end":217,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/solutions/saas/"}}]},{"type":"paragraph","text":"We believe SaaS builders should focus on building apps that power their business, and not spend their valuable time on managing infrastructure. That is exactly what we have been able to enable through our intuitive products that are built for scale and reliability.","spans":[{"start":205,"end":223,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/"}}]},{"type":"list-item","text":"Vidazoo is an advertising technology company specializing in video streaming and serving. It serves video ads to thousands of websites and handles close to 10 billion requests per day. \n\n“We are as much a data company as an adtech company. Our business relies on speedy and accurate data processing at massive scale. DigitalOcean provides us the perfect set of tools to operate our SaaS business profitably, while not making us feel the need to become full time system administrators. We plan to move a lot of our apps to DigitalOcean App Platform and other fully managed products.” - Roman Svichar, CTO of Vidazoo","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://vidazoo.com/"}},{"start":187,"end":583,"type":"em"}]},{"type":"paragraph","text":"We believe in meeting customers where they are. If they already have an understanding of cloud infrastructure technologies, they should be able to leverage that knowledge and get started with our products without any further ramp up.","spans":[]},{"type":"list-item","text":"Whatfix is an enterprise SaaS provider that offers a digital adoption platform to businesses. 
The company helps enterprises gain the full value of their investments in enterprise applications by providing real-time, interactive, and contextual guidance to users of those applications. \n\n“What we really love about the DigitalOcean platform is the ease of use. We feel like we know infrastructure and can handle most of the configuration and management. What we needed from a cloud was not bells and whistles but efficiency and reliability. DigitalOcean provides us a platform to build our apps and then gets out of the way. Just how we like it.” - Achyuth Krishna, Director of Engineering of Whatfix","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://whatfix.com/blog/driving-the-future-now-were-excited-to-announce-our-90-million-series-d-funding/"}},{"start":287,"end":648,"type":"em"}]},{"type":"paragraph","text":"We understand that scaling while maintaining reliability of applications and profitability of business is important, so we provide robust solutions which minimize downtime.","spans":[]},{"type":"list-item","text":"Centra is a SaaS-based e-commerce platform for global direct-to-consumer and wholesale e-commerce brands. Centra provides a powerful e-commerce backend that lets brands build pixel-perfect, custom designed, online flagship stores. \n\n“How do we enable our customers to create differentiated online experiences? How do we ensure their e-commerce apps stay up and running at all times? How do we scale on-demand when traffic grows or new customers come in? These are the questions that we ask ourselves every day. 
Thankfully, we have a partner in DigitalOcean that provides just the platform to answer those questions enabling us to guarantee 99.9% uptime for our clients.” - Martin Jensen, CEO of Centra","spans":[{"start":0,"end":6,"type":"hyperlink","data":{"link_type":"Web","url":"https://centra.com/"}},{"start":233,"end":673,"type":"em"}]},{"type":"paragraph","text":"These are just a few examples of SaaS businesses finding success on DigitalOcean. We are constantly amazed by the creativity and innovation that software builders are utilizing our platform for. If you are interested in learning more about product updates, technical deep-dives and best practices for building SaaS products and businesses, please contact us to learn how we can help you get started. ","spans":[{"start":340,"end":357,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/migrate/?utmmedium=blog","target":"_blank"}}]},{"type":"paragraph","text":"Come build with DigitalOcean!","spans":[]},{"type":"paragraph","text":"Looking to migrate your SaaS to DigitalOcean? 
Leverage free infrastructure credits, robust training, and technical support to ensure a worry-free migration.","spans":[{"start":0,"end":156,"type":"strong"},{"start":0,"end":156,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/migrate/?utmmedium=blog","target":"_blank"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Raman Sharma","spans":[]},{"type":"paragraph","text":"Vice President, Product & Programs Marketing","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Developer Relations","_linkType":"Link.document","_meta":{"uid":"developer-relations"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"Raman Sharma","author_image":{"dimensions":{"width":512,"height":512},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/497b4b14-d192-493a-8b66-7ae176ba99f3_raman.png?auto=compress,format"},"_meta":{"uid":"raman-sharma"}},"_meta":{"uid":"how-to-scale-your-saas-product-without-breaking-the-bank"}}}}]}}},"pageContext":{"limit":12,"skip":48,"numTagPages":5,"currentPage":5,"uid":"engineering","data":[{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":null,"blog_headline":[{"type":"heading1","text":"Update on CVE-2015-3456, aka the VENOM Security Vulnerability","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Earlier today, CVE-2015-3456, a security vulnerability also known as VENOM was publicly announced. This bug in KVM/QEMU, our virtualization environment, could potentially exploit a VM's virtual floppy driver as described in detail here and here. 
DigitalOcean has conducted a thorough audit of our platform and taken steps to mitigate the issue.","spans":[{"start":231,"end":235,"type":"hyperlink","data":{"link_type":"Web","url":"https://access.redhat.com/articles/1444903"}},{"start":240,"end":244,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.ubuntu.com/usn/usn-2608-1/"}}]},{"type":"paragraph","text":"On hypervisors running the latest version of our cloud, the QEMU process is confined by a mandatory access control profile which would prevent a would-be attacker from accessing the host system or other Droplets. We are rolling out updates across all of our infrastructure to ensure the latest QEMU security patches are applied on each server. In addition, we have implemented a number of other security and monitoring features in order to provide early warning of attempts to exploit similar vulnerabilities.","spans":[]},{"type":"paragraph","text":"In order to complete the process of applying the security patches, a small number of our hypervisors will require a reboot. Our team is currently working to schedule this in the least disruptive manner possible. 
We will keep you posted on our progress.","spans":[]},{"type":"paragraph","text":"If you have any additional questions, please reach out to our support team:","spans":[]},{"type":"paragraph","text":"https://cloud.digitalocean.com/support","spans":[{"start":0,"end":38,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/support"}}]}],"blog_post_date":"2015-05-12","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"update-on-cve-2015-3456"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Erika Heidi","author_image":null,"_meta":{"uid":"erika_heidi"}},"blog_header_image":{"dimensions":{"width":784,"height":392},"alt":"Horizontally Scaling PHP Applications text on illustration of elephants walking linking trunks and tails","copyright":null,"url":"https://images.prismic.io/www-static/b8ae3d37c96d34df42840479b49ae13601411d0c_hero-2.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Horizontally Scaling PHP Applications","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Shipping a website or application to production has its own challenges, but when it gets the right traction, it's a great accomplishment. It always feels good to see the visitor numbers going up, doesn't it? Except, of course, when your traffic increases so much that it crashes your little LAMP stack. No matter what time or day it is, the cost of your app being offline is just too high, and in many cases it brings irreversible losses to a business.","spans":[]},{"type":"paragraph","text":"But fear not! There are ways to make your PHP application much more reliable and consistent. If the term scalability crossed your mind, you've got the right idea.","spans":[]},{"type":"paragraph","text":"In a nutshell, scalability is the ability of a system to handle an increased amount of traffic or processing and accommodate growth while maintaining a desirable user experience. 
There are two ways of scaling a system: vertically, also known as scaling up, and horizontally, also known as scaling out.","spans":[{"start":15,"end":26,"type":"em"},{"start":255,"end":265,"type":"em"},{"start":299,"end":310,"type":"em"}]},{"type":"paragraph","text":"Vertical scaling is accomplished by increasing system resources, like adding more memory and processing power. Resizing a Droplet, for instance, is vertical scaling. While this can work as an immediate solution, it might be hiding the real problems underneath your application, and there's no guarantee a server twice as powerful will run your app twice as fast.","spans":[]},{"type":"paragraph","text":"Horizontal scaling, on the other hand, is accomplished by adding more servers to an existing cluster. Let's talk about exactly what that means.","spans":[]},{"type":"heading2","text":"What is Horizontal Scaling?","spans":[]},{"type":"paragraph","text":"A cluster is simply a group of servers. A load balancer distributes the workload between the servers in a cluster. At any point, a new web server can be added to the existing cluster to handle more requests from users accessing your application; this is horizontal scaling.","spans":[{"start":2,"end":9,"type":"em"},{"start":42,"end":55,"type":"em"},{"start":42,"end":55,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/load-balancer/"}}]},{"type":"paragraph","text":"Here's an example of horizontal scaling in a diagram:","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/76831544fbe08331f8e10fd4bbbf588b62cf2506_horizontal-scaling.png?auto=compress,format","alt":"Horizontal Scaling","copyright":null,"dimensions":{"width":745,"height":290}},{"type":"paragraph","text":"The load balancer has a single responsibility: deciding which server in the cluster will receive each incoming request. 
It basically acts like a reverse proxy, making the process seamless to the user.","spans":[{"start":156,"end":169,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-reverse-proxy-for-apache"}}]},{"type":"paragraph","text":"While horizontal scaling is usually the most reliable and efficient method of scalability, it's not as trivial as vertical scaling. In a nutshell, the main challenge of scaling web applications is keeping all the nodes in a cluster updated and synchronized. Consider the following scenario.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/2fe03cbc25037e98fd32549bdbef40378f5868f8_scenario.png?auto=compress,format","alt":"Horizontal Scaling","copyright":null,"dimensions":{"width":745,"height":293}},{"type":"paragraph","text":"When user A makes a request to mydomain.com, the load balancer will forward the request to server1. User B, on the other hand, gets forwarded to another node from the cluster, server2.","spans":[]},{"type":"paragraph","text":"What happens when user A makes changes to the application, like uploading files or updating content in the database? How do you maintain consistency across all nodes in the cluster? Further, PHP saves session information on disk by default. If user A logs in, how can we keep that user's session in subsequent requests, considering that the load balancer could send them to another server in the cluster?","spans":[]},{"type":"paragraph","text":"Let's discuss what can be done to overcome these issues and prepare your existing PHP application for horizontal scaling.","spans":[]},{"type":"heading2","text":"Decouple, Decouple, Decouple","spans":[]},{"type":"paragraph","text":"Preparing a system for scalability involves a lot of decoupling, because it's essential to have smaller servers with fewer responsibilities instead of one giant, all-inclusive server. This is really the essence of horizontal scaling. 
Breaking the application down into parts will also help you measure and identify the real bottlenecks you might have.","spans":[]},{"type":"paragraph","text":"Consider a PHP application where users can log in and upload photos. The app uses a basic LAMP stack, and the photos are stored on disk and referenced in the database. The challenge here is to keep consistency between multiple application servers sharing the same data (user-uploaded files and user sessions).","spans":[]},{"type":"paragraph","text":"In order to make this example application scalable, there needs to be a separation between the web server and the database. This way, we can have multiple application nodes sharing the same database server. It's a first step, and it will give the app a small performance improvement by reducing the load on the web server. This tutorial can help you with that.","spans":[{"start":315,"end":328,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-set-up-a-remote-database-to-optimize-site-performance-with-mysql"}}]},{"type":"paragraph","text":"For further scalability, you should also consider implementing a load balancing environment for the database. This tutorial shows how to set up a load balancer for a MySQL cluster.","spans":[{"start":110,"end":123,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-haproxy-to-set-up-mysql-load-balancing--3"}}]},{"type":"heading2","text":"User Session Consistency","spans":[]},{"type":"paragraph","text":"Once the application is isolated from the database server, we can focus on issues specific to the PHP implementation. First, we need to figure out a way to handle user sessions across the nodes. 
Let's talk about a few approaches.","spans":[]},{"type":"heading3","text":"Relational Databases and Network Filesystems","spans":[]},{"type":"paragraph","text":"Many people use the approach of saving the session data in relational databases, like MySQL, because it's fairly easy to implement. However, this is a less desirable solution because it adds a substantial overhead (reading and writing to the database for every single request), and during high-traffic events, the database is usually the first thing to go down.","spans":[]},{"type":"paragraph","text":"Similarly, using a network filesystem is another easy solution to implement as it doesn't require any changes to the codebase, but network filesystems are slow for I/O operations (again, reading and writing for each request made) and this might have a negative impact on application performance.","spans":[]},{"type":"heading3","text":"Sticky Sessions","spans":[]},{"type":"paragraph","text":"Sticky sessions are implemented in the load balancer and don't require any changes to the application nodes, so this is the easiest way to handle user sessions. Sticky sessions make the load balancer always direct a user to the same server, avoiding the need for sharing session information across nodes.","spans":[]},{"type":"paragraph","text":"However, this solution creates new problems. The load balancer now has more responsibilities, which can impact its performance and turn it into a single point of failure. This approach can also create cold and hot spots within the cluster; returning users will always access the same server, even when new nodes are added to the cluster.","spans":[]},{"type":"heading3","text":"Using a Memcached or Redis Server","spans":[]},{"type":"paragraph","text":"This solution requires setting up one or more additional servers to handle the user sessions, but it's the most reliable way to solve the sessions problem. 
Both Memcached and Redis are extremely fast key-value storage engines that provide session handling for PHP. In a nutshell, after setting up the Memcached or Redis server, you will need to configure each node to be able to connect to the server and use it as the session handler. This includes installing a PHP extension and making a simple change in the php.ini settings.","spans":[{"start":164,"end":173,"type":"hyperlink","data":{"link_type":"Web","url":"http://memcached.org/"}},{"start":178,"end":183,"type":"hyperlink","data":{"link_type":"Web","url":"http://redis.io/"}}]},{"type":"paragraph","text":"More information about setting up the Memcached session handler for PHP can be found in the official PHP documentation. For Redis, you can find a detailed guide in this link.","spans":[{"start":92,"end":118,"type":"hyperlink","data":{"link_type":"Web","url":"http://php.net/manual/en/memcached.sessions.php"}},{"start":161,"end":173,"type":"hyperlink","data":{"link_type":"Web","url":"http://phpave.com/redis-as-a-php-session-handler/"}}]},{"type":"heading2","text":"User File Consistency","spans":[]},{"type":"paragraph","text":"So far, we've separated our application and database and dealt with the user session consistency problem. We still need to find a solution to keep consistency between the files uploaded by users, because they could be stored on any of the application nodes.","spans":[]},{"type":"paragraph","text":"There are different methods for solving this problem. In some ways it is similar to the user sessions case, but fortunately, it's actually much simpler. The files are not written to or read from disk for each request, which makes the file sharing not as resource intensive. A solution like GlusterFS, which creates shared storage that replicates any content saved to one node across the other nodes in the cluster, can work really well here. 
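Going back to the session-handler setup described above, the php.ini change mentioned there might look like this minimal sketch; the choice of the phpredis extension and the Redis server address (10.0.0.5:6379) are assumptions for illustration:

```
; On each application node, after installing the phpredis extension,
; point PHP's session handler at the shared Redis server:
session.save_handler = redis
session.save_path    = "tcp://10.0.0.5:6379"
```

With this in place, any node in the cluster can serve any request, because session reads and writes go to the shared store instead of the node's local disk.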
You can find detailed instructions on how to use GlusterFS to set up such an environment in this tutorial.","spans":[{"start":290,"end":299,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.gluster.org/"}},{"start":531,"end":547,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-create-a-redundant-storage-pool-using-glusterfs-on-ubuntu-servers"}}]},{"type":"paragraph","text":"Another popular solution is to use object storage to save the files. This can be implemented using different methods, from simple database blob storage to cloud storage services like AWS S3 and Google Cloud Storage. However, it might require a considerable amount of changes to the codebase, depending on how the application is implemented.","spans":[{"start":35,"end":49,"type":"hyperlink","data":{"link_type":"Web","url":"http://en.wikipedia.org/wiki/Object_storage"}},{"start":183,"end":189,"type":"hyperlink","data":{"link_type":"Web","url":"http://aws.amazon.com/s3/"}},{"start":194,"end":214,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.google.com/storage/"}}]},{"type":"heading2","text":"Load Balancing","spans":[]},{"type":"paragraph","text":"With the application properly decoupled, it's finally possible to create replica nodes that will compose the app cluster. Our example app now has the following setup:","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/120f16e988bb674257d6445a4694f7b64932220e_load-balancing.png?auto=compress,format","alt":"Load Balancing","copyright":null,"dimensions":{"width":745,"height":286}},{"type":"paragraph","text":"Both App01 and App02 should be accessible and able to handle requests in the exact same way. 
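Concretely, balancing traffic across this two-node cluster might look like the following minimal HAProxy sketch; the server names and private addresses (app01/app02, 10.0.0.x) are assumptions for illustration:

```
# Minimal sketch: round-robin HTTP traffic across two app nodes
frontend www
    bind *:80
    default_backend app_cluster

backend app_cluster
    balance roundrobin
    server app01 10.0.0.11:80 check
    server app02 10.0.0.12:80 check
```

The `check` option makes HAProxy health-check each node, so a failed server is taken out of rotation automatically.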
The only thing left to do is set up the load balancer to act as the application entry point, directing users to the different nodes in the cluster.","spans":[]},{"type":"paragraph","text":"HAProxy (which stands for High Availability Proxy) is the standard open source choice for load balancing. It's used by high-profile environments like Twitter, Instagram, and Imgur. For a better understanding of how HAProxy works and different ways to configure it, check out this introductory tutorial.","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.haproxy.org/"}},{"start":275,"end":301,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts"}}]},{"type":"paragraph","text":"Another great tutorial from our community explains how to set up HAProxy as a load balancer for WordPress servers. It's a good starting point to understand the practical steps necessary to horizontally scale PHP applications.","spans":[{"start":51,"end":110,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-haproxy-as-a-layer-4-load-balancer-for-wordpress-application-servers-on-ubuntu-14-04"}}]},{"type":"heading2","text":"Final Considerations","spans":[]},{"type":"paragraph","text":"Preparing an application for horizontal scaling may look intimidating at first, but once you understand how the load balancer works, it gets easier to figure out what steps should be taken in order to get your environment ready for scale.","spans":[]},{"type":"paragraph","text":"Naturally, it's much easier to plan for scalability when you are building an application from scratch, but we don't always have this luxury. It's also worth mentioning that scalability walks side by side with performance, but they are not the same thing, and not all applications need to be scalable. 
Speed, on the other hand, is something all apps can benefit from.","spans":[]},{"type":"paragraph","text":"If you want to learn more, take a look at our YouTube playlist with some of the best talks on PHP performance and scaling, and check out load balancers on DigitalOcean!","spans":[{"start":46,"end":62,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.youtube.com/playlist?list=PLseEp7p6EwiaiJx-AZqXgvpJNJgXuNeBx"}},{"start":137,"end":151,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/load-balancer/"}}]},{"type":"paragraph","text":"by Erika Heidi","spans":[{"start":3,"end":14,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/erikaheidi"}}]}],"blog_post_date":"2015-04-21","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"horizontally-scaling-php-applications"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Bryan Liles","author_image":null,"_meta":{"uid":"bryan_liles"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"gophers digging to a center tunnel with the words 'Taming your Go dependancies'","copyright":null,"url":"https://images.prismic.io/www-static/283d47e0-afd6-46d1-9b56-3226e7ae915f_gophers.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Taming Your Go Dependencies","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Internally at DigitalOcean, we had an issue brewing in our Go code bases.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Separate projects were developed in separate Git repositories, and in order to minimize the fallout from upgraded dependencies, we mirrored all dependencies locally in individual Git repositories. 
These projects relied on various versions of packages, and the problem was that there was no deterministic way to determine which project required which version, and when.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"As a team, we knew this approach was not optimal, but coming to a consensus on a single way to manage packages was a tough decision. With a little bit of effort, we arrived at a solution which addressed the issue of managing package versions without needing an external management tool. We call our effort cthulhu, which is our Go repository. We also refer to it as a mono repo.","spans":[{"start":306,"end":313,"type":"strong"}]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"What's a Mono Repo?","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Building a cloud is a fast-paced business. We have Go projects that serve APIs, move bits around from server to server, and crunch numbers. Because many of these projects share a common set of components, we determined it would be easier to create a single Git project and import all the existing projects. Here's the high-level structure of the project:","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"        .","spans":[]},{"type":"paragraph","text":"        ├── README.md","spans":[]},{"type":"paragraph","text":"        ├── docode","spans":[]},{"type":"paragraph","text":"        │   └── src","spans":[]},{"type":"paragraph","text":"        └── third_party","spans":[]},{"type":"paragraph","text":"            └── src","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"It is called a mono repo because we only have one repository. Our setup is straightforward. We have a root directory that serves as the base for cthulhu. 
Underneath this root, we have two additional directories: `docode` for our code, and `third_party` for other people's code.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"To develop Go software, set your `GOPATH` to `${CTHULHU}/third_party:${CTHULHU}/docode`. That's it!","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The reason that the `third_party` directory is listed first is to ensure that, when packages are fetched using `go get`, they'll be installed in this directory's src/ rather than `docode`.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"At this point, you can create a script that can be sourced into a shell, and you can start developing. ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Why Is This Good?","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"First and foremost, we believe the mono repo is a good idea because using it is frictionless. There are no arcane actions or sacrifices required to configure an individual developer's workstation.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"It is also beneficial because, at this point in the evolution of DigitalOcean's Engineering team, having a single repository means projects are less likely to get lost. Finding code is easy using the mono repo and our team's simple conventions for naming services. 
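The `GOPATH` setup described above can be sketched as a small sourceable script; the repository path (`$HOME/cthulhu`) is an assumption for illustration:

```shell
# Two-entry GOPATH for the mono repo; CTHULHU is the repository root.
CTHULHU="$HOME/cthulhu"
export GOPATH="${CTHULHU}/third_party:${CTHULHU}/docode"
# Because third_party is listed first, `go get` installs fetched
# packages under third_party/src rather than docode/src.
echo "$GOPATH"
```

Sourcing a script like this is the only per-workstation setup a developer needs.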
We have three types of code: doge, our internal standard library, which contains code that is reused throughout the repository; services, which contains all of our business logic; and tools, which are one-off applications and utilities used to manage our Go code, like our custom import rewriter that sorts and separates imports based on our current code guidelines.","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"        .","spans":[]},{"type":"paragraph","text":"        ├── docode","spans":[]},{"type":"paragraph","text":"        │   └── src","spans":[]},{"type":"paragraph","text":"        │       ├── doge","spans":[]},{"type":"paragraph","text":"        │       ├── services","spans":[]},{"type":"paragraph","text":"        │       └── tools","spans":[]},{"type":"paragraph","text":"        └── third_party","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Because all of our Go is in a single repository, everything uses the same versions of external and internal dependencies. If a package is upgraded, every service which depends on the package receives the new functionality. This helps when dealing with security issues. It's also nice to not have to manage versions explicitly. For our purposes, the canonical version is what's under `third_party/src`. If your work requires an upgrade, you install the new dependency, run the tests, and then send a pull request.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"It Isn't All Rainbows","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Our mono repo is a great solution for us, but it doesn't come without its own set of caveats. ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"One of the largest issues is actually an issue with Git. 
Git prescribes submodules for including dependencies in your main repository. When submodules work correctly, there are no problems, but when they don't work, it's a thorny pain for everyone involved. In this case, we chose to sidestep the problem. Instead of dealing with submodules or an external management solution, we rename the `.git` directory (if there is one) for our dependencies. Because the .git directory doesn't exist, Git treats the dependency as just another set of files. If you want to upgrade the package, just restore the `.git` directory name and update. This isn't an amazing experience, but it is simple.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Additionally, when you share a repository with all the other projects, you inherit all the other projects' issues. This means that if one of our individual services has a slow test suite, all services have a slow test suite. In general, testing Go is very fast. When you involve external tests, like database integration, things can slow down. A solution for this is to use the short flag to skip the long tests. An additional solution is to run tests for individual packages. The DigitalOcean Engineering team is still testing and deciding which solution works best for us.","spans":[{"start":378,"end":388,"type":"hyperlink","data":{"link_type":"Web","url":"http://golang.org/pkg/testing/#Short"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Where Do We Go Next?","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Currently, our mono repo serves our needs well. It is an easy concept for newer developers to grasp, it doesn't require any external dependencies, and it allows us to co-locate all of our Go code. 
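The `.git`-renaming trick described earlier can be sketched in a throwaway directory; the paths and the `.git.vendored` name are assumptions for illustration:

```shell
# Simulate vendoring a dependency, then hide its .git directory.
demo=$(mktemp -d)
cd "$demo"
git init -q vendored-dep                         # stand-in for a mirrored dependency
mv vendored-dep/.git vendored-dep/.git.vendored  # Git now sees plain files
ls -A vendored-dep                               # metadata survives as .git.vendored
mv vendored-dep/.git.vendored vendored-dep/.git  # restore the name to upgrade later
```

Because nothing under `vendored-dep` is a repository while the name is changed, the outer mono repo commits the dependency's files like any other source.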
In a nutshell, it's a great thing for us and we believe it could be a great thing for other teams working with Go as well.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"by  Bryan Liles","spans":[{"start":4,"end":15,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/bryanl"}}]}],"blog_post_date":"2015-02-20","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"taming-your-go-dependencies"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Jesse Chase","author_image":null,"_meta":{"uid":"jesse_chase"}},"blog_header_image":{"dimensions":{"width":784,"height":392},"alt":"libscore bookshelf","copyright":null,"url":"https://images.prismic.io/www-static/4cc4061d-7b10-4981-a038-777b85d275f2_banner.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"What's Your Libscore?","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"The contributors to Libscore, including our own Creative Director Jesse Chase, wanted to offer this post as a thank you for all the support the project has received. Julian Shapiro launched Libscore last month hoping that the developer community would find the tool useful, and continues to be grateful for all of the positivity and constructive feedback throughout the web.","spans":[{"start":20,"end":28,"type":"hyperlink","data":{"link_type":"Web","url":"http://libscore.com/"}},{"start":124,"end":139,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.google.com/search?q=libscore&amp;oq=libscore+&amp;aqs=chrome..69i57j0j69i60j69i61j0l2.3373j1j4&amp;sourceid=chrome&amp;es_sm=119&amp;ie=UTF-8#q=libscore&amp;tbm=nws"}}]},{"type":"paragraph","text":"For those who haven't heard, Libscore is a brand new open-source project that scans the top million websites to determine which third-party JavaScript libraries they are using. 
The tool aims to help front-end open source developers measure their impact – you can read all about it here.","spans":[{"start":281,"end":285,"type":"hyperlink","data":{"link_type":"Web","url":"https://medium.com/@Shapiro/introducing-libscore-com-be93165fa497"}}]},{"type":"paragraph","text":"In this post, we'll break down the technology that Libscore leverages and discuss some of the challenges of getting it off the ground. We were also lucky enough to talk with Julian and get some insight as to where he sees the project going.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/1bcaeada9aa8d78d8f71caf8b92e162b85d1e1b0_libscore.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1422,"height":760}},{"type":"heading3","text":"Thomas Davis: A Technical Overview","spans":[]},{"type":"paragraph","text":"Unlike traditional web crawlers, Libscore thoroughly scans the run-time environment of each website in a headless browser. This allows Libscore to monitor the operating environment on each website and to detect as many libraries as possible – even those that have been pre-bundled and required as modules. The tradeoff, of course, is that running one million headless browser connections is much more resource-intensive than performing basic cURL requests and parsing static HTML.","spans":[]},{"type":"paragraph","text":"The biggest insight we gained while designing the crawler is that the best way to weed out false positives for third-party plugins is to leverage the broader data set we're aggregating. Specifically, we weed out third-party libraries that don't appear on at least 30 of the 1 million sites crawled. 
Using meta-heuristics like these allowed us to more confidently detect libraries that were in fact third-party plugins, and not just arbitrary JavaScript variables that were leaking to the global scope.","spans":[]},{"type":"paragraph","text":"On the backend, crawls are queued via Redis with the results stored in MongoDB. Both services are loaded fully into RAM, which allows our RESTful API to serve requests faster than it would by querying the disk.  The main bottleneck to crawling concurrency is network bandwidth, but thanks to DigitalOcean, it was a breeze to repeatedly clone instances and run crawls during off-peak times in different regions. Ultimately, using just a few high-RAM DigitalOcean instances, we parse 600 websites per minute and complete the entire crawl in under 36 hours at the end of each month.","spans":[]},{"type":"paragraph","text":"As the crawler runs, raw library usage data for each site is appended to a master JSON file, which we simply read from the file system with Node.js. Once all the raw usage data is collected, we start a process dubbed \"ingestion\", which is responsible for aggregating the results and making them accessible via the API.  We actually attempted to load the entire dataset into RAM to perform our calculations, but quickly ran into a quirky problem with V8 not being able to allocate any more than approximately 1GB of memory for arrays. For now, we are splitting up the raw dump into smaller files to bypass the memory limit, though in the future we might just rewrite the project to use a more suitable language and environment.","spans":[{"start":426,"end":450,"type":"hyperlink","data":{"link_type":"Web","url":"https://code.google.com/p/v8/issues/detail?id=847"}}]},{"type":"heading3","text":"Jesse Chase: Design Improvements","spans":[]},{"type":"paragraph","text":"While Libscore currently serves as an invaluable tool for surfacing library adoption data, the future is even more exciting. 
To illustrate, let's jump ahead six months – smack in the middle of summer. At this point, Libscore will have crawled through the top million sites six times already (or 6 million domain crawls!), bringing forth rich month-over-month trend data on library usage.","spans":[]},{"type":"paragraph","text":"With a soon-to-be-released time series graph that can plot multiple libraries over the same time period, developers will gain new insights into how libraries are changing over time. For example, users will be able to see why a library's usage plummeted from one month to the next – potentially due to the increased adoption of another library. Soon, this data will be fully visualized.","spans":[]},{"type":"heading3","text":"Julian Shapiro: The Future Of Libscore","spans":[]},{"type":"paragraph","text":"Libscore is more than a destination for JavaScript statistics; it's also a data store that can be leveraged in the marketing of open source projects. One way we're enabling this is via embeddable badges that showcase real-time site counts. Open source developers can show off these badges in their GitHub READMEs, and journalists writing about open source can similarly include them to provide context on the real-world usage of libraries.","spans":[{"start":185,"end":202,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/julianshapiro/libscore#badges"}}]},{"type":"paragraph","text":"In addition to badges, we're also releasing quarterly reports on the state of JavaScript library usage. These reports will showcase trends, helping developers learn which libraries are rising in popularity and which are falling. 
We hope these reports will become a valuable contribution to discussions around the state of web development tooling, and will finally provide the community with concrete data they can use to make decisions.","spans":[]},{"type":"paragraph","text":"Creator and developer – Julian Shapiro\nBackend developer – Thomas Davis\nCreative Director – Jesse Chase","spans":[{"start":0,"end":21,"type":"strong"},{"start":24,"end":38,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/shapiro"}},{"start":39,"end":56,"type":"strong"},{"start":59,"end":71,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/neutralthoughts"}},{"start":72,"end":89,"type":"strong"},{"start":92,"end":103,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/chasingux"}}]}],"blog_post_date":"2015-01-15","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}}],"_meta":{"uid":"whats-your-libscore"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Erika Heidi","author_image":null,"_meta":{"uid":"erika_heidi"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"FreeBSD is here text on illustration with red converse style shoes, a pitchfork, and a devil tail","copyright":null,"url":"https://images.prismic.io/www-static/a0978ef23fa5a14b22c2180b4c40c755e5960a93_freebsd-blog.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Presenting FreeBSD! How We Made It Happen.","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"We're happy to announce that FreeBSD is now available for use on DigitalOcean!","spans":[]},{"type":"paragraph","text":"FreeBSD is the first non-Linux distribution available for use on our platform.  It's been widely requested because of its reputation as a stable and performant OS.  
While similar to other open source Unix-like operating systems, it's unique in that the development of both its kernel and user space utilities is managed by the same core team, ensuring consistent development standards across the project.  FreeBSD also offers a simple yet powerful package management system that allows you to compile and install third-party software for your system with ease.","spans":[]},{"type":"paragraph","text":"One particularly compelling attribute of the FreeBSD project is the quality of their documentation, including the FreeBSD Handbook which provides a comprehensive and thoughtful overview of the operating system.  We at DigitalOcean love effective and concise technical writing, and so we've also produced numerous FreeBSD tutorials to aid new users with Getting Started with FreeBSD.","spans":[{"start":85,"end":98,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.freebsd.org/docs.html"}},{"start":114,"end":130,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/"}},{"start":304,"end":330,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tags/freebsd?primary_filter=tutorials"}},{"start":353,"end":381,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-get-started-with-freebsd-10-1"}}]},{"type":"paragraph","text":"We understand that this has been a long-standing user request, and we've heard you.  You might be asking yourself: what took so long?","spans":[]},{"type":"paragraph","text":"The internal structure of DigitalOcean's engineering team has rapidly changed over time due to the dynamic growth of the company.  What began as a couple of guys coding furiously in a room in Brooklyn has ballooned to a 100+ person organization serving hundreds of thousands of users around the globe.  
As we've grown, we've needed to adjust and reorganize ourselves and our systems to be able to better serve our users.  There have been many experiments in how we approach, prioritize, and execute this work; this image is a result of the successful alignment of a few key elements.","spans":[]},{"type":"heading3","text":"Technical Foundation","spans":[]},{"type":"paragraph","text":"Last year, we built our metadata service — allowing a droplet to have access to information about itself at the time that it's being created.  This is a powerful thing because it gives a vanilla image a mechanism to configure itself independently.  This service was a big part of what allowed us to offer CoreOS, and building it gave us more flexibility in what we could offer moving forward.  Our backend code would no longer need to know the contents of the image to be able to serve it.  On creation, the droplet itself could query for configurables — hostnames, SSH keys, and the like — and configure itself instead of relying on a third party.","spans":[]},{"type":"paragraph","text":"This fundamental decoupling is an echo of a familiar refrain: build well-defined interfaces and don't let knowledge leak across those boundaries unnecessarily.  It's allowed us to free images from customization by our backend code, and entirely sidestep the problematic issue of modifying a UFS filesystem from a Linux host.","spans":[]},{"type":"paragraph","text":"Since we now had a feasible mechanism to allow images to be instantiated independently of our backend, we just needed to put the parts together that would allow us to inject the configuration upon creation.  
FreeBSD doesn't itself offer cloud versions of the OS similar to what Canonical and Red Hat provide, so we started from a publicly available port of cloud-init meant to allow FreeBSD to run on OpenStack.","spans":[{"start":349,"end":367,"type":"hyperlink","data":{"link_type":"Web","url":"http://pellaeon.github.io/bsd-cloudinit/"}}]},{"type":"paragraph","text":"To query metadata, we first need an initial network configuration, since DigitalOcean's droplets use static networking.  At boot time, we bring up the droplet on an IPv4 link-local address to make the initial query to the service.  From there, we pick up the real network config, hostname, and SSH keys.  The cloud-init project then writes a configuration that's associated with the droplet's ID.  Linking this configuration to the droplet ID is the mechanism that allows it to know whether the image is being created from a snapshot or a new create, or is just a rebooted instance of an already configured droplet.","spans":[]},{"type":"paragraph","text":"Once this configuration has been injected, FreeBSD's boot process can continue and use it accordingly — eventually booting into the instance as expected.","spans":[]},{"type":"heading3","text":"Focus","spans":[]},{"type":"paragraph","text":"This endeavor began life as an experiment in how we organize ourselves in the engineering team.  We were given a few weeks to pick a project, self-organize in cross-functional teams, and execute.  A lot went right during this process that allowed this project to succeed.","spans":[]},{"type":"paragraph","text":"Deadlines are powerful things.  Not in a punitive or negative sense of the word, but in the sense that there will be a well-defined time when work on this will collectively end.  So is having a very clear picture of what \"done\" looks like.  In the case of BSD, it was particularly powerful to have a clear goal of a functional alpha BSD droplet with a date to drive for.  
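The first-boot metadata lookup described above can be sketched from inside a droplet. This is an illustrative sketch, not the bsd-cloudinit code itself; 169.254.169.254 is DigitalOcean's link-local metadata address, and the fallback values exist only so the sketch also runs outside a droplet, where the service is unreachable:

```shell
# Sketch of the first-boot lookup: ask the link-local metadata service
# for the droplet's real configuration. Fallback values are illustrative.
METADATA=http://169.254.169.254/metadata/v1

fetch() {
  # Return the metadata value, or a fallback when the service is unreachable.
  curl -fs --max-time 2 "$METADATA/$1" || echo "$2"
}

HOST=$(fetch hostname example-droplet)
DROPLET_ID=$(fetch id 0)
echo "hostname=$HOST id=$DROPLET_ID"
```

On a real droplet, the values returned here (hostname, SSH keys, droplet ID) are what the cloud-init port persists and keys its configuration against.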
Given the freedom to focus on a single goal, clear communication, and well-defined constraints, we were able to finally deliver a long-standing user request with relative ease.","spans":[]},{"type":"paragraph","text":"This is the start of the many things we're excited to build in 2015!","spans":[]},{"type":"paragraph","text":"By: Neal Shrader","spans":[{"start":4,"end":16,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/icosahedral"}}]}],"blog_post_date":"2015-01-13","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"News","_linkType":"Link.document","_meta":{"uid":"news"}}}],"_meta":{"uid":"presenting-freebsd-how-we-made-it-happen"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"David E. Worth","author_image":{"dimensions":{"width":250,"height":250},"alt":"David E. Worth","copyright":null,"url":"https://images.prismic.io/www-static/88908d6f279ad5cae0d19e5f8f8193854aa2d489_da3f9c3ffc8b92a283a0dc067f6750f7.jpg?auto=compress,format"},"_meta":{"uid":"david_e_worth"}},"blog_header_image":{"dimensions":{"width":735,"height":392},"alt":"user data automation illustration","copyright":null,"url":"https://images.prismic.io/www-static/2b415496-24d9-4fdf-95c5-2ce9041bf814_user-data.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Automating App Deployments with User-Data","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Automating common development tasks such as building, testing, and deploying your application has many benefits, including increasing repeatability and consistency by removing the potential for interference by \"the human element.\" Deploying your applications by running a single command from the command line means that your team can spend their time working on the app rather than the care and feeding of 
installations.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"There are some very convenient use cases for creating new Droplets and automatically running applications on them. Your team may want to deploy a feature branch containing new customer or user-facing code in order to get feedback, or stand up a demo instance of your product for a customer at the touch of a button. This blog post will cover how you can accomplish these and other use cases with the DigitalOcean API.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Mitchell Anicas has written about using Metadata via the API in the DigitalOcean Community. With that as a starting point, we can create some workflows that automatically deploy applications to Droplets.  With the DigitalOcean API and `CloudInit` accessed via User-Data we can","spans":[{"start":40,"end":60,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/an-introduction-to-droplet-metadata"}},{"start":236,"end":245,"type":"hyperlink","data":{"link_type":"Web","url":"https://help.ubuntu.com/community/CloudInit"}}]},{"type":"paragraph","text":"- Get an application or source code onto a Droplet","spans":[]},{"type":"paragraph","text":"- Run an application in a Docker container so that it \"just works\" with a single API call","spans":[]},{"type":"paragraph","text":"- Set up configuration management tools automatically","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Getting your application code to the Droplet","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Before we can run our application, its source code or binary needs to be on a Droplet.  
As Mitchell described, spinning up a new Droplet via the API is very simple, so our only modification will be in setting up an application stored in public version control, specifically GitHub.  If your project happens to be on another hosted version-control service, such as Bitbucket, the appropriate changes should be straightforward.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Suppose I have a public GitHub repository housing a Rails application that I would like to deploy to a Droplet via the API.  Using the User-Data functionality I can simply install Git and clone the repository in the `runcmd` block of the Cloud Config:","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    curl -X POST https://api.digitalocean.com/v2/droplets \\  ","spans":[]},{"type":"paragraph","text":"    -H 'Content-Type: application/json' \\","spans":[]},{"type":"paragraph","text":"    -H 'Authorization: Bearer <MY TOKEN>' \\","spans":[]},{"type":"paragraph","text":"    -d '{\"name\":\"example.com\", \"region\":\"nyc3\", \"size\":\"512mb\",","spans":[]},{"type":"paragraph","text":"         \"image\":\"ubuntu-14-04-x64\", \"ssh_keys\":null, \"backups\":false,","spans":[]},{"type":"paragraph","text":"         \"ipv6\":false, \"private_networking\":false,","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"         \"user_data\":\"","spans":[]},{"type":"paragraph","text":"    #cloud-config","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    runcmd:  ","spans":[]},{"type":"paragraph","text":"      - apt-get install -y git","spans":[]},{"type":"paragraph","text":"      - git clone https://github.com/daveworth/sample_app_rails_4 /opt/apps/sample_app_rails_4","spans":[]},{"type":"paragraph","text":"    \"}'","spans":[]},{"type":"paragraph","text":"`}```","spans":[]},{"type":"paragraph","text":" 
","spans":[]},{"type":"paragraph","text":"In the case where we are cloning a private repository, we can simply change the `git clone` command to include a token issued by GitHub:","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    git clone https://<MY GitHub TOKEN>:x-oauth-basic@github.com/daveworth/sample_app_rails_4 /opt/apps/sample_app_rails_4  ","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Similarly, if we want to deploy a specific feature-branch of the repository we can simply use the `-b` flag to specify that branch:","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    git clone -b feature/some-great-feature https://github.com/our_team/our_big_project.git /opt/apps/our_big_project  ","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Getting your application running!","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Simply cloning your code onto a fresh running Droplet is nice, but is not nearly as useful as having your application \"just work\" on that Droplet.  We've written fairly extensively about Docker previously, including a Getting Started Guide to using it on DigitalOcean.  
Not every image at DigitalOcean supports User-Data but conveniently our Docker Application Image does, allowing you to deploy a running instance of your application on it.","spans":[{"start":162,"end":180,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tags/docker"}},{"start":218,"end":239,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-getting-started"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"We are going to work through the process of getting an example Rails 4 application up and running on a new Droplet using User-Data and Docker.  ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"I have forked the Sample Rails 4 application from railstutorial to my personal github in the `sample_app_rails_4` repository. In my fork I included a `Dockerfile` which configures a Docker container with all of the application's dependencies, sets up its database, and finally runs the application.","spans":[{"start":50,"end":63,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/railstutorial"}},{"start":94,"end":112,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/daveworth/sample_app_rails_4"}},{"start":151,"end":161,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/daveworth/sample_app_rails_4/blob/master/Dockerfile"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"With that file in the repository, modifying our User-Data to run the application is very simple. First change the image from `\"ubuntu-14-04-x64\"` to an image that ships with Docker (to find those use our `/v2/images` API endpoint with application image filters). In this case we will use `Docker 1.4.1 on 14.04` whose `slug` is `docker`. 
We can instruct Docker to build and run our container while exposing ports 80 and 443 to the application's HTTP(s) server port (in this case 3000) by changing the `user_data` field in our JSON body as follows.  Walking through the commands below, we first install git and clone down our sample application with it.  We then instruct Docker to build a container from the application, run it, and bind ports 80 and 443 to the rails server running on port 3000.","spans":[{"start":204,"end":260,"type":"hyperlink","data":{"link_type":"Web","url":"https://developers.digitalocean.com/#list-all-application-images"}}]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    curl -X POST https://api.digitalocean.com/v2/droplets \\  ","spans":[]},{"type":"paragraph","text":"    -H 'Content-Type: application/json' \\","spans":[]},{"type":"paragraph","text":"    -H 'Authorization: Bearer <MY TOKEN>' \\","spans":[]},{"type":"paragraph","text":"    -d '{\"name\":\"example.com\", \"region\":\"nyc3\", \"size\":\"512mb\",","spans":[]},{"type":"paragraph","text":"         \"image\":\"docker\", \"ssh_keys\":null, \"backups\":false,","spans":[]},{"type":"paragraph","text":"         \"ipv6\":false, \"private_networking\":false,","spans":[]},{"type":"paragraph","text":"         \"user_data\":\"","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    #cloud-config","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    runcmd:  ","spans":[]},{"type":"paragraph","text":"      - apt-get -y install git","spans":[]},{"type":"paragraph","text":"      - git clone https://github.com/daveworth/sample_app_rails_4.git /opt/apps/sample_app_rails_4","spans":[]},{"type":"paragraph","text":"      - docker build -t sample_app_rails_4 /opt/apps/sample_app_rails_4","spans":[]},{"type":"paragraph","text":"      - docker run --name sample_app_rails_4  -p 80:3000 -p 443:3000 -d 
sample_app_rails_4","spans":[]},{"type":"paragraph","text":"    \"}'","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"For the sake of simplicity and brevity in this post, we have set up the deployed application to use SQLite3 in production.  In the case where you have a more realistic infrastructure including relational databases, key-value stores, full-text search engines, etc., you will need to build separate Docker containers for each and link them up. The [dockerfile project](https://github.com/dockerfile) on GitHub has `Dockerfile`s for many of your favorite projects to help you on your way.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Building new Droplets using Configuration Management","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"For larger and more complicated infrastructure many teams will lean on sophisticated configuration management tools to automate everything and refocus their attention on more challenging problems than installing dependencies. The DigitalOcean community has covered several options in their tutorials: Puppet, Ansible, and Chef. Many of those tools include modules for interacting with DigitalOcean already such as Knife's DigitalOcean Plugin and Ansible's DigitalOcean Module, but at the time of this writing they do not include User-Data support. Much of the same functionality from our previous User-Data example can be replicated in a Configuration-Management system such as Puppet, Chef, or Ansible.  As the complexity of your configuration grows, User-Data alone can become unwieldy. Configuration management tools allow you to break your configurations into more manageable units.  
","spans":[{"start":301,"end":307,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/getting-started-with-puppet-code-manifests-and-modules"}},{"start":309,"end":316,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-an-ubuntu-12-04-vps"}},{"start":322,"end":326,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-plugin-for-knife-to-manage-droplets-in-chef"}},{"start":414,"end":441,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-plugin-for-knife-to-manage-droplets-in-chef"}},{"start":446,"end":475,"type":"hyperlink","data":{"link_type":"Web","url":"http://docs.ansible.com/digital_ocean_module.html"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"We can use User-Data to install and configure our configuration-management tools, which can, in turn, configure your application.  Using the previous User-Data techniques, we can install Puppet, fetch your manifests, and configure the Droplet.  Here we fetch Puppet Labs' package and install it (per their instructions).  We then update Apt and install both puppet and git.  After getting those packages installed, we clone our Puppet manifests and apply them.  
After that, we are free to do whatever we like with our newly configured Droplet.","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    curl -X POST https://api.digitalocean.com/v2/droplets \\  ","spans":[]},{"type":"paragraph","text":"    -H 'Content-Type: application/json' \\","spans":[]},{"type":"paragraph","text":"    -H 'Authorization: Bearer <MY TOKEN>' \\","spans":[]},{"type":"paragraph","text":"    -d '{\"name\":\"puppet.example.com\", \"region\":\"nyc3\", \"size\":\"512mb\",","spans":[]},{"type":"paragraph","text":"         \"image\":\"ubuntu-14-04-x64\",","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"         \"user_data\":\"","spans":[]},{"type":"paragraph","text":"    #cloud-config","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    runcmd:  ","spans":[]},{"type":"paragraph","text":"      - wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb","spans":[]},{"type":"paragraph","text":"      - dpkg -i puppetlabs-release-trusty.deb","spans":[]},{"type":"paragraph","text":"      - apt-get update","spans":[]},{"type":"paragraph","text":"      - apt-get -y install puppet","spans":[]},{"type":"paragraph","text":"      - apt-get -y install git","spans":[]},{"type":"paragraph","text":"      - git clone https://<Our Team Token>:x-oauth-basic@github.com/our_team/puppet_manifests.git /etc/puppet/manifests","spans":[]},{"type":"paragraph","text":"      - puppet apply /etc/puppet/manifests/site.pp","spans":[]},{"type":"paragraph","text":"      - # ... do something with your newly configured infrastructure... 
for instance, set up some containers!","spans":[]},{"type":"paragraph","text":"    \"}'","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Packaging our Application for easy deployment","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Because all of our work is being executed with standard tools like `curl`, we can codify it in a simple shell script that could even be shipped with your open-source projects.  ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Simply including a `deploy_to_do.sh` script in your project would help new users quickly get a working application on DigitalOcean right from your GitHub repo.  ","spans":[{"start":19,"end":36,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/daveworth/sample_app_rails_4/blob/master/deploy_to_do.sh"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Here's an example script:","spans":[]},{"type":"paragraph","text":"```[bin]`{","spans":[]},{"type":"paragraph","text":"    #!/bin/sh","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    # Deploy our cool tool to DigitalOcean.","spans":[]},{"type":"paragraph","text":"    # Make sure you set the DIGITALOCEAN_TOKEN environment variable to your API token before running.","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    set -e     # Stop on first error  ","spans":[]},{"type":"paragraph","text":"    set -u     # Stop if an unbound variable is referenced","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    curl -X POST https://api.digitalocean.com/v2/droplets \\  ","spans":[]},{"type":"paragraph","text":"    -H \"Authorization: Bearer $DIGITALOCEAN_TOKEN\" \\","spans":[]},{"type":"paragraph","text":"    # ... 
the rest of your command goes here","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Conclusion","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The User-Data support in DigitalOcean's API allows you and your team to automatically run your code on Droplets.  By automating the deployment process, your team will be able to spin up new instances of your application on Droplets as quickly as running any other command.  From there, testing new features or letting prospective clients use their own demo instance is one command away!","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Have any questions about automating your infrastructure using User-Data? Found any exciting use cases? Let us know in the comment section!","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"by David E Worth","spans":[{"start":3,"end":16,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/david_e_worth"}}]}],"blog_post_date":"2015-01-08","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"automating-application-deployments-with-user-data"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Erika Heidi","author_image":null,"_meta":{"uid":"erika_heidi"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"php thank you","copyright":null,"url":"https://images.prismic.io/www-static/0d34a56c-8cc0-4df2-b91a-b6a9a72a031c_php-thanks.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Thank You To PHP's Top Package Authors!","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"PHP remains the most popular server-side programming language powering the World Wide Web and in use by 82% 
of websites. Metrics focused on server-side languages show that PHP usage has increased by 1% in the past year alone.","spans":[{"start":16,"end":28,"type":"hyperlink","data":{"link_type":"Web","url":"http://w3techs.com/technologies/overview/programming_language/all"}},{"start":121,"end":128,"type":"hyperlink","data":{"link_type":"Web","url":"http://w3techs.com/technologies/history_overview/programming_language"}}]},{"type":"paragraph","text":"Much of the growth in the last few years was driven by recently developed tools and frameworks, especially Composer. Composer is a dependency management tool, similar to Node's npm, that manages per-project dependencies and package versions for PHP projects. It uses Packagist as its main package repository, which has shown impressive growth in the last year, doubling the number of tracked packages.  This past October, the number of installations reached the 45 million mark.","spans":[{"start":107,"end":115,"type":"hyperlink","data":{"link_type":"Web","url":"https://getcomposer.org/"}},{"start":170,"end":180,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.npmjs.org/"}},{"start":267,"end":276,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/"}},{"start":325,"end":342,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/statistics"}}]},{"type":"paragraph","text":"As such, Mikeal and Erika from the DigitalOcean Evangelism team, were curious to find the top 10 Packagist contributors based on the 50 most required packages and their authors. 
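Composer's per-project model described above centers on a composer.json manifest listing what a project requires. A minimal, hypothetical example using two packages discussed below (version constraints are illustrative):

```json
{
    "require": {
        "monolog/monolog": "~1.0"
    },
    "require-dev": {
        "phpunit/phpunit": "~4.0"
    }
}
```

Running `composer install` against such a file resolves and fetches the listed packages from Packagist, with `require-dev` entries installed only for development.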
We used this script to collect our data.","spans":[{"start":9,"end":15,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/mikeal"}},{"start":20,"end":25,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/erikaheidi"}},{"start":133,"end":158,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/mikeal/php-analytics/blob/master/top50-packages.md"}},{"start":186,"end":197,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/mikeal/php-analytics"}}]},{"type":"paragraph","text":"Why the most required packages? Open source project authors rely on libraries that are well-maintained and stable.  These provide a solid structure on which to build a successful project. If hundreds or thousands of projects are relying on a specific package, this also means more people are able to contribute and quickly fix any bugs that might show up in the underlying required library.","spans":[]},{"type":"paragraph","text":"Thus, we'd like to give a huge thank you to the authors who took the time to create and share awesome projects with the open source community!","spans":[{"start":31,"end":40,"type":"strong"}]},{"type":"heading3","text":"1) Fabien Potencier – 22 packages, 16412 total references","spans":[{"start":3,"end":19,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/fabpot"}}]},{"type":"paragraph","text":"Fabien Potencier leads the ranking with 22 packages being referenced (required) by a total of 16412 other packages. Most of these packages are components of the Symfony Framework, created by Fabien, which are also widely used together or isolated in other projects. His most required package is symfony/framework-bundle with 2626 packages depending on it. 
This package is a requirement for Symfony bundles, which extend the main framework's functionality.","spans":[{"start":295,"end":319,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/symfony/framework-bundle"}}]},{"type":"heading3","text":"2) Sebastian Bergmann – 1 package, 9181 total references","spans":[{"start":3,"end":21,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/s_bergmann"}}]},{"type":"paragraph","text":"Sebastian Bergmann is the author of phpunit/phpunit, the most referenced package on Packagist. PHPUnit is a popular unit testing framework for PHP, used as a development requirement by 9181 other projects of all sizes and types on Packagist.","spans":[{"start":36,"end":51,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/phpunit/phpunit"}},{"start":95,"end":102,"type":"hyperlink","data":{"link_type":"Web","url":"https://phpunit.de/"}}]},{"type":"heading3","text":"3) Taylor Otwell – 3 packages, 3608 total references","spans":[{"start":3,"end":16,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/taylorotwell"}}]},{"type":"paragraph","text":"Taylor Otwell is the creator of the Laravel Framework. His package illuminate/support is the second most required on Packagist, with 3608 projects depending on it. This library offers a series of helpers for dealing with databases, arrays, and collections. 
It is a component of the Laravel Framework but can also be used as a standalone library.","spans":[{"start":36,"end":53,"type":"hyperlink","data":{"link_type":"Web","url":"http://laravel.com/"}},{"start":67,"end":85,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/illuminate/support"}}]},{"type":"heading3","text":"4) Benjamin Eberlei – 4 packages, 3170 total references","spans":[{"start":3,"end":19,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/beberlei"}}]},{"type":"paragraph","text":"Benjamin Eberlei is the lead of the Doctrine project, a collection of several PHP libraries focused on database abstraction and object mapping. The package doctrine/orm is the most required, with 1421 other packages depending on it. Those include frameworks, CMSs, and various database-related libraries.","spans":[{"start":36,"end":52,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.doctrine-project.org/"}},{"start":156,"end":168,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/doctrine/orm"}}]},{"type":"heading3","text":"5) Jordi Boggiano – 2 packages, 1975 total references","spans":[{"start":3,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/seldaek"}}]},{"type":"paragraph","text":"Jordi Boggiano is the co-author of Composer, the project that inspired this article and stands as one of the most relevant milestones in modern PHP. Jordi is one of the authors of composer/installers, and he also created monolog/monolog. 
The former is commonly required by frameworks and CMSs to bring composer features into those projects, and the latter is a very popular logging library for PHP.","spans":[{"start":35,"end":43,"type":"hyperlink","data":{"link_type":"Web","url":"https://getcomposer.org/"}},{"start":180,"end":199,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/composer/installers"}},{"start":221,"end":236,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/monolog/monolog"}}]},{"type":"heading3","text":"6) Pádraic Brady – 1 package, 1660 total references","spans":[{"start":3,"end":16,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/padraicb"}}]},{"type":"paragraph","text":"Pádraic Brady is the author of mockery/mockery, a mock object framework for unit testing in PHP. As with PHPUnit, this is usually a development requirement for creating and running the project test suite. It's required by 1660 other packages on Packagist.","spans":[{"start":31,"end":46,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/mockery/mockery"}}]},{"type":"heading3","text":"7) Zend Framework – 2 packages, 1453 total references","spans":[{"start":3,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/zfdevteam"}}]},{"type":"paragraph","text":"Zend is a popular framework for PHP. The Zend Framework development team has two packages in the TOP 50, the most required one being zendframework/zendframework with 1123 packages depending on it. 
Among the dependent packages are components of the main framework, as well as many extensions created by users.","spans":[{"start":133,"end":160,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/zendframework/zendframework"}}]},{"type":"heading3","text":"8) Kitamura Satoshi – 1 package, 1371 total references","spans":[{"start":3,"end":19,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/satooshi"}}]},{"type":"paragraph","text":"Kitamura Satoshi is the author of satooshi/php-coveralls, a PHP client library for Coveralls – an application that provides test coverage stats for continuous integration environments. This library is required by 1371 other projects on Packagist, as it is a popular asset for continuous integration within PHP projects.","spans":[{"start":34,"end":56,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/satooshi/php-coveralls"}},{"start":83,"end":92,"type":"hyperlink","data":{"link_type":"Web","url":"https://coveralls.io/"}}]},{"type":"heading3","text":"9) Michael Dowling – 2 packages, 1329 total references","spans":[{"start":3,"end":18,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/mtdowling"}}]},{"type":"paragraph","text":"Michael Dowling is the creator of Guzzle, a HTTP client library and framework for PHP. This library is very popular with projects that make use of remote APIs. 
His package guzzle/guzzle is required by 811 other projects on Packagist, and many of those are wrapper libraries created to facilitate the use of various APIs.","spans":[{"start":34,"end":40,"type":"hyperlink","data":{"link_type":"Web","url":"http://docs.guzzlephp.org/en/latest/"}},{"start":172,"end":185,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/guzzle/guzzle"}}]},{"type":"heading3","text":"10) Greg Sherwood – 1 package, 1264 total references","spans":[{"start":4,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/gregsherwood"}}]},{"type":"paragraph","text":"Greg Sherwood is the author of squizlabs/php_codesniffer, a library for detecting violations of a defined coding standard. His package is required by 1264 other projects on Packagist.","spans":[{"start":31,"end":56,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/squizlabs/php_codesniffer"}}]},{"type":"paragraph","text":"by Erika Heidi","spans":[{"start":3,"end":14,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/erikaheidi"}}]}],"blog_post_date":"2014-11-25","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}}],"_meta":{"uid":"thank-you-to-phps-top-package-authors"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"oauth","copyright":null,"url":"https://images.prismic.io/www-static/ea4c857d-93d3-4a3d-b5f3-89b7d4d2e280_oauth_sammy.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Integrate 
Your Apps With Our API Using OAuth","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"OAuth 2 is now available for applications harnessing the DigitalOcean API, giving developers an easy way to integrate their applications with user accounts. Users can quickly authorize third-party applications with read or read & write access to their DigitalOcean account without exposing personal credentials.","spans":[{"start":57,"end":73,"type":"hyperlink","data":{"link_type":"Web","url":"https://developers.digitalocean.com/v2/"}}]},{"type":"paragraph","text":"Prior to OAuth support, applications that used the API required users to supply their personal access token, a manual and inconvenient process. The new OAuth flow, available in APIv2, is much better suited for web applications, as users can safely and easily provide access to their accounts through a DigitalOcean authorization request page.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/c29e9aa5778d5f8316a68dec1533a0217339ac6c_authorize_application.png?auto=compress,format","alt":"Authorize Application page","copyright":null,"dimensions":{"width":750,"height":336}},{"type":"paragraph","text":"Additionally, users can view and revoke account access to authorized applications within the DigitalOcean control panel.","spans":[{"start":93,"end":119,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/settings/applications"}}]},{"type":"heading2","text":"Getting Started","spans":[]},{"type":"paragraph","text":"Since we use Ruby internally, we are providing our open source OAuth strategy for the community to use. The omniauth-digitalocean gem is on Github and published to RubyGems. Based on OmniAuth, the widely used Rack-based library for multi-provider authentication, the gem is an easy way to integrate \"sign in with DigitalOcean\" into Rails and Rack frameworks. 
We are excited to join the growing list of providers with OmniAuth strategies.","spans":[{"start":108,"end":133,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitaloceancloud/omniauth-digitalocean"}},{"start":183,"end":191,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/intridea/omniauth"}},{"start":386,"end":398,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/intridea/omniauth/wiki/List-of-Strategies"}}]},{"type":"paragraph","text":"We've also developed some community resources to help you get started with OAuth, including a general intro to OAuth 2 and a tutorial on how to use OAuth with DigitalOcean both as a user and as a developer.","spans":[{"start":94,"end":118,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/an-introduction-to-oauth-2"}},{"start":137,"end":171,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-oauth-authentication-with-digitalocean-as-a-user-or-developer"}}]},{"type":"paragraph","text":"As we continue to refine our new API, we always appreciate any feedback on our API v2 Github page. And if you want to let people know about what you've built, submit your applications to the DigitalOcean projects page. 
We're excited to highlight the great work of the DO community.","spans":[{"start":79,"end":97,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/digitaloceancloud/api-v2"}},{"start":191,"end":217,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/projects"}}]}],"blog_post_date":"2014-07-23","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"integrate-your-apps-with-our-api-using-oauth"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Bryan Liles","author_image":null,"_meta":{"uid":"bryan_liles"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"gophers digging through the ground illustration with words 'Getting started with Go'","copyright":null,"url":"https://images.prismic.io/www-static/83c59839-f74f-4beb-bc84-0a64c4d1bdf0_Go_Blog.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Get Your Development Team Started With Go","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Here at DigitalOcean, Go is quickly becoming one of our favorite programming languages. After a few internal debates,  I've distilled a few thoughts that I'd like to share with teams new to Go (or thinking of taking it on in the future).","spans":[{"start":119,"end":123,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/bryanl"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Using External Code","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The Go package landscape is growing every day. People are sharing high quality code that prevents you from having to reinvent the wheel. 
There are packages that help you with tasks that range from implementing complex algorithms and building networking services to interfacing with other low level systems through the Go C bindings.","spans":[]},{"type":"paragraph","text":"Given what's available, it's still a challenge to locate high quality packages to help you build your projects. Through word of mouth and on social channels (e.g. Twitter), I've found special gems like go-tigertonic and testify. Yes, we could have gotten by without them – but they provide benefits we don't feel the need to replicate in-house.","spans":[{"start":202,"end":215,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/rcrowley/go-tigertonic"}},{"start":220,"end":227,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/stretchr/testify"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"There are also a few package repositories that exist; however, none of them can be considered a standard. 
There are announcement services like OSS Go, but they aren't helpful if you are looking for something specific.","spans":[{"start":143,"end":149,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/oss_go"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"While this remains an unsolved issue, Go has a secret weapon:","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The standard library included with Go is incredibly robust, and unless you are looking for something industry specific or niche, there's a high probability the standard library has a complete solution – or the stepping stone to help you build a solution.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":" Integrating With External Code","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"As you start integrating packages written externally, you should take care to *use* the external packages and not *become* the external package. External API interfaces can change, or your team may decide that it needs to replace backend systems with something more robust. 
Use Go's interfaces to insulate your application from your imported package's types; in doing this, the focus will shift to fulfilling your project's needs rather than building around a core you don't own or have control over.","spans":[{"start":79,"end":82,"type":"em"},{"start":115,"end":121,"type":"em"}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"As an example, if you're using an external Redis client package, it exports the following:","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"    ```[go]{`","spans":[]},{"type":"paragraph","text":"    package redis","spans":[]},{"type":"paragraph","text":"    type Redis struct {}","spans":[]},{"type":"paragraph","text":"    func (r *Redis) Get(k string) (*RedisKVPair, error) { /* omitted */ }`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"You could use this code in your application, but you'll run into two problems. The first is that you'll need a Redis server if you ever want to test code that uses this package. 
The second is that you won't be able to easily upgrade or swap the Redis client out if you require additional functionality.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Using Go's powerful interfaces, you start specifying the behavior you desire:","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"```[go]{`","spans":[]},{"type":"paragraph","text":"    package myapp","spans":[]},{"type":"paragraph","text":"    type KeyPair struct {","spans":[]},{"type":"paragraph","text":"      Key string","spans":[]},{"type":"paragraph","text":"      Value []byte","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    type KeyStore interface {","spans":[]},{"type":"paragraph","text":"      Get(k string) (*KeyPair, error)","spans":[]},{"type":"paragraph","text":"    }`}```","spans":[]},{"type":"paragraph","text":"Next, you can create your own type that wraps the external dependency:","spans":[]},{"type":"paragraph","text":"    ```[go]{`","spans":[]},{"type":"paragraph","text":"    package myapp","spans":[]},{"type":"paragraph","text":"    type RedisKeyStore struct {","spans":[]},{"type":"paragraph","text":"      r redis.Redis","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    func NewRedisKeyStore() *RedisKeyStore {","spans":[]},{"type":"paragraph","text":"      return &RedisKeyStore{","spans":[]},{"type":"paragraph","text":"        r: redis.Redis{},","spans":[]},{"type":"paragraph","text":"      }","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    func (rks *RedisKeyStore) Get(k string) (*KeyPair, error) {","spans":[]},{"type":"paragraph","text":"      rkvp, err := rks.r.Get(k)","spans":[]},{"type":"paragraph","text":"      if err != nil {","spans":[]},{"type":"paragraph","text":"        return nil, err","spans":[]},{"type":"paragraph","text":"      }","spans":[]},{"type":"paragraph","text":"      return &KeyPair{","spans":[]},{"type":"paragraph","text":"        Key: rkvp.Key,","spans":[]},{"type":"paragraph","text":"        Value: rkvp.Value,","spans":[]},{"type":"paragraph","text":"      
}, nil","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Finally, instead of using the external Redis client explicitly, you can use your wrapper or swap it out in tests:","spans":[]},{"type":"paragraph","text":"```[go]{`","spans":[]},{"type":"paragraph","text":"    type MockKeyStore struct {","spans":[]},{"type":"paragraph","text":"      Dict map[string][]byte","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    func NewMockKeyStore() *MockKeyStore {","spans":[]},{"type":"paragraph","text":"      return &MockKeyStore{","spans":[]},{"type":"paragraph","text":"        Dict: map[string][]byte{},","spans":[]},{"type":"paragraph","text":"      }","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    func (mks *MockKeyStore) Get(k string) (*KeyPair, error) {","spans":[]},{"type":"paragraph","text":"      return &KeyPair{Key: k, Value: mks.Dict[k]}, nil","spans":[]},{"type":"paragraph","text":"    }","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Managing Dependencies","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"After a bit of time, packages you rely on will be updated to fix bugs and add new features. You will quickly learn that `go get` is not a robust solution for maintaining and organizing dependencies. There are two solutions here that you can try: vendoring your dependencies or using an external tool such as [godep](https://github.com/tools/godep).","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The team evaluated vendoring packages but found out quickly that it can be overwhelming. 
Hence, we are currently leaning towards using godep to manage our dependencies. It provides a method for ensuring that an explicit version of each package is used. Keep in mind it doesn't do so in a declarative manner, so you have to make sure you have the proper version installed prior to saving with godep.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"While our engineering team has a strong grasp on managing our dependencies, this too remains an unsolved problem. There still isn't a simple, Go community accepted, declarative solution that allows us to specify external dependencies and the exact versions we want to use. The community is working on it, however, and a few solutions exist for your team to evaluate at the [Go Wiki Tools Page](https://code.google.com/p/go-wiki/wiki/PackageManagementTools).","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Writing Go","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"One thing your team may appreciate about Go is that it doesn't require a resource-heavy IDE to be productive. You can use your already-familiar text editor and start writing code. If you do this without any research though, you will be missing out on extensions that can help your productivity. I use Sublime Text for writing Go, and the popular GoSublime plugin provides features like code completion and formatting. Other team members use [Vim Go](https://github.com/fatih/vim-go) with great success.","spans":[{"start":346,"end":355,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/DisposaBoy/GoSublime"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Another tool to keep an eye on is Oracle – it provides source analysis that will aid you in navigating your projects. From evaluating expressions to understanding types and callees, Oracle integration in editors will be a huge productivity boost to developers. 
You can try it out today if you use Emacs or the Atom package.","spans":[{"start":34,"end":40,"type":"hyperlink","data":{"link_type":"Web","url":"https://godoc.org/code.google.com/p/go.tools/oracle"}},{"start":310,"end":322,"type":"hyperlink","data":{"link_type":"Web","url":"https://atom.io/packages/go-oracle"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Our team has settled on a single workspace with multiple repositories. As mentioned above, it's still the early days of dependency management, but since we ensure that the team works on all projects in a similar manner, multiple people are able to work with multiple projects simultaneously without any major problems.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Deploying Go","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"After writing your software, you'll want to run it in production. Use a tool like fpm to build debs and RPMs and deploy those to production. You get the benefit of being able to deploy from an internal repository with an explicit version. 
Since Go compiles down to a single binary, there will be no dependencies to manage.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"As an example, you can create a Makefile to build a deb:","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"```[make]{` VERSION=0.5.0","spans":[]},{"type":"paragraph","text":"    BUILD=$(shell git rev-list --count HEAD)","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    widget-dpkg:","spans":[]},{"type":"paragraph","text":"      mkdir -p deb/widget/usr/local/bin","spans":[]},{"type":"paragraph","text":"      cp $(GOPATH)/bin/widget deb/widget/usr/local/bin","spans":[]},{"type":"paragraph","text":"      fpm -s dir -t deb -n widget -v $(VERSION)-$(BUILD) -C deb/widget .","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Before thinking of creating packages by hand, consider using your continuous integration system as the builder. This way you'll be guaranteed an up-to-date package after every successful build. We are currently using Drone.IO for continuous integration. We're also moving to Makefiles for automating tasks, which are well known and a great way to ensure that everyone who touches the code can test and build it.","spans":[{"start":217,"end":225,"type":"hyperlink","data":{"link_type":"Web","url":"https://drone.io/"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Taking Advantage Of Go's Ecosystem","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Go provides tools to make it easier to work within its ecosystem. The DO team uses `golint` which finds simple mistakes and ensures that everything has at least a minimal set of documentation. With `godoc` we have an easy-to-use interface for viewing the documentation for all the code that our projects contain. 
The Golang Nuts mailing list is an invaluable resource.","spans":[{"start":317,"end":328,"type":"hyperlink","data":{"link_type":"Web","url":"https://groups.google.com/forum/#!forum/golang-nuts"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"So far, our short foray into Go has been a great success. Developers not familiar with the language have ramped up and become productive quickly. The language's speed of development, coupled with its easy-to-grasp concurrency, has allowed us to write better software in a faster manner. We experimented with Go while rewriting our Droplet Console, and that experience has given us the confidence to move forward on multiple new projects.","spans":[{"start":331,"end":346,"type":"hyperlink","data":{"link_type":"Web","url":"https://assets.digitalocean.com/blog/static/new-super-fast-droplet-console-thanks-golang/"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"If your team hasn't tried Go, it should be on your short list.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Write Go For DO","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"As we continue to write new services, as well as rewrite old ones, we grow a deeper appreciation for the language's strength in building distributed systems. 
If you're a software engineer who's interested in writing Go – we are hiring.","spans":[{"start":221,"end":234,"type":"hyperlink","data":{"link_type":"Web","url":"https://careers.digitalocean.com/careers/software-engineer/"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"*Oh, and if you have experience writing Go, feel free to comment below and share your thoughts.*","spans":[{"start":1,"end":95,"type":"em"}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"by Bryan Liles","spans":[{"start":3,"end":14,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/bryanl"}}]}],"blog_post_date":"2014-06-30","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"get-your-development-team-started-with-go"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":750,"height":392},"alt":"VNC console","copyright":null,"url":"https://images.prismic.io/www-static/adab6381-70d0-4ac9-94ed-716b1931f2e6_vnc-console.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"New Super Fast Droplet Console. Thanks, Golang!","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"**We think Go is awesome!** A year ago we started looking into Go as a potential language to standardize our backend around. Since then we've been looking for every excuse to rewrite some of our services in Go.","spans":[]},{"type":"paragraph","text":"The first publicly visible service we've rewritten is our new web console. Using Go, we were able to dramatically decrease load and connection times from seconds to milliseconds. 
Aside from the immediate benefits, the console is also much more reliable and scalable.","spans":[]},{"type":"paragraph","text":"What benefits did we get from using Go?","spans":[{"start":0,"end":39,"type":"strong"}]},{"type":"list-item","text":"Goroutines made it easy to duplex the tcp and websocket connections, allowing us to dramatically improve the speed of the entire service.","spans":[]},{"type":"list-item","text":"Interfaces allowed us to build end to end testing, ensuring future updates are easier to ship and bugs can be fixed quickly.","spans":[]},{"type":"list-item","text":"Go's built in net/http package means we are able to do live deploys and keep on-going development invisible to users. In fact, the three or four times that we have deployed code to the console since we switched to the new version were done with no interruptions or customer tickets being opened.","spans":[]},{"type":"list-item","text":"Go's package system makes sharing code incredibly easy. That means we can share the code developed for the new console between projects seamlessly.","spans":[]},{"type":"paragraph","text":"The more services that we rewrite in Go, the more we fall in love with the language and the more we feel it complements the development of distributed systems.","spans":[]},{"type":"paragraph","text":"Oh, by the way. 
If you're a software engineer who's interested in writing Go -- we are hiring.","spans":[{"start":87,"end":93,"type":"hyperlink","data":{"link_type":"Web","url":"https://careers.digitalocean.com/"}}]}],"blog_post_date":"2014-04-24","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"new-super-fast-droplet-console-thanks-golang"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":null,"blog_headline":[{"type":"heading1","text":"Avoid Duplicate SSH Host Keys","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"The ssh daemon uses host keys to uniquely identify itself to connecting clients. The host keys are typically stored in /etc/ssh. Security best practices dictate that these host keys be unique for each operating system instance. DigitalOcean typically removes host keys when creating a new Droplet from a snapshot or a standard image.","spans":[]},{"type":"paragraph","text":"The SSH host keys for some Ubuntu-based systems could have been duplicated by DigitalOcean's snapshot and creation process. Therefore, our system is now configured to remove the host keys on Droplets that are created from snapshots at the time of the first boot. This removal process only happens in situations where we have a high degree of confidence that the host keys will be regenerated on boot.","spans":[]},{"type":"paragraph","text":"Most Linux distributions will generate new host keys at boot time if host keys are not found. However, some images may not do this due to local customization. 
This can be resolved in the majority of cases simply by logging in to the virtual terminal on the Droplet control panel, adding the following line to /etc/rc.local:","spans":[]},{"type":"preformatted","text":"test -f /etc/ssh/ssh_host_dsa_key || dpkg-reconfigure openssh-server\n","spans":[]},{"type":"paragraph","text":"and rebooting the affected Droplet.","spans":[]},{"type":"paragraph","text":"DigitalOcean also recommends that users of existing Ubuntu-based Droplets and snapshots regenerate their SSH host keys. To do this, ensure that the above test or an equivalent is in place, remove the host keys, and generate new ones following the procedure below.","spans":[]},{"type":"paragraph","text":"Step 1: remove potentially duplicated host keys.","spans":[]},{"type":"preformatted","text":"rm /etc/ssh/ssh_host_*\n","spans":[]},{"type":"paragraph","text":"Step 2: regenerate host keys.","spans":[]},{"type":"preformatted","text":"/usr/sbin/dpkg-reconfigure openssh-server\n","spans":[]},{"type":"paragraph","text":"For snapshots, please create a Droplet from the snapshot, apply the above changes, and create a new snapshot from that Droplet. Then, after making sure your snapshot is functional by spinning up a new Droplet, you can delete the old snapshot and the new Droplet.","spans":[]},{"type":"paragraph","text":"UPDATE: Sometimes using the dpkg-reconfigure script throws an error instead of generating new keys. 
Should this happen to you, please run the following commands to manually generate keys:","spans":[]},{"type":"preformatted","text":"ssh-keygen -t dsa -N \"\" -f /etc/ssh/ssh_host_dsa_key\nssh-keygen -t rsa -N \"\" -f /etc/ssh/ssh_host_rsa_key\nssh-keygen -t ecdsa -N \"\" -f /etc/ssh/ssh_host_ecdsa_key\n","spans":[]}],"blog_post_date":"2013-07-25","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"avoid-duplicate-ssh-host-keys"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":null,"blog_headline":[{"type":"heading1","text":"Development Environments Made Easy with Vagrant and DigitalOcean","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"","spans":[{"start":0,"end":0,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/smdahlen/vagrant-digitalocean"}}]},{"type":"image","url":"https://images.prismic.io/www-static/5139885bbfaad4445e728d6928b80bf4af29a779_vagrant.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":250,"height":305}},{"type":"paragraph","text":"Developing is now easier than before with the new DigitalOcean provider driver in Vagrant 1.1.","spans":[{"start":82,"end":93,"type":"strong"},{"start":82,"end":93,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/smdahlen/vagrant-digitalocean"}}]},{"type":"paragraph","text":"Vagrant provides the framework and configuration format to create and manage complete portable development environments. 
These development environments can live on your computer or in the cloud, and are portable between Windows, Mac OS X, and Linux.","spans":[]},{"type":"paragraph","text":"With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases development/production parity, and makes the \"works on my machine\" excuse a relic of the past.","spans":[]},{"type":"paragraph","text":"With Vagrant 1.1 and the new DigitalOcean driver, you aren't limited to your local machine and VirtualBox anymore. This allows you to utilize all of the benefits of DigitalOcean's SSD cloud servers, snapshots, server resizing and more.","spans":[]},{"type":"paragraph","text":"To read more and to learn how it works click here.","spans":[{"start":39,"end":49,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/smdahlen/vagrant-digitalocean"}}]}],"blog_post_date":"2013-03-14","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"development-environments-made-easy-with-vagrant-and-digitalocean"}}}]}}}