{"componentChunkName":"component---src-templates-blog-list-jsx","path":"/blog/24/","result":{"data":{"prismic":{"allFeaturedblogs":{"edges":[{"node":{"featured_blogs_enabled":true,"heading":[{"type":"paragraph","text":"Featured posts","spans":[]}],"featured_blog_1":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":395},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/6d8d81b1-971a-4313-b033-b4e125cb14a0_MondoDB-blog-header-790x395.PNG?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Introducing DigitalOcean Managed MongoDB – a fully managed, database as a service for modern apps","spans":[]}],"blog_post_date":"2021-06-29","blog_post_content":[{"type":"paragraph","text":"MongoDB is one of the most popular databases, and it’s ideal for apps that evolve rapidly and need to handle huge volumes of data and traffic. It offers advantages like flexible document schemas, code-native data access, change-friendly design, and easy horizontal scale-out.","spans":[{"start":22,"end":44,"type":"hyperlink","data":{"link_type":"Web","url":"https://db-engines.com/en/ranking","target":"_blank"}}]},{"type":"paragraph","text":"However, building and maintaining MongoDB clusters from the ground up can be a huge undertaking. Developers often complain that they have to spend their valuable time and resources on database management. Well, we’ve been listening and have some great news: accessing and managing MongoDB on DigitalOcean just got a lot simpler!","spans":[]},{"type":"paragraph","text":"We are excited to announce that DigitalOcean Managed MongoDB is now in General Availability. Managed MongoDB is a fully managed, database as a service (DBaaS) offering from DigitalOcean, built in partnership with and certified by MongoDB Inc. It provides you all the technical capabilities that make MongoDB so beloved in the developer community. 
Together we have ensured that you will get access to all the latest releases of the MongoDB document database as they become available.","spans":[{"start":32,"end":91,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/managed-databases-mongodb/"}},{"start":230,"end":241,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mongodb.com/","target":"_blank"}}]},{"type":"paragraph","text":"Managed MongoDB simplifies MongoDB administration. Developers of all skill levels, even those who do not have prior experience in databases, can spin up MongoDB clusters in just a few minutes. We handle the provisioning, managing, scaling, updates, backups, and security of your MongoDB clusters, allowing you to offload the complex, time-consuming – yet critical – database administration tasks to us. This empowers you to focus on what really matters: building awesome apps.","spans":[]},{"type":"embed","oembed":{"height":113,"width":200,"embed_url":"https://www.youtube.com/watch?v=NvHQSV7jnKA","type":"video","version":"1.0","title":"Create a MongoDB Database on DigitalOcean","author_name":"DigitalOcean","author_url":"https://www.youtube.com/c/Digitalocean","provider_name":"YouTube","provider_url":"https://www.youtube.com/","cache_age":null,"thumbnail_url":"https://i.ytimg.com/vi/NvHQSV7jnKA/hqdefault.jpg","thumbnail_width":480,"thumbnail_height":360,"html":"<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/NvHQSV7jnKA?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"}},{"type":"heading2","text":"Benefits of Managed MongoDB","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"list-item","text":"Easy set up and maintenance: We create the database clusters for you. 
Simply choose the cluster configuration (e.g., memory, disk size, number of nodes, etc.), and the data center in which you want to host the database. Follow a few simple steps and your database cluster will be up and running in a matter of minutes. You can spin up clusters using the cloud control panel, CLI, or API.\n\n","spans":[{"start":0,"end":28,"type":"strong"}]},{"type":"list-item","text":"Automatic daily backups with point in time recovery: Data is one of the most important assets of an app, so it’s critical to back up your database. We take backups of your entire clusters automatically on a daily basis, for free. We also provide a point in time recovery for 7 days, that way if things go wrong due to human error, machine error, or some combination of both, you can easily restore the database as it was at any point in the previous 7 days. \n\n","spans":[{"start":0,"end":52,"type":"strong"}]},{"type":"list-item","text":"Automatic updates and access to latest MongoDB releases: You get access to MongoDB 4.4. This is the latest release of MongoDB and comes packed with numerous enhancements like hedged reads, Rust, and Swift drivers. Since we have developed Managed MongoDB in partnership with MongoDB Inc, you will always get access to new releases as they become available. With Managed MongoDB, the updates happen automatically. Just select a date and time for the updates and we take care of the rest. This makes it easy to stay up to date with MongoDB releases without disrupting your business.\n\n","spans":[{"start":0,"end":56,"type":"strong"},{"start":148,"end":169,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.mongodb.com/new","target":"_blank"}}]},{"type":"list-item","text":"High availability with automated failover: If your database goes down, it can take down the entire app, leading to bad customer experiences. With Managed MongoDB, you can easily minimize the downtime for your database and make it highly available with standby nodes. 
Standby nodes add redundancy, so if for example the primary node fails, the standby node is immediately promoted to primary and begins serving requests while we provision a replacement standby node in the background.\n\n","spans":[{"start":0,"end":42,"type":"strong"}]},{"type":"list-item","text":"Scale up easily to handle traffic spikes: As your app gains traction and the usage grows, it’s important to have a database that can keep up with the increased demand. With Managed MongoDB, you can easily scale up the size of database nodes when needed.\n\n","spans":[{"start":0,"end":41,"type":"strong"}]},{"type":"list-item","text":"Secure by default: Since data is critical, it also needs to be secure. We encrypt data at rest with LUKS and in transit with SSL. When you create a new cluster, it’s placed in a VPC network by default that provides a more secure connection between resources. You can also restrict access to your nodes to prevent brute-force password and denial-of-service attacks.","spans":[{"start":0,"end":18,"type":"strong"},{"start":178,"end":189,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/networking/vpc/"}}]},{"type":"heading2","text":"The need for Managed Databases","spans":[]},{"type":"paragraph","text":"DigitalOcean’s mission is to simplify cloud computing so developers, startups, and SMBs can spend more time building software that changes the world. While databases are a critical component to any application, building, maintaining, and scaling them can be complex and time consuming. For developers that are building apps for their business, database administration is often not a core focus area. But it’s quite common to find developers that write the code and then also roll up their sleeves to maintain databases. Such users would rather offload the tedious database administration and focus their limited time and energy on building and enhancing their apps. 
","spans":[]},{"type":"paragraph","text":"With this in mind, we introduced Managed Databases a couple of years ago and are excited to add Managed MongoDB to our portfolio. With this release, DigitalOcean Managed Databases now supports the following engines:","spans":[{"start":33,"end":50,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/managed-databases/"}}]},{"type":"image","url":"https://images.prismic.io/www-static/87745cc1-1c5f-4463-b104-104b7fc30dc7_managed-databases-logos.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":849,"height":104}},{"type":"paragraph","text":"Managed MongoDB launch comes on the heels of DigitalOcean App Platform, a modern, reimagined PaaS (Platform as a Service) that we released a few months ago. App Platform makes it very easy to build, deploy, and scale apps and static sites. You can deploy code by simply pointing to your GitHub and GitLab repos, and App Platform will do all the heavy lifting of managing infrastructure, app runtimes, and dependencies. 
App Platform, along with Managed Databases, helps fulfill DigitalOcean’s mission by empowering developers, startups, and SMBs to focus more on their apps, and less on the underlying infrastructure and databases.","spans":[{"start":45,"end":70,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/app-platform/"}}]},{"type":"heading2","text":"How Managed MongoDB works","spans":[]},{"type":"paragraph","text":"DigitalOcean provides you with various compute options to build your apps like:","spans":[]},{"type":"list-item","text":"Droplets: On-demand, Linux virtual machines suitable for production business applications and personal passion projects.","spans":[{"start":0,"end":8,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/droplets/"}}]},{"type":"list-item","text":"DigitalOcean Kubernetes: Managed Kubernetes with automatic scaling, upgrades, and a free control plane.","spans":[{"start":0,"end":23,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/kubernetes/"}}]},{"type":"list-item","text":"DigitalOcean App Platform: A fully managed Platform as a Service.","spans":[{"start":0,"end":25,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/app-platform/"}}]},{"type":"paragraph","text":"No matter which compute option you choose to build your apps, you can easily add Managed MongoDB to it. 
In addition to this, Managed MongoDB also integrates with the Node.js 1-Click App from DigitalOcean Marketplace making it a lot easier to build Node.js apps.","spans":[{"start":166,"end":215,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/apps/nodejs"}}]},{"type":"heading2","text":"Simple, predictable pricing","spans":[]},{"type":"paragraph","text":"Just like all DigitalOcean products, Managed MongoDB provides simple, predictable pricing that allows you to control costs and prevent any surprise bills. You can spin up a database cluster for just $15/month, or a highly available three-node replica set for $45/month. Click here for more information.","spans":[{"start":270,"end":301,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/pricing/#managed-databases"}}]},{"type":"heading2","text":"Regional availability","spans":[]},{"type":"paragraph","text":"Managed MongoDB is currently available in the following regions:","spans":[]},{"type":"list-item","text":"NYC3 (New York, USA)","spans":[]},{"type":"list-item","text":"FRA1 (Frankfurt, Germany)","spans":[]},{"type":"list-item","text":"AMS3 (Amsterdam, Netherlands)","spans":[]},{"type":"paragraph","text":"We will be making Managed Mongo available in other regions soon. 
Please check out the release notes for the most up-to-date information on regional availability.","spans":[{"start":86,"end":99,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/docs/release-notes/"}}]},{"type":"heading2","text":"Join us at deploy, DigitalOcean’s virtual user conference","spans":[]},{"type":"paragraph","text":"Today we have deploy, DigitalOcean’s signature user conference, which focuses on celebrating, educating, and connecting awesome builders from all over the world.","spans":[{"start":14,"end":20,"type":"hyperlink","data":{"link_type":"Web","url":"https://deploy.digitalocean.com/home"}}]},{"type":"paragraph","text":"Check out the keynote session from DigitalOcean's CEO, Yancey Spruill, in which he talks about where we're headed as a company and shares some exciting product updates. His keynote will be followed by sessions from community members, engineers, customers, and other experts that are building technologies and businesses powered by the cloud. With live Q&A and an active Discord server, there’s ample opportunity to engage and learn something new. Click here to attend the deploy conference.","spans":[{"start":14,"end":69,"type":"hyperlink","data":{"link_type":"Web","url":"https://deploy.digitalocean.com/agenda/session/552806"}},{"start":347,"end":384,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/deploy-discord"}},{"start":461,"end":489,"type":"hyperlink","data":{"link_type":"Web","url":"http://do.co/deploy"}}]},{"type":"paragraph","text":"We are also launching a hackathon for DigitalOcean Managed MongoDB. Learn how you can participate, submit an app, and get a t-shirt.","spans":[{"start":24,"end":66,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/mongodb-hackathon"}}]},{"type":"paragraph","text":"We hope you will give Managed MongoDB a try. Here are some sample datasets and sample apps that you can use to kick the tires. 
Check out the docs and let us know what you think!","spans":[{"start":22,"end":43,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/databases/new?engine=mongodb"}},{"start":59,"end":90,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/do-community/mongodb-resources","target":"_blank"}},{"start":141,"end":145,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.digitalocean.com/products/databases/mongodb/"}}]},{"type":"paragraph","text":"If you’d like to have a conversation about using DigitalOcean and Managed MongoDB in your business, please feel free to contact our sales team.","spans":[{"start":120,"end":142,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/company/contact/sales/"}}]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"André Bearfield","spans":[]},{"type":"paragraph","text":"Director of Product Management","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"André Bearfield","author_image":{"dimensions":{"width":553,"height":547},"alt":"André Bearfield","copyright":null,"url":"https://images.prismic.io/www-static/fdc7c85186f0a850b04083e1d4306bd1c19772e8_andre-bearfield.png?auto=compress,format"},"_meta":{"uid":"andre-bearfield"}},"_meta":{"uid":"introducing-digitalocean-managed-mongodb"}},"featured_blog_2":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":400},"alt":"Droplet Console","copyright":null,"url":"https://images.prismic.io/www-static/710499ae-78cc-4179-afc1-15793637b200_DODX3727-790x400-logo-2.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Securely connect to Droplets with SSH key pairs using a new Droplet 
Console","spans":[]}],"blog_post_date":"2021-08-10","blog_post_content":[{"type":"paragraph","text":"The famous author Ken Blanchard once said, “Feedback is the breakfast of champions.” This is something we truly believe at DigitalOcean, and we always strive to enhance our products based on customer feedback.","spans":[]},{"type":"paragraph","text":"With this goal in mind, we are excited to introduce a new Droplet Console that will make it much easier to connect to your Droplets securely. The new Droplet Console provides one-click SSH access to your Droplets through a native-like SSH/Terminal experience. It also eliminates the need for a password or manual configuration of SSH keys. Starting today, we’re pleased to announce that the new Droplet Console is now available to all Droplet users.","spans":[]},{"type":"heading2","text":"Why you should be using Secure Shell (SSH) ","spans":[]},{"type":"paragraph","text":"Password-based security is notoriously insecure due to password fatigue and the overuse of passwords such as ‘123456’. Secure Shell or SSH is a network communication protocol that solves this by using passwordless solutions for encryption, enabling two computers to communicate and securely share data. At a high level, SSH works by creating cryptographic key pairs consisting of a public and private key, which are computer generated and stored separately to ensure their security. ","spans":[{"start":80,"end":117,"type":"hyperlink","data":{"link_type":"Web","url":"https://cybernews.com/best-password-managers/most-common-passwords/"}}]},{"type":"paragraph","text":"SSH has become the default encryption protocol for many industries, but it was difficult to use SSH keys with DigitalOcean’s current Recovery (VNC) console, which is why we developed our new Droplet Console. The new Droplet Console is backed by an agent that securely supervises the key pair, while also providing one-click SSH access to our users. 
You can see the full list of features below.","spans":[]},{"type":"heading2","text":"The new Droplet Console: More time saving, less time wasting ","spans":[]},{"type":"paragraph","text":"The new Droplet Console is for everyone who is looking to build fast, secure apps and avoid hassles with SSH access & usability issues.","spans":[]},{"type":"paragraph","text":"In addition to easier SSH access, the new Droplet Console comes with:","spans":[]},{"type":"list-item","text":"Copy/paste text: Instead of typing lengthy key pairs and text manually, you can use copy/paste to save time. ","spans":[{"start":0,"end":17,"type":"strong"}]},{"type":"list-item","text":"Multi-color support: Multi-color support makes the console more useful and intuitive, and breaks the conventional standard appearance which is black text on a white background. ","spans":[{"start":0,"end":41,"type":"strong"}]},{"type":"list-item","text":"Multi-language support: DigitalOcean’s new Droplet Console supports multiple languages, meaning you can now type and view any content in any language that is supported by UTF-8.","spans":[{"start":0,"end":24,"type":"strong"}]},{"type":"list-item","text":"OS/images supported: Linux distributions (Ubuntu 16.04 - 20.04), Fedora (32 & 33), Debian (9), CentOS (7.6 & 8.3), CentOS 8 Stream, Rocky Linux and Marketplace images.","spans":[{"start":0,"end":20,"type":"strong"},{"start":148,"end":159,"type":"hyperlink","data":{"link_type":"Web","url":"https://marketplace.digitalocean.com/"}}]},{"type":"paragraph","text":"The new Droplet Console is available by default on any new Droplets you spin up. You can also enable it manually on older Droplets. 
Click here to learn more!","spans":[{"start":132,"end":157,"type":"hyperlink","data":{"link_type":"Web","url":"https://docs.digitalocean.com/products/droplets/how-to/connect-with-console/"}}]},{"type":"paragraph","text":"Check out this short walkthrough video that shows the new Droplet Console in action: ","spans":[]},{"type":"embed","oembed":{"type":"video","embed_url":"https://www.youtube.com/watch?v=Qt7QihVuxiE","title":"Access Your Droplet Terminal Through the Web Console","provider_name":"YouTube","thumbnail_url":"https://i.ytimg.com/vi/Qt7QihVuxiE/hqdefault.jpg","provider_url":"https://www.youtube.com/","author_name":"DigitalOcean","author_url":"https://www.youtube.com/c/Digitalocean","height":113,"width":200,"version":"1.0","thumbnail_height":360,"thumbnail_width":480,"html":"<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/Qt7QihVuxiE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>"}},{"type":"paragraph","text":"We hope you’re excited about the new Droplet Console. 
You’re welcome to spin some Droplets up right now, and try out the new Droplet Console – why wait?","spans":[{"start":72,"end":103,"type":"hyperlink","data":{"link_type":"Web","url":"https://cloud.digitalocean.com/droplets/new"}}]},{"type":"paragraph","text":"Happy coding!","spans":[]},{"type":"paragraph","text":"Harsh Banwait, Senior Product Manager","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"Harsh Banwait","author_image":{"dimensions":{"width":600,"height":399},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/e83ff690-b20c-4d88-a2b6-57e562558cd6_download.png?auto=compress,format"},"_meta":{"uid":"harsh-banwait"}},"_meta":{"uid":"new-droplet-console-ssh-support"}},"featured_blog_3":{"__typename":"PRISMIC_Blog","_linkType":"Link.document","blog_header_image":{"dimensions":{"width":790,"height":400},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/588e28d3-d41e-480b-937b-8c3b19201f6e_DODX3568-790x400-Blog.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"How to scale your SaaS product without breaking the bank","spans":[]}],"blog_post_date":"2021-06-22","blog_post_content":[{"type":"paragraph","text":"These days, if you are in the business of software, chances are you are delivering or plan to deliver your services using a Software-as-a-Service (SaaS) model. 
A combination of internet-based delivery, subscription-based pricing, and low-friction product experiences have made SaaS solutions valuable tools for their users, and an excellent vehicle for software builders looking to distribute their products.","spans":[]},{"type":"paragraph","text":"These factors have made SaaS solutions ubiquitous; SaaS is the largest segment in the public cloud market, and is used to provide functionality ranging from personal finance apps for consumers, to productivity software for businesses, and even tools and services for software developers themselves to compose their applications and simplify their workflows. It is also not uncommon to find micro-SaaS applications being built for specific industries such as retail, job functions such as accounting or marketing, or tasks such as event management. ","spans":[]},{"type":"paragraph","text":"The best thing about this SaaS wave has been that it has allowed a new generation of software builders to build and monetize applications and participate in the digital economy. Previously, you had to be a big company with lots of resources, name recognition and distribution networks to successfully sell software products. Now, irrespective of whether you are a single person working on a passion project, a small team of developers in a startup, or a small and medium-sized business (SMB), the SaaS model enables you to express your ideas in the form of software and deliver them to customers anywhere in the world.","spans":[]},{"type":"heading2","text":"The unique challenges of building SaaS solutions","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Despite the opportunities that come with the widespread adoption of SaaS products, software builders still have to answer key questions in their journey to building successful SaaS products. 
Understanding what customers to target, features to prioritize, how to price your product, and how to acquire customers are all critical questions to figure out while you are also doing the important job of actually building and operating the product. ","spans":[]},{"type":"paragraph","text":"Writing the code, testing, deployment, monitoring the usage in production, and ensuring that your apps are able to handle the additional demand when customer base and usage grows are all essential and time-consuming tasks.","spans":[]},{"type":"paragraph","text":"Additionally, being able to test multiple ideas, pivot, and double down on the ideas that actually work is critical in early stages of SaaS development. Once growth comes, it is equally important to scale up without compromising on performance or reliability. Needless to say, all of this needs to be economically viable as well, since not everyone has the resources of large SaaS providers like Salesforce or Adobe.","spans":[]},{"type":"heading2","text":"Cloud Computing enables builders but also poses challenges","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Fortunately, for the act of building and operating your apps, cloud computing can help take some load off your shoulders. Unless you have the scale and resources of Facebook, chances are you are not going to set up your own data centers to host the computing infrastructure that powers your SaaS company. Public cloud infrastructure providers can bring great value to SaaS builders by providing on-demand computing services with usage-based pricing. However, just like how the legacy software companies weren't built for the SaaS model, the early (and big) cloud computing services were not optimized for the unique needs of small SaaS building teams. 
","spans":[]},{"type":"paragraph","text":"Smaller SaaS teams face challenges with large cloud computing providers, including:","spans":[]},{"type":"heading4","text":"Too many technology options","spans":[]},{"type":"paragraph","text":"There are just too many options for tech stacks on which to build your SaaS - programming languages, application development frameworks, libraries, runtime environments, architectural patterns, and deployment models - and the list is growing by the day.","spans":[]},{"type":"heading4","text":"Complexity of cloud computing services","spans":[]},{"type":"paragraph","text":"Even when you have decided on a technology stack, there is a lot of cloud vendor-specific terminology you need to learn and heavy lifting you need to do to build on the cloud, not all of which contributes to making your SaaS applications successful.","spans":[]},{"type":"heading4","text":"Unpredictable costs","spans":[]},{"type":"paragraph","text":"The experimentation necessary in early stages of SaaS development, as well as the scaling of applications required during the growth phase, call for affordable and predictable pricing from your cloud provider. The last thing SaaS teams want is surprising and indecipherable bills from your cloud provider. Unfortunately, smaller businesses often experience unpredictable costs with cloud providers who are busy serving only the large enterprises.","spans":[]},{"type":"heading2","text":"DigitalOcean provides a simple, cost effective solution for SaaS builders","spans":[]},{"type":"paragraph","text":"Fortunately, at DigitalOcean we have a laser focus on small software development teams, who are trying to build the next generation of applications. 
Today, DigitalOcean customers are already building SaaS applications which serve all kinds of customers.","spans":[{"start":191,"end":217,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/solutions/saas/"}}]},{"type":"paragraph","text":"We believe SaaS builders should focus on building apps that power their business, and not spend their valuable time on managing infrastructure. That is exactly what we have been able to enable through our intuitive products that are built for scale and reliability.","spans":[{"start":205,"end":223,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/products/"}}]},{"type":"list-item","text":"Vidazoo is an advertising technology company specializing in video streaming and serving. It serves video ads to thousands of websites and handles close to 10 billion requests per day. \n\n“We are as much a data company as an adtech company. Our business relies on speedy and accurate data processing at massive scale. DigitalOcean provides us the perfect set of tools to operate our SaaS business profitably, while not making us feel the need to become full time system administrators. We plan to move a lot of our apps to DigitalOcean App Platform and other fully managed products.” - Roman Svichar, CTO of Vidazoo","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://vidazoo.com/"}},{"start":187,"end":583,"type":"em"}]},{"type":"paragraph","text":"We believe in meeting customers where they are. If they already have an understanding of cloud infrastructure technologies, they should be able to leverage that knowledge and get started with our products without any further ramp up.","spans":[]},{"type":"list-item","text":"Whatfix is an enterprise SaaS provider that offers a digital adoption platform to businesses. 
The company helps enterprises gain the full value of their investments in enterprise applications by providing real-time, interactive, and contextual guidance to users of those applications. \n\n“What we really love about the DigitalOcean platform is the ease of use. We feel like we know infrastructure and can handle most of the configuration and management. What we needed from a cloud was not bells and whistles but efficiency and reliability. DigitalOcean provides us a platform to build our apps and then gets out of the way. Just how we like it.” - Achyuth Krishna, Director of Engineering of Whatfix","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://whatfix.com/blog/driving-the-future-now-were-excited-to-announce-our-90-million-series-d-funding/"}},{"start":287,"end":648,"type":"em"}]},{"type":"paragraph","text":"We understand that scaling while maintaining reliability of applications and profitability of business is important, so we provide robust solutions which minimize downtime.","spans":[]},{"type":"list-item","text":"Centra is a SaaS-based e-commerce platform for global direct-to-consumer and wholesale e-commerce brands. Centra provides a powerful e-commerce backend that lets brands build pixel-perfect, custom designed, online flagship stores. \n\n“How do we enable our customers to create differentiated online experiences? How do we ensure their e-commerce apps stay up and running at all times? How do we scale on-demand when traffic grows or new customers come in? These are the questions that we ask ourselves every day. 
Thankfully, we have a partner in DigitalOcean that provides just the platform to answer those questions enabling us to guarantee 99.9% uptime for our clients.” - Martin Jensen, CEO of Centra","spans":[{"start":0,"end":6,"type":"hyperlink","data":{"link_type":"Web","url":"https://centra.com/"}},{"start":233,"end":673,"type":"em"}]},{"type":"paragraph","text":"These are just a few examples of SaaS businesses finding success on DigitalOcean. We are constantly amazed by the creativity and innovation that software builders are utilizing our platform for. If you are interested in learning more about product updates, technical deep-dives and best practices for building SaaS products and businesses, please contact us to learn how we can help you get started. ","spans":[{"start":340,"end":357,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/migrate/?utmmedium=blog","target":"_blank"}}]},{"type":"paragraph","text":"Come build with DigitalOcean!","spans":[]},{"type":"paragraph","text":"Looking to migrate your SaaS to DigitalOcean? 
Leverage free infrastructure credits, robust training, and technical support to ensure a worry-free migration.","spans":[{"start":0,"end":156,"type":"strong"},{"start":0,"end":156,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/migrate/?utmmedium=blog","target":"_blank"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Raman Sharma","spans":[]},{"type":"paragraph","text":"Vice President, Product & Programs Marketing","spans":[]}],"tags":[{"tag1":{"__typename":"PRISMIC_Tag","tag":"Developer Relations","_linkType":"Link.document","_meta":{"uid":"developer-relations"}}}],"author":{"__typename":"PRISMIC_Author","author_name":"Raman Sharma","author_image":{"dimensions":{"width":512,"height":512},"alt":null,"copyright":null,"url":"https://images.prismic.io/www-static/497b4b14-d192-493a-8b66-7ae176ba99f3_raman.png?auto=compress,format"},"_meta":{"uid":"raman-sharma"}},"_meta":{"uid":"how-to-scale-your-saas-product-without-breaking-the-bank"}}}}]}}},"pageContext":{"limit":12,"skip":276,"numPages":33,"currentPage":24,"data":[{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":784,"height":392},"alt":"Introducing H20 our first under water data center illustration of Atlantis ","copyright":null,"url":"https://images.prismic.io/www-static/950ad40c6ab7577a96f28161c42fb09eadf36b45_atlantis.jpg?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Atlantis - Our First Underwater Datacenter","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"We're very excited to be announcing a new region: Atlantis (Datacenter Abbreviation: H2O), submerged in the Straits of Gibraltar. 
This underwater datacenter will provide unparalleled connectivity to the surrounding countries like Spain, Portugal, Morocco, Algeria, and Tunisia.","spans":[{"start":50,"end":58,"type":"strong"}]},{"type":"paragraph","text":"While we are still actively building out our German datacenter, we wanted to investigate the money-saving possibilities of underwater datacenter cooling. Our investigation was a great success:  not only were we able to reduce our electricity costs by 35%, but we discovered our high-density SSD storage was even more dense at 87atm! Despite dramatically efficient cooling and more GB per cubic inch, these servers will still be offered at our standard pricing plan as any savings we found were, unfortunately, offset by the cost of diving equipment.","spans":[{"start":28,"end":62,"type":"hyperlink","data":{"link_type":"Web","url":"http://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/4296967-datacenter-in-germany"}}]},{"type":"paragraph","text":"While this datacenter may come as a pleasant surprise to residents in the surrounding countries, we have actually been actively looking into the possibility since mid-2013, inspired by Facebook's energy efficient Arctic Datacenter. Some potential issues we faced in our initial investigations included transporting safe electrical current under the sea, providing sufficient illumination on the ocean floor (around 900 meters deep), and our technicians' inability to swim.","spans":[]},{"type":"paragraph","text":"You can easily spin up a server in the new region by selecting \"Atlantis\" in the Droplet create screen or choosing that location in the API. Our initial run of servers in this region is limited. We will be adding more capacity to H2O at low tide.","spans":[]},{"type":"paragraph","text":"When asked about the new location, DigitalOcean's Director of Infrastructure, Lev Uretsky explained: \"Our Datacenter Techs are very excited about Atlantis. 
We firmly believe that this will be the easiest DC to rack, as our servers become much lighter underwater.\"","spans":[]},{"type":"paragraph","text":"If this sounds exciting to you, DigitalOcean is actively hiring for the new location. Scuba certified candidates are welcome to apply. Background in Marine Biology a plus.","spans":[{"start":32,"end":63,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/company/careers/"}}]}],"blog_post_date":"2015-03-31","tags":[{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}}],"_meta":{"uid":"announcing-atlantis"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Bryan Liles","author_image":null,"_meta":{"uid":"bryan_liles"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"gophers digging to a center tunnel with the words 'Taming your Go dependancies'","copyright":null,"url":"https://images.prismic.io/www-static/283d47e0-afd6-46d1-9b56-3226e7ae915f_gophers.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Taming Your Go Dependencies","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Internally at DigitalOcean, we had an issue brewing in our Go code bases.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Separate projects were developed in separate Git repositories, and in order to minimize the fallout from upgraded dependencies, we mirrored all dependencies locally in individual Git repositories. These projects relied on various versions of packages, and the problem was that there was no deterministic way to distinguish which project required what and when.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"As a team, we knew this approach was not optimal, but coming to a consensus on a single way to manage packages was a tough decision. 
With a little bit of effort, we arrived at a solution which addressed the issue of managing package versions without needing an external management tool. We call our effort cthulhu, which is our Go repository. We also refer to it as a mono repo.","spans":[{"start":306,"end":313,"type":"strong"}]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"What's a Mono Repo?","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Building a cloud is a fast-paced business. We have Go projects that serve APIs, move bits around from server to server, and crunch numbers. Because many of these projects share a common set of components, we determined it would be easier to create a single Git project and import all the existing projects. Here's the high-level structure of the project:","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"        .","spans":[]},{"type":"paragraph","text":"        ├── README.md","spans":[]},{"type":"paragraph","text":"        ├── docode","spans":[]},{"type":"paragraph","text":"        │   └── src","spans":[]},{"type":"paragraph","text":"        └── third_party","spans":[]},{"type":"paragraph","text":"            └── src","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"It is called a mono repo because we only have one repository. Our setup is straightforward. We have a root directory that serves as the base for cthulhu. Underneath this root, we have two additional directories: `docode` for our code, and `third_party` for other people's code.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"To develop Go software, set your `GOPATH` to `${CTHULHU}/third_party:${CTHULHU}/docode`. 
That's it!","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"The reason that the `third_party` directory is listed first is to ensure that, when packages are fetched using `go get`, they'll be installed in this directory's src/ rather than `docode`.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"At this point, you can create a script that can be sourced into a shell, and you can start developing. ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Why Is This Good?","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"First and foremost, we believe the mono repo is a good idea because using it is frictionless. There are no arcane actions or sacrifices required to configure an individual developer's workstation.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"It is also beneficial because at this point of DigitalOcean's Engineering team's evolution, having a single repository for editing software means it is less likely for projects to get lost. Finding code is easy using the mono repo and our team's simple conventions for naming services. 
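The sourceable script mentioned above can be sketched like this (the checkout location and the file name are assumptions, not part of our actual tooling):

```shell
# env.sh - source this before hacking on the mono repo (hypothetical path)
CTHULHU=${HOME}/cthulhu
# third_party first, so go get drops external packages there instead of docode
export GOPATH=${CTHULHU}/third_party:${CTHULHU}/docode
export PATH=${CTHULHU}/third_party/bin:${CTHULHU}/docode/bin:${PATH}
```

After a quick source env.sh, the standard go toolchain resolves imports against both trees with no further configuration.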
We have three types of code: doge, our internal standard library, which contains code that is reused throughout the repository; services, which contains all of our business logic; and tools, which are one-off applications and utilities used to manage our Go code, like our custom import rewriter that sorts and separates imports based on our current code guidelines.","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"        .","spans":[]},{"type":"paragraph","text":"        ├── docode","spans":[]},{"type":"paragraph","text":"        │   └── src","spans":[]},{"type":"paragraph","text":"        │       ├── doge","spans":[]},{"type":"paragraph","text":"        │       ├── services","spans":[]},{"type":"paragraph","text":"        │       └── tools","spans":[]},{"type":"paragraph","text":"        └── third_party","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Because all of our Go is in a single repository, everything uses the same versions of external and internal dependencies. If a package is upgraded, every service which depends on the package receives the new functionality. This helps when dealing with security issues. It's also nice to not have to manage versions explicitly. For our purposes, the canonical version is what's under `third_party/src`. If your work requires an upgrade, you install the new dependency, run the tests, and then send a pull request.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":" It Isn't All Rainbows.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Our mono repo is a great solution for us, but it doesn't come without its own set of caveats. ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"One of the largest issues is actually an issue with Git. 
Git prescribes sub-modules for including dependencies in your main repository. When the sub-modules work correctly, there are no problems, but when they don't work, it's a thorny pain for everyone involved. In this case, we chose to sidestep the problem. Instead of dealing with sub-modules or an external management solution, we rename the git config directory (if there is one) for our dependencies. Because the .git directory doesn't exist, Git considers the configuration to be just another set of files. If you want to upgrade the package, just revert the git directory name, and update. This isn't an amazing experience, but it is simple.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Additionally, when you share a repository with all the other projects, you inherit all the other projects' issues. This means that if one of our individual services has a slow test suite, all services have a slow test suite. In general, testing Go is very fast. When you involve external tests, like database integration, things can slow down. A solution for this is to use the short flag to skip the long tests. An additional solution is to run tests for individual packages. The DigitalOcean Engineering team is still testing and deciding which solution works best for us.","spans":[{"start":378,"end":388,"type":"hyperlink","data":{"link_type":"Web","url":"http://golang.org/pkg/testing/#Short"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"heading2","text":"Where Do We Go Next?","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Currently, our mono repo serves our needs well. It is an easy concept for newer developers to grasp, it doesn't require any external dependencies, and it allows us to co-locate all of our Go code. 
In a nutshell, it's a great thing for us and we believe it could be a great thing for other teams working with Go as well.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"by  Bryan Liles","spans":[{"start":4,"end":15,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/bryanl"}}]}],"blog_post_date":"2015-02-20","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"taming-your-go-dependencies"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Jesse Chase","author_image":null,"_meta":{"uid":"jesse_chase"}},"blog_header_image":{"dimensions":{"width":784,"height":392},"alt":"libscore bookshelf","copyright":null,"url":"https://images.prismic.io/www-static/4cc4061d-7b10-4981-a038-777b85d275f2_banner.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"What's Your Libscore?","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"The contributors to Libscore, including our own Creative Director Jesse Chase, wanted to offer this post as a thank you for all the support the project has received. Julian Shapiro launched Libscore last month hoping that the developer community would find the tool useful, and continues to be grateful for all of the positivity and constructive feedback throughout the web.","spans":[{"start":20,"end":28,"type":"hyperlink","data":{"link_type":"Web","url":"http://libscore.com/"}},{"start":124,"end":139,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.google.com/search?q=libscore&amp;oq=libscore+&amp;aqs=chrome..69i57j0j69i60j69i61j0l2.3373j1j4&amp;sourceid=chrome&amp;es_sm=119&amp;ie=UTF-8#q=libscore&amp;tbm=nws"}}]},{"type":"paragraph","text":"For those who haven't heard, Libscore is a brand new open-source project that scans the top million websites to determine which third-party JavaScript libraries they are using. 
The tool aims to help front-end open source developers measure their impact – you can read all about it here.","spans":[{"start":281,"end":285,"type":"hyperlink","data":{"link_type":"Web","url":"https://medium.com/@Shapiro/introducing-libscore-com-be93165fa497"}}]},{"type":"paragraph","text":"In this post, we'll break down the technology that Libscore leverages and discuss some of the challenges getting it off the ground. We were also lucky enough to talk with Julian and get some insight as to where he sees the project going.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/1bcaeada9aa8d78d8f71caf8b92e162b85d1e1b0_libscore.png?auto=compress,format","alt":null,"copyright":null,"dimensions":{"width":1422,"height":760}},{"type":"heading3","text":"Thomas Davis: A Technical Overview","spans":[]},{"type":"paragraph","text":"Unlike traditional web crawlers, Libscore loads each website into a headless browser and thoroughly scans its run-time environment. This allows Libscore to monitor the operating environment on each website and to detect as many libraries as possible – even those that have been pre-bundled and required as modules. The tradeoff, of course, is that running one million headless browser connections is much more resource intensive than performing basic cURL requests and parsing static HTML.","spans":[]},{"type":"paragraph","text":"The biggest insight we gained while designing the crawler is that the best way to weed out false positives for third-party plugins is to leverage the broader data set we're aggregating. Specifically, we weed out third-party libraries that didn't exist on at least 30 sites out of the 1 million crawled. 
Using meta-heuristics like these allowed us to more confidently detect libraries that were in fact third-party plugins, and not just arbitrary JavaScript variables that were leaking to the global scope.","spans":[]},{"type":"paragraph","text":"On the backend, crawls are queued via Redis with the results stored in MongoDB. Both services are loaded fully into RAM, which allows our RESTful API to serve requests faster than it would querying the disk.  The main bottleneck to crawling concurrency is network bandwidth, but thanks to DigitalOcean, it was a breeze to repeatedly clone instances and run crawls during off-peak times in different regions. Ultimately, using just a few high-RAM DigitalOcean instances, we parse 600 websites per minute and complete the entire crawl in under 36 hours at the end of each month.","spans":[]},{"type":"paragraph","text":"As the crawler runs, raw library usage data for each site is appended to a master JSON file, which we simply read from the file system with Nodejs. Once all the raw usage data is collected we start a process dubbed \"ingestion\", which is responsible for aggregating the results and making them accessible via the API.  We actually attempted to load the entire dataset into RAM to perform our calculations, but quickly ran into a quirky problem with V8 not being able to allocate any more than approximately 1GB of memory for arrays. For now, we are splitting up the raw dump into smaller files to bypass the memory limit, though in the future we might just rewrite the project to use a more suitable language and environment.","spans":[{"start":426,"end":450,"type":"hyperlink","data":{"link_type":"Web","url":"https://code.google.com/p/v8/issues/detail?id=847"}}]},{"type":"heading3","text":"Jesse Chase: Design Improvements","spans":[]},{"type":"paragraph","text":"While Libscore currently serves as an invaluable tool for surfacing library adoption data, the future is even more exciting. 
To illustrate it, let's jump ahead 6 months – smack in the middle of summer. At this point, Libscore will have crawled through the top million sites 6 times already (or 6 million domain crawls!), bringing forth rich month-over-month trend data on library usage.","spans":[]},{"type":"paragraph","text":"By providing users with a soon-to-be-released time series graph, with the ability to plot multiple libraries over the same time period, developers will gain new insights into how libraries are changing over time. For example, users will be able to see why a library's usage plummeted from one month to the next – potentially due to the increased adoption of another library. Soon, this data will be fully visualized.","spans":[]},{"type":"heading3","text":"Julian Shapiro: The Future Of Libscore","spans":[]},{"type":"paragraph","text":"Libscore is more than a destination for JavaScript statistics; it's also a data store that can be leveraged in the marketing of open source projects. One way we're enabling this is via embeddable badges that showcase real-time site counts. Open source developers can show off these badges in their GitHub READMEs, and journalists writing about open source can similarly include them to provide context on the real-world usage of libraries.","spans":[{"start":185,"end":202,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/julianshapiro/libscore#badges"}}]},{"type":"paragraph","text":"In addition to badges, we're also releasing quarterly reports on the state of JavaScript library usage. These reports will showcase trends, helping developers learn which libraries are rising in popularity and which are falling. 
We hope these reports will become a valuable contribution to discussions around the state of web development tooling, and will finally provide the community with concrete data they can use to make decisions.","spans":[]},{"type":"paragraph","text":"Creator and developer – Julian Shapiro\nBackend developer – Thomas Davis\nCreative Director – Jesse Chase","spans":[{"start":0,"end":21,"type":"strong"},{"start":24,"end":38,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/shapiro"}},{"start":39,"end":56,"type":"strong"},{"start":59,"end":71,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/neutralthoughts"}},{"start":72,"end":89,"type":"strong"},{"start":92,"end":103,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/chasingux"}}]}],"blog_post_date":"2015-01-15","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}}],"_meta":{"uid":"whats-your-libscore"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Erika Heidi","author_image":null,"_meta":{"uid":"erika_heidi"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"FreeBSD is here text on illustration with red converse style shoes, a pitchfork, and a devil tail","copyright":null,"url":"https://images.prismic.io/www-static/a0978ef23fa5a14b22c2180b4c40c755e5960a93_freebsd-blog.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Presenting FreeBSD! How We Made It Happen.","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"We're happy to announce that FreeBSD is now available for use on DigitalOcean!","spans":[]},{"type":"paragraph","text":"FreeBSD will be the first non-Linux distribution available for use on our platform.  It's been widely requested because of its reputation of being a stable and performant OS.  
While similar to other open source Unix-like operating systems, it's unique in that the development of both its kernel and user space utilities is managed by the same core team, ensuring consistent development standards across the project.  FreeBSD also offers a simple, yet powerful package management system that allows you to compile and install third-party software for your system with ease.","spans":[]},{"type":"paragraph","text":"One particularly compelling attribute of the FreeBSD project is the quality of their documentation, including the FreeBSD Handbook which provides a comprehensive and thoughtful overview of the operating system.  We at DigitalOcean love effective and concise technical writing, and so we've also produced numerous FreeBSD tutorials to aid new users with Getting Started with FreeBSD.","spans":[{"start":85,"end":98,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.freebsd.org/docs.html"}},{"start":114,"end":130,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/"}},{"start":304,"end":330,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tags/freebsd?primary_filter=tutorials"}},{"start":353,"end":381,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-get-started-with-freebsd-10-1"}}]},{"type":"paragraph","text":"We understand that this has been a long-standing user request, and we've heard you.  You might be asking yourself - what took so long?","spans":[]},{"type":"paragraph","text":"The internal structure of DigitalOcean's engineering team has rapidly changed over time due to the dynamic growth of the company.  What began as a couple of guys coding furiously in a room in Brooklyn has ballooned to a 100+ person organization serving hundreds of thousands of users around the globe.  
As we've grown, by necessity we've needed to adjust and reorganize ourselves and our systems to be able to better serve our users.  There have been many experiments in how we approach, prioritize and execute this work; this image is a result of the successful alignment of a few key elements.","spans":[]},{"type":"heading3","text":"Technical Foundation","spans":[]},{"type":"paragraph","text":"Last year, we built our metadata service — allowing a droplet to have access to information about itself at the time that it's being created.  This is a powerful thing because it gives a vanilla image a mechanism to configure itself independently.  This service was a big part of what allowed us to offer CoreOS, and in building it, it gave us more flexibility in what we could offer moving forward.  Our backend code would no longer need to know the contents of the image to be able to serve it.  On creation, the droplet itself could query for configurables — hostnames, ssh keys, and the like —  and configure itself instead of relying on a third party.","spans":[]},{"type":"paragraph","text":"This fundamental decoupling is an echo of a familiar refrain: build well-defined interfaces and don't let knowledge leak across those boundaries unnecessarily.  It's allowed us to free images from customization by our backend code, and entirely sidestep the problematic issue of modifying a UFS filesystem from a Linux host.","spans":[]},{"type":"paragraph","text":"Since we now had a feasible mechanism to allow images to be instantiated independently of our backend, we just needed to put the parts together that would allow us to inject the configuration upon creation.  
FreeBSD doesn't itself offer cloud versions of the OS similar to what Canonical and Red Hat provide, so we started from a publicly available port of cloud-init meant to allow FreeBSD to run on OpenStack.","spans":[{"start":349,"end":367,"type":"hyperlink","data":{"link_type":"Web","url":"http://pellaeon.github.io/bsd-cloudinit/"}}]},{"type":"paragraph","text":"In order to query metadata, we need to have an initial network configuration in order to build our configuration, since DigitalOcean's droplets use static networking.  During boot time, we bring up the droplet on a v4 link-local address in order to do the initial query to the service.  From there, we pick up the real network config, hostname, and ssh keys.  The cloud-init project then writes a configuration that's associated with the droplet's ID.  Linking this configuration to the droplet ID is the mechanism that allows it to know whether the image is being created from a snapshot or new create, or is just a rebooted instance of an already configured droplet.","spans":[]},{"type":"paragraph","text":"Once this configuration has been injected, FreeBSD's boot process can continue and use it accordingly — eventually booting into the instance as expected.","spans":[]},{"type":"heading3","text":"Focus","spans":[]},{"type":"paragraph","text":"This endeavor began life as an experiment in how we organize ourselves in the engineering team.  We were given a few weeks to pick a project, self-organize in cross-functional teams, and execute.  A lot went right during this process that allowed this project to succeed.","spans":[]},{"type":"paragraph","text":"Deadlines are powerful things.  Not in a punitive or negative sense of the word, but in a sense that there will be a well-defined time where work on this will collectively end.  So is having a very clear picture of what \"done\" looks like.  In the case of BSD, it was particularly powerful to have a clear goal of an alpha functional BSD droplet with a date to drive for.  
Given the freedom to focus on a single goal, clear communication, and well-defined restraints, we were able to finally deliver a long-standing user request with relative ease.","spans":[]},{"type":"paragraph","text":"This is the start to the many things we're excited to build in 2015!","spans":[]},{"type":"paragraph","text":"By: Neal Shrader","spans":[{"start":4,"end":16,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/icosahedral"}}]}],"blog_post_date":"2015-01-13","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"News","_linkType":"Link.document","_meta":{"uid":"news"}}}],"_meta":{"uid":"presenting-freebsd-how-we-made-it-happen"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"David E. Worth","author_image":{"dimensions":{"width":250,"height":250},"alt":"David E. Worth","copyright":null,"url":"https://images.prismic.io/www-static/88908d6f279ad5cae0d19e5f8f8193854aa2d489_da3f9c3ffc8b92a283a0dc067f6750f7.jpg?auto=compress,format"},"_meta":{"uid":"david_e_worth"}},"blog_header_image":{"dimensions":{"width":735,"height":392},"alt":"user data automation illustration","copyright":null,"url":"https://images.prismic.io/www-static/2b415496-24d9-4fdf-95c5-2ce9041bf814_user-data.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Automating App Deployments with User-Data","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Automating common development tasks such as building, testing, and deploying your application has many benefits, including increasing repeatability and consistency by removing the potential for interference by \"the human element.\" Deploying your applications by running a single command from the commandline means that your team can spend their time working on the app rather than the care and feeding of 
installations.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"There are some very convenient use-cases for creating new Droplets and automatically running applications on them. Your team may want to deploy a feature-branch containing new customer or user-facing code in order to get feedback or stand up a demo-instance of your product for a customer at the touch of a button. This blog post will cover how you can accomplish these and other use cases with the DigitalOcean API.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Mitchell Anicas has written about using Metadata via the API in the DigitalOcean Community. With that as a starting point, we can create some workflows that automatically deploy applications to Droplets.  With the DigitalOcean API and `CloudInit` accessed via User-Data, we can","spans":[{"start":40,"end":60,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/an-introduction-to-droplet-metadata"}},{"start":236,"end":245,"type":"hyperlink","data":{"link_type":"Web","url":"https://help.ubuntu.com/community/CloudInit"}}]},{"type":"paragraph","text":"- Get an application or source code onto a Droplet","spans":[]},{"type":"paragraph","text":"- Run an application in a Docker container so that it \"just works\" with a","spans":[]},{"type":"paragraph","text":"single API call","spans":[]},{"type":"paragraph","text":"- Set up configuration management tools automatically","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Getting your application code to the Droplet","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Before we can run our application, its source code or binary needs to be on a Droplet.  
As Mitchell described, spinning up a new Droplet via the API is very simple, so our only modification will be in setting up an application stored in public version control, specifically GitHub.  If your project happens to be on another service, such as BitBucket or another hosted version-control provider, the appropriate changes should be simple.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Suppose I have a public GitHub repository housing a Rails application that I would like to deploy to a Droplet via the API.  Using the User-Data functionality, I can simply install Git and clone the repository in the `runcmd` block of the Cloud Config:","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    curl -X POST https://api.digitalocean.com/v2/droplets \\  ","spans":[]},{"type":"paragraph","text":"    -H 'Content-Type: application/json' \\","spans":[]},{"type":"paragraph","text":"    -H 'Authorization: Bearer <MY TOKEN>' \\","spans":[]},{"type":"paragraph","text":"    -d '{\"name\":\"example.com\", \"region\":\"nyc3\", \"size\":\"512mb\",","spans":[]},{"type":"paragraph","text":"         \"image\":\"ubuntu-14-04-x64\", \"ssh_keys\":null, \"backups\":false,","spans":[]},{"type":"paragraph","text":"         \"ipv6\":false, \"private_networking\":false,","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"         \"user_data\":\"","spans":[]},{"type":"paragraph","text":"    #cloud-config","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    runcmd:  ","spans":[]},{"type":"paragraph","text":"      - apt-get install -y git","spans":[]},{"type":"paragraph","text":"      - git clone https://github.com/daveworth/sample_app_rails_4 /opt/apps/sample_app_rails_4","spans":[]},{"type":"paragraph","text":"    \"}'","spans":[]},{"type":"paragraph","text":"`}```","spans":[]},{"type":"paragraph","text":" 
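If the Droplet boots but the app does not appear, a quick sanity check is to look at what cloud-init actually executed on the new machine. A minimal sketch, assuming a standard Ubuntu image where cloud-init writes its usual logs:

```shell
# On the new droplet: review the output of the runcmd steps
# (standard cloud-init log path on Ubuntu images; may differ elsewhere)
log=/var/log/cloud-init-output.log
test -f $log && tail -n 50 $log || echo cloud-init output log not found
# confirm the clone target from the runcmd block actually exists
test -d /opt/apps/sample_app_rails_4 && echo clone succeeded || echo clone missing
```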
","spans":[]},{"type":"paragraph","text":"In the case where we are cloning a private repository, we can simply change the `git clone` command to include a token issued by GitHub:","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    git clone https://<MY GitHub TOKEN>:x-oauth-basic@github.com/daveworth/sample_app_rails_4 /opt/apps/sample_app_rails_4  ","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Similarly, if we want to deploy a specific feature-branch of the repository we can simply use the `-b` flag to specify that branch:","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    git clone -b feature/some-great-feature https://github.com/our_team/our_big_project.git /opt/apps/our_big_project  ","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Getting your application running!","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Simply cloning your code onto a fresh running Droplet is nice, but is not nearly as useful as having your application \"just work\" on that Droplet.  We've written fairly extensively about Docker previously, including a Getting Started Guide to using it on DigitalOcean.  
Not every image at DigitalOcean supports User-Data, but conveniently our Docker Application Image does, allowing you to deploy a running instance of your application on it.","spans":[{"start":162,"end":180,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tags/docker"}},{"start":218,"end":239,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-getting-started"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"We are going to work through the process of getting an example Rails 4 application up and running on a new Droplet using User-Data and Docker.  ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"I have forked the Sample Rails 4 application from railstutorial to my personal GitHub in the `sample_app_rails_4` repository. In my fork I included a `Dockerfile` which configures a Docker container with all of the application's dependencies, sets up its database, and finally runs the application.","spans":[{"start":50,"end":63,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/railstutorial"}},{"start":94,"end":112,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/daveworth/sample_app_rails_4"}},{"start":151,"end":161,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/daveworth/sample_app_rails_4/blob/master/Dockerfile"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"With that file in the repository, modifying our User-Data to run the application is very simple. First change the image from `\"ubuntu-14-04-x64\"` to an image that ships with Docker (to find those use our `/v2/images` API endpoint with application image filters). In this case we will use `Docker 1.4.1 on 14.04` whose `slug` is `docker`. 
We can instruct Docker to build and run our container while exposing ports 80 and 443 to the application's HTTP(s) server port (in this case 3000) by changing the `user_data` field in our JSON body as follows.  Walking through the commands below, we first install git and clone down our sample application with it.  We then instruct Docker to build a container from the application, run it, and bind ports 80 and 443 to the rails server running on port 3000.","spans":[{"start":204,"end":260,"type":"hyperlink","data":{"link_type":"Web","url":"https://developers.digitalocean.com/#list-all-application-images"}}]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    curl -X POST https://api.digitalocean.com/v2/droplets \\  ","spans":[]},{"type":"paragraph","text":"    -H 'Content-Type: application/json' \\","spans":[]},{"type":"paragraph","text":"    -H 'Authorization: Bearer <MY TOKEN>' \\","spans":[]},{"type":"paragraph","text":"    -d '{\"name\":\"example.com\", \"region\":\"nyc3\", \"size\":\"512mb\",","spans":[]},{"type":"paragraph","text":"         \"image\":\"docker\", \"ssh_keys\":null, \"backups\":false,","spans":[]},{"type":"paragraph","text":"         \"ipv6\":false, \"private_networking\":false,","spans":[]},{"type":"paragraph","text":"         \"user_data\":\"","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    #cloud-config","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    runcmd:  ","spans":[]},{"type":"paragraph","text":"      - apt-get -y install git","spans":[]},{"type":"paragraph","text":"      - git clone https://github.com/daveworth/sample_app_rails_4.git /opt/apps/sample_app_rails_4","spans":[]},{"type":"paragraph","text":"      - docker build -t sample_app_rails_4 /opt/apps/sample_app_rails_4","spans":[]},{"type":"paragraph","text":"      - docker run --name sample_app_rails_4  -p 80:3000 -p 443:3000 -d 
sample_app_rails_4","spans":[]},{"type":"paragraph","text":"    \"}'","spans":[]},{"type":"paragraph","text":"`}```    ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"For the sake of simplicity and brevity in this post, we have simplified the deployed application to use SQLite3 in production.  In the case where you have a more realistic infrastructure including relational databases, key-value stores, full-text search engines, etc, you will need to build separate Docker containers for each and link them up. The [dockerfile project](https://github.com/dockerfile) on GitHub has `Dockerfile`s for many of your favorite projects to help you on your way.","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Building new Droplets using Configuration Management","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"For larger and more complicated infrastructure many teams will lean on sophisticated configuration management tools to automate everything and refocus their attention on more challenging problems than installing dependencies. The DigitalOcean community has covered several options in their tutorials: Puppet, Ansible, and Chef. Many of those tools include modules for interacting with DigitalOcean already such as Knife's DigitalOcean Plugin and Ansible's DigitalOcean Module but at the time of this writing they do not include User-Data support. Much of the same functionality from our previous User-Data example can be replicated in a Configuration-Management system such as Puppet, Chef, or Ansible.  As the complexity of your configuration grows User-Data alone can become unwieldy. Configuration management tools allow you break your configurations into more manageable units.  
","spans":[{"start":301,"end":307,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/getting-started-with-puppet-code-manifests-and-modules"}},{"start":309,"end":316,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-an-ubuntu-12-04-vps"}},{"start":322,"end":326,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-plugin-for-knife-to-manage-droplets-in-chef"}},{"start":414,"end":441,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-plugin-for-knife-to-manage-droplets-in-chef"}},{"start":446,"end":475,"type":"hyperlink","data":{"link_type":"Web","url":"http://docs.ansible.com/digital_ocean_module.html"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"We can use User-Data to install and configure our configuration-management tools, which can in-turn, configure your application.  Using the previous User-Data techniques we can install Puppet, fetch your manifests, and configure the Droplet.  Here we fetch Puppet Lab's package and install it (per their instructions).  We then update Apt and install both puppet and git.  After getting those packages installed, we clone our Puppet manifests and apply them.  
After that we are free to do whatever we like with our newly configured Droplet.","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    curl -X POST https://api.digitalocean.com/v2/droplets \\  ","spans":[]},{"type":"paragraph","text":"    -H 'Content-Type: application/json' \\","spans":[]},{"type":"paragraph","text":"    -H 'Authorization: Bearer <MY TOKEN>' \\","spans":[]},{"type":"paragraph","text":"    -d '{\"name\":\"puppet.example.com\", \"region\":\"nyc3\", \"size\":\"512mb\",","spans":[]},{"type":"paragraph","text":"         \"image\":\"ubuntu-14-04-x64\",","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"         \"user_data\":\"","spans":[]},{"type":"paragraph","text":"    #cloud-config","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    runcmd:  ","spans":[]},{"type":"paragraph","text":"      - wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb","spans":[]},{"type":"paragraph","text":"      - dpkg -i puppetlabs-release-trusty.deb","spans":[]},{"type":"paragraph","text":"      - apt-get update","spans":[]},{"type":"paragraph","text":"      - apt-get -y install puppet","spans":[]},{"type":"paragraph","text":"      - apt-get -y install git","spans":[]},{"type":"paragraph","text":"      - git clone https://<Our Team Token>:x-oauth-basic@github.com/our_team/puppet_manifests.git /etc/puppet/manifests","spans":[]},{"type":"paragraph","text":"      - puppet apply /etc/puppet/manifests/site.pp","spans":[]},{"type":"paragraph","text":"      - # ... do something with your newly configured infrastructure... 
for instance, set up some containers!","spans":[]},{"type":"paragraph","text":"    \"}'","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Packaging our Application for easy deployment","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Because all of our work is being executed with standard tools like `curl`, we can codify it in a simple shell script, which could even be shipped with your open-source projects.  ","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Simply including a `deploy_to_do.sh` script in your project would help new users quickly get a working application on DigitalOcean right from your GitHub repo.  ","spans":[{"start":19,"end":36,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/daveworth/sample_app_rails_4/blob/master/deploy_to_do.sh"}}]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Here's an example script:","spans":[]},{"type":"paragraph","text":"```[bin]{`","spans":[]},{"type":"paragraph","text":"    #!/bin/sh","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    # Deploy our cool tool to DigitalOcean.","spans":[]},{"type":"paragraph","text":"    # Make sure you set the DIGITALOCEAN_TOKEN environment variable to your API token before running.","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    set -e     # Stop on first error  ","spans":[]},{"type":"paragraph","text":"    set -u     # Stop if an unbound variable is referenced","spans":[]},{"type":"paragraph","text":"    ","spans":[]},{"type":"paragraph","text":"    curl -X POST https://api.digitalocean.com/v2/droplets \\  ","spans":[]},{"type":"paragraph","text":"    -H \"Authorization: Bearer $DIGITALOCEAN_TOKEN\"","spans":[]},{"type":"paragraph","text":"    # ... 
the rest of your command goes here","spans":[]},{"type":"paragraph","text":"    `}```","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"heading3","text":"Conclusion","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"User-Data support in DigitalOcean's API allows you and your team to automatically run your code on Droplets.  By automating the deployment process, your team will be able to spin up new instances of your application on Droplets as quickly as running any other command.  From there, testing new features or letting prospective clients use their own demo instance is one command away!","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"Have any questions about automating your infrastructure using User-Data? Found any exciting use cases? Let us know in the comment section!","spans":[]},{"type":"paragraph","text":"","spans":[]},{"type":"paragraph","text":"by David E Worth","spans":[{"start":3,"end":16,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/david_e_worth"}}]}],"blog_post_date":"2015-01-08","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}}],"_meta":{"uid":"automating-application-deployments-with-user-data"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Erika Heidi","author_image":null,"_meta":{"uid":"erika_heidi"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"php thank you","copyright":null,"url":"https://images.prismic.io/www-static/0d34a56c-8cc0-4df2-b91a-b6a9a72a031c_php-thanks.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Thank You To PHP's Top Package Authors!","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"PHP remains the most popular server-side programming language powering the world wide web and in use by 82% 
of websites. Metrics focused on server-side languages show that PHP usage has increased by 1% in the past year alone.","spans":[{"start":16,"end":28,"type":"hyperlink","data":{"link_type":"Web","url":"http://w3techs.com/technologies/overview/programming_language/all"}},{"start":121,"end":128,"type":"hyperlink","data":{"link_type":"Web","url":"http://w3techs.com/technologies/history_overview/programming_language"}}]},{"type":"paragraph","text":"Much of the growth in the last few years was driven by recently developed tools and frameworks, especially Composer. Composer is a dependency management tool, similar to Node's npm, that manages per-project dependencies and package versions for PHP projects. It uses Packagist as its main package repository, which has shown impressive growth in the last year, doubling the number of tracked packages.  This past October, the number of installations reached the 45 million mark.","spans":[{"start":107,"end":115,"type":"hyperlink","data":{"link_type":"Web","url":"https://getcomposer.org/"}},{"start":170,"end":180,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.npmjs.org/"}},{"start":267,"end":276,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/"}},{"start":325,"end":342,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/statistics"}}]},{"type":"paragraph","text":"As such, Mikeal and Erika from the DigitalOcean Evangelism team were curious to find the top 10 Packagist contributors based on the 50 most required packages and their authors. 
We used this script to collect our data.","spans":[{"start":9,"end":15,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/mikeal"}},{"start":20,"end":25,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/erikaheidi"}},{"start":132,"end":157,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/mikeal/php-analytics/blob/master/top50-packages.md"}},{"start":185,"end":196,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/mikeal/php-analytics"}}]},{"type":"paragraph","text":"Why the most required packages? Open source project authors rely on libraries that are well-maintained and stable.  These provide a solid structure on which to build a successful project. If hundreds or thousands of projects are relying on a specific package, this will also mean more people able to contribute and quickly fix any bugs that might show up in the underlying required library.","spans":[]},{"type":"paragraph","text":"Thus, we'd like to give a huge thank you to the authors who took their time to create and share awesome projects with the open source community!","spans":[{"start":31,"end":40,"type":"strong"}]},{"type":"heading3","text":"1) Fabien Potencier – 22 packages, 16412 total references","spans":[{"start":3,"end":19,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/fabpot"}}]},{"type":"paragraph","text":"Fabien Potencier leads the ranking with 22 packages being referenced (required) by a total of 16412 other packages. Most of these packages are components of the Symfony Framework, created by Fabien, which are also widely used together or isolated in other projects. His most required package is symfony/framework-bundle with 2626 packages depending on it. 
This package is a requirement for Symfony bundles, which basically extend the main framework's functionality.","spans":[{"start":295,"end":319,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/symfony/framework-bundle"}}]},{"type":"heading3","text":"2) Sebastian Bergmann – 1 package, 9181 total references","spans":[{"start":3,"end":21,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/s_bergmann"}}]},{"type":"paragraph","text":"Sebastian Bergmann is the author of phpunit/phpunit, the most referenced package on Packagist. PHPUnit is a popular unit testing framework for PHP, used as a development requirement by 9181 other projects of all sizes and types on Packagist.","spans":[{"start":36,"end":51,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/phpunit/phpunit"}},{"start":95,"end":102,"type":"hyperlink","data":{"link_type":"Web","url":"https://phpunit.de/"}}]},{"type":"heading3","text":"3) Taylor Otwell – 3 packages, 3608 total references","spans":[{"start":3,"end":16,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/taylorotwell"}}]},{"type":"paragraph","text":"Taylor Otwell is the creator of the Laravel Framework. His package illuminate/support is the second most required on Packagist, with 3608 projects depending on it. This library offers a series of helpers for dealing with databases, arrays, and collections. 
It is a component of the Laravel Framework but can also be used as a standalone library.","spans":[{"start":36,"end":53,"type":"hyperlink","data":{"link_type":"Web","url":"http://laravel.com/"}},{"start":67,"end":85,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/illuminate/support"}}]},{"type":"heading3","text":"4) Benjamin Eberlei – 4 packages, 3170 total references","spans":[{"start":3,"end":19,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/beberlei"}}]},{"type":"paragraph","text":"Benjamin Eberlei is the lead of the Doctrine project, a collection of several PHP libraries focused on database abstraction and object mapping. The package doctrine/orm is the most required, with 1421 other packages depending on it. Those include frameworks, CMSs, and various database-related libraries.","spans":[{"start":36,"end":52,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.doctrine-project.org/"}},{"start":156,"end":168,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/doctrine/orm"}}]},{"type":"heading3","text":"5) Jordi Boggiano – 2 packages, 1975 total references","spans":[{"start":3,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/seldaek"}}]},{"type":"paragraph","text":"Jordi Boggiano is the co-author of Composer, the project that inspired this article and stands as one of the most relevant milestones in modern PHP. Jordi is one of the authors of composer/installers, and he also created monolog/monolog. 
The former is commonly required by frameworks and CMSs to bring composer features into those projects, and the latter is a very popular logging library for PHP.","spans":[{"start":35,"end":43,"type":"hyperlink","data":{"link_type":"Web","url":"https://getcomposer.org/"}},{"start":180,"end":199,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/composer/installers"}},{"start":221,"end":236,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/monolog/monolog"}}]},{"type":"heading3","text":"6) Pádraic Brady – 1 package, 1660 total references","spans":[{"start":3,"end":16,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/padraicb"}}]},{"type":"paragraph","text":"Pádraic Brady is the author of mockery/mockery, a mock object framework for unit testing in PHP. As with PHPUnit, this is usually a development requirement for creating and running the project test suite. It's required by 1660 other packages on Packagist.","spans":[{"start":31,"end":46,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/mockery/mockery"}}]},{"type":"heading3","text":"7) Zend Framework – 2 packages, 1453 total references","spans":[{"start":3,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/zfdevteam"}}]},{"type":"paragraph","text":"Zend is a popular framework for PHP. The Zend Framework development team has two packages in the TOP 50, the most required one being zendframework/zendframework with 1123 packages depending on it. 
Among the dependent packages are components of the main framework, as well as many extensions created by users.","spans":[{"start":133,"end":160,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/zendframework/zendframework"}}]},{"type":"heading3","text":"8) Kitamura Satoshi – 1 package, 1371 total references","spans":[{"start":3,"end":19,"type":"hyperlink","data":{"link_type":"Web","url":"https://github.com/satooshi"}}]},{"type":"paragraph","text":"Kitamura Satoshi is the author of satooshi/php-coveralls, a PHP client library for Coveralls – an application that basically provides test coverage stats for continuous integration environments. This library is required by 1371 other projects on Packagist as it is a popular asset for continuous integration within PHP projects.","spans":[{"start":34,"end":56,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/satooshi/php-coveralls"}},{"start":83,"end":92,"type":"hyperlink","data":{"link_type":"Web","url":"https://coveralls.io/"}}]},{"type":"heading3","text":"9) Michael Dowling – 2 packages, 1329 total references","spans":[{"start":3,"end":18,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/mtdowling"}}]},{"type":"paragraph","text":"Michael Dowling is the creator of Guzzle, an HTTP client library and framework for PHP. This library is very popular with projects that make use of remote APIs. 
His package guzzle/guzzle is required by 811 other projects on Packagist, and many of those are wrapper libraries created to facilitate the use of various APIs.","spans":[{"start":34,"end":40,"type":"hyperlink","data":{"link_type":"Web","url":"http://docs.guzzlephp.org/en/latest/"}},{"start":173,"end":186,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/guzzle/guzzle"}}]},{"type":"heading3","text":"10) Greg Sherwood – 1 package, 1264 total references","spans":[{"start":4,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/gregsherwood"}}]},{"type":"paragraph","text":"Greg Sherwood is the author of squizlabs/php_codesniffer, a library for detecting violations of a defined coding standard. His package is required by 1264 other projects on Packagist.","spans":[{"start":31,"end":56,"type":"hyperlink","data":{"link_type":"Web","url":"https://packagist.org/packages/squizlabs/php_codesniffer"}}]},{"type":"paragraph","text":"by Erika Heidi","spans":[{"start":3,"end":14,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/erikaheidi"}}]}],"blog_post_date":"2014-11-25","tags":[{"tag1":{"tag":"Engineering","_linkType":"Link.document","_meta":{"uid":"engineering"}}},{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}}],"_meta":{"uid":"thank-you-to-phps-top-package-authors"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Sharon Campbell","author_image":null,"_meta":{"uid":"sharon_campbell"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"dns","copyright":null,"url":"https://images.prismic.io/www-static/32cc248d-5e59-48da-be50-8435c81d1b6a_dns_banner.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Coming To Port 53 Near You: The New DigitalOcean DNS!","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Over the past few months, our engineering team has been hard at work replacing our 
current DNS resolvers with a lightning-fast solution. We've updated our old architecture with a much more scalable and reliable system for creating and resolving DNS entries.","spans":[]},{"type":"paragraph","text":"The main concern with rolling out this new system was the potential for downtime. We have many thousands of queries per second hitting our resolvers, which means any downtime would be inconvenient for our users.","spans":[]},{"type":"paragraph","text":"Our requirements were:","spans":[{"start":0,"end":22,"type":"strong"}]},{"type":"o-list-item","text":"Keep both DNS systems in sync and check for inconsistencies in order to mitigate them","spans":[]},{"type":"o-list-item","text":"Be able to fall back in the event that the new system contained a hidden demon (performance, bugs under load, etc)","spans":[]},{"type":"heading3","text":"The New Architecture","spans":[]},{"type":"paragraph","text":"The new system architecture now looks like this:","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/6123bbc76c8f7e8ce7c87c5fba6881df92b06c91_new-diagram.png?auto=compress,format","alt":"new architecture diagram","copyright":null,"dimensions":{"width":668,"height":623}},{"type":"paragraph","text":"Here's the updated application flow when users add a DNS entry from a DigitalOcean application, API, or Control Panel:","spans":[]},{"type":"o-list-item","text":"Add the record to the DNS database via a RESTful API written in Go","spans":[]},{"type":"o-list-item","text":"The API will verify the entry, and if valid, will create the record in the new DNS database","spans":[]},{"type":"o-list-item","text":"After that, when a query comes into our resolvers, they will query the database for the entry and respond accordingly","spans":[]},{"type":"heading3","text":"Keeping Two Systems Alive","spans":[]},{"type":"paragraph","text":"As mentioned above, we wanted to be able to fall back to the old system should the new one fall over. 
We performed a full backfill of the DNS entries into the new system by using the new DNS API endpoints. This did two things for us: 1) It stress tested the application for a high volume of requests; and 2) it backfilled all of the data into the new application.","spans":[{"start":234,"end":236,"type":"strong"},{"start":305,"end":307,"type":"strong"}]},{"type":"paragraph","text":"We also had the challenge of converting our DNS entries from BIND syntax into a Fully Qualified Domain Name, which is a requirement in our new system. This proved to be a challenge – we ended up having many records that became inconsistent with the old implementation of DNS. We solved this by creating a small conversion library that accepts BIND syntax and returns an FQDN.","spans":[]},{"type":"paragraph","text":"While our users were adding or updating DNS entries, we were concurrently writing to the new service, preparing it for prime time. If the service could not accept the record, say because of a failed validation, it was logged to a separate list of entries that existed in the old system (but not the new). This allowed us to triage issues separately and notify customers that they have invalid DNS entries, should that be the case.","spans":[]},{"type":"image","url":"https://images.prismic.io/www-static/a90110ac83f83661b9ca2149b5951f303c03b08d_dns-both.png?auto=compress,format","alt":"new architecture diagram","copyright":null,"dimensions":{"width":652,"height":265}},{"type":"paragraph","text":"After we were confident that we had a reliable system, we switched over the concurrent writes to be synchronous. Creating a domain record, for example, would now be written to both systems synchronously. If either failed, the transaction would be rolled back and the error was presented to the user. 
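","spans":[]},{"type":"paragraph","text":"From the user's side nothing changes; a record is still created with a single call to the public API, and the dual write happens behind the scenes (the values below are illustrative):","spans":[]},{"type":"paragraph","text":"```[html]{`","spans":[]},{"type":"paragraph","text":"    curl -X POST https://api.digitalocean.com/v2/domains/example.com/records \\","spans":[]},{"type":"paragraph","text":"    -H 'Content-Type: application/json' \\","spans":[]},{"type":"paragraph","text":"    -H 'Authorization: Bearer <MY TOKEN>' \\","spans":[]},{"type":"paragraph","text":"    -d '{\"type\":\"A\", \"name\":\"www\", \"data\":\"203.0.113.10\"}'","spans":[]},{"type":"paragraph","text":"`}```","spans":[]},{"type":"paragraph","text":"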
This was great because it allowed us to populate both systems with confidence that they matched each other.","spans":[]},{"type":"heading3","text":"Turning It Up To 11","spans":[]},{"type":"paragraph","text":"On the 27th of October, we slowly rolled out changes to the first nameserver, fixed minor configuration issues, and then continued to flip over each nameserver slowly. Now all of our DNS is served off the new architecture and we're very pleased with it. Propagation is nearly instant from the moment you hit Submit on a domain entry.","spans":[{"start":308,"end":314,"type":"strong"}]},{"type":"heading3","text":"Takeaways","spans":[]},{"type":"paragraph","text":"We found that splitting our DNS into its own service proved to be immensely valuable. Also, instead of doing a hard cutover, writing concurrently to the new service found issues that likely would have been missed if we had switched over without a proper release plan.","spans":[]},{"type":"paragraph","text":"We hope you enjoy a much faster DNS!","spans":[]},{"type":"paragraph","text":"by Robert Ross","spans":[{"start":3,"end":14,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/robertoross"}}]}],"blog_post_date":"2014-11-18","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"News","_linkType":"Link.document","_meta":{"uid":"news"}}}],"_meta":{"uid":"coming-to-port-53-near-you-new-digitalocean-dns"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"Kaushal Parikh","author_image":null,"_meta":{"uid":"kaushal_parikh"}},"blog_header_image":{"dimensions":{"width":784,"height":418},"alt":"hacktoberfest","copyright":null,"url":"https://images.prismic.io/www-static/f28388e2-0568-48a9-8a3f-5c462f4d0d2a_hacktoberfest-night.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Goodbye To Hacktoberfest: Events Roundup","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Did you 
meet me in October? I'm Kaushal Parikh aka Cashbagel aka DO's college evangelist.","spans":[{"start":51,"end":60,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/cashbagel"}}]},{"type":"paragraph","text":"Now that #hacktoberfest is over,  it's awesome seeing the pictures you guys took and have been tweeting all month. I was lucky enough to spend every weekend of October attending some of the coolest hackathons around – check them out below:","spans":[{"start":9,"end":23,"type":"hyperlink","data":{"link_type":"Web","url":"http://hacktoberfest.digitalocean.com/"}}]},{"type":"heading3","text":"HackMIT","spans":[{"start":0,"end":7,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.hackmit.org/"}}]},{"type":"paragraph","text":"HackMIT, one of the largest college hackathons, brings in participants from all over the world, and this year was no different. October 4th's event brought together over 900 students and produced a total of 284 projects! Most hackers worked in teams of four made up of people from different universities (and even different countries).","spans":[]},{"type":"paragraph","text":"One of the coolest apps that was built that weekend was Surge Purge Plus. This team of two was tired of paying so much for Uber's surge pricing, so they set out to fix this problem. Willing to walk a few minutes out of their way if it meant not paying for Uber's surge price, they used the new Uber API to find locations around them that had a lower surge than their current location. 
Over the weekend they built a native iPhone app that checked if the area was surging, gave you walking directions to a nearby location that wasn't surging, and ordered you an uber to that new location.","spans":[{"start":56,"end":72,"type":"hyperlink","data":{"link_type":"Web","url":"http://hackmit2014.challengepost.com/submissions/28074-surge-purge-plus"}},{"start":294,"end":302,"type":"hyperlink","data":{"link_type":"Web","url":"https://developer.uber.com/"}}]},{"type":"paragraph","text":"Click here to see all 284 submissions.","spans":[{"start":0,"end":10,"type":"hyperlink","data":{"link_type":"Web","url":"http://hackmit.challengepost.com/submissions/"}}]},{"type":"paragraph","text":"Photo by Elisa Young, MIT Technique","spans":[{"start":0,"end":35,"type":"em"},{"start":22,"end":35,"type":"hyperlink","data":{"link_type":"Web","url":"http://technique.mit.edu/"}}]},{"type":"image","url":"https://images.prismic.io/www-static/84236809547477cbc432e95ccbf96c158c6aaaf7_hackmit.jpg?auto=compress,format","alt":"HackMIT","copyright":null,"dimensions":{"width":2048,"height":1363}},{"type":"heading3","text":"HackRU","spans":[{"start":0,"end":6,"type":"hyperlink","data":{"link_type":"Web","url":"http://hackru.org/"}}]},{"type":"paragraph","text":"As a current Rutgers student, and a previous organizer of this event, HackRU is a really personal event for me. It was great to see a significant number of high school students in attendance this year (over 100 of the 700 attendees were still in high school). This is a trend that I can only imagine will continue to grow, with some high schools even starting their own events like HackBCA and HSHACKS.","spans":[{"start":382,"end":389,"type":"hyperlink","data":{"link_type":"Web","url":"http://hackbca.com/"}},{"start":394,"end":401,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.hshacks.com/"}}]},{"type":"paragraph","text":"One of the projects created by a group of teenagers was TouchFree. 
This team took a risk and used technology and hardware that they had never used before to create a cool new way to interact with their computers. The project used a MYO armband to play games, give PowerPoint presentations, and navigate around their PCs effortlessly.","spans":[{"start":56,"end":65,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.hackerleague.org/hackathons/hackru-fall-2014/hacks/touchfree"}}]},{"type":"paragraph","text":"Another high school student spent the entire night learning C to create a minimalist watch face for his Pebble watch. His effort paid off – he won the Best First Time Hacker prize.","spans":[]},{"type":"paragraph","text":"Click here to see all 101 submissions.","spans":[{"start":0,"end":10,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.hackerleague.org/hackathons/hackru-fall-2014/hacks"}}]},{"type":"image","url":"https://images.prismic.io/www-static/f834a460177d4465da8b1788b669c54204a687c4_hackru.png?auto=compress,format","alt":"HackRU","copyright":null,"dimensions":{"width":917,"height":683}},{"type":"heading3","text":"HackNY","spans":[{"start":0,"end":6,"type":"hyperlink","data":{"link_type":"Web","url":"http://hackny.org/a/"}}]},{"type":"paragraph","text":"Not only does HackNY bring together the New York tech community, but it also serves as a reunion for alumni of the HackNY fellows program. Because of a prize criteria based solely on how impressive the projects are, HackNY has a track record of having some really creative and awesome hacks win the contest. 
We've seen everything from a breathalyzer that stops you from committing code when you've been drinking to an awesome drum set that you can play in mid-air.","spans":[{"start":115,"end":137,"type":"hyperlink","data":{"link_type":"Web","url":"http://hackny.org/a/"}},{"start":337,"end":411,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.youtube.com/watch?v=NnBb1wmHj5k"}},{"start":426,"end":463,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.hackerleague.org/hackathons/spring-2013-hackny-student-hackathon/hacks/airdrum"}}]},{"type":"paragraph","text":"And this year's winner was no exception...","spans":[]},{"type":"paragraph","text":"Calclash is an addictive multiplayer game that pits up to 25 players against each other and challenges them with math questions. This team was bored of traditional studying methods and tried to make it more fun with a fast-paced game that they could play with their friends. The final product looked polished and ran on Node and Firebase.","spans":[{"start":0,"end":8,"type":"hyperlink","data":{"link_type":"Web","url":"http://calclash.me/"}}]},{"type":"paragraph","text":"Click here to see all 39 submissions.","spans":[{"start":0,"end":10,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.hackerleague.org/hackathons/fall-2014-hackny-student-hackathon/hacks"}}]},{"type":"image","url":"https://images.prismic.io/www-static/b5265c9889a6000735bea01bcc1cdf4d0c3c3cdb_hackny.jpg?auto=compress,format","alt":"HackNY","copyright":null,"dimensions":{"width":2048,"height":1367}},{"type":"heading3","text":"HackNC","spans":[{"start":0,"end":6,"type":"hyperlink","data":{"link_type":"Web","url":"http://hacknc.us/"}}]},{"type":"paragraph","text":"HackNC is one of the largest hackathons in NC, and has a track record of very creative and awesome hacks. Recently there's been a trend of more students incorporating hardware elements into their hacks. 
This is in large part due to how accessible it has become for attendees thanks to MLH, who have been shipping crates of hardware around to these events. There was also a higher density of hacks on the MYO, Arduino, Raspberry Pi and LeapMotion at this event than any of the other events.","spans":[{"start":285,"end":288,"type":"hyperlink","data":{"link_type":"Web","url":"http://mlh.io/"}},{"start":404,"end":407,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.thalmic.com/en/myo/"}},{"start":409,"end":416,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.arduino.cc/"}},{"start":418,"end":430,"type":"hyperlink","data":{"link_type":"Web","url":"http://www.raspberrypi.org/"}},{"start":435,"end":445,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.leapmotion.com/"}}]},{"type":"paragraph","text":"One of the projects that was particularly impressive and won third place this year was Boxwitch, an \"asynchronous, non-blocking, event-driven sandwich delivery at the push of a button.\" The team of 3 reverse engineered the Jimmy John's api and created a physical box that was capable of ordering sandwiches to be delivered wherever you were. 
Not only was Boxwitch technically impressive; even their presentation included a complicated arming mechanism for the box itself, which added even more of a wow factor to the hack.","spans":[{"start":87,"end":95,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.hackerleague.org/hackathons/hacknc-fall-2014/hacks/boxwich"}}]},{"type":"paragraph","text":"Click here to see all 65 submissions.","spans":[{"start":0,"end":10,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.hackerleague.org/hackathons/hacknc-fall-2014/hacks"}}]},{"type":"image","url":"https://images.prismic.io/www-static/71ef4367015b487b0b4c796605c8a97fdb2bbfba_hacknc.jpg?auto=compress,format","alt":"HackNc","copyright":null,"dimensions":{"width":2048,"height":1536}},{"type":"heading3","text":"Honorable Mention: HackPR","spans":[{"start":19,"end":25,"type":"hyperlink","data":{"link_type":"Web","url":"http://hackpr.io/"}}]},{"type":"paragraph","text":"Although this event wasn't in October, we wanted to highlight it as an awesome event that we were happy to be a part of. It happens once per semester at the University of Puerto Rico on their engineering campus in Mayaguez, PR.","spans":[]},{"type":"paragraph","text":"We can all acknowledge that it's difficult to start new hacker communities. The organizers of HackPR have done a great job not only scaling this event year after year, but also building an awesome hacker community at their school. The event on September 27th was the largest HackPR, with approximately 280 participants.","spans":[]},{"type":"paragraph","text":"One of the coolest projects created at the event was Air Parranda. For those who don't know, a Parranda is a traditional musical style played in Puerto Rico. 
This style was brought into modern times with a clever combination of an iPhone app and MYO arm bands that let you play virtual parrandas by waving your hands in front of you.","spans":[{"start":53,"end":65,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.hackerleague.org/hackathons/hackpr-fall-2014/hacks/air-parranda"}},{"start":95,"end":103,"type":"hyperlink","data":{"link_type":"Web","url":"http://en.wikipedia.org/wiki/Parranda"}},{"start":246,"end":259,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.thalmic.com/en/myo/"}}]},{"type":"paragraph","text":"Check out all the projects here.","spans":[{"start":10,"end":31,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.hackerleague.org/hackathons/hackpr-fall-2014/hacks"}}]},{"type":"image","url":"https://images.prismic.io/www-static/c973ce9ed36818d0ddca57ef89befbf20f21543d_hackpr.jpg?auto=compress,format","alt":"HackNc","copyright":null,"dimensions":{"width":2048,"height":1536}},{"type":"paragraph","text":"The goal of #hacktoberfest was to promote open source development and show our support for developer communities. It's been great checking out all of the cool stuff people in the community have been working on. So, whether you were taking #hacktoberfest pictures at hackathons or working towards the 50 commits challenge, we wanted to highlight all the awesome things you have been building and the communities that helped.","spans":[{"start":300,"end":320,"type":"hyperlink","data":{"link_type":"Web","url":"https://assets.digitalocean.com/blog/static/hacktoberfest/"}}]},{"type":"paragraph","text":"And of course, all this doesn't end with the month of October. 
If you're interested in having us be a part of your event, or think DigitalOcean can help in any way, please don't hesitate to reach out.","spans":[{"start":190,"end":199,"type":"hyperlink","data":{"link_type":"Web","url":"mailto:sammy@digitalocean.com"}}]},{"type":"paragraph","text":"by Kaushal Parikh","spans":[{"start":3,"end":17,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/cashbagel"}}]}],"blog_post_date":"2014-11-11","tags":[{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}}],"_meta":{"uid":"hacktoberfest-events-roundup"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"docker puzzle","copyright":null,"url":"https://images.prismic.io/www-static/b6c61b1e-e8fe-4393-8931-e3c46fced7b3_docker-puzzle.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Test Your Skills With The Docker Puzzle","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"We've teamed up with TrueAbility to present The Docker Puzzle Challenge.","spans":[{"start":44,"end":71,"type":"hyperlink","data":{"link_type":"Web","url":"https://trueability.com/digitalocean-contest"}}]},{"type":"paragraph","text":"Using Docker administration skills, those participating will attempt to solve a jigsaw puzzle for bragging rights on our leaderboard (and prizes). Besides some good fun, we're hoping to attract those interested in Linux and containerization to our open positions. 
The contest runs from Nov 1 - 30: By the end of the contest, the Top 10 performers will be guaranteed an interview with DigitalOcean.","spans":[{"start":264,"end":397,"type":"strong"}]},{"type":"paragraph","text":"Our awesome customer support manager, Tammy Butow, has been using TrueAbility to help find the best candidates to join our support team. Read the interview below for some insight into the importance of innovative hiring techniques.","spans":[{"start":38,"end":49,"type":"hyperlink","data":{"link_type":"Web","url":"https://twitter.com/tammybutow"}}]},{"type":"paragraph","text":"What are the top three qualities you are looking for in a job candidate?","spans":[{"start":0,"end":72,"type":"strong"}]},{"type":"paragraph","text":"Our support team is awesome; everyone is a self-learner and an excellent problem solver. We all enjoy helping developers with their Droplets and get really excited when we see what developers are building. We look for the following qualities in candidates:","spans":[]},{"type":"o-list-item","text":"Self-Starter / Self-Learner","spans":[]},{"type":"o-list-item","text":"Team Player","spans":[]},{"type":"o-list-item","text":"Love of Linux and Open Source","spans":[]},{"type":"paragraph","text":"Why is hiring people who are keeping up with current tech important?","spans":[{"start":0,"end":68,"type":"strong"}]},{"type":"paragraph","text":"We are constantly excited to be working with new technologies. We recently launched CoreOS and Mesosphere on DigitalOcean. We love being able to support developers using many different types of technologies. 
In addition to the core Linux fundamentals, we always need to be learning so that we are able to support developers that reach out to us.","spans":[{"start":84,"end":90,"type":"hyperlink","data":{"link_type":"Web","url":"https://coreos.com/docs/running-coreos/cloud-providers/digitalocean/"}},{"start":95,"end":105,"type":"hyperlink","data":{"link_type":"Web","url":"https://digitalocean.mesosphere.com/"}}]},{"type":"paragraph","text":"Can you explain why innovative hiring techniques are important to quickly filter for the best candidates?","spans":[{"start":0,"end":105,"type":"strong"}]},{"type":"paragraph","text":"We want to find the best people to help support and build a simple cloud hosting platform. It's a mission that requires the greatest talent out there from all over. Our support team comes from a variety of locations: Texas, Utah, Virginia. I moved to New York from Australia to join DigitalOcean. We are constantly on the lookout for extremely talented candidates. Using TrueAbility and other progressive hiring practices can help speed up the process tremendously.","spans":[]},{"type":"paragraph","text":"Can you go into detail concerning your experience with TrueAbility?","spans":[{"start":0,"end":67,"type":"strong"}]},{"type":"paragraph","text":"Actually when I interviewed with DigitalOcean, I completed a TrueAbility challenge. I found it to be a really enjoyable experience. It's a great way to test your skills in a real-life environment. It's excellent for us to be able to play back the TrueAbility challenge and then chat with the candidate about their approach to problem solving in a fast-paced environment. It's a lot to do in a short amount of time, but it's a great way to assess a candidate's skills. 
We've found our best candidates often love these types of challenges.","spans":[]},{"type":"paragraph","text":"How do you compare these \"challenges\" which aim to mimic real-life circumstances, as opposed to more traditional interviewing processes?","spans":[{"start":0,"end":136,"type":"strong"}]},{"type":"paragraph","text":"It's fantastic to be able to simulate real-life experiences. For years engineering talent has been assessed using coding assignments; this is just another way to go about it. We still take more traditional pieces of an application into consideration: job history, expertise, recommendations, etc. But this is a way to level the playing field a bit and give everyone a shot to show their skills.","spans":[]},{"type":"paragraph","text":"Why guarantee interviews to the top 10? Does the fact that they \"performed\" better actually guarantee better performance on the job?","spans":[{"start":0,"end":132,"type":"strong"}]},{"type":"paragraph","text":"Not necessarily, but again it's just a piece of the application. And it certainly doesn't hurt to be a top scorer. At the end of the day there are developers all over the world using Docker on their DigitalOcean Droplets, so it's important for our support team members to be able to work with Docker and other leading open source technologies. We won't just be looking at the top 10 though, we are excited to find out who has Docker skills!","spans":[]},{"type":"paragraph","text":"Why The Docker Challenge as opposed to other Linux-based testing?","spans":[{"start":0,"end":65,"type":"strong"}]},{"type":"paragraph","text":"We have been running TrueAbility Linux Systems Administrator challenges for a while now. We started to wonder what else we could do! We love Docker and we're excited to be able to create this experience with TrueAbility. 
Show us your Docker skills :)","spans":[]},{"type":"paragraph","text":"Sign up and take the challenge  here!","spans":[{"start":0,"end":37,"type":"em"},{"start":32,"end":36,"type":"hyperlink","data":{"link_type":"Web","url":"https://trueability.com/digitalocean-contest"}}]}],"blog_post_date":"2014-11-02","tags":[{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}}],"_meta":{"uid":"docker-puzzle-challenge"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"mesosphere","copyright":null,"url":"https://images.prismic.io/www-static/556c23cc-3cb7-429c-b787-985d18df9c3b_mesosphere.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Pool Your Resources With DigitalOcean Droplets + Mesosphere And Deploy Your App In Seconds","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Now you can spin up Mesosphere clusters on DigitalOcean! This is an easy way to deploy, scale, and manage your applications.","spans":[]},{"type":"paragraph","text":"Our friends at Mesosphere created an automated provisioning tool where you can simply choose your plan and launch. In a few clicks you'll have a self-healing environment that offers fault tolerance and scalability with minimal configuration.","spans":[{"start":37,"end":64,"type":"hyperlink","data":{"link_type":"Web","url":"https://digitalocean.mesosphere.com/"}}]},{"type":"paragraph","text":"The potential for developers is huge, as Mesosphere's API gives users the ability to manage literally thousands of Droplets like a single computer. 
This makes it simple to run a number of applications, services, and diverse workloads side-by-side on the Mesosphere cluster, as well as expand its size at any time by simply adding Droplets.","spans":[]},{"type":"paragraph","text":"To get started, simply visit the Mesosphere web page, sign up, and pick an installation option:","spans":[{"start":54,"end":61,"type":"hyperlink","data":{"link_type":"Web","url":"https://digitalocean.mesosphere.com/"}}]},{"type":"list-item","text":"Development: 4 instances of the 2GB Droplets","spans":[{"start":0,"end":12,"type":"strong"}]},{"type":"list-item","text":"Highly-Available: 10 instances of the 2GB Droplets","spans":[{"start":0,"end":17,"type":"strong"}]},{"type":"list-item","text":"Custom: choose the number and types of instances","spans":[{"start":0,"end":7,"type":"strong"}]},{"type":"paragraph","text":"Our hope is that Mesosphere's technology will save you a lot of time and make you much more productive. With much of the DevOps work abstracted, you can focus your attention fully on your applications instead of worrying about servers and hostnames.","spans":[]},{"type":"paragraph","text":"– Team DO","spans":[]}],"blog_post_date":"2014-10-28","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}},{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}}],"_meta":{"uid":"pool-your-resources-with-digitalocean-droplets-and-mesosphere"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy 
avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"ripe","copyright":null,"url":"https://images.prismic.io/www-static/a420f2b5-2acc-4afa-bfbb-f4a1b2635368_ripe.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"We're Participating In The RIPE Atlas Program!","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"We're proud to announce our participation in the RIPE Atlas project – the world's largest Internet measurement network!","spans":[{"start":49,"end":67,"type":"hyperlink","data":{"link_type":"Web","url":"https://atlas.ripe.net/"}}]},{"type":"paragraph","text":"The idea behind our participation is pretty simple: we host a few servers dedicated to running measurement tests administered by the Atlas network; in return, we get access to every other Atlas server around the world – well over six thousand to date – to run our own performance tests. This allows us to test and validate access to our data centers from thousands of locations around the world in just minutes! It's an honor to participate in the program, and to be able to give back by contributing to a program that many others can benefit from as well.","spans":[{"start":220,"end":250,"type":"em"}]},{"type":"paragraph","text":"We initially got involved with RIPE due to their responsibility of allocating IP space in Europe, as one of the five global Regional Internet Registries (RIRs). Our networking team came across a post on their portal requesting participants in their Atlas program: a global network of thousands of probes that measure Internet connectivity and reachability, providing an unprecedented understanding of the state of the Internet in real time. 
The entire Internet community can access the data collected by the network, as well as Internet maps, graphs and analyses based on the aggregated results.","spans":[{"start":193,"end":215,"type":"hyperlink","data":{"link_type":"Web","url":"https://labs.ripe.net/Members/suzanne_taylor_muzzin/announcing-the-ripe-atlas-anchors-service"}}]},{"type":"paragraph","text":"There are two types of Atlas nodes: probes, which are smaller servers that just about anyone can run; and anchors, which serve as the solid foundation of the network. DigitalOcean is running Atlas anchors at our SFO1, SGP1, and LON1 sites, as these were the regions with the least amount of overlap. As we continue to expand, we'll be in communication with RIPE to see if anchors are necessary in new locations.","spans":[{"start":36,"end":42,"type":"strong"},{"start":106,"end":113,"type":"strong"}]},{"type":"paragraph","text":"We've only just begun harnessing the benefits of the Atlas network. Our engineers are currently working on ways to integrate this new distributed measuring ability with our existing systems to better detect Internet connectivity issues before they become problematic for customers. Previously, we've had to rely on intermittent data collected from customers to troubleshoot regional Internet disruptions. 
Soon, we'll be able to automate connectivity and throughput testing from just about anywhere in the world!","spans":[]}],"blog_post_date":"2014-10-16","tags":[{"tag1":{"tag":"Community","_linkType":"Link.document","_meta":{"uid":"community"}}},{"tag1":{"tag":"News","_linkType":"Link.document","_meta":{"uid":"news"}}}],"_meta":{"uid":"were-participating-in-the-ripe-atlas-program"}}},{"node":{"author":{"_linkType":"Link.document","author_name":"DigitalOcean","author_image":{"dimensions":{"width":600,"height":600},"alt":"Sammy avatar","copyright":null,"url":"https://images.prismic.io/www-static/a10e3c2eb15b74ee43f872be3044313423b1c9a9_sammy_avatar.png?auto=compress,format"},"_meta":{"uid":"digitalocean"}},"blog_header_image":{"dimensions":{"width":750,"height":400},"alt":"metadata","copyright":null,"url":"https://images.prismic.io/www-static/ee8c16fb-98bd-4422-a97e-3b9713232535_metadata.png?auto=compress,format"},"blog_headline":[{"type":"heading1","text":"Easily Automate The Provisioning Of Your DigitalOcean Droplets!","spans":[]}],"blog_post_content":[{"type":"paragraph","text":"Our metadata service is live! This enables Droplets to query information about themselves, and allows the use of CloudInit to bootstrap new servers. This is significant for users who want to improve the automation of their server provisioning process. As there are several products and services tied into this release, we want to provide a quick overview to get users up and running.","spans":[]},{"type":"heading3","text":"What kind of information is available via metadata?","spans":[]},{"type":"paragraph","text":"Examples of available Droplet metadata include Droplet ID, data center region, IP addresses, and user-data.","spans":[]},{"type":"heading3","text":"What is user-data?","spans":[]},{"type":"paragraph","text":"User-data is a special piece of metadata that can be provided by the user during the Droplet creation process. 
This data can be consumed by CloudInit to configure a server.","spans":[]},{"type":"heading3","text":"Which regions support metadata?","spans":[]},{"type":"paragraph","text":"At launch, the SGP1, SFO1, LON1, AMS2, AMS3, & NYC3 regions have metadata available. It is enabled on new Droplets in these regions.","spans":[]},{"type":"heading3","text":"What is CloudInit?","spans":[]},{"type":"paragraph","text":"CloudInit is a process enabled on recent DigitalOcean images that is able to pull down and process information from metadata. When the Droplet boots for the first time, the CloudInit program executes the script it finds in the \"user-data\" field, providing users the opportunity to automate the initial configuration of their servers.","spans":[]},{"type":"heading3","text":"Which images can process metadata information with CloudInit?","spans":[]},{"type":"paragraph","text":"Currently, Ubuntu 14.04 and CentOS 7 base images have CloudInit enabled. Any one-click apps based on these releases will also have this functionality available. CoreOS servers also process the \"user-data\" field using a different mechanism.","spans":[]},{"type":"heading3","text":"Where can I learn more about using metadata and CloudInit?","spans":[]},{"type":"paragraph","text":"We have prepared community articles that cover using the metadata service and writing scripts for CloudConfig. 
Also, our developer portal contains full documentation of the Metadata API.","spans":[{"start":47,"end":73,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/an-introduction-to-droplet-metadata"}},{"start":78,"end":109,"type":"hyperlink","data":{"link_type":"Web","url":"https://www.digitalocean.com/community/tutorials/an-introduction-to-cloud-config-scripting"}},{"start":173,"end":185,"type":"hyperlink","data":{"link_type":"Web","url":"https://developers.digitalocean.com/metadata/"}}]}],"blog_post_date":"2014-10-13","tags":[{"tag1":{"tag":"Product Updates","_linkType":"Link.document","_meta":{"uid":"product-updates"}}}],"_meta":{"uid":"easily-automate-the-provisioning-of-your-droplets"}}}]}}}