"}],"slug":"bonnier-news-chose-elastx-to-create-their-new-digital-platform","url":"/en/news/bonnier-news-chose-elastx-to-create-their-new-digital-platform","lang":"en","filename":"bonnier-news-valde-elastx-för-att-bygga-sin-nya-digitala-plattform-1","image":"/assets/uploads/case-study-elastx-bonnier-news-hashicorp.jpg"},{"date":"2019-01-08","case_slug":"Leading DDoS protection for our clients","title":"Leading DDoS protection for our clients","intro":"We are happy to announce our partnership with DDoS security experts Baffin Bay Networks. They will be protecting our Cloud Infrastructure and our customers by using their DDoS Protection and Threat Intelligence platform.","tags":["elastx"],"body":[{"type":"body_text","text":"### Who are Baffin Bay Networks?\n\nWe're a Swedish tech startup based in Stockholm, founded in 2016, poised to disrupt the cloud-based cyber security market. Our team consists of gifted cyber security experts, passionate about building a world-class Threat Protection Platform that helps our customers to mitigate both network-based and application-level cyber threats."},{"type":"body_text","text":"### Why was Baffin Bay Networks created?\n\nOur founder spent several years commuting from Stockholm to Silicon Valley, working for some of the largest U.S. corporations in the industry. During this time (flying over Baffin Bay, in Canada) an idea began to evolve: to create a new and innovative cloud-based security solution providing cleaner traffic and faster mitigation - built in Sweden. It was time to leverage the competence and engineering craft of the home country, and to create something unique and groundbreaking. So in a sense, Baffin Bay Networks was created to put Sweden on the map as a cyber security nation. 
Hopefully we can inspire and lead the way for more Swedish Cyber Security companies to come.\n\n*\"Also, companies should keep in mind that DDoS attacks are often used as smoke screens for other cyber crime ...\"*\n\n### What challenges do companies face when they take DDoS security lightly?\n\nEveryone knows the risks of a large-scale DDoS attack: sites are inaccessible, customers are angry. Sometimes these attacks result in negative headlines and brand reputation suffers. Often, companies are aware of this threat, but have not taken the time to do a risk assessment and determine the impact of an attack. Also, companies should keep in mind that DDoS attacks are often used as smoke screens for other cyber crime that could potentially be far more serious, especially in times of GDPR and privacy: data breaches, intel theft and so on. If you’re hit by a DDoS attack, that’s most likely not even your worst problem that day.\n\n*\"As a cloud services platform provider, we need to offer the best solutions possible also from a security perspective. Baffin Bay Networks allow us to do just that.\" - Henrik Grankvist, COO, Elastx*\n\n*\"We were very happy when we found Baffin Bay Networks as their services are a perfect match to our needs. They understand our business model and provide the tools so we can offer enterprise class security services in an agile and automated way.\" - Joakim Öhman, CEO, Elastx*\n\n### What are the largest challenges for a DDoS security company?\n\nA lot of companies don’t realise the risks before actually being hit by a DDoS attack. Although we’re happy to help them in times of crisis, too, we’d much rather be there before the damage is done. 
Since most DDoS mitigation services are solely focused on large, volumetric attacks, we have to be very clear in explaining that Riverview does much more than just DDoS mitigation, and prove the value of our solution on all those days when the customer doesn’t have a DDoS attack.\n\n*\"We built our solution to offer our customers direct insight into what is happening in their online environment.\"*\n\n### Why should someone choose Baffin Bay Networks?\n\nWe are more than just a DDoS protection service: our multi-vector threat mitigation platform stops not only DDoS attacks but also web application attacks, malware and exploit attempts. We built our solution to offer our customers direct insight into what is happening in their online environment. With Riverview our customers get access to traffic patterns and an overview of threats and attempted attacks. They can easily integrate with existing solutions, and even feed the system with their own threat intelligence. Moreover, we provide expert help around the clock and are very flexible when it comes to meeting customer-specific needs - our customers are at the heart of everything we do.\n\n### What are you hoping to gain together with Elastx?\n\nBesides being a great reference customer with an impressive customer base of its own, Elastx is a very technically competent friend in business and technology. They provide valuable feedback on what works well, and the areas we need to improve. 
It is this sort of open and honest feedback that will challenge us and help us grow and become even better.\n\nAvailable now: The Elastx cloud platform including the increased security capabilities from Baffin Bay Networks.\n\nRead more about Baffin Bay Networks [here](https://www.baffinbaynetworks.com/about#about)\n\nRead the press release [here](http://www.mynewsdesk.com/bbn/pressreleases/elastx-selects-baffin-bay-networks-as-their-ddos-protection-and-threat-intelligence-platform-2380436)"}],"slug":"leading-ddos-protection-for-our-clients","url":"/en/news/leading-ddos-protection-for-our-clients","lang":"en","filename":"ledande-ddos-skydd-för-våra-kunder","image":"/assets/uploads/article-hero-16.jpg"},{"date":"2018-12-07","title":"Can you keep a single security strategy in a multi-cloud environment?","case_slug":"Can you keep a single security strategy a multi-cloud environment","intro":"A guest blog post by James Tucker, Director of System Engineering at Baffin Bay Networks","tags":["security"],"body":[{"type":"body_text","text":"### Did you know that about 96% of all enterprises are using the cloud in some way? \n\nIt seems shocking until you start thinking about how many cloud services you use in the course of a day. The average office worker might start thinking about Gmail or Office365 for mail, file sharing on Dropbox, tracking customers in Salesforce, and so on. Meanwhile, the IT staff is thinking of all the infrastructure they have moved to the cloud, and perhaps thinking about what to do with all the extra rack space in the server room in the basement. Regardless of your perspective, it’s clear that the cloud is here to stay. But it gets even more complex.\n\nAs cloud computing becomes more of a commodity, it makes sense to shop around and put your assets into the cloud that provides the best cost-to-benefit ratio for that particular asset or application. 
You may have an infrastructure project with high bandwidth usage, so it would make sense to find the most cost-efficient solution for that project. On the other hand, the marketing department may be running their own sites, and would prefer something that was as easy as possible to use.\n\nWith all the options out there, it’s no surprise that 81% of businesses have a multi-cloud strategy, according to this report from [RightScale](https://www.rightscale.com/lp/state-of-the-cloud). There are benefits to a single cloud strategy: it makes it easier for IT staff to know where an asset might be, and it simplifies compliance and IT governance. That being said, the advantages of multiple cloud providers often quickly outweigh the drawbacks, particularly if the organization can ensure a few key processes are in place.\n\nFirst, there needs to be a process to track and inventory all assets across your environment. This includes the servers in your datacenters, cloud assets, as well as ‘sites around town’. Sites around town are those one-time-use sites - say ELASTX has a Movember campaign and creates elastxmovember2018.se. This might be created by non-technical staff or a third party agency, and put on a web host outside the typical business processes. In a year, this site will be forgotten, unmaintained and a potential security risk. In a perfect world, these sites are shut down soon after the event, but in practice they are often forgotten.\n\nTo put it bluntly, you can’t protect what you don’t know about. While the legal responsibility for a breach might not be yours, there is a potential for data leakage and loss of reputation due to forgotten assets.\n\nSecond, the wider your assets are spread, the harder security becomes. Many IaaS vendors provide very basic security services for assets you have with them. If you are running two or more cloud vendors, it becomes a challenge to know what level of security each provides. 
What ends up happening far too often is that you have a mix of different security levels and disjointed information, and are left confused during an incident. At Baffin Bay Networks, we advise our customers to avoid having multiple different security regimes for assets in the cloud. Instead, you should establish a high baseline for the minimum security requirement and add additional protections where needed. Ideally, all assets, regardless of where they are deployed, should share the same security.\n\nFinally, remember that proper security is based on policies and procedures. Having a coherent security policy and detailed plans on how to implement it is the single best way you can spend your security budget. Not only will this reduce the overall number of security incidents you see in a year, it will also reduce the time to resolution across the board. Following on from that, these policies should be communicated via a strong IT governance organization to ensure that your cloud strategy and security strategy are aligned with the goals of the business at large. After all, the whole purpose of a cloud strategy is to enable the business to do more, faster. Just make sure you do it securely!\n\n//James Tucker, Director of System Engineering at Baffin Bay Networks\n\nBaffin Bay Networks is a Swedish tech startup based in Stockholm, founded in 2016, poised to disrupt the cloud-based cyber security market. 
Their team consists of gifted cyber security experts, passionate about building a world-class Threat Protection Platform that helps their customers to mitigate both network-based and application-level cyber threats.\n\n[Click here to read more about Baffin Bay Networks!](https://www.baffinbaynetworks.com/)"},{"type":"body_image","src":"/assets/uploads/old-man-yells-at-cloud-34970211.png","alt":"Cartoon with an angry man that yells at cloud"}],"slug":"can-you-keep-a-single-security-strategy-a-multi-cloud-environment","url":"/en/news/can-you-keep-a-single-security-strategy-a-multi-cloud-environment","lang":"en","filename":"can-you-keep-a-single-security-strategy-in-a-multi-cloud-environment","image":"/assets/uploads/rack_rails.jpeg"},{"title":"Elastx is now a member of MANRS","case_slug":"Elastx now a member of MANRS","tags":["network"],"intro":"We are happy to inform you that Elastx is now a member of MANRS (Mutually Agreed Norms for Routing Security).","body":[{"type":"body_text","text":"**We are happy to inform you that Elastx is now a member of MANRS (Mutually Agreed Norms for Routing Security).**\n\nWhat is [MANRS](https://www.manrs.org/)?\n\n[This short video helps explain what MANRS is](https://youtu.be/nJINk5p-HEE).\n\nRouting Incidents are Increasing\n\nIn 2017 alone, 14,000 routing outages or attacks - such as hijacking, leaks, and spoofing - led to a range of problems including stolen data, lost revenue, reputational damage, and more.\n\nAbout 40% of all network incidents are attacks, with the mean duration per incident lasting 19 hours.\n\nIncidents are global in scale, with one operator’s routing problems cascading to impact others.\n\nWe at Elastx always strive to deliver a product of quality to our customers and partners. This includes delivering good and strong routing security. 
And thanks to our MANRS membership, we can now assure everyone of the importance and quality of routing security at Elastx.\n\nWith MANRS, we can introduce a safer routing ecosystem in Sweden and become an example to other IT companies striving for the best experience possible online. Most importantly for us at Elastx, we can keep delivering bleeding-edge security to all our customers and partners."},{"type":"body_image","src":"/assets/uploads/cropped-manrs_icon.png","alt":"MANRS logotype"}],"slug":"elastx-now-a-member-of-manrs","url":"/en/news/elastx-now-a-member-of-manrs","lang":"en","filename":"elastx-is-now-a-member-of-manrs-1","date":"2018-12-02","image":"/assets/uploads/yellow_cables.jpeg"},{"title":"Risks with Cloud-IT outside of Sweden","case_slug":"Risks Cloud-IT outside of Sweden","tags":["cloud-act"],"intro":"Today, Swedish as well as international companies have the opportunity to get their cloud-based IT operations handled by major suppliers such as AWS, Google and Azure. A service whose security is taken for granted: customers count on the data stored with the supplier being taken care of and handled only by the customer themselves.","body":[{"type":"body_text","text":"However, something we need to keep in mind today is the risk of having IT operations abroad. Regulations in other countries that, for example, do not have the same relation to Swedish and European GDPR legislation may undermine the confidentiality of one’s data abroad. A current example is the US Cloud Act, which went into effect on March 23, 2018. In short, the Cloud Act means that US IT service providers must disclose data to US authorities when required. An appeal by a Swedish company that stores data in the United States would not be valid in this case, which unfortunately may put the company at risk, depending on the type of company involved.\n\n[Read more about the Cloud Act at ComputerSweden here. 
(Swedish)](https://computersweden.idg.se/2.2683/1.701858/cloud-act)\n\nIn an article, eSam writes about the risks that cloud services may pose in the public sector. Swedish public organizations handling confidential data risk being unprotected to some extent, depending on the laws of each country where the data centers are located and the data is stored.\n\n[Read the article “eSam sees risks with public sector cloud services” here. (Swedish)](http://esamverka.se/nyheter/nyheter/2018-11-12-esam-ser-risker-med-molntjanster-i-offentlig-sektor.html)\n\nIt is only a matter of time before more companies in the Swedish public sector begin to see the risks of cloud services abroad. The question then is how they will act. Will they see, for example, the Cloud Act as a clear threat to confidentiality, or will they continue with the same setup?\n\nThanks to the opportunities, freedom and, above all, the cost-effectiveness of cloud-based IT operations, more companies choose to move to the cloud. Companies are now also looking for the safest cloud service. Companies that have begun to look at cloud IT options are becoming more aware of the risks of foreign cloud services and are turning their attention towards Swedish suppliers.\n\n[Read more about the ELASTX Cloud product range here.](https://elastx.se/en)"}],"slug":"risks-cloud-it-outside-of-sweden","url":"/en/news/risks-cloud-it-outside-of-sweden","lang":"en","filename":"risks-with-cloud-it-outside-of-sweden","date":"2018-11-20","image":"/assets/uploads/metal_case_with_pattern.jpeg"},{"date":"2018-09-03","title":"We are now CNCF members","case_slug":"We are now CNCF members","tags":["elastx"],"intro":"We are proud to inform you that ELASTX is now a member of the Cloud Native Computing Foundation.","body":[{"type":"body_text","text":"### What is CNCF?\n\nThe Cloud Native Computing Foundation is a foundation dedicated to making cloud native universal and sustainable. 
Cloud native technology gives developers the opportunities and resources to build products and applications with support from a whole community that is dedicated to making sure that cloud native always keeps growing and lives on.\n\nAt the same time as we are entering the CNCF, we will also be launching our much-coveted Private Kubernetes. We will be setting up a private Kubernetes cluster for our customers on our powerful OpenStack platform that runs in our availability zones here in Sweden.\n\nThanks to our new membership in the CNCF, we will now be able to go deeper into the cloud native ecosystem, where we can contribute and drive change together with large actors. We will have opportunities to learn from each other and ELASTX will be able to continue to deliver a fantastic product to its customers and partners.\n\n***“We are happy to become part of the CNCF ecosystem. This aligns well with our view of helping customers navigate the open-source landscape and build cloud agnostic solutions.”*** - Henrik Grankvist, COO ELASTX\n\n***“We want to be able to create industry standards within the cloud native landscape, and not just contribute to creating more “buzzwords” which can be interpreted the wrong way. The CNCF membership will give us this opportunity.”*** - Tobias Jakobsson, CloudOps Architect ELASTX"}],"slug":"we-are-now-cncf-members","url":"/en/news/we-are-now-cncf-members","lang":"en","filename":"we-are-now-cncf-members","image":"/assets/uploads/kubecon.jpg"},{"date":"2018-05-07","title":"Cloud Act - time to bring our data storage home","case_slug":"Cloud Act time to bring our data storage home","tags":["cloud-act"],"intro":"As some of you have already heard or read, a new law has emerged in the United States, where US cloud suppliers and IT companies are now forced to disclose data to US authorities. They have chosen to name it the Cloud Act.","body":[{"type":"body_text","text":"The Cloud Act became effective on March 23 this year. 
The law first came up for discussion in 2013, and now it has officially been implemented. All the data that we have chosen to store with US-based cloud vendors is now visible and available to US authorities. This has of course caught the eye of Swedish companies, who are now highly interested in bringing their data back home for storage.\n\nWe can actually be pretty grateful that GDPR has been implemented and ISO certifications are being followed quite strictly here in Sweden. We swim against the stream here when it comes to the Cloud Act - not only in Sweden but also in large parts of Europe. We demand responsibility and privacy when it comes to data and personal information. We have put a lot of focus on adapting to GDPR and focusing on customer needs and integrity.\n\n### In a way, the timing couldn’t be better\n\n**Just when data management is at its “worst” in the US, companies now have the opportunity to move storage to a safer place.** \n\nThis is the time for Swedish IT providers to show their full strength domestically and deliver quality in Swedish IT operations. Something that ELASTX is definitely focusing on. Great and exciting times are ahead.\n\n### What does ELASTX do?\n\n[IT Operation](/services/)\n\nWe have two public cloud-based platforms, covering most client needs. But we can also tailor a private solution for customers wanting even more control over their system.\n\nThrough our Professional Services, we help automate your development process: from platform selection, including pre-study, to migration of systems to a fully automated platform.\n\nWant to know more about ELASTX services? 
[Click here!](/services/)"}],"slug":"cloud-act-time-to-bring-our-data-storage-home","url":"/en/news/cloud-act-time-to-bring-our-data-storage-home","lang":"en","filename":"cloud-act-time-to-bring-our-data-storage-home","image":"/assets/uploads/andreas-bergmann-qKGvDT0lOjw-unsplash.jpg"},{"date":"2018-03-07","title":"Elastx and HashiCorp become Technology Partners","case_slug":"Elastx and HashiCorp become Technology Partners","tags":["hashicop","publiccloud","elastx"],"intro":"It is of high importance for us at Elastx to be given the freedom to work in open source based cloud environments with no lock-in solutions. We believe this is key to constantly gaining new knowledge in the field, as well as to giving greater freedom to developers. And it is just as important to be able to give our customers the same opportunities.","body":[{"type":"body_text","text":"An automated underlying infrastructure gives developers the time to focus on other critical parts of the application environment. HashiCorp products such as Terraform, Nomad and Vault enable us to build the automated infrastructures for our customers and help us deliver the cloud-based IT solutions that we strive for.\n\n> “ELASTX believes the best way we can help companies is with sustainable, automated IT. We focus on codified infrastructure for our IaaS, PaaS, and bare metal solutions. The codified infrastructure is the foundation of automation for our customers. Terraform and the other HashiCorp products enable us to build this infrastructure stack.” - Henrik Grankvist, COO, ELASTX\n\nThanks to the HashiCorp products, we are able to provide consistent workflows regardless of the underlying technology. This includes infrastructure provisioning, security, service discovery, and application deployment.\n\nBonnier News, one of our customers, chose ELASTX as their hosting provider and also uses HashiCorp products to help gain control of their infrastructure. 
You can read more about this in our [case study.](https://elastx.se/en/use-case/bonnier-news-chose-elastx-to-create-their-new-digital-platform/)"}],"slug":"elastx-and-hashicorp-become-technology-partners","url":"/en/news/elastx-and-hashicorp-become-technology-partners","lang":"en","filename":"elastx-and-hashicorp-become-technology-partners","image":"/assets/uploads/microservers.jpeg"},{"date":"2018-03-07","title":"Elastx keeps offering industry leading DDoS protection","case_slug":"elastx keeps offering industry leading ddos protection","intro":"We have chosen to work with Baffin Bay Networks to protect our Cloud Infrastructure and our customers by using their DDoS Protection and Threat Intelligence platform.","body":[{"type":"body_text","text":"### Baffin Bay Networks\n\nFounded in 2017 by leading industry experts, Baffin Bay Networks is a Swedish tech start-up poised to disrupt the cloud-based cybersecurity market. Their globally distributed Threat Protection Centers protect customers from the most advanced Distributed Denial of Service and Web Application attacks, and discover and protect against known and unknown malware and exploits.\n\n*“As a cloud services platform provider, we need to offer the best solutions possible also from a security perspective. Baffin Bay Networks allow us to do just that.”* - Henrik Grankvist, COO, Elastx.\n\n*“We were very happy when we found Baffin Bay Networks as their services are a perfect match to our needs. They understand our business model and provide the tools so we can offer enterprise class security services in an agile and automated way”* - Joakim Öhman, CEO, Elastx\n\n### Threat Protection, DDoS Protection, and Web Application Protection\n\nWe will be able to offer Threat Protection, DDoS Protection, and Web Application Protection to our customers as well as for our own Cloud infrastructure. 
Baffin Bay Networks offers a unique combination of technical expertise and exceptional capacity and technology to ensure that the customers always maintain safe and efficient online functionality.\n\n*“No company or organization should be defenseless against advanced cyber threats. Our mission is to build the world’s best Threat Protection Platform: one that is easy to use for any company or service provider to protect their key assets and services. We’re a team of gifted cyber security experts, passionate about building a world-beating Threat Protection Platform that helps our customers to mitigate all cyber threats.”* - Baffin Bay Networks mission\n\nThey will be providing protection from more than 250 different types of DDoS attacks. All with measures that are easy to implement and deliver results from day one.\n\n*“Companies and organizations use web applications such as banking, e-commerce and trading platforms to enhance the scope and functionality of their businesses. Some of those applications may be developed in-house and some are purchased externally. Organizations’ IT security functions also face the challenge of keeping pace with a rapidly evolving threat landscape, as new vulnerabilities are disclosed. 
We prevent attackers from exploiting these vulnerabilities, and provide real-time logging, reporting, and statistics on attempted attacks.”* - Baffin Bay Networks about their Web Application Protection.\n\n**Available now:** The Elastx cloud platform including the increased security capabilities from Baffin Bay Networks.\n\nRead more about Baffin Bay Networks [here](https://www.baffinbaynetworks.com/#about).\n\nRead the press release [here](http://www.mynewsdesk.com/bbn/pressreleases/elastx-selects-baffin-bay-networks-as-their-ddos-protection-and-threat-intelligence-platform-2380436).\n\nRead more about Elastx [here](/en/about-elastx)."}],"slug":"elastx-keeps-offering-industry-leading-ddos-protection","url":"/en/news/elastx-keeps-offering-industry-leading-ddos-protection","lang":"en","filename":"elastx-keeps-offering-industry-leading-ddos-protection","image":"/assets/uploads/empty_switch_amber.jpeg"},{"date":"2017-10-08","title":"From a Single Instance Server to Microservices Freedom","case_slug":"From a Single Instance Server to Microservices Freedom","intro":"Björn Lundén needed scaling options, as their number of users grew and they needed the ability to scale separate components of their app. After meetings with the Elastx team, they were introduced to freedom to innovate and optimize with their Virtuozzo (formerly Jelastic) PaaS.","tags":["elastx","customer"],"body":[{"type":"body_text","text":"Swedish based company Björn Lundén (BL Info) were stuck with a hosting partner that only offered monolithic single instance virtual servers, with a set and static number of preferences.\n\nA monolithic application is where all of the required logic is located within one ‘unit’ (a war, a jar, a single application, one repository). 
The design philosophy is that the application is responsible not just for a particular task, but performs every step needed to complete a particular function.\n\nMicroservices architecture is a software design pattern in which complex applications are composed of small, independent processes communicating with each other. These services are small, highly decoupled and focus on doing a small task."},{"type":"body_text","text":"In Virtuozzo (formerly Jelastic) each environment within a large, complex and enterprise-ready application represents a service that can be scaled, configured and monitored separately. And if any change is required in the project, it can easily be made in the specific service without any need to rebuild and redeploy the entire application (as is required in monolithic apps). With such a structure, the application can even be managed by different teams, with each responsible only for a certain part.\n\n### Key aspects of microservices are:\n\n* They do one thing or are responsible for one functionality\n* Each microservice can be built with any set of tools or languages, since each is independent of the others\n* They are truly loosely coupled, since each microservice is physically separated from others\n* Relative independence between different teams developing different microservices\n* Easier testing and continuous delivery or deployment\n\n### Enter Elastx PaaS powered by Virtuozzo. Clusterization. Cloud-in-a-box.\n\nHenry Arousell from Björn Lundén chatted with us about the transition to the platform and the benefits of microservices for continuous scalable deployments.\n\n### Tell us about Björn Lundén\n\nBjörn Lundén Information AB started in 1987 as a book publishing company specializing in books about law and economics. We make hard-to-understand things easy. 
The company also gives lectures on these subjects.\n\nOver the last 15 or so years, we have also added software to our line of products and services, and for the last 3 years we have offered the product BL Ekonomi. It is a service, offered as a subscription (per month/year), that allows the user to manage their small business accounting and invoicing needs.\n\n### How many total users do you have currently?\n\nWe have about 11,200 licences, out of which about 3,600 are active users on a regular basis, that all in all use about 31,500 databases.\n\n### What was the challenge you faced?\n\nWell, many. But certainly one of the hardest was to find a reliable partner that not only shared our view of QoS (quality of service), but also had the platform and know-how to implement it. For us as mainly a developer organisation, finding a partner that complemented us with server and production environment expertise was paramount.\n\n### What was the solution?\n\nThe server side of the application runs on top of two Elastx Virtuozzo PaaS environments:\n\n* One NGINX on top of two Tomcat nodes that powers the actual application, written in Java 7\n* The other one is a massive MySQL server that runs more than 31,500 organisation databases\n\n### Why Virtuozzo PaaS?\n\nWe met the Elastx crew at Sweden’s biggest Java developer conference, Jfokus, and kind of took it from there. At the time we were in big need of help with scaling, and we were in a situation with a partner that only offered monolithic single instance virtual servers with a set and static number of preferences, which made it very difficult for us to scale as the number of users grew.\n\nAs our software organisation consists mainly of developers, we immediately wanted to get our hands on such an intuitive tool. Not only for our production needs but for a multitude of testing and laboratory level stuff. 
I know it might sound cheesy, but we really were in awe.\n\n*\"When we saw the Virtuozzo PaaS console we were blown away\" - Henry Arousell*\n\n### What were the results when you switched to Elastx Virtuozzo PaaS?\n\nThe biggest difference in our daily life is probably accessibility to the production environment. We now have access to logs and server statistics at our fingertips. No longer do we have to contact a guy at the server center and ask him for such info.\n\nNo, wait, the biggest difference is that as we’ve watched our product grow in customer popularity, and the demand for more server power that comes along with that, we have been able to increase this very easily ourselves.\n\n### What was the cost difference?\n\nThat is really hard to say, since we have grown tremendously since switching to Elastx Virtuozzo PaaS. But it’s safe to say that it is better than before, since we paid a flat rate for our virtual servers, regardless of usage or load.\n\n### Did you see a performance increase?\n\nYes. Very much so. It is hard to say exactly how much time was saved, and to us it wasn’t really the unique selling proposition we were looking for. The biggest difference was that we are now able to purchase the correct amount of computing power. With a static server setup, we would either have too little power, or, to have enough, we would have to pay such amounts of money that it killed the business case. When we know the alternative, it’s really hard to understand that a 24/7/365 flat rate was ever considered a reasonable arrangement.\n\nBut, of course, being able to purchase the right amount of power when needed has enabled us to provide a much better customer experience at an affordable price!\n\n### Highlights of using the platform\n\nWell, the BL Ekonomi server core is, in itself, a monolithic design. As it happens though, we are in the process of adding to it a series of microservices to add new and extended features. 
In this process we are involving the people at Elastx to help us find the optimal solution for providing a smooth continuous integration environment to the developers.\n\n### Shortest time to implementation compared to other providers or solutions you have tested or used in the past?\n\nWhen first releasing on the Elastx/Virtuozzo platform, we did a one- or two-day workshop with the guys at Elastx to show them our product and set it up correctly, leveraging all the new possibilities of load balancing and having separated the Tomcat and MySQL nodes from the previous production setup. As easy as pie.\n\n**Thanks Henry!**\n\nMicroservices are, in a way, a return to basics: simple applications that meet today’s needs for complexity by working together through the use of each other’s APIs.\n\nThe container system in Elastx Virtuozzo PaaS Cloud goes hand in hand with the microservices architecture pattern. One of the recent technologies underlying microservices is Docker, a technology for packaging applications and deploying them in containers."}],"slug":"from-a-single-instance-server-to-microservices-freedom","url":"/en/news/from-a-single-instance-server-to-microservices-freedom","lang":"en","filename":"björn-lundén-gick-över-till-microservices","image":"/assets/uploads/object_servers.jpeg"},{"date":"2017-10-08","tags":["elastx","customer"],"title":"Response times reduced from 250-300 to 3 milliseconds","case_slug":"Response times reduced from 250 300 to 3 milliseconds","intro":"Reforce International AB were in need of an IT platform with high performance which could easily scale up in capacity. Today Reforce use Elastx Virtuozzo (formerly Jelastic) PaaS as a backend.","body":[{"type":"body_text","text":"### Who are Reforce International AB?\n\n– We are one of the leading RaaS companies (Result as a Service). Briefly, one can say that we help companies achieve their strategies more quickly, by offering accelerated strategy and business execution. 
Since our launch, we have had 96% success in our projects, and it is thanks to our business strategists and our proprietary software, ReExecute.\n\n### What did your platform solution look like before and what were the challenges?\n\n– We began developing our product in 2012 and we were an early adopter of PaaS. Our first challenge was that we did not have the expertise in-house to manage the technology. At the time, there was no Nordic alternative, so we turned to an American platform. Our second challenge was to find a partner with higher performance. With the Atlantic Ocean between us we experienced problems with stability, performance and long response times, which led to our customers perceiving our system as slow. For us, every millisecond makes a difference. Therefore, we started looking for a new partner that was closer to us geographically. In spring 2013 we came in contact with Elastx, and in May the same year we started the first test runs.\n\n– Elastx has become our partner and security, says Dan Mattsson, VP of Product Development at Reforce.\n\n### Why did you choose Elastx as a partner?\n\n– It was mainly because of their platform performance. Earlier, we had encountered problems with new releases. We run new releases every week and it is expensive to do them during the night, which was the case when clients were up working late. Nowadays, we can make a painless release while customers are working. Elastx is a partner that can offer both stability and high competence."},{"type":"body_text","text":"### What noticeable improvements/differences have you seen?\n\n– Even as early as during the test runs, Elastx successfully reduced our response time from 250-300 milliseconds to 3 milliseconds, which has facilitated our work considerably. Elastx has become our partner and security. With a high level of expertise at the server and infrastructure level, they can help us with the parts we do not want to deal with ourselves. 
The performance works flawlessly thanks to Elastx.\n\n### How much have you saved in terms of resources thanks to the Elastx solution?\n\n– It is hard to measure in figures, but easier in efficiency and resources. We get value for our money in terms of competence. We do not make changes in the environment very often, but when we do, we can, thanks to Elastx, make it effective. The alternative, to have the competence in-house, would not be worth it. With traditional operations comes administration; now we can avoid that and instead focus on what we do best.\n\n*“Elastx support is magically fast and spot on.”*\n\n### How have you experienced the support and cooperation with Elastx?\n\n– It has worked very well. One thing that we really appreciate with Elastx is the quality of their support and how fast they are to respond. We often have urgent matters which require knowledge that we do not have in-house. Now we also receive support in Swedish and the time difference is not a problem anymore. We know that Elastx put high demands on their support, and our experience of it is magically fast and spot on.\n\n### What are your thoughts about the future?\n\n– Elastx are continuously working with our development, as well as their own, and we put our full trust in them. Elastx are moving more and more to become a full-service supplier, which we look favorably on. 
They also communicate upcoming changes at an early stage, which makes it easier for us.\n\n**Thanks, Dan!**\n\nRead more at [www.reforceinternational.com](https://www.reforceinternational.com)"}],"slug":"response-times-reduced-from-250-300-to-3-milliseconds","url":"/en/news/response-times-reduced-from-250-300-to-3-milliseconds","lang":"en","filename":"svarstiderna-minskades-från-250-300-till-3-millisekunder","image":"/assets/uploads/case-study-elastx-reforce-molnplattform.jpg"},{"date":"2017-10-08","title":"Key Cloud Benefits for Emakina","case_slug":"Key Cloud Benefits for Emakina","intro":"Recently Emakina, a digital marketing agency, shared with us their feedback about using Elastx Virtuozzo PaaS. Today we are glad to share our interview with Fredrik Larsson, Senior Developer from Emakina.","tags":["elastx","customer"],"body":[{"type":"body_text","text":"### What was the challenge you faced?\n\nIn our early days we had opted for a managed VPS at a Swedish web hosting company. This worked well for several years as it gave us what we needed at that time - a place where we easily could host our clients and our own content. When we as a company and our client base grew and the projects became more advanced we ran into problems with performance: if one application got hammered, all our clients suffered. We had to do something about it.\n\n### What was the solution?\n\nWith Elastx Virtuozzo (formerly Jelastic) PaaS we found what we needed. We feel a lot safer now knowing that every application is running in its own container, totally separated from everything else. 
The ability to control exactly how much base performance and how much we want it to scale if needed has allowed us to handle any spikes our applications may encounter.\n\nWe also still have what we loved from the managed VPS, a fully configured environment at our fingertips, ready to be deployed at a moment’s notice, without having to go through the tedious setup process ourselves, but at the same time allowing us to tweak the internal settings if needed.\n\n### Why Elastx Virtuozzo PaaS?\n\nWe were looking for cloud hosting for our apps. We tried using Heroku and similar platforms, but since we work mostly in PHP they were not a good fit for us. Elastx Virtuozzo PaaS gave us a much easier way of deploying our PHP apps while still being a PaaS. Being a Swedish host also gives us great performance and low latency since all our customers are based in the Nordics.\n\nWhat also changed was how we deploy our applications. Before, we’d simply drag and drop the files over FTP, but with the ability to tie an environment to a specific Git repository we quickly realised that this is it. We’re continuing to develop our processes around this way of deploying.\n\n### How have you set up your environment?\n\nA fairly standard setup for our medium applications, with a small amount of reserved cloudlets to keep the cost down during periods of low traffic but with a high enough scaling limit to handle high loads. This particular project is a mobile-targeted newsletter that goes out about once every month to about 300 000-400 000 users, which gives a huge spike of traffic just following the sendout. We’ve also used the VPS feature to host an instance of Elasticsearch to enable text searching."},{"type":"body_text","text":"### What was the cost difference?\n\nThe main difference for us is that we are now able to see exactly what each application costs. Before, we were stuck with the same cost, no matter how many or few projects were up and running at a particular moment in time. 
This provides flexibility where we can simply turn everything off once a client’s campaign is over and turn it back on with a simple click during the next campaign period.\n\n### Did you see a performance increase?\n\nThe main benefit for us is that the applications are no longer sharing the same resources: one site getting hammered no longer slows down everything else. Also, earlier we’d have to pay for a pretty beefy server to be able to handle the big loads; now we just go in, pull a slider to set scaling limits and don’t worry about performance.\n\n### Highlights of using the platform?\n\nThe ability to do almost whatever we want if we have to. Need an instance of some application not available as a package to run on the platform? Just throw up a VPS and configure it yourself. Hosting a simple WordPress blog? A few clicks and you’ve got a platform running PHP and MySQL up in seconds.\n\n**Thanks, Fredrik! It cannot be said better :)**\n\nWe are glad to know that the Elastx PaaS platform, with its automatic scalability, container orchestration and advanced level of workload mobility, has become a proven technology for a great variety of companies like Emakina.\n\nRead more at [emakina.com](https://emakina.com)"}],"slug":"key-cloud-benefits-for-emakina","url":"/en/news/key-cloud-benefits-for-emakina","lang":"en","filename":"topp-3-fördelarna-med-molnet-för-emakina","image":"/assets/uploads/emakina_01.jpg"},{"date":"2017-07-08","tags":["elastx","customer"],"title":"Digitalist - 50% Reduction in Build Times and Blazingly Fast!","case_slug":"Digitalist 50 Reduction in Build Times and Blazingly Fast","intro":"Digital services firm Digitalist made the switch to Elastx Virtuozzo (formerly Jelastic) PaaS and experienced some very positive improvements.","body":[{"type":"body_text","text":"Digitalist Sweden’s (previously Wunderkraut) mission is to improve the business of their customers with digital tools, providing end-to-end digital services from developing strategy, providing 
consultancy and training to service design, development and support. They have 150 staff across 9 countries serving some of Europe’s best-known brands and forward-thinking governments, and their solid processes, business focus and great communications are as important as their technical knowledge in making these projects a success.\n\nSome of the best available digital tools they use include agile business development methods, agile project management and open source software. What makes them different though is their consultative, collaborative approach. They use their experience to help their clients define the right questions before starting work on the answers, building a clear strategy with them. They work in long-term, highly collaborative relationships with their customers, forming ‘one team’ that operates in a borderless way across the organisations. Their agile approach means that they meet the deadlines, stay within the budget and reach the goals of a project — three things that are sadly rare in digital projects. Their customer testimonials show their repeated success at this.\n\n### What is Digitalist?\n\nDigitalist is a leading European web agency focusing on bringing value to online investments through agile development and open source technologies.\n\nWe help companies with the 4 stages of a website lifecycle.\n\nSimplified: Motivate, Prepare, Develop & Improve your site.\n\nTo do this with a high level of quality and a fast time to market we offer hosting for our clients.\n\n### Why Drupal?\n\nDrupal is a very mature, secure and flexible framework for building and continually improving online businesses. Drupal is Open Source, which means that it’s free to use and to change.\n\n### Who are the people behind the project?\n\nWunderkraut is an entrepreneur-driven company and all owners also work as employees in the company. 
We have a strong focus on Agile methodologies and we also have a strong belief in Open Source software.\n\n### Who were you previously hosted with and what was the challenge?\n\nWe have worked with most types of vendors: local ISPs, larger IT infrastructure vendors, in-house IT departments and cloud solutions like Amazon and Linode.\n\nThe main challenge is always the lack of understanding of the process behind taking care of a site. That is why we had already moved most of our stuff to the cloud before we found Elastx.\n\nThe worst case is always the internal IT departments, where people think that they are saving money since they already have the hardware and the resources. It always ends in costly and complicated processes and results in expensive workarounds. That often leads to bad performance and low levels of security.\n\n### Elastx Virtuozzo (formerly Jelastic) PaaS is now powering many of Digitalist’s client sites. What notable improvements have you seen?\n\nThe key benefits for us:\n\n* Flexibility\n* Performance\n* Real and competent people behind the product\n* Stability\n\n### Can you share a typical environment set-up with us?\n\nWe have a LAMP stack adapted to Drupal with:\n\n* Nginx (proxy cache and load balancer)\n* Apache\n* MySQL\n* Memcache\n\nWe are also working on a Jetty container to support horizontal scaling of Apache Solr (today we use a reserved instance).\n\n### How many hours/resources have you saved each day since adopting the platform?\n\nWe have seen a 50% reduction in build times on our development server, since it scales its resources when they are needed. We get the performance when we need it!\n\nWe can also limit the number of build environments since one environment can scale when it’s needed. Before, we had to have a lot of instances just to distribute the load during peak hours. 
That meant more costs in terms of instances, but it also meant that we had to spend more time on managing the environments.\n\n### How has the support been compared to previous providers?\n\nThe simple answer: A lot better! Elastx are really dedicated and you really get attention when you need it. They have also been able to provide us with knowledge on a deeper level that we haven’t found in any other company we have worked with.\n\n### Why does Drupal perform so well when it’s powered by Elastx Virtuozzo PaaS?\n\nDrupal is built on PHP. That means that under a heavy load, and when your caches are cold, you are really dependent on CPU. There are many ways to cache things in Drupal and therefore you work with several components to support this (opcode caches, Memcache, database caches, etc.)\n\nDrupal’s flexibility and every site’s unique set of requirements mean that every site has its own characteristics in terms of resource usage. The standard solution so far has been to throw enough resources on a “box” and hope that it works. With Elastx you don’t have to worry about this; it will scale where and when it’s needed. Of course, you will have to watch out for components that use a lot of resources (since it costs more), but it’s a lot better to do that than trying to solve a problem while your site is down.\n\n### Any plans to move other Digitalist clients to the platform?\n\nYes, we plan to move as many sites as possible to the Elastx cloud platform. We still have to improve some things in our platform to really support all clients. It’s easier and more fun to maintain and all our clients using it are so happy about the increased performance!\n\n### Lastly, what new features would you like to see on our platform?\n\nA performant shared file system solution. 
It would also be nice to control the local firewalls (to support communication between our management servers and our instances).\n\nRead more at [digitalist.se](https://digitalist.se)"}],"slug":"digitalist-50-reduction-in-build-times-and-blazingly-fast","url":"/en/news/digitalist-50-reduction-in-build-times-and-blazingly-fast","lang":"en","filename":"digitalist-valde-elastx-som-hostingleverantör","image":"/assets/uploads/article-hero-18.jpg"},{"title":"Getting started with application backups to Swift","case_slug":"Getting started with application backups to Swift","intro":"Swift cloud object storage is ideal for storing your application backups. Here are 3 ways to move your backups to the cloud.","body":[{"type":"body_text","text":"**Swift cloud object storage is ideal for storing your application backups. Here are 3 ways to move your backups to the cloud.**\n\nSwift is part of the OpenStack software platform. Objects are written to multiple disk drives spread throughout storage nodes in the cluster. All of this logic is managed by OpenStack, so you as a user don’t have to think about how data is stored physically. Our (Elastx) Swift cluster spans more than one datacenter, which means all data stored is georedundant.\n\nIn the examples we will upload a PostgreSQL database dump to Swift, but it can be any type of file you want to back up. You can use Swift for purposes other than backup; you can even serve static websites directly from Swift, but that is out of scope for this post.\n\nSwift can automatically delete an object after a specified number of seconds has passed. This means we don’t even need to manage objects due to expire in our code. Examples are shown with both permanent storage (default behaviour) and automatic expiration.\n\nIf you haven’t already, start by sourcing your openrc.sh to set up the environment in your shell. 
You can download your own openrc file by navigating to the “API Access” tab located in “Compute/Access & Security” when logged on to the OpenStack dashboard.\n\n`source ~/openrc.sh`\n\n## python-swiftclient\n\npython-swiftclient can be installed through most package managers and runs on any OS with Python >=2.6 support. Consult https://www.swiftstack.com/docs/integration/python-swiftclient.html for more information on how to install.\n\nFirst we create a container in Swift to hold all our backups:\n\n`swift post pg_backup`\n\nThen we upload our database dump to the newly created container:\n\n`swift upload pg_backup mydb.sql`\n\nIf we want Swift to automatically expire a file we pass the header 'X-Delete-After' during the upload:\n\n`swift upload --header 'X-Delete-After: 3600' pg_backup mydb.sql`\n\nThis will delete the object after 3600 seconds (1h).\n\nWe can list all objects in the container using \`swift list pg_backup\`. To read object metadata (size, modified, expiration, etc.) we use:\n\n`swift stat pg_backup mydb.sql`\n\n## cURL\n\nThe Swift API is RESTful, which means you can use cURL to upload and download objects in storage over HTTP. cURL is available in all popular package managers. We also use jq to process the returned JSON, so make sure you install that as well.\n\nWe need to authenticate first and save the X-Auth-Token header data and Swift URL we receive in order to use all subsequent commands. Syntax is different between API versions. 
The example below is for v3.\n\n```\nOS_AUTH_URL=https://ops.elastx.cloud:5000/v3/auth/tokens\ntmpheader=$(mktemp)\ntmpjson=$(mktemp)\ncurl -sf -D $tmpheader -o $tmpjson -H \"Content-Type: application/json\" -d '{ \"auth\": {\n \"identity\": {\n \"methods\": [\"password\"],\n \"password\": {\n \"user\": {\n \"name\": \"'\"${OS_USERNAME}\"'\",\n \"domain\": { \"id\": \"default\" },\n \"password\": \"'\"${OS_PASSWORD}\"'\"\n }\n }\n },\n \"scope\": {\n \"project\": {\n \"name\": \"'\"${OS_TENANT_NAME}\"'\",\n \"domain\": { \"id\": \"default\" }\n }\n }\n }\n}' $OS_AUTH_URL\nOS_TOKEN=$(grep 'X-Subject-Token' $tmpheader | awk '{printf $2}' | tr -d \"\\r\")\nSWIFT_URL=$(cat $tmpjson | jq '.token.catalog[] | select(.name == \"swift\") | .endpoints[] | select(.interface == \"public\" ) | .url' | tr -d '\"' )\nrm $tmpheader\nrm $tmpjson\n```\n\nYou can add the code above to the bottom of your openrc file so you authenticate directly when sourcing it.\n\nCreate a container in Swift to hold backups:\n\n`curl -sf -H \"X-Auth-Token: $OS_TOKEN\" -X PUT $SWIFT_URL/pg_backup`\n\nUpload the database dump to a Swift container:\n\n`curl -sf -H \"X-Auth-Token: $OS_TOKEN\" -H \"Content-Type: application/x-sql\" --data-binary \"@mydb.sql\" -X PUT $SWIFT_URL/pg_backup/mydb.sql`\n\nUpload the database dump with 3600 seconds (1h) automatic expiration to a Swift container:\n\n`curl -sf -H \"X-Auth-Token: $OS_TOKEN\" -H \"X-Delete-After: 3600\" -H \"Content-Type: application/x-sql\" --data-binary \"@mydb.sql\" -X PUT $SWIFT_URL/pg_backup/mydb.sql`\n\nWhen done communicating we want to revoke the token as we don’t need it any longer:\n\n`curl -H \"X-Auth-Token: $OS_TOKEN\" -H \"X-Subject-Token: $OS_TOKEN\" -X DELETE $OS_AUTH_URL`\n\n## Rclone\n\nRclone is built in Go and comes as a single binary file. The syntax is very much like rsync’s. It supports many different types of cloud storage, which makes data migration simple. 
You can download it from the rclone website.\n\nCreate a file called rclone.conf with the following content:\n\n```\n[swift-remote]\ntype = swift\nuser = $OS_USERNAME\nkey = $OS_PASSWORD\nauth = $OS_AUTH_URL\ndomain = default\ntenant = $OS_TENANT_NAME\n```\n\nWhen we want to refer to Swift we will now use 'swift-remote:' and rclone will handle auth for us (assuming openrc has been sourced first).\n\nCreate a container in Swift:\n\n`rclone --config rclone.conf mkdir swift-remote:pg_backup`\n\nUpload the db dump to the pg_backup container:\n\n`rclone --config rclone.conf copy mydb.sql swift-remote:pg_backup`\n\nThe default behaviour is to copy entire directory contents unless the source is a file.\n\n~~At the time of writing (2017-04-07) the current version of rclone (v1.36) does not support the expiration header for Swift. It's scheduled for v1.37 ([Github issue #59](https://github.com/ncw/rclone/issues/59)) or later. Meanwhile you can delete old files with:~~\n\n`rclone --config rclone.conf --min-age 1h delete swift-remote:pg_backup/`\n\nUpdate 2020-11-16: Since [v1.52](https://rclone.org/changelog/#v1-52-0-2020-05-27) rclone supports sending additional headers, so you can now set an expiration date on file uploads:\n\n`rclone --config rclone.conf --header-upload \"X-Delete-After: 3600\" copy mydb.sql swift-remote:pg_backup`\n\n## Summary\n\nThat's it! My personal recommendation is to use python-swiftclient or the OpenStack CLI client (not shown in examples) where available. Both are maintained by the OpenStack community and are the most supported software out there.\n\nRclone can be useful if you use more than one cloud storage platform, or at least want the option to. Rclone is the cloud equivalent of rsync, software known to be excellent at copying large amounts of data and files.\n\ncURL can be run almost anywhere (it even works on AmigaOS, DOS and OS/2!) 
but requires more knowledge of the Swift REST API.\n\nIf you have any questions or want to know more about how OpenStack can help your company, contact me or a colleague and we'll be in touch!\n\nAndreas Holmsten"}],"tags":["elastx","swift","backup","openstack"],"slug":"getting-started-with-application-backups-to-swift","url":"/en/news/getting-started-with-application-backups-to-swift","lang":"en","filename":"getting-started-with-application-backups-to-swift","date":"2017-04-07","image":"/assets/uploads/follow_qr_to_openstack.jpg"},{"date":"2016-10-08","tags":["elastx","customer"],"title":"To make this possible we use bleeding edge Technology","case_slug":"To make this possible we use bleeding edge Technology","intro":"Imagine being able to see your healthcare records directly on your computer or smart phone and also being able to share this information with your doctor - wouldn’t that be great?","body":[{"type":"body_text","text":"Chorus develops new tools to provide healthcare professionals, patients and families access to healthcare data in an entirely new way.\n\nSwedish healthcare is facing major challenges. We are getting older and healthcare costs are increasing. The healthcare sector also has a variety of different IT systems and medical record systems with direct access only. This means that the integration costs have soared and less money is left to actually develop new features to take the burden off healthcare professionals and residents. Secure solutions, where data from existing systems are available with the patient’s permission, are therefore increasingly important.\n\n*“To make this possible we use bleeding edge Technology. High-Availability and security are also top priority. There can be no down-time. 
Simple as that!” - Casper Winsnes, CEO Chorus*\n\n### Chorus and ELASTX\n\nFredrik Henriques is Head of Technology at Chorus and got a recommendation to try ELASTX Virtuozzo (formerly Jelastic) PaaS in spring 2013.\n\nSince time-to-market is a very important factor, ELASTX Virtuozzo PaaS really caught our attention. The possibility to quickly create and also clone entire IT environments is great. We work with micro-services and open source, so the tools we use work well with their platform.\n\nFredrik continues: When ELASTX launched their OpenStack IaaS in 2014 I was keen to try it out. The reason being that we want to stay up to date with the latest technologies, and with their Infrastructure-as-a-Service we can spin up our OS of choice. Currently we are using CoreOS and Rancher. ELASTX is a great fit for us since we are very technical ourselves, and when we are in need of assistance it’s highly technical problem solving. ELASTX support is outstanding.\n\nELASTX is excited to be part of Chorus’ important mission to ease the pain in healthcare.\n\n### About Chorus\n\nSince Chorus’ inception in 2007 they have had two commandments: first, to make complex things easier and more understandable, and second, to work in areas where they believe that new ideas and ways of working are genuinely useful. 
Chorus has developed a number of tools to make healthcare more accessible to patients and relatives!\n\nLearn more at [chorus.se](https://chorus.se)"}],"slug":"to-make-this-possible-we-use-bleeding-edge-technology","url":"/en/news/to-make-this-possible-we-use-bleeding-edge-technology","lang":"en","filename":"vi-arbetar-med-den-absolut-senaste-tekniken","image":"/assets/uploads/article-hero-19.jpg"},{"date":"2016-09-20","title":"Automating infrastructure with Terraform on OpenStack","case_slug":"Automating infrastructure with Terraform on OpenStack","tags":["openstack","terraform"],"intro":"Infrastructure as a Service (IaaS) is the most basic cloud model, giving you the ability to create your own infrastructure resources such as networks, firewalls, load balancers, different kinds of storage and, of course, servers.\n","body":[{"type":"body_text","text":"There are other tools that can be used to orchestrate OpenStack, such as the official tool Heat, which works pretty much the same as AWS Cloudformation. They both work with “stacks”, which are basically collections of resources (servers, security groups, databases, etc.). Both Heat and Cloudformation are fantastic ways to do orchestration. But both are tools for either OpenStack or Amazon Web Services.\n\nWith Terraform you orchestrate cloud services across providers – DNS, IaaS, CDN, etc. As the tool itself isn’t connected to any specific provider, you also don’t get locked in. Please see a list of official providers in the [documentation](https://www.terraform.io/docs/providers/).\n\nBesides the fact that Terraform allows you to orchestrate many different cloud services, it’s also pretty easy to learn. 
And in our experience, a lot faster than the tools mentioned above.\n\nBy the way, if you live in Stockholm and want to meet others who love Terraform (and more HashiCorp stuff), join the meetup group [Stockholm Hashicorp User Group](https://www.meetup.com/Stockholm-Hashicorp-User-Group/)\n\n## A “simple” web service architecture\n\nYou can get the source for this Terraform template on [Github](https://github.com/elastx/terraform-demo) and study it in depth.\n\nOur architecture will consist of two server groups, “web” and “db”, each group having 2 compute instances (virtual machines). The groups will have a policy on them called “anti-affinity”. This is best practice and tells the OpenStack scheduler that a group may never have its compute instances on the same physical hardware, in this case to make sure your web and db instances stay redundant.\n\nEach compute instance will run CentOS 7, and with cloud-config we’ll change the default username to “elastx”.\n\nThree security groups (sg) will be created: demo-ssh-sg, demo-web-sg and demo-db-sg. The first sg will be on all instances, allowing “everyone” to ssh. The web-sg will be on the web servers, allowing ports 80 and 443 (http, https), and db-sg will be on the database instances, with port 3306 (mysql) allowed from the web network, keeping it isolated and secure.\n\nAn SSH keypair called “demo_rsa” will be uploaded as well.\n\nThis is a very traditional network topology for demo purposes, but it shows you roughly what the environment will look like."},{"type":"body_image","src":"/assets/uploads/automating-infrastructure-bild.png"},{"type":"body_text","text":"As mentioned earlier, head over to our [github repository](https://github.com/elastx/terraform-demo) to study the [Terraform template](https://github.com/elastx/terraform-demo/blob/master/terraform.tf). 
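To give a rough taste of what such a template contains, here is a minimal sketch of how an anti-affinity server group, a security group and a pair of web instances can be declared with the OpenStack Terraform provider. The resource names, image and flavor values are illustrative, and the syntax is modern HCL rather than the 2016-era template in the repository, so treat it as an outline, not the template itself:

```
# anti-affinity: members may never share a hypervisor
resource "openstack_compute_servergroup_v2" "web" {
  name     = "demo-web"
  policies = ["anti-affinity"]
}

resource "openstack_networking_secgroup_v2" "web" {
  name = "demo-web-sg"
}

# allow http from everywhere to members of demo-web-sg
resource "openstack_networking_secgroup_rule_v2" "web_http" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 80
  port_range_max    = 80
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.web.id
}

resource "openstack_compute_instance_v2" "web" {
  count           = 2
  name            = "demo-web-${count.index}"
  image_name      = "CentOS 7"      # illustrative image name
  flavor_name     = "m1.small"      # illustrative flavor
  security_groups = [openstack_networking_secgroup_v2.web.name]

  # the scheduler hint is what ties the instance to the anti-affinity group
  scheduler_hints {
    group = openstack_compute_servergroup_v2.web.id
  }
}
```

The `scheduler_hints` block is the piece that enforces the anti-affinity policy described above; everything else is ordinary resource declaration.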
To run it yourself, [contact us](mailto:hello@elastx.se) in order to create an account.\n\n/Tobias Jakobsson"}],"slug":"automating-infrastructure-with-terraform-on-openstack","url":"/en/news/automating-infrastructure-with-terraform-on-openstack","lang":"en","filename":"automating-infrastructure-with-terraform-on-openstack","image":"/assets/uploads/switch_exhaust.jpeg"},{"date":"2015-12-23","tags":["pipeline"],"title":"Building a portable, scalable, reusable Deployment Pipeline for an arbitrarily complex environment (part 2)","case_slug":"Building a portable, scalable, reusable Deployment Pipeline for an arbitrarily complex environment 2","body":[{"type":"body_text","text":"Choose your tools wisely:\n\nWhatever tools you have in place now, or whatever tools you choose to switch to, the same rules apply:\n\n* [Grok](http://www.urbandictionary.com/define.php?term=grok) your tools\n* Use your tools the way they were intended to be used first, before you start experimenting\n* Avoid building from scratch. Most problems are already solved and someone has probably open sourced some code to solve exactly the problem you’re stuck on (or at least 90% of it).\n* Always choose open source before proprietary\n* Always check that open source software you want to use has a healthy and active community around it\n* Be wary of moth-balled open source projects, unless you’re ready to start supporting it yourself\n* If you choose to switch, make sure the new tool will solve your problem with the old tool\n\n**Source Control:**\n\nEnsuring control over changes in your VCS is pretty much the crux of the issue in Continuous Delivery. The most important axiom of CD is to keep your master branch releasable at all times. The [wikipedia page about Continuous Delivery](https://en.wikipedia.org/wiki/Continuous_delivery) is actually the best at describing this. Out of that singular focus everything else will flow.\n\nThe converse is also true. 
If you are frequently breaking master, or your master branch is in an unknown state, then you have no hope of doing continuous delivery until you [fix that problem](http://failblog.cheezburger.com/thereifixedit).\n\nWhat this means in practice is that you want as much as possible of your verification to happen on the feature branch before merging to master. This means any type of building, compiling or packaging, deploying the application, automated testing and even manual testing if necessary. Code review should also be a part of any change before merging to master is allowed.\n\nLastly, you need to ensure that developers are actually integrating their new changes with the latest on master (i.e. the “integration” part of continuous integration).\n\nIf you’re using git and your branching looks like [this](http://aht.github.io/whatisgit/wide-gitk.gif), the point is to get it looking more like [this](http://3.bp.blogspot.com/-Yh5iOYTth3A/U6fhWafzqRI/AAAAAAAABs0/xujlkjiYFjQ/s1600/gitk19.png) (the way Linus intended).\n\nIf your situation is this bad, then my preferred means of doing this with git is to use a [fork/pull-request model](https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow) and force [fast-forward only merging](http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly) to master. What this means is that everyone must [rebase](http://nathanleclaire.com/blog/2014/09/14/dont-be-scared-of-git-rebase/) their branches every time something is merged to master, and if something gets messed up, it’s isolated to the fork. Furthermore, by rebasing, it ensures that everyone is actually doing the “integration” part of Continuous Integration, reducing the state of entropy on the HEAD of master, and gets you that much closer toward your goal of Continuous Delivery. 
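To make the mechanics concrete, here is a self-contained sketch you can run in a throwaway directory (the repository layout and commit messages are invented for illustration, and `git init -b` assumes git >= 2.28):

```shell
# work in a throwaway directory
cd "$(mktemp -d)"
git init -q -b master .
git config user.email demo@example.com
git config user.name demo

echo base > app.txt
git add app.txt
git commit -qm 'initial commit'

# a developer starts a feature branch...
git checkout -qb feature
echo feature >> app.txt
git commit -aqm 'feature work'

# ...meanwhile master moves ahead
git checkout -q master
echo hotfix > other.txt
git add other.txt
git commit -qm 'hotfix on master'

# fast-forward-only merging refuses the stale branch outright
git merge --ff-only feature 2>/dev/null || echo 'merge refused: rebase required'

# the developer rebases onto the latest master...
git checkout -q feature
git rebase -q master

# ...and now the merge is a clean fast-forward, keeping history linear
git checkout -q master
git merge --ff-only feature
git log --oneline
```

Because the merge is always a fast-forward, every change has by construction been integrated (and was testable) on top of the exact commit it lands on.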
The beauty is that this system will take care of itself and you can get back to work to fix anything else that needs attention.\n\nHowever, be prepared that this is a pretty drastic measure to take, and there will definitely be some developers who kick and scream that you’re [killing their mojo](https://www.youtube.com/watch?v=gEuw2mgLRuQ). Don’t let them win! Enforce it and set the bar to make sure everyone lives by the same rules.\n\n* [Git](https://git-scm.com/): Even though Linus has reportedly [soured on it](http://typicalprogrammer.com/linus-torvalds-goes-off-on-linux-and-git/) himself, I rank git as the gold standard of source control. It’s mature and there are lots of people familiar with it. Branching and merging is simple and fast (but unfortunately easy to make a mess of as explained above). The ability to rewrite history with rebase is incredibly powerful. But git is a complex tool and if you’re not comfortable on the command line you might struggle.\n* [Mercurial](https://www.mercurial-scm.org/): Another fine distributed version control system, which at first appearance is similar to git in many ways, but with some [notable differences](https://jhw.dreamwidth.org/1868.html) in terms of how it handles branching and merging. It is arguably easier to learn than git, but you will still need to grok it to use it properly (just as with any tool). Branching and merging work seamlessly and mercurial will work just fine with this workflow.\n* [Subversion](https://subversion.apache.org/): If you have [nightmares about branching](http://blog.codinghorror.com/software-branching-and-parallel-universes/) in subversion then you should probably switch to git or mercurial. [Centralised VCSs are kind of an old and outdated idea](https://www.youtube.com/watch?v=_yQlKEq-Ueg), if you’re looking at Continuous Delivery, even though they are still heavily in use. 
However, if you’re happy with it and have all the kinks worked out in a clean way that allows isolating changes so they can be tested before merging, then it should be workable.\n\n**A note on code reviews:**\n\nThe importance of an integrated solution with code review and source control cannot be overstated. Popular services such as GitHub and Bitbucket have contributed significantly to making this a part of best practice. If your code review tool is a separate piece of infrastructure tacked onto a VCS, such as ReviewBoard or Crucible, leaving it possible for changes to circumvent the requirements for review and testing, then this is a good reason to switch to something with tighter controls.\n\n**Orchestration:**\n\nBy orchestration I mean creating cloud resources (Virtual Machines, containers, etc), configuring cloud services (DNS, Networking, Load Balancing, etc), and preferably also service discovery or communication between those resources when they are created or destroyed.\n\nOrchestration in cloud computing is a confusing topic because [there isn’t really a consistent definition](https://www.flexiant.com/2012/09/18/what-is-cloud-orchestration/) of what “orchestration” actually means. The situation becomes even more complicated when you start looking at the plethora of tools out there that might at first seem like alternatives to each other, but the reality is very much a Venn diagram of overlapping functionality that you won’t realise until you try them out. With that in mind, here is a list of tools that I do see as alternatives to one another.\n\n* [Cloudformation](https://aws.amazon.com/cloudformation/): this is a declarative, full-featured framework for managing cloud resources on AWS. It’s a powerful and flexible tool but it can get complicated pretty quickly as your usage of AWS expands (which is even more reason to use it). 
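\n\nTo give a feel for the declarative style, here is a minimal template sketch; the resource name and AMI id are placeholders, not from any real stack:\n\n```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example stack: a single EC2 instance",
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.micro",
        "ImageId": "ami-12345678"
      }
    }
  }
}
```
\n\nYou declare the resources you want and cloudformation works out the API calls, the ordering, and the updates.\n\n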
Some caution is required however, because even if you think you are a wiz at pushing buttons in the AWS console, [cloudformation knows more about how to deploy AWS services than you do](http://harish11g.blogspot.dk/2014/08/amazon-cloudformation-templates-automation-Amazon-CFT-AWS-top-best-practices-tips.html), and you will be surprised if you make the wrong assumption. If you’re using EC2 then you should probably be using cloudformation, and sooner rather than later, because it is not possible to add existing resources (such as an ec2-instance) into a cloudformation-managed stack.\n* [Heat](http://docs.openstack.org/developer/heat/): is OpenStack’s equivalent of Cloudformation. For the most part, Heat is compatible with the Cloudformation syntax, so if you’re familiar with one, it’s pretty straightforward to learn the other. One of the benefits of Heat is that it has a bit more flexibility managing resources, with operations such as abandon-stack that will allow you to delete a stack without removing all the created resources; later you can adopt-stack once you have fixed whatever you needed to. This is a nice feature for shortening the feedback loop while you climb the learning curve of Heat templates, but you’re doing something wrong if it’s a function you need to use in production.\n* [Terraform](https://terraform.io/docs/index.html): is another declarative orchestration framework, with the major benefit that it’s platform-agnostic. So if you are in a hybrid cloud scenario, for example you have a private OpenStack cloud but also run on AWS, then Terraform is probably something to look into. If you’re looking to avoid vendor lock-in then Terraform might be the way to go.\n* [Docker-compose](https://docs.docker.com/compose/): is Docker’s declarative resource orchestration framework and is definitely worth a look if you’re interested in running containers. 
Compose has a [layered approach to handling different environments](https://docs.docker.com/compose/extends/#different-environments), whereby you start with a base configuration that all your environments start with and then you override certain values to turn it into dev/ci/stage/prod. That’s a really nice way to help maintain consistency between environments.\n\nI chose all the above because I’m a big fan of [declarative](https://en.wikipedia.org/wiki/Declarative_programming)-style system definitions (as opposed to [imperative](https://en.wikipedia.org/wiki/Imperative_programming) ones). I think that is the right way to think about and build systems. But there are also other methodologies and lots of options for you to choose from (for example, PaaS is a wholly different type of orchestration, which technically fits into my definition above).\n\n**Configuration Management:**\n\nThis is something often mixed in with “orchestration”. In some ways that is understandable, because a declarative orchestration template such as what is used in all of the tools listed above is essentially just a YAML or JSON file. Once that file is stored in version control it becomes a part of your Definitive Media Library (to use an ITIL term). This turns it into an essential part of the configuration definition of your environment, required to reproduce it.\n\nAlthough we can view Configuration Management and Orchestration as a single topic, doing so makes the landscape of tools more confusing to understand. We’re better off dividing the two areas so that we can discuss them separately, which means we need an appropriate definition of Configuration Management to complement our understanding of Orchestration.\n\nSo when I talk about Configuration Management, I’m really referring to the internal configuration of a compute resource, such as a virtual machine or container. 
This means software such as programming languages and frameworks, components such as apache and nginx, databases if they are running on the compute instances (as opposed to some *aaS like [DynamoDB](https://aws.amazon.com/documentation/dynamodb/) or [Trove](https://wiki.openstack.org/wiki/Trove), in which case it falls more under “orchestration”), all the way down to files, folders and permissions.\n\n* [Docker](https://www.docker.com/sites/default/files/WP-%20Definitive%20Guide%20To%20Containers.pdf): by now Docker is many things, but the technology at its core is really a method of shipping containers from one host OS to another host OS. What’s inside the container remains the same regardless of the host Operating System, meaning that it solves the Configuration Management problem of how your application runs in different environments. [Containers are not a new idea](https://www.youtube.com/watch?v=coFIEH3vXPw) at all, but it was only with Docker that their real potential was finally unlocked, bringing consensus in the industry that this is now [without question the future of virtualization](https://www.opencontainers.org/). If you have not started containerizing your application yet and you’re still fumbling around with Virtual Machines, then you had better put it on your road map, or be prepared to let the competition run you over.\n* [Ansible](http://www.ansible.com/how-ansible-works): is probably the simplest Configuration Management tool out there for managing Virtual Machines. It’s extremely lightweight ([the tar.gz is less than 1MB!](https://github.com/ansible/ansible/releases)), and works over ssh (or winrm/powershell in the case of Windows) in push mode, or alternatively in pull mode if you want to do things like auto-scaling. It’s written in python and is “batteries included”, meaning that it has a load of inbuilt modules that work out of the box. 
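\n\nA minimal playbook sketch, to show the flavour (the `web` host group and the choice of nginx are illustrative assumptions, not from the original post):\n\n```yaml
# Hypothetical playbook: ensure nginx is installed and running on the "web" group.
- hosts: web
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```
\n\nRunning this twice changes nothing the second time, which leads to the next point.\n\n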
Ansible is [idempotent (or trivial to make idempotent), but it is not convergent](https://groups.google.com/forum/#!msg/ansible-project/WpRblldA2PQ/lYDpFjBXDlsJ). Ansible Tower is their paid-for version, which includes a centralised control and monitoring server that can be a real benefit in Production. The downsides of Ansible are that it can be slow compared to the alternatives, and you can occasionally run into cryptic errors that are difficult to decipher.\n* [Chef](https://www.chef.io/)/[Puppet](https://puppetlabs.com/): are the opposites of Ansible in many ways, considering they do the same job. From [the point where they initially diverged 8 or so years ago](https://news.ycombinator.com/item?id=911391), they basically spent the next 6 years copying each other’s features, to the point today where they are [stretching to differentiate themselves](https://www.scriptrock.com/articles/puppet-vs.-chef-revisited) once more. They’re both complicated, big, and bloated, but powerful (which is not necessarily a good thing, as that power tends to be abused rather than limited to what it was intended for). Their server-agent architecture can be run without the server, however you need to replace it with something else, which, in all the cases I’ve seen, ends up being Ansible. Both the chef-server and puppet-master are almost 500MB, which gets you nowhere without the rest of the software you need. If you’re already using puppet or chef then that’s fine, however if you’re new to virtualisation then choose Ansible.\n* [Saltstack](http://saltstack.com/): Where Chef and Puppet played the game of competitively diverging and converging over time, in many ways the same goes for Ansible and Saltstack. To begin with, Salt was a similar server-agent architecture to Chef and Puppet, but [salt-ssh](https://blog.logentries.com/2015/04/salt-ssh-for-remote-execution-of-states-and-modules/) is obviously mimicking the agentless architecture of Ansible. 
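\n\nWith salt-ssh, targets are listed in a roster file instead of registering minion agents; a sketch with a hypothetical host follows:\n\n```yaml
# Hypothetical salt-ssh roster entry: a target reachable over plain ssh, no agent.
web1:
  host: 192.0.2.10
  user: deploy
  sudo: True
```
\n\nSomething like `salt-ssh 'web1' state.highstate` then applies your states over ssh, much like an Ansible playbook run.\n\n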
In server-agent mode, Saltstack is faster than Ansible, and in all modes it is simpler to use and learn than Chef and Puppet, without suffering from the bloat.\n\nTo summarise the points above, the primary decision to make regarding Configuration Management is whether you’re at a stage where you’re [ready to make the leap to containerize](https://labs.ctl.io/how-to-migrate-legacy-applications-into-docker-containers/) your applications or not. So how on earth do you make that decision?\n\n1. First of all, you need to want to be on the bleeding edge of technology, because you feel you can handle the risk.\n2. Secondly, your organisation needs to be mature enough from a teamwork and culture perspective, as well as having some senior technical people who know enough about things like microservices, SOA and REST architectures.\n3. Lastly, you should not be spending the majority of your time firefighting or struggling with technical debt.\n\nIf you meet those 3 criteria then you’re in a good place to begin containerizing your applications. If you only fit 1 or 2 then you’re on the margins, and maybe you want to wait another 6-12 months so that you can fix some of your issues and let the other tools in the space mature. If you don’t fit any of those criteria then forget it, you have way bigger problems to solve.\n\nIf you are not ready to containerize but you still want to virtualize in VMs, then you will still need a tool like Ansible or Saltstack. I recommend against Chef or Puppet, unless you already have people who know these tools and know how to build systems with them without making a bigger mess. Even in that case, keep in mind that we can already see the end of the road for the Virtual Machine. And yes, I really mean that. 
[The only reason why we ended up with Virtual Machines as the dominant virtualisation technology is because of Windows](https://www.youtube.com/watch?v=coFIEH3vXPw), and now even [Microsoft has seen the light](http://venturebeat.com/2015/11/19/microsoft-launches-new-windows-server-2016-preview-with-hyper-v-containers/). That means you really want to carefully consider how much time and effort you are going to spend on something that is, at best, a stop-gap.\n\nWe’ll finish off this long series in Part 3"}],"intro":"This is the second of three posts about building an advanced deployment pipeline. ","slug":"building-a-portable-scalable-reusable-deployment-pipeline-for-an-arbitrarily-complex-environment-2","url":"/en/news/building-a-portable-scalable-reusable-deployment-pipeline-for-an-arbitrarily-complex-environment-2","lang":"en","filename":"building-a-portable-scalable-reusable-deployment-pipeline-for-an-arbitrarily-complex-environment-part-2","image":"/assets/uploads/8442_without_stripes.png"},{"title":"Building a portable, scalable, reusable Deployment Pipeline for an arbitrarily complex environment (part 3)","case_slug":"Building a portable, scalable, reusable Deployment Pipeline for an arbitrarily complex environment 3","intro":"This is the last of three posts about building an advanced deployment pipeline.","body":[{"type":"body_text","text":"CI servers themselves aren’t really much more than glorified remote script runners, and just because you have a CI server set up with some automated tests does not mean you are doing continuous integration. As I mentioned in part 2, the “integration” part of continuous integration actually occurs at the source-control level, whereby developers are [merging/rebasing their changes with the latest from the master branch](https://www.thoughtworks.com/continuous-integration). That integration should occur at least once per day, and then automated tests should be run to see if anything has broken. 
Only if every developer is doing this regularly can you say that you are actually doing continuous integration.\n\n* [Jenkins](https://jenkins-ci.org/): is still the most mature and widespread CI server around today. It has an active community with [over 1000 available plugins](https://wiki.jenkins-ci.org/display/JENKINS/Plugins#Plugins-Pluginsbytopic), giving it unrivalled flexibility and functionality. This is really great, because most problems are not new, so whatever your case may be, there is likely already a plugin to help you. It’s written in java and fully open source. If you have a complex system, it’s almost a certainty that jenkins will be able to handle it. On the downside, jenkins can be a pretty complicated beast to configure, and the user interface is pretty clunky and ugly. A few of the key pieces of software and plugins I always use with jenkins to build deployment pipelines are the [swarm plugin](https://wiki.jenkins-ci.org/display/JENKINS/Swarm%20Plugin), [jenkins job builder](http://docs.openstack.org/infra/jenkins-job-builder/) and the [cloudbees flow plugin](https://wiki.jenkins-ci.org/display/JENKINS/Build%20Flow%20Plugin).\n* [Thoughtworks Go](https://www.go.cd/): I think it’s a bit surprising that not many people seem to have heard of Go, considering the guys who wrote the book on [Continuous Delivery](http://www.amazon.com/dp/0321601912?tag=contindelive-20), [Jez Humble](https://www.thoughtworks.com/profiles/jez-humble) and [Martin Fowler](http://www.martinfowler.com/), both work for Thoughtworks. No surprise that Go is designed “out of the box” to be suitable for Continuous Delivery and building Deployment Pipelines. Since they [open sourced the product in 2014](https://www.thoughtworks.com/news/go-continuous-delivery-now-available-as-free-open-source), you can see how healthy the [community is on github](https://github.com/gocd/gocd). 
It has probably one of the nicest interfaces of any CI tool out there, however the rate of issues being created versus resolved is something to keep an eye on.\n* [Team City](https://www.jetbrains.com/teamcity/): If you have used one of [Jetbrains IDEs](https://www.jetbrains.com/products.html#type=ide), then you’re probably also familiar with Team City. They produce good tools that are popular with many developers. It’s no surprise that Team City is a solid CI server as well, which of course integrates seamlessly with your IDE. You can create dependencies via build chaining between individual jobs to set up a deployment pipeline. Team City is a capable tool and free for a small number of builds and agents, however it is closed source, and if you’re expecting to run at large scale it’s going to become pretty [expensive](https://www.jetbrains.com/teamcity/buy/).\n\n**Artifact Repository:**\n\nProbably one of the least exciting topics is the storing and retrieving of build artifacts, however it is important, especially for compiled languages. If releasing to production means pulling the latest changes and then compiling, I’m sorry, but you’re just doing it wrong. [Each time you compile your source, you’re more likely to end up with a different binary](http://superuser.com/questions/639351/does-recompiling-a-program-produce-a-bit-for-bit-identical-binary) than not, even if the source has not changed. 
That means it is possible for the runtime execution of your program to be different, and thus any [testing and verification can really only be guaranteed for a certain binary](http://blog.arungupta.me/build-binaries-only-once-continuous-deployment/), and not the source commit.\n\nIf you’re using something like php then this is potentially less of an issue, however since [Facebook started turning PHP into Java](http://hhvm.com/), even that is probably not true in all cases.\n\n* [Artifactory](https://www.jfrog.com/open-source/#os-arti): is a flexible repository that in its free version can store java packages such as jar, war and ear, but in its paid-for version can also mirror node npm, python pypi and ruby gems, as well as OS packages such as rpm and deb! It also integrates with CI servers such as Jenkins, Team City and Go. It’s open source, which is nice, but to get all the goodies you will need to [fork over the cash](https://www.jfrog.com/artifactory/buy-now/).\n* [Nexus](http://www.sonatype.com/nexus/solution-overview/nexus-repository): will do pretty much all the same things that Artifactory will do, however the additional support for other package types comes in the open source version. It will also integrate with all the major CI servers and is actually a bit cheaper than Artifactory.\n* [rpm](https://wiki.centos.org/HowTos/CreateLocalMirror)/[deb](http://www.aptly.info/) mirrors (and other OS packages): I mention this separately because, just like controlling your application dependencies is important, so is controlling your OS dependencies. 
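\n\nOnce you run a local mirror, pointing a machine at it is a one-line sources entry; the hostname and suite below are hypothetical placeholders:\n\n```
# Hypothetical /etc/apt/sources.list entry: fetch packages from an internal mirror
deb http://mirror.internal.example/debian jessie main
```
\n\nThe same idea applies to yum with a `baseurl` pointing at your rpm mirror.\n\n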
We’ve probably all been in the situation where the dependency we were downloading somewhere off the internet went missing, or when we got an update that unexpectedly broke the build or brought down production (because we didn’t test it - oops!).\n\n**Workflow Visualisation:**\n\nPerhaps workflow tools might seem like an afterthought in the context of a Deployment Pipeline, but unfortunately this is not so. When the topic comes up of how to manage releases and what the definition of “done” is, these tools are a necessary link in the chain.\n\n* [Jira](https://www.atlassian.com/software/jira): Atlassian Jira is a popular and powerful issue management and workflow visualisation tool. It is highly configurable, which means it’s great for handling all sorts of agile and ITIL-style processes to fit your organisation, but that is also often [where it goes wrong](https://jira.atlassian.com/secure/attachment/67504/Workflow%20with%20labels.png), resulting in a configuration nightmare. Jira’s power is also its curse, however when used correctly it is a fine and effective tool with a lot of in-built features and reports. It supports both scrum and kanban, but is unfortunately opinionated in these areas, so if you are using some kind of blended “scrumban” then you might run into trouble. Jira can be integrated with quite a few different tools, but of course works best if you stick to the Atlassian suite.\n* [Trello](https://trello.com/): is a lightweight cloud service for Kanban-style workflow management. If you don’t want the hassle of complex workflows and just want to get stuff done, then Trello could be a good fit for you, if you can live without customizations. Through other services such as [Zapier](https://zapier.com/), you can integrate different services with Trello so that you can get a high level overview of progress. 
You can also upgrade the service to business class to get access to [power-ups](https://trello.com/power-ups).\n* [Kanban vs Scrum and DevOps](https://www.scriptrock.com/blog/devops-kanban-match-heaven): I feel it’s worth noting that in my experience, Scrum is not ideal for doing Continuous Delivery and DevOps. Things like time-boxed sprints, backlog grooming, sprint planning, and stakeholder demos all start to feel quite restrictive in their format and routine, especially when you want the flexibility of releasing every day. [Kanban is better suited for Continuous Delivery](http://www.swiftkanban.com/kanban/kanban-for-devops-continuous-delivery/), and I’d go out on a limb to say that I think the DevOps community as a whole is moving towards support of Kanban over Scrum.\n\n**Monitoring and metrics:**\n\nThere are basically two forms of monitoring and metrics that are important. You have real-time monitoring that you need in order to react to incidents from production events, and then you have metrics for analytical and statistical purposes (aka [Business Intelligence](https://en.wikipedia.org/wiki/Business_intelligence)) that can come from log files, database reports, or both.\n\n* [Prometheus](http://prometheus.io/): is an open source monitoring tool built by Soundcloud. I first learned about Prometheus from my friend [Matthias Grüter](https://www.youtube.com/watch?v=Z0LlilNpX1U&feature=youtu.be) at a Stockholm DevOps meetup and thought it looked quite impressive. It seemed to actually offer something new and better than a lot of the other monitoring tools that had been around for a while, like [Nagios](https://www.nagios.org/) and [Graphite](http://graphite.wikidot.com/start). It has instrumentation for lots of different languages, support for different frontends and backends, and is easy to set up. 
Maybe it won’t do everything you want but it certainly should be a good start.\n* [ELK](https://www.elastic.co/products): meaning [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html), [Logstash](https://www.elastic.co/guide/en/logstash/current/introduction.html) and [Kibana](https://www.elastic.co/guide/en/kibana/current/introduction.html), which is a powerful set of tools for performing logfile analysis. ELK is gaining wide acceptance because the tools work well and are open source with a vibrant community. Logstash will handle almost any log you can ship to it, such as web logs, database logs, syslogs and windows event logs, which can then be stored and indexed by Elasticsearch and finally displayed by Kibana. Even though it’s 3 separate components, they are all designed to integrate seamlessly with each other. Compared to a paid, closed-source service like Splunk, it’s hard to imagine that the latter will survive too much longer without doing something drastic.\n* [Pentaho](http://www.pentaho.com/): Is an open source BI platform that offers a [free community edition](http://community.pentaho.com/) as well as an enterprise product with [lots of heavy stuff](http://www.pentaho.com/product/product-overview). If your needs aren’t met by the free version, then at least you’ll get to try and feel the product to see if you need all the power that’s offered in the paid version. I’m not sure what their pricing and licensing is like, but there aren’t too many companies in this space that offer products that look this good, are open source, and have free community editions.\n\nIf you made it this far, I hope it has been a worthwhile read. At some point in the near future I hope to be able to open source some code to show how all these pieces can be assembled, but we’ll see how that goes. 
Obviously the amount of work involved to get the basics up and running is not really something that you can whip up in just a weekend.\n\nAnyway, if there are certain areas where you wished to have more information and options about tooling, maybe [this list will help you](https://github.com/kahun/awesome-sysadmin). Otherwise, if you have questions or comments you can shoot me an email."}],"tags":["pipeline"],"slug":"building-a-portable-scalable-reusable-deployment-pipeline-for-an-arbitrarily-complex-environment-3","url":"/en/news/building-a-portable-scalable-reusable-deployment-pipeline-for-an-arbitrarily-complex-environment-3","lang":"en","filename":"building-a-portable-scalable-reusable-deployment-pipeline-for-an-arbitrarily-complex-environment-part-3","date":"2015-12-23","image":"/assets/uploads/8442_without_stripes.png"},{"date":"2015-12-18","title":"Building a portable, scalable, reusable Deployment Pipeline for an arbitrarily complex environment (Part 1)","case_slug":"Building a portable, scalable, reusable Deployment Pipeline for an arbitrarily complex environment","tags":["pipeline"],"body":[{"type":"body_text","text":"Back in May I did a presentation about [“The search for the Holy Grail”](https://www.youtube.com/watch?v=1I7zyaQ3Dak) at a DevOps meetup in Stockholm. 
The alternative name for the presentation could have been the title of this blog post, but that wouldn’t have allowed me to make gratuitous monty python references.\n\n**Let’s sort through the buzzwords and really clarify exactly what we’re talking about:**\n\n* Portable: meaning portable between cloud providers (AWS and Elastx’s Openstack platform, for example)\n* Scalable: meaning that adding more people/teams/components to the system should have no worse than a linear relationship with the mean time from commit/push to deployment in production\n* Reusable: meaning the software architecture of the Deployment Pipeline should be built in such a way as to make the code generic and adaptable to other systems with minimal modification.\n\nIt’s fair to ask whether it is really possible, or even worthwhile, to build such a Deployment Pipeline.\n\nObviously my answer to that is a resounding “yes”! And the explanation is simple, really. In the almost 4 years I’ve been working with Continuous Delivery and DevOps, the basic problems that need solving have been essentially the same every time, in the following order of importance:\n\n1. Fix branching and merging so there is a clear and simple model with a clean master and feature branches, with only a single path to merge changes to master\n2. Setup CI and code reviews as a gateway through which all branches must pass before merging to master\n3. Introduce tools for configuration management and automated deployment as a means to reproduce the system\n4. 
Continue to hammer points 1, 2, and 3 into everyone’s heads while improving automated testing and configuration management until you can rebuild the system from scratch\n\nSo if it’s basically the same thing every time, then why not simplify the whole process and build a pipeline with interchangeable parts that will save you a whole load of work down the line?\n\nThat’s what I thought.\n\nThis is going to be a tools- and workflow-focussed post, so I want to begin with an overview of the tools available to us to build our pipeline. At the end of the article I’ll explain how to look out for some of the anti-patterns I’ve encountered in these types of discussions.\n\nProbably one of the most complex things in the DevOps space is keeping track of the explosion of tooling in the last few years. An interesting overview of this is portrayed in [Xebialabs periodic table of devops tools](https://xebialabs.com/periodic-table-of-devops-tools/).\n\nA periodic table, where everything sits in a neat little box, paints an overly simplistic picture however, because the reality is that the tools in the DevOps landscape require a [venn-diagram](https://en.wikipedia.org/wiki/Venn_diagram) something more like this…"},{"type":"body_image","src":"/assets/uploads/deployment-pipeline-blog-pt1.png"},{"type":"body_text","text":"Unfortunately this diagram is only for illustrative purposes. I do think that it would be a worthy exercise to invest the time to properly classify the categories and the overlap between them (at the very least you’d probably be on the front page of hacker news for a couple of days).\n\nBut that will have to wait for another day.\n\nLet’s go through what I think are the critical elements of a Deployment Pipeline:\n\n1. Source Control (including code review)\n2. Orchestration (of cloud resources and service discovery)\n3. Configuration Management (for consistent and reproducible environments)\n4. Continuous Integration (including automated testing)\n5. 
Artifact repository (for storing and fetching build artifacts)\n6. Workflow Visualisation (a dashboard where “done” means released to production)\n7. Monitoring and metrics (to ensure both speed and quality are improving)\n\nThat is a big list of stuff to cover, so we’re going to have to sharpen our knife and fork before we [eat this elephant](http://www.pickthebrain.com/blog/how-to-eat-an-elephant/).\n\nBefore we get into specific tooling, I want to take a moment to note how others are tackling this problem. There are many degrees of freedom, so in case you don’t find my approach useful, hopefully one of these other methods might be more suitable for your use case.\n\n(I have no affiliations to any of the companies or services listed below.)\n\n1. The first project requiring mention is [software-factory](https://github.com/redhat-cip/software-factory). It is based on the very sophisticated tooling used in the [Openstack](http://www.openstack.org/software/) project, where a lot of my ideas and inspiration come from (Elastx runs Openstack). Software Factory gets full marks for scalability, however it is not portable because it’s tied to Openstack. It is packaged in such a way as to make it reusable, but the tools are tightly integrated, so if you want to swap one of them for something in your existing toolchain, you’re going to have some work to do.\n2. [Codeship](https://codeship.com/features) have created a company around supplying “Deployment Pipelines as a Service”, which I think is a pretty good business model - it clearly demonstrates that there are a lot of similarities between one deployment pipeline and another. They also have a great technical blog with lots of good tips focussed on CI/CD and automation. They definitely earn points for flexibility and reusability, but they are not very portable because you’re tied to AWS (or Heroku, which also runs on AWS).\n3. 
[Distelli](https://www.distelli.com/features) are a competitor to Codeship with another “DPaaS” that appears more flexible in its options around [endpoint portability](https://www.distelli.com/deployment). They look like they have a fairly flexible “plug and play” architecture, but I don’t know how scalable their solution is for building complex environments.\n4. I always find myself especially attracted to [Travis-ci](http://docs.travis-ci.com/user/getting-started/) and [Github](https://github.com/features) because of their sweet integration, lots of flexibility, and the fact that it’s free for open-source projects. However, if you have a complex application you’re going to run into problems.\n\nThose are just a few alternative approaches to solving this problem, and one of them might be more suitable depending on your situation.\n\nIf your business is primarily focussed on a [CMS](http://www.rackspace.com/knowledge_center/article/cms-comparison-drupal-joomla-and-wordpress) to run a website and you don’t really have to worry about more complexity than [scaling](http://www.virtualizationadmin.com/blogs/lowe/news/scale-up-vs-scale-out-the-key-differences-333.html) and pushing content to a caching layer or [CDN](http://www.cdnreviews.com/cdn-comparison/), then your use case may fit better into one of the options listed above. That is, unless you’re doing something really crazy like orchestrating [Docker containers with Kubernetes on Apache Mesos](http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html) and running some Big Data business analytics on your [Hadoop-Elasticsearch backend](https://www.elastic.co/guide/en/elasticsearch/hadoop/current/arch.html). 
And if that’s your use case then that is totally badass, but it would be pretty weird if you didn’t already have a smoothly working deployment pipeline at that scale.\n\nAnother way to phrase the above: if you’re building your services on a PaaS and you’re living in a happy place, then there’s probably no reason to make trouble for yourself. However, if you have to worry about IaaS, network segmentation, patching operating systems, a mixed environment with fixed hardware and virtualised services, data protection or legal compliance like [PCI-DSS](https://www.pcisecuritystandards.org/security_standards/), [EFPIA](http://www.efpia.eu/), or [HIPAA](http://www.hhs.gov/ocr/privacy/hipaa/understanding/index.html), then you should continue reading.\n\nIn Part 2 we’ll get into tool selection with pros and cons…"}],"intro":"This is the first of three posts about building an advanced deployment pipeline.","slug":"building-a-portable-scalable-reusable-deployment-pipeline-for-an-arbitrarily-complex-environment","url":"/en/news/building-a-portable-scalable-reusable-deployment-pipeline-for-an-arbitrarily-complex-environment","lang":"en","filename":"building-a-portable-scalable-reusable-deployment-pipeline-for-an-arbitrarily-complex-environment-part-1","image":"/assets/uploads/8442_without_stripes.png"},{"date":"2015-06-23","intro":"We’ve created a simple one-click Wordpress install for Virtuozzo (formerly Jelastic) that’s prepared with some basic fine tuning for Nginx, PHP and Wordpress itself. It comes preinstalled with the w3 total cache plugin and the Hannari Blue theme.","body":[{"type":"body_text","text":"Please [log in](https://app.jelastic.elastx.net/) or [sign up](https://app.jelastic.elastx.net/?signup=true) to Virtuozzo (formerly Jelastic) and find the ELASTX Wordpress install in the marketplace; search for ELASTX and you will find it. If you’re a new user, there will be a few steps to register for our service. 
Don’t worry, you can try this for free by contacting our support and asking for a Development account. No strings attached.\n\nWhile the environment is created in the background, follow the [video](https://youtu.be/FxK-1H0xqyM) for the simple steps to take full advantage of the caching capabilities Wordpress has.\n\nIf you want to join the collaboration on making this one-click install of Wordpress better, don’t hesitate to check out the code for it and contribute!\n\nCheck it out at "}],"tags":["wordpress"],"title":"Simple, yet performant Wordpress sites on Virtuozzo PaaS","case_slug":"Simple yet performant Wordpress sites on Jelastic PaaS","slug":"simple-yet-performant-wordpress-sites-on-jelastic-paas","url":"/en/news/simple-yet-performant-wordpress-sites-on-jelastic-paas","lang":"en","filename":"simple-yet-performant-wordpress-sites-on-jelastic-paas","image":"/assets/uploads/empty_switch_off.jpeg"},{"date":"2015-06-15","title":"Get Drupal 8 up and running within 5 minutes","case_slug":"Get Drupal 8 up and running within 5 minutes","tags":["drupal"],"intro":"Drupal 8 is about to reach its first release candidate. We are very excited and would like to celebrate it, therefore we have created a one-click installer so you can test Drupal 8 on our Virtuozzo (formerly Jelastic) platform!","body":[{"type":"body_text","text":"Please [log in](https://app.jelastic.elastx.net/) or [sign up](https://app.jelastic.elastx.net/?signup=true) to Virtuozzo (formerly Jelastic) and find the ELASTX Drupal 8 install in the marketplace; search for ELASTX and you will find it. If you don't have an account already you will get instructions on how to apply for a free Development account. Just tell us that you would like to try Drupal 8 and we will get you started. You can upgrade to a full version of our Virtuozzo PaaS at any given time and pay with your credit card. 
Enjoy."}],"slug":"get-drupal-8-up-and-running-within-5-minutes","url":"/en/news/get-drupal-8-up-and-running-within-5-minutes","lang":"en","filename":"get-drupal-8-up-and-running-within-5-minutes","image":"/assets/uploads/article-hero-10.jpg"},{"date":"2015-03-31","title":"Backups in OpenStack cloud","case_slug":"Backups in OpenStack cloud","tags":["backup"],"intro":"I have worked a lot with backups over the years and one thing I have learned is that managing a traditional backup solution often takes too much time, consumes too much resources and costs too much.\n\nBut what about backups in a cloud environment? I still have important data and a lot of servers that need to be up and running. True, but if you do it right from the beginning you can take a fundamentally different approach to data protection and like many other things in the cloud it is a huge improvement.\n\nThese are my general rules that I will describe more in details.","body":[{"type":"body_text","text":"\n## 1. Do not use a traditional backup system\n\nYou do not need it, It will save you time and money not to use it.\n\n## 2. Make servers stateless\n\nStore no data that you cannot recreate on the server if you do not have to, especially not the root disk. A web application server should be stateless but a database server is stateful. When you have a stateful server make the root drive stateless and store you stateful data on an Openstack Volume. If your logs are important to you, do it right from the beginning and ship them to a log server instead of backing them up. \n\n## 3. Automate server provisioning\n\nIf you automate the server provisioning:\n\na. you do not need to backup the OS, application and configuration files.\n\nb. you get the best documentation you can have of a server and you should store the recipe in source code repo.\n\nc. it will take just minutes to provision new servers for recovery, horizontal scaling and test.\n\n## 4. 
Use native application backup tools\n\nMy experience is that you will get the most reliable and efficient backups by using the application's native backup feature. Another advantage is that the application owner will have control over both the backup and recovery procedures. Schedule the backups locally in the application, or in the OS with Unix/Linux cron or the Windows task scheduler. Don’t forget to copy the data to storage that is in a different availability zone or region. The best solution is normally to use an object storage solution like Swift or similar. Modern applications often have plugins to support object storage natively.\n\n## 5. Monitor your backups\n\nThe challenge with locally scheduled backups is that you do not have a central location where you can monitor that all backups are running successfully. But there is an easy solution for that as well. Use an event monitoring solution that you provide yourself, or use a SaaS like Pushmon, Opsgenie Heartbeat or similar. \n\n## 6. Automated recovery tests\n\nWhen you automate the provisioning it is very easy to test recovery as well. And if you can use it in your daily work, that's even better. One example is to build test environments using the production backup data. Automate and schedule it to get it done, and then use event monitoring to make sure it works.\n\n## Provisioning\n\nThere are many ways of provisioning a server, but I will just show you how you can use the native OpenStack tools to do it. If it is a single server you can just use Nova, and if there are multiple servers and/or other infrastructure services required you can use Heat.\n\nWe will use a single server and nova in this example to keep it simple.\n\nWe create a simple nginx web server on CentOS. 
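Provisioning like this passes a cloud-init user_data file to nova boot. The WebServer.user_data file used in the next example is not shown in the post, but as an illustration, a minimal cloud-config for an nginx server on CentOS might look something like this (hypothetical contents, not the original file):

```
#cloud-config
# Hypothetical sketch of a WebServer.user_data file (the original is not
# shown here): install nginx and start it on first boot.
packages:
  - nginx
runcmd:
  - [ service, nginx, start ]
  - [ chkconfig, nginx, on ]
```

On CentOS 6 the nginx package comes from EPEL, so a real user_data file would also need to enable that repository or point at another package source.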
Below is the nova command used; it references the user_data input file.\n\n`nova boot webserver01 --flavor m1.small --image centos-6.5-20140117.0 --key-name my-keypair --user-data ./WebServer.user_data`\n\n## Server (Instance) Backup and recovery\n\nYes, there are occasions where it is not possible or practical to automate server provisioning or to do native application backups. No worries, there is a very easy way to do server backups in OpenStack. In this example I am using the CLI client, but you can do it with the API as well.\n\nIf you have a stateless server and you need to recover the node, you can just do a rebuild and the server will be up and running again with the same IP, name, config etc.\n\nWith the nova rebuild command you just enter the server name and image name. If you have verified that the user_data is compatible with another OS, you can even use this to change or upgrade the OS version:\n\n`nova rebuild webserver01 centos-6.5-20140117.0`\n\nIf you want to create a backup you use the nova backup command:\n\n`usage: nova backup `\n\nIf we want to make daily backups with a 1 week retention and a weekly backup with a 4 week retention, this is what we should schedule in cron, for example.\n\n``0 2 * * 1,2,3,4,5,7 nova backup webserver01 webserver01-daily-`date +%F` daily 6``\n\n``0 2 * * 6 nova backup webserver01 webserver01-weekly-`date +%F` weekly 4``\n\nWhen running a backup, a snapshot is made of the server and that snapshot is then stored as an image in Openstack. The backup image can then be used to boot new servers or to recover the current one.\n\nHere is how to recover the server from a backup image:\n\n`nova rebuild webserver01 webserver01-daily-2014-12-12`\n\n## Volume backup and recovery\n\nThere are volume snapshot and backup functions available as well. 
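Before going into the details, here is the overall shape of the attached-volume backup chain as a dry-run sketch. It only prints the cinder commands (an illustration, not part of the original post); the resource names follow the examples used below, and `<snapshot-id>` stands for the ID returned by the snapshot step:

```shell
#!/bin/sh
# Dry-run sketch of the attached-volume backup chain:
# snapshot -> volume from snapshot -> backup in Swift.
# In real use, each step returns an ID that the next step needs.
VOL="dbserver-vol01"              # volume name from the examples below
SNAP="${VOL}-backup"              # snapshot (and temporary volume) name
BACKUP="${VOL}-$(date +%F)"       # dated backup name stored in Swift

echo "cinder snapshot-create ${VOL} --force True --display-name ${SNAP}"
echo "cinder create --snapshot-id <snapshot-id> --display-name ${SNAP} 10"
echo "cinder backup-create --container volume-backup --display-name ${BACKUP} ${SNAP}"
```

Scheduling this in cron alongside the server backups above keeps the naming consistent across instance and volume backups.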
If you have stateful data on a Volume and there is no advantage in using the native application backup, then you should look at the OpenStack Volume backup functionality.\n\n**Create a volume**\n\nIn this example we will create a 10GB volume and attach it to our database server.\n\n`usage: cinder create [--display-name ] `\n\n`cinder create --display-name dbserver-vol01 10`\n\n`usage: nova volume-attach []`\n\n`nova volume-attach dbserver01 c0e9e951-e33f-488e-a605-95d0ecc728e1 /dev/vdb`\n\nCreate a filesystem, mount it and add some data.\n\n**Snapshot a volume**\n\nA snapshot is great for creating fast point-in-time copies of volumes. It will help you to protect your data from logical errors, but it will probably not help you if the physical disk system breaks.\n\nTo create a snapshot of a volume that is attached to a server you need to add the “--force True” option.\n\n`usage: cinder snapshot-create [--force ] [--display-name ] `\n\n`cinder snapshot-create --force True --display-name dbserver-vol01-snapshot dbserver-vol01`\n\nTo attach a snapshot to a server you first need to create a volume from the snapshot.\n\n`cinder create --snapshot-id 4e87739b-4369-430d-bcdb-034b0f0ff861 --display-name vol-from-snapshot 10`\n\nThen you can attach it as a new device or replace the current device.\n\n`nova volume-attach dbserver01 de75c58b-f311-4e7b-acc7-ae9c2cfeb18b /dev/vdc`\n\n**Backup a volume**\n\nThe backup feature is a lot like the server (instance) root disk backup. It will make a copy of the volume and store it as a file, but now directly on Swift object storage.\n\nThe problem with the current backup feature is that you cannot make a backup of an attached volume. But if you first take a snapshot, you can make a backup from that snapshot, though in the process you need to convert the snapshot to a volume first. 
Not very straightforward, but it works.\n\nHere is the procedure:\n\nCreate a snapshot\n\n`cinder snapshot-create dbserver-vol01 --force True --display-name dbserver-vol01-backup`\n\nCreate a volume from the snapshot\n\n`cinder create --snapshot-id 0ac15385-6fdb-42df-b779-2869c4518ed5 --display-name dbserver-vol01-backup 10`\n\nMake a volume backup\n\n`cinder backup-create --container volume-backup --display-name dbserver-vol01-2015-03-30 dbserver-vol01-backup`\n\nHere is how you can list your backups\n\n`cinder backup-list`\n\nTo recover a volume you need to detach the volume from the server, recover the data and then reattach the volume again.\n\n`nova volume-detach dbserver c0e9e951-e33f-488e-a605-95d0ecc728e1`\n\n`cinder backup-restore --volume-id dbserver-vol01 e8ec1eee-a9cb-4135-b8a9-21173bd965db`\n\n`nova volume-attach dbserver c0e9e951-e33f-488e-a605-95d0ecc728e1`\n\nIn the nova commands you specify the volume id, and in the cinder backup-restore command you also need the backup id.\n\n## To sum it up\n\nDo it right from the beginning and you can retire your old backup solutions; you won't even need to use the built-in backup features in OpenStack."}],"slug":"backups-in-openstack-cloud","url":"/en/news/backups-in-openstack-cloud","lang":"en","filename":"backups-in-openstack-cloud","image":"/assets/uploads/server_detail.jpg"},{"date":"2014-10-31","title":"Spin up CoreOS on OpenStack using Vagrant","case_slug":"Spin up CoreOS on OpenStack using Vagrant","tags":["coreos","openstack","vagrant"],"intro":"CoreOS is a new Linux distribution that has been rearchitected to provide features needed to run modern infrastructure stacks. 
The strategies and architectures that influence CoreOS allow companies like Google, Facebook and Twitter to run their services at scale with high resilience.","body":[{"type":"body_text","text":"Just this week I tried out the Vagrant Openstack plugin to spin up CoreOS instances on [Elastx ops:IaaS](https://elastx.se/en/openstack/).\n\n## Findings & issues\n\n1. Don’t destroy your Vagrant-created Openstack instances via Openstack. If you do, all your vagrant commands will say ”Instance could not be found” and there seems to be no sane remedy. (Cleaning your ~/.vagrant.d/ files works.) The same thing has happened a few times when I do vagrant destroy, where the instance is removed according to Openstack but lingers in Vagrant’s metadata.\n2. Be thorough with your preparations, especially the ssh configuration. If you skip the key name and the public and private keys, they’ll be generated in Openstack and then can’t be exported.\n3. The Vagrant Openstack plugin (at least the one used in this post) is not ”official” according to the Openstack foundation. More info [here](https://github.com/mitchellh/packer/issues/776). Thanks Philip!\n\n## Running it\n\nWhen up and running you do:\n\n`$ vagrant up --provider=openstack`\n\ninstead of\n\n`$ vagrant up`\n\nAfter that everything is as usual. In addition you have plugin-specific command line additions:\n\n```\n$ vagrant openstack --help\nUsage: vagrant openstack command\nAvailable subcommands:\n image-list List available images\n flavor-list List available flavors\n network-list List private networks in project\n floatingip-list List floating IP and floating IP pools\n volume-list List existing volumes\n```\n\n## Installing the plugin\n\nThere are a few different Vagrant Openstack plugin choices, like vagrant-openstack and vagrant-openstack-plugin, all available to be installed. 
The one to use, however, is [vagrant-openstack-provider](https://github.com/ggiamarchi/vagrant-openstack-provider).\n\nTo install:\n\n`$ vagrant plugin install vagrant-openstack-provider`\n\nTo run only the plugin command line additions you need just a Vagrantfile with Openstack connection configuration like the one below:\n\n```\nrequire 'vagrant-openstack-provider'\nVagrant.configure('2') do |config|\n config.vm.box = 'openstack'\n config.vm.provider :openstack do |os|\n os.openstack_auth_url = 'https://ops.elastx.net:5000/v2.0/tokens'\n os.username = ''\n os.password = ''\n os.tenant_name = ''\n end\nend\n```\n\n## Provisioning\n\nIn order to provision a real CoreOS node we need more preparation and a more complete Vagrantfile. We need:\n\n* to generate a key pair to import to Openstack for use with vagrant ssh\n* a username for logging in to the image\n* to figure out the floating IP pool to be able to connect to the provisioned instance at all\n\nLet's go to work:\n\n1. Generate a key pair. \n\n\n```\n $ ssh-keygen -t rsa -f test.key\n $ ssh-keygen -y -f test.key > test.key.pub\n```\n\n Put your private and public key files next to your Vagrantfile.\n\n2. Import the public key into Openstack. Log in to Openstack and import the public key under Access & Security, Key Pairs.\n3. 
Figuring out the IP pool might actually be easier via vagrant than through the Openstack GUI.\n\n\n```\n$ vagrant openstack floatingip-list\n+-------------------+\n| Floating IP pools |\n+-------------------+\n| test-net-01 |\n+-------------------+\n```\n\n**So let’s look at a full sample Vagrantfile:**\n\n```\n# -*- mode: ruby -*-\n# vi: set ft=ruby :\n\nrequire 'vagrant-openstack-provider'\n\nVagrant.require_version \">= 1.6.0\"\n\nVagrant.configure(\"2\") do |config|\n config.vm.box = \"doesnt_really_matter_but_is_required\"\n config.vm.box_version = \">= 308.0.1\"\n config.ssh.username = ''\n config.ssh.private_key_path = '/test.key'\n\n config.vm.provider :openstack do |os|\n os.openstack_auth_url = 'https://ops.elastx.net:5000/v2.0/tokens'\n os.username = ''\n os.password = ''\n os.tenant_name = ''\n os.flavor = 'm1.small'\n os.image = 'coreos-444.4.0'\n os.floating_ip_pool = 'test-net-01'\n os.keypair_name = 'test'\n os.public_key_path = '/test.key.pub'\n end\n\n config.vm.define vm_name = \"coreos-vagrant-01\" do |config|\n config.vm.hostname = vm_name\n config.vm.network :private_network, ip: \"172.17.8.101\"\n end\nend\n```\n\nYou’re now good to go.\n\n```\n$ vagrant up --provider=openstack\nBringing machine 'coreos-vagrant-01' up with 'openstack' provider...\n==> coreos-vagrant-01: Finding flavor for server...\n==> coreos-vagrant-01: Finding image for server...\n==> coreos-vagrant-01: Launching a server with the following settings...\n==> coreos-vagrant-01: -- Tenant : elastx.se\n==> coreos-vagrant-01: -- Name : coreos-vagrant-01\n==> coreos-vagrant-01: -- Flavor : m1.small\n==> coreos-vagrant-01: -- FlavorRef : bff4c362-1a64-4895-bcbe-89b437815934\n==> coreos-vagrant-01: -- Image : coreos-444.4.0\n==> coreos-vagrant-01: -- ImageRef : be5c10e9-80c5-4646-80bf-c0123243a4ef\n==> coreos-vagrant-01: -- KeyPair : test\n==> coreos-vagrant-01: Waiting for the server to be built...\n==> coreos-vagrant-01: Using floating IP 88.80.174.175\n==> 
coreos-vagrant-01: Waiting for SSH to become available...\n/Users/gungus/.vagrant.d/gems/gems/vagrant-openstack-provider-0.4.1/lib/vagrant-openstack-provider/action/create_server.rb:340:in `initialize': Network is unreachable - connect(2) (Errno::ENETUNREACH)\n```\n\n… I ignore the error; ssh seems to work fine.\n\n```\n$ vagrant ssh coreos-vagrant-01\nCoreOS (beta)\ncore@coreos-vagrant-01 ~ $\n```\n\nThat's all for now. Feel free to send us an email at if you have any questions. If you are interested in trying our ops:IaaS please [contact support](https://support.elastx.se/). Good luck with spinning up your CoreOS instances. \n"}],"slug":"spin-up-coreos-on-openstack-using-vagrant","url":"/en/news/spin-up-coreos-on-openstack-using-vagrant","lang":"en","filename":"spin-up-coreos-on-openstack-using-vagrant","image":"/assets/uploads/glowing_400g_switch.jpeg"},{"date":"2013-12-16","tags":["wordpress"],"title":"Easy High Availability","case_slug":"Easy High Availability","intro":"If your application is business critical and you think high availability and disaster protection are too complex and expensive, then this is for you. We will show you how to set up two sites and use a couple of SaaS services for site fail-over and monitoring. These powerful tools will cost you the massive amount of about $10 per month.","body":[{"type":"body_image","src":"/assets/uploads/easy-high-availability-1.png"},{"type":"body_text","text":"\n\nSo this is how it will work. By using a CDN (Content Delivery Network) and a monitoring system that will tell the CDN where to send the traffic, we will get a powerful and easy-to-use solution.\n\n1. Create sites\n\nYou will create two sites, preferably at two different vendors or regions, to make sure both sites will never be down at the same time. A good example would be two Virtuozzo (formerly Jelastic) sites, Elastx in Sweden and one of the other 14 sites available around the world. 
Deploy your application on both sites and set up data replication between the sites.\n\nYou can choose to have an Active/Active or an Active/Passive setup.\n\nI will not cover database replication now, but I will get back with a new blog post discussing some options and recommendations. Please contact us if you want to discuss options for data synchronization, as it depends a lot on the application and database you use.\n\nIn this example we created two sites with the following URL and IP.\n\n[http://site1.jelastic.elastx.net](http://site1.jelastic.elastx.net/) 88.80.172.123\n\n[http://site2.jelastic.elastx.net](http://site2.jelastic.elastx.net/) 88.80.172.124\n\nAnd the main site address we will use is [http://jelastic.se](http://jelastic.se/)\n\n1. Configure CDN\n\nWe need an external service to redirect traffic to the sites that are available. By using a CDN you will at the same time also get better response times from other geographic regions. In this example we are using Cloudflare. They have a free plan and also a very nice price model for their other plans. They also have good geographic coverage, with a local site here in Sweden.\n\n1. Register an account at \n2. Sign in and follow the instructions on how to transfer your domain to Cloudflare; it will take up to 24h. You have to transfer your domain to give Cloudflare control of your records and also to protect it from DNS attacks.\n3. 
Save the account API key for later use; you will find it under “Account”."},{"type":"body_image","src":"/assets/uploads/easy-high-availability-2.png"},{"type":"body_text","text":"We are using public IP addresses in this example, but you can use a shared IP and add CNAME records instead of A records.\n\n\\- If you want to configure an Active/Active solution, add two A records with the same name.\n\n\\- If you want an Active/Passive solution, you will add a single A record.\n\nYou can add a CNAME for www or other."},{"type":"body_image","src":"/assets/uploads/easy-high-availability-3.png"},{"type":"body_text","text":"Now you need to get the ID for your DNS entries using the Cloudflare API. Here is how you do it using Curl, which is available for every OS. Replace tkn with your API key, email with your email address, and z with the domain name.\n\n`curl https://www.cloudflare.com/api_json.html -d \"a=rec_load_all\" -d \"tkn=1f2eedac1c9bd7071f459effd8ad313bb38da\" -d \"email=joakim.ohman@elastx.se\" -d \"z=jelastic.se\"`\n\nYou can pipe the result to “python -mjson.tool” to get a pretty output.\n\nFind the “rec_id” for the domain record(s).\n\n1. Configure Monitoring\n\nWe want to monitor the sites and we also need a way to take action when a site goes down. In this example we are using site24x7, which is the only SaaS monitoring service I have found where you can configure custom URL alert actions; it is a good and stable service.\n\n1. Register an account at \n\nYou need to use at least a Professional account to have custom alerts. You can start with the free 30-day trial.\n\n1. 
Configure custom alert actions that will modify the DNS entries at Cloudflare so that traffic is directed only to available sites.\n\nSelect the “Alerts” tab and then “Add Action”.\n\nFor an A/A solution create 4 actions with the following configuration.\n\nConfigure all actions with: URL: \n\nForm Submission Method: POST"},{"type":"body_image","src":"/assets/uploads/easy-high-availability-4.png"},{"type":"body_text","text":"For an A/P solution create 2 actions with the following configuration."},{"type":"body_image","src":"/assets/uploads/easy-high-availability-5.png"},{"type":"body_text","text":"You can also try this using curl, just to make sure it works before we try the whole solution. Here is one example.\n\n`curl https://www.cloudflare.com/api_json.html -d \"a=rec_edit\" -d \"tkn=1f2eedac1c9bd7071f459effd8ad313bb38da\" -d \"id=101849947\" -d \"email=joakim.ohman@elastx.se\" -d \"z=jelastic.se\" -d \"type=A\" -d \"name=failed\" -d \"content=88.80.172.123\" -d \"service_mode=1\" -d \"ttl=1\"`\n\nThen check the Cloudflare records and make sure they were updated as expected.\n\n1. Create monitoring for both sites.\n\nSelect the “Home” tab, “New Monitor” and then “Website”.\n\nFor an A/A solution create 2 monitors with the following configuration."},{"type":"body_image","src":"/assets/uploads/easy-high-availability-6.png"},{"type":"body_text","text":"\n\n1. 
Now test it.\n\nTake down site 1, wait for the alert and check in Cloudflare that the DNS record was updated as expected.\n\nBring site 1 back up and check the DNS record again.\n\nIf everything worked as planned then you are done!\n\nNow you have High Availability and Disaster Protection!\n\nJoakim Öhman"}],"slug":"easy-high-availability","url":"/en/news/easy-high-availability","lang":"en","filename":"easy-high-availability","image":"/assets/uploads/article-hero-13.jpg"},{"date":"2013-03-18","title":"Now in Sweden - True autoscaling and volume discounts","case_slug":"Now in Sweden True autoscaling and volume discounts","intro":"“With these changes Virtuozzo (formerly Jelastic) gets even further ahead of its competition and becomes extremely cost effective for small and large-scale production deployments alike.” - Ruslan Synytsky, COO, Virtuozzo","tags":["wordpress"],"body":[{"type":"body_text","text":"“Fair pricing is vital to make any business sustainable yet cost-effective for customers,” says Ruslan Synytsky, COO, Virtuozzo (formerly Jelastic), “Last year we became the first PaaS to launch vertical scaling of servers and true utility-based pricing. With our latest release, we are adding the ability to set both minimal and maximal scaling limits, get volume discounts, and immediately see the projected costs the moment you create or change Virtuozzo environments.\n\nThe new model contrasts sharply with conventional industry pricing practices. Developers will find it more attractive than existing models for a number of reasons:\n\n**Free is never free**: Many supposedly competitive cloud hosting vendors attract developers by making it easy to run new apps for free. But as apps start growing, they need more and more resources. And those additional resources are far from free. 
In fact, at that point the formerly “free” pricing usually becomes very expensive, as pricing is rarely adapted to large-scale production use.\n\n**With Virtuozzo, automatic volume discounts make scaling cheaper**: Virtuozzo can give automatic discounts for usage depending on the amount of resources that the application consumes. So unlike with the “free to start, expensive to scale” platforms, higher-volume usage is automatically cheaper. Larger applications with higher resource consumption and commitment levels automatically get a lower price, making Virtuozzo perfect for large-scale production use.\n\n**Per-machine pricing is bad for customers**: The computer-based (per-machine) pricing model commonly used in the industry today means that customers often have to overpay, choosing bigger machines than they actually need because they have no other option.\n\n**With Virtuozzo, the only true consumption-based PaaS, customers only pay for the resources they need:** Virtuozzo is the only Platform-as-a-Service on the market today that has built-in automated vertical scaling. This means customers do not have to guess the machine size they need. Unlike with per-machine or other models, they can let Virtuozzo add memory, CPU and disk space on the fly. And they pay only for their actual hourly consumption.\n\n**Virtuozzo customers can now set lower and upper limits to get the best price for their usage patterns**: Beginning with Version 1.9, Virtuozzo lets customers specify the lower and higher scaling limits for all the servers they use within the platform. The higher customers set their minimum resource-use level, the bigger discount they get. At the same time, the maximum level still allows for overage in case the application needs to scale up to serve increased usage. 
This lets customers set limits that optimize price and operational efficiency.\n\n**Virtuozzo provides cost projections each time the customer creates a new environment or makes changes**: Any time a customer changes their configuration, Virtuozzo immediately displays both the minimum and the maximum expected monthly bill. This makes the pricing both fair and predictable.\n\nBelow is our new volume discount price model:"},{"type":"body_image","src":"/assets/uploads/pricemodel-jelastic.png"},{"type":"body_text","text":"## Other features in the new Virtuozzo version 1.9 platform:\n\n**Easier access to application files with FTP/FTPS**: The new version of our platform offers support for the FTP/FTPS protocol. This provides a fast, easy, and reliable way of file sharing and data transfer to and from your servers in Virtuozzo environments.\n\n**Higher performance through file caching**: Virtuozzo 1.9 supports caching in NGINX (for both the NGINX balancer and the NGINX PHP server). This improves performance by reducing the access time to the resource and increasing the resource building speed.\n\n**More enterprise-ready servers with built-in TomEE**: We’ve added Apache TomEE, the enterprise edition of Apache Tomcat, to meet demand for a more enterprise-ready server. This provides you with the best technology stack that can be deployed to a simple Java EE container. TomEE is a full-blown app server, but retains the simplicity of Tomcat and avoids architectural overhead.\n\n**More databases with MariaDB 10.0**: Virtuozzo 1.9 is the first PaaS offering MariaDB 10.0, the most advanced database in the SQL world. This database offers developers the advantages of multi-source replication, dynamic columns and MariaDB Galera Cluster. It is even faster and more fault tolerant, and has more flexible database replication, than previous MariaDB versions.\n\nAs you can see, we always consider our customers’ opinions and try to satisfy all their needs. 
Thanks to our users we are getting better!\n\nTry our free tier of our platform - Sign up [now](https://app.jelastic.elastx.net/?signup=true)."}],"slug":"now-in-sweden-true-autoscaling-and-volume-discounts","url":"/en/news/now-in-sweden-true-autoscaling-and-volume-discounts","lang":"en","filename":"now-in-sweden-true-autoscaling-and-volume-discounts","image":"/assets/uploads/switch_exhaust.jpeg"}],"footerData":{"se":{"contact":{"title":"Kontakt","address_1":"Katarinavägen 15","address_2":"116 45 Stockholm","rows":[{"row":"VAT: SE556906561701"},{"row":"Telefon: [+46 8 557 728 10](tel:+46855772810)"},{"row":"Epost: [info@elastx.se](mailto:info@elastx.se)"}]},"links":{"title":"Länkar","extra_footer_links":[{"targetBlank":true,"label":"Karriär","link":"https://career.elastx.se/"}]},"social":{"title":"Sociala kanaler","links":[{"label":"LinkedIn","url":"https://www.linkedin.com/company/elastx"},{"label":"YouTube","url":"https://www.youtube.com/@Elastx"}]},"cert_logos":{"logos":[{"logo":"/assets/uploads/iso-27001-1.png","link":"/information-security-policy"},{"logo":"/assets/uploads/iso-14001-1.png","link":"/environment-policy"}]},"external_cert_logos":{"logos":[{"logo":"/assets/uploads/green-web-foundation.png","link":"https://www.thegreenwebfoundation.org/green-web-check/?url=elastx.se"}]},"menulinks":[{"label":"Hem","link":"/"},{"label":"Om oss","link":"/about-elastx"},{"label":"Kontakt","link":"/contact-us"},{"label":"Tjänster","link":"/overview"},{"label":"Priser","link":"/pricing"},{"label":"Partners","link":"/partners"},{"label":"Nyheter","link":"/news"},{"label":"Events","link":"/events"}]},"en":{"contact":{"title":"Contact","rows":[{"row":"VAT: SE556906561701"},{"row":"Phone: [+46 8 557 728 10](tel:+46855772810)"},{"row":"Email: [info@elastx.se](mailto:info@elastx.se)"}],"address_1":"Katarinavägen 15","address_2":"116 45 
Stockholm"},"links":{"title":"Links","extra_footer_links":[{"targetBlank":true,"label":"Career","link":"https://career.elastx.se/"}]},"social":{"title":"Social channels","links":[{"label":"LinkedIn","url":"https://www.linkedin.com/company/elastx"},{"label":"YouTube","url":"https://www.youtube.com/@Elastx"}]},"external_cert_logos":{"logos":[{"logo":"/assets/uploads/green-web-foundation.png","link":"https://www.thegreenwebfoundation.org/green-web-check/?url=elastx.se"}]},"cert_logos":{"logos":[{"logo":"/assets/uploads/iso-27001-1.png","link":"/information-security-policy"},{"logo":"/assets/uploads/iso-14001-1.png","link":"/environment-policy"}]},"menulinks":[{"label":"Home","link":"/"},{"label":"About us","link":"/about-elastx"},{"label":"Contact","link":"/contact-us"},{"label":"Services","link":"/overview"},{"label":"Pricing","link":"/pricing"},{"label":"Partners","link":"/partners"},{"label":"News","link":"/news"},{"label":"Events","link":"/events"}]}}}