.

Create a file called rclone.conf with the following content:

```
[swift-remote]
type = swift
user = $OS_USERNAME
key = $OS_PASSWORD
auth = $OS_AUTH_URL
domain = default
tenant = $OS_TENANT_NAME
```

When we want to refer to Swift we will now use 'swift-remote:' and rclone will handle auth for us (assuming openrc has been sourced first).

Create a container in Swift:

`rclone --config rclone.conf mkdir swift-remote:pg_backup`

Upload the db dump to the pg_backup container:

`rclone --config rclone.conf copy mydb.sql swift-remote:pg_backup`

The default behaviour is to copy the entire directory contents unless the source is a file.

~~At the time of writing (2017-04-07) the current version of rclone (v1.36) does not support the expiration header for Swift. It's scheduled for v1.37 ([Github issue #59](https://github.com/ncw/rclone/issues/59)) or later. Meanwhile you can delete old files with:~~

`rclone --config rclone.conf --min-age 1h delete swift-remote:pg_backup/`

Update 2020-11-16: Since [v1.52](https://rclone.org/changelog/#v1-52-0-2020-05-27) rclone supports sending additional headers, so you can now set an expiration date on file uploads:

`rclone --config rclone.conf --header-upload "X-Delete-After: 3600" copy mydb.sql swift-remote:pg_backup`

## Summary

That's it! My personal recommendation is to use python-swiftclient or the OpenStack CLI client (not shown in the examples) where available. Both are maintained by the OpenStack community and are the best supported options out there.

Rclone can be useful if you use more than one cloud storage platform, or at least want to keep that option open. Rclone is the cloud equivalent of rsync, a tool known to be excellent at copying large amounts of data and files.

cURL can be run almost anywhere (it even works on AmigaOS, DOS and OS/2!) but requires more knowledge of the Swift REST API.

If you have any questions, or want to know more about how OpenStack can help your company, contact me or a colleague and we'll be in touch!

Andreas Holmsten

---

# We work with the absolute latest technology

*2016-10-08*

Chorus is developing new tools to give healthcare staff, patients and relatives access to healthcare data in an entirely new way. Imagine being able to view your medical records directly on your computer or phone, and being able to share that information with your doctor. Wouldn't that be great!

Swedish healthcare is facing major challenges. We are living ever longer and care is costing ever more. Healthcare also uses a multitude of IT systems and medical record systems, often with direct integrations between them. This has caused integration costs to skyrocket, leaving fewer and fewer resources for developing new features for healthcare staff and citizens. Secure solutions, where data from existing systems is made available with the patients' consent, are therefore becoming increasingly important.

### Chorus' collaboration with ELASTX

In the spring of 2013, Fredrik Henriques, technical lead at Chorus, was recommended by a developer to try the ELASTX Virtuozzo (formerly Jelastic) PaaS. Since 2014 Chorus also uses the ELASTX OpenStack IaaS (an open framework for infrastructure as a service, editor's note).
"When ELASTX launched their OpenStack IaaS I got curious," says Fredrik. "Since we work with the very latest technology you want to stay up to date, and here I can spin up any OS I like. We experiment quite a bit with CoreOS, for example."

*Since "time to market" is very important to us, the ELASTX Virtuozzo PaaS is perfect. Being able to create and clone entire environments helps a lot. That it is tailored for developers, with full Git integration, and that the control panel gives you full control over all environments, is invaluable, says Fredrik.*

ELASTX looks forward to continuing the journey with Chorus and the important challenge they have taken on to make things easier for healthcare!

### About Chorus

Since its start in 2007, Chorus has had two guiding principles for its work: to make the complex a little simpler and more understandable, and to work in areas where they believe new ideas and approaches really do some good. Chorus has developed a number of tools to make healthcare better adapted to the needs of patients and their relatives!

Read more at [chorus.se](https://chorus.se)

---

# Automating infrastructure with Terraform on OpenStack

*2016-09-20*

Infrastructure as a Service (IaaS) is the most basic cloud model, giving you the ability to create your own infrastructure resources such as networks, firewalls, load balancers, different kinds of storage and of course servers.

There are other tools that can be used to orchestrate OpenStack, such as the official tool called Heat, which works pretty much the same as AWS Cloudformation. They both work with "stacks", which are basically collections of resources (servers, security groups, databases, etc). Both Heat and Cloudformation are fantastic ways to do orchestration, but each is a tool for either OpenStack or Amazon Web Services only.

With Terraform you can orchestrate all kinds of cloud services: DNS, IaaS providers, CDNs, and so on. As the tool itself isn't tied to any specific provider, you also don't get locked in. Please see the list of official providers in the [documentation](https://www.terraform.io/docs/providers/).

Besides allowing you to orchestrate many different cloud services, Terraform is also pretty easy to learn, and in our experience a lot faster than the tools mentioned above.

By the way, if you live in Stockholm and want to meet others who love Terraform (and other HashiCorp tools), join the meetup group [Stockholm Hashicorp User Group](https://www.meetup.com/Stockholm-Hashicorp-User-Group/).

## A "simple" web service architecture

You can get the source for this Terraform template on [Github](https://github.com/elastx/terraform-demo) and study it in detail.

Our architecture will consist of two server groups, "web" and "db", each with two compute instances (virtual machines).
The groups will have a policy on them called "anti-affinity". This is best practice and tells the OpenStack scheduler that the group's compute instances may never be placed on the same physical hardware, in this case to make sure your web and db instances stay redundant. (A CLI sketch of what this corresponds to follows the architecture description below.)

Each compute instance will run CentOS 7, and with cloud-config we'll change the default username to "elastx".

Three security groups (sg) will be created: demo-ssh-sg, demo-web-sg and demo-db-sg. The first sg will be on all instances, allowing "everyone" to ssh. The web-sg will be on the web servers, allowing ports 80 and 443 (http, https), and the db-sg will be on the database instances, allowing port 3306 (mysql) from the web network only, keeping it isolated and secured.

An SSH keypair called "demo_rsa" will be uploaded as well.

This is a very traditional network topology for demo purposes, but it shows you roughly what the environment will look like.

![Architecture overview](/assets/uploads/automating-infrastructure-bild.png)
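For readers who want to see what the "anti-affinity" server groups boil down to outside of Terraform, here is a minimal sketch using the plain nova CLI. The group and instance names are made up for illustration, and it assumes an openrc file has been sourced:

```
# Create a server group with the anti-affinity policy (names are examples)
nova server-group-create demo-web-group anti-affinity

# Boot an instance into the group via a scheduler hint; the scheduler
# will then never place two group members on the same physical host
nova boot web01 --flavor m1.small --image "CentOS 7" \
  --hint group=<server-group-uuid>
```

In the demo repository the same result is expressed declaratively in the Terraform template instead of being run by hand.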
As mentioned earlier, go ahead over to our [github repository](https://github.com/elastx/terraform-demo) to study the [Terraform template](https://github.com/elastx/terraform-demo/blob/master/terraform.tf). To run it yourself, [contact us](mailto:hello@elastx.se) in order to create an account.

/Tobias Jakobsson

---

# Building a portable, scalable, reusable Deployment Pipeline for an arbitrarily complex environment (part 2)

*2015-12-23*

This is the second of three posts about building an advanced deployment pipeline.

Choose your tools wisely:

Whatever tools you have in place now, or whatever tools you choose to switch to, the same rules apply:

* [Grok](http://www.urbandictionary.com/define.php?term=grok) your tools
* Use your tools the way they were intended to be used first, before you start experimenting
* Avoid building from scratch. Most problems are already solved and someone has probably open sourced some code to solve exactly the problem you're stuck on (or at least 90% of it).
* Always choose open source before proprietary
* Always check that any open source software you want to use has a healthy and active community around it
* Be wary of moth-balled open source projects, unless you're ready to start supporting them yourself
* If you choose to switch, make sure the new tool will solve your problem with the old tool

**Source Control:**

Ensuring control over changes in your VCS is pretty much the crux of the issue in Continuous Delivery. The most important axiom of CD is to keep your master branch releasable at all times. The [wikipedia page about Continuous Delivery](https://en.wikipedia.org/wiki/Continuous_delivery) is actually the best at describing this. Out of that singular focus everything else will flow.

The converse is also true. If you are frequently breaking master, or your master branch is in an unknown state, then you have no hope of doing continuous delivery until you [fix that problem](http://failblog.cheezburger.com/thereifixedit).

What this means in practice is that you want as much as possible of your verification to happen on the feature branch before merging to master. This means any type of building, compiling or packaging, deploying the application, automated testing and even manual testing if necessary. Code review should also be a part of any change before merging to master is allowed.

Lastly, you need to ensure that developers are actually integrating their new changes with the latest on master (i.e. the "integration" part of continuous integration).

If you're using git and your branching looks like [this](http://aht.github.io/whatisgit/wide-gitk.gif), the point is to get it looking more like [this](http://3.bp.blogspot.com/-Yh5iOYTth3A/U6fhWafzqRI/AAAAAAAABs0/xujlkjiYFjQ/s1600/gitk19.png) (the way Linus intended).

If your situation is this bad, then my preferred means of doing this with git is to use a [fork/pull-request model](https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow) and force [fast-forward only merging](http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly) to master. What this means is that everyone must [rebase](http://nathanleclaire.com/blog/2014/09/14/dont-be-scared-of-git-rebase/) their branches every time something is merged to master, and if something gets messed up, it's isolated to the fork. Furthermore, rebasing ensures that everyone is actually doing the "integration" part of Continuous Integration, reducing the state of entropy on the HEAD of master, and gets you that much closer toward your goal of Continuous Delivery. The beauty is that this system will take care of itself and you can get back to work to fix anything else that needs attention.

However, be prepared that this is a pretty drastic measure to take, and there will definitely be some developers who kick and scream that you're [killing their mojo](https://www.youtube.com/watch?v=gEuw2mgLRuQ). Don't let them win! Enforce it and set the bar to make sure everyone lives by the same rules.
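To make the rebase workflow concrete, here is a minimal sketch of the loop each developer runs in this model. The branch name is an example, and the `merge.ff` setting shown is the local-git way of refusing anything that is not a fast-forward (hosted services usually expose an equivalent switch):

```
# Re-integrate the feature branch on top of the latest master,
# i.e. actually do the "integration" in Continuous Integration
git fetch origin
git rebase origin/master my-feature   # then re-run the tests

# Make git refuse any merge that would create a merge commit
git config merge.ff only

# Merging to master now only succeeds when the branch is fully rebased
git checkout master
git merge --ff-only my-feature
```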
* [Git](https://git-scm.com/): Even though Linus has reportedly [soured on it](http://typicalprogrammer.com/linus-torvalds-goes-off-on-linux-and-git/) himself, I rank git as the gold standard of source control. It's mature and there are lots of people familiar with it. Branching and merging are simple and fast (but unfortunately easy to make a mess of, as explained above). The ability to rewrite history with rebase is incredibly powerful. But git is a complex tool, and if you're not comfortable on the command line you might struggle.
* [Mercurial](https://www.mercurial-scm.org/): Another fine distributed version control system, which at first appearance is similar to git in many ways, but with some [notable differences](https://jhw.dreamwidth.org/1868.html) in terms of how it handles branching and merging. It is arguably easier to learn than git, but you will still need to grok it to use it properly (just as with any tool). Branching and merging work seamlessly and mercurial will work just fine with this workflow.
* [Subversion](https://subversion.apache.org/): If you have [nightmares about branching](http://blog.codinghorror.com/software-branching-and-parallel-universes/) in subversion then you should probably switch to git or mercurial. [Centralised VCSs are kind of an old and outdated idea](https://www.youtube.com/watch?v=_yQlKEq-Ueg) if you're looking at Continuous Delivery, even though they are still heavily in use. However, if you're happy with it and have all the kinks worked out in a clean way that allows isolating changes so they can be tested before merging, then it should be workable.

**A note on code reviews:**

The importance of an integrated solution for code review and source control cannot be overstated. Popular services such as github and bitbucket have contributed significantly to making this a part of Best Practice. If your code review tool is a separate piece of infrastructure tacked onto a VCS, such as ReviewBoard or Crucible, leaving it possible for changes to circumvent the requirements for review and testing, then that is a good reason to switch to something with tighter controls.

**Orchestration:**

By orchestration I mean creating cloud resources (Virtual Machines, containers, etc), configuring cloud services (DNS, Networking, Load Balancing, etc), and preferably also service discovery or communication between those resources when they are created or destroyed.

Orchestration in cloud computing is a confusing topic because [there isn't really a consistent definition](https://www.flexiant.com/2012/09/18/what-is-cloud-orchestration/) of what "orchestration" actually means. The situation becomes even more complicated when you start looking at the plethora of tools out there that might at first seem like alternatives to each other, but the reality is very much a venn diagram of overlapping functionality that you won't actually realise until you try them out. With that in mind, here is a list of tools that I do see as alternatives to one another.

* [Cloudformation](https://aws.amazon.com/cloudformation/): this is a declarative, full-featured framework for managing cloud resources on AWS. It's a powerful and flexible tool, but it can get complicated pretty quickly as your usage of AWS expands (which is even more reason to use it). Some caution is required, however, because even if you think you are a wiz at pushing buttons in the AWS console, [cloudformation knows more about how to deploy AWS services than you do](http://harish11g.blogspot.dk/2014/08/amazon-cloudformation-templates-automation-Amazon-CFT-AWS-top-best-practices-tips.html), and you will be surprised if you make the wrong assumption. If you're using EC2 then you should probably be using cloudformation, and sooner rather than later, because it is not possible to add existing resources (such as an ec2-instance) into a cloudformation-managed stack.
* [Heat](http://docs.openstack.org/developer/heat/): is OpenStack's equivalent of Cloudformation. For the most part, Heat is compatible with the Cloudformation syntax, so if you're familiar with one, it's pretty straightforward to learn the other. One of the benefits of Heat is that it has a bit more flexibility managing resources, with operations such as abandon-stack that will allow you to delete a stack without removing all the created resources, and later you can adopt-stack once you have fixed whatever you needed to.
This is a pretty nice feature to help you through the learning curve of Heat templates, because it can shorten the feedback loop, but you're doing something wrong if that's a function you need to use in production.
* [Terraform](https://terraform.io/docs/index.html): is another declarative orchestration framework, with the major benefit that it's platform agnostic. So if you are in a hybrid cloud scenario, for example you have a private OpenStack cloud but also run on AWS, then Terraform is probably something to look into. If you're looking to avoid vendor lock-in then Terraform might be the way to go.
* [Docker-compose](https://docs.docker.com/compose/): is Docker's declarative resource orchestration framework and is definitely worth a look if you're interested in running containers. Compose has a [layered approach to handling different environments](https://docs.docker.com/compose/extends/#different-environments), whereby you start with a base configuration that all your environments share, and then you override certain values to turn it into dev/ci/stage/prod. That's a really nice way to help maintain consistency between environments (a small command-line illustration follows after this list).
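As a small illustration of Compose's layered approach mentioned above: the layering is driven from the command line by stacking files. The prod override file name below is an assumption, though it follows the common convention:

```
# Development: docker-compose.yml plus docker-compose.override.yml
# are picked up automatically
docker-compose up -d

# Production: explicitly layer a prod override on top of the base file,
# changing only the values that differ between environments
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```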
I chose all the above because I'm a big fan of [declarative](https://en.wikipedia.org/wiki/Declarative_programming)-style system definitions (as opposed to [imperative](https://en.wikipedia.org/wiki/Imperative_programming)). I think that is the right way to think about and build systems. But there are also other methodologies and lots of options for you to choose from (for example PaaS is a wholly different type of orchestration, which technically fits into my definition above).

**Configuration Management:**

This is something often mixed in with "orchestration". In some ways that is understandable, because a declarative orchestration template such as what is used in all of the tools listed above is essentially just a yaml or json file. Once that file is stored in version control it becomes a part of your Definitive Media Library (to use an ITIL term). This turns it into an essential part of the configuration definition of your environment required to reproduce it.

Although we can view Configuration Management and Orchestration as a single topic, doing so makes the landscape of tools more confusing to understand. We're better off dividing the two areas so that we can discuss them separately, which means we need an appropriate definition of Configuration Management to complement our understanding of Orchestration.

So when I talk about Configuration Management, I'm really referring to the internal configuration of a compute resource, such as a virtual machine or container. This means software such as programming languages and frameworks, components such as apache and nginx, databases if they are running on the compute instances (as opposed to some *aaS like [DynamoDB](https://aws.amazon.com/documentation/dynamodb/) or [Trove](https://wiki.openstack.org/wiki/Trove), in which case it falls more under "orchestration"), all the way down to files, folders and permissions.

* [Docker](https://www.docker.com/sites/default/files/WP-%20Definitive%20Guide%20To%20Containers.pdf): by now Docker is many things, but the technology at its core is really a method of shipping containers from one host OS to another host OS. What's inside the container remains the same regardless of the host Operating System, meaning that it solves the Configuration Management problem of how your application runs in different environments. [Containers are not a new idea](https://www.youtube.com/watch?v=coFIEH3vXPw) at all, but it was only with Docker that their real potential was finally unlocked, by bringing consensus in the industry that this is now [without question the future of virtualization](https://www.opencontainers.org/). If you have not started containerizing your application yet and you're still fumbling around with Virtual Machines, then you had better put it on your road map, or be prepared to let the competition run you over.
* [Ansible](http://www.ansible.com/how-ansible-works): is probably the simplest Configuration Management tool out there for managing Virtual Machines. It's extremely lightweight ([the tar.gz is less than 1MB!](https://github.com/ansible/ansible/releases)), and works simply over ssh (or winrm/powershell in the case of Windows) when in push mode, or alternatively you can run it in pull mode if you want to do things like auto-scaling. It's written in python and is "batteries included", meaning that it has a load of inbuilt modules that work out of the box. Ansible is [idempotent (or trivial to make idempotent), but it is not convergent](https://groups.google.com/forum/#!msg/ansible-project/WpRblldA2PQ/lYDpFjBXDlsJ). Ansible Tower is their paid-for version, which includes a centralised control and monitoring server that can be a real benefit in Production. The downsides of Ansible are that it can be slow compared to the alternatives, and you can occasionally run into cryptic errors that are difficult to decipher. (A short agentless sketch follows after this list.)
* [Chef](https://www.chef.io/)/[Puppet](https://puppetlabs.com/): are the opposites of Ansible in many ways, considering they do the same job. From [the point where they initially diverged 8 or so years ago](https://news.ycombinator.com/item?id=911391), they basically spent the next 6 years copying each other's features, to the point today where they are [stretching to differentiate themselves](https://www.scriptrock.com/articles/puppet-vs.-chef-revisited) once more. They're both complicated, big, and bloated, but powerful (which is not necessarily a good thing, as that power tends to be abused rather than limited to what it was intended to do). Their server-agent architecture is possible to run without the server, however you need to replace it with something else, which, in all the cases I've seen, ends up being Ansible. Both the chef-server and puppet-master are almost 500MB, which gets you nowhere without the rest of the software you need. If you're already using puppet or chef then that's fine, however if you're new to virtualisation then choose Ansible.
* [Saltstack](http://saltstack.com/): Where Chef and Puppet played the game of competitively diverging and converging over time, in many ways the same goes for Ansible and Saltstack. To begin with, Salt was a similar server-agent architecture to Chef and Puppet, but [salt-ssh](https://blog.logentries.com/2015/04/salt-ssh-for-remote-execution-of-states-and-modules/) is obviously mimicking the agentless architecture of Ansible. In server-agent mode, Saltstack is faster than Ansible, and in all modes simpler to use and learn than Chef and Puppet, without suffering from the bloat.
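To give a feel for how lightweight the agentless model is, here is a minimal Ansible sketch. The inventory file and host names are made up for illustration, and nothing more than ssh access to the hosts is assumed:

```
# A two-host inventory is just a text file (hypothetical hosts)
printf '[web]\nweb01\nweb02\n' > hosts.ini

# Ad-hoc module run over plain ssh, no agents to install
ansible -i hosts.ini web -m ping

# Apply a playbook (site.yml is an assumed example playbook)
ansible-playbook -i hosts.ini site.yml
```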
To summarise the points above, the primary decision to make regarding Configuration Management is whether you're at a stage where you're [ready to make the leap to containerize](https://labs.ctl.io/how-to-migrate-legacy-applications-into-docker-containers/) your applications or not. So how on earth do you make that decision?

1. First of all, you need to want to be on the bleeding edge of technology, because you feel you can handle the risk.
2. Secondly, your organisation needs to be mature enough from a teamwork and culture perspective, as well as having some senior technical people who know enough about things like microservices, SOA and REST architectures.
3. Lastly, you should not be spending the majority of your time firefighting or struggling with technical debt.

If you meet those 3 criteria then you're in a good place to begin containerizing your applications. If you only fit 1 or 2, then you're on the margins, and maybe you want to wait another 6-12 months so that you can fix some of your issues and let the other tools in the space mature. If you don't fit any of those areas then forget it, you have way bigger problems to solve.

If you are not ready to containerize but still want to virtualize with VMs, then you will still need a tool like Ansible or Saltstack. I recommend against Chef or Puppet, unless you already have people who know these tools and know how to build systems with them without making a bigger mess. Even in that case, keep in mind that we can already see the end of the road for the Virtual Machine. And yes, I really mean that. [The only reason we ended up with Virtual Machines as the dominant virtualisation technology is Windows](https://www.youtube.com/watch?v=coFIEH3vXPw), and now even [Microsoft has seen the light](http://venturebeat.com/2015/11/19/microsoft-launches-new-windows-server-2016-preview-with-hyper-v-containers/). That means you really want to carefully consider how much time and effort you are going to spend on something that is, at best, a stop-gap.

We'll finish off this long series in Part 3.

---

# Building a portable, scalable, reusable Deployment Pipeline for an arbitrarily complex environment (part 3)

*2015-12-23*

This is the last of three posts about building an advanced deployment pipeline.

CI servers themselves aren't really much more than glorified remote script runners, and just because you might have a CI server set up with some automated tests does not mean you are doing continuous integration. As I mentioned in part 2, the "integration" part of continuous integration actually occurs at the source-control level, whereby developers are [merging/rebasing their changes with the latest from the master branch](https://www.thoughtworks.com/continuous-integration). That integration should occur at least once per day, and then automated tests should be run to see if anything has broken. Only if every developer is doing this regularly can you say that you are actually doing continuous integration.

* [Jenkins](https://jenkins-ci.org/): is still the most mature and widespread CI server around today.
It has an active community with [over 1000 available plugins](https://wiki.jenkins-ci.org/display/JENKINS/Plugins#Plugins-Pluginsbytopic), giving it unrivalled flexibility and functionality. This is really great, because most problems are not new, so whatever your case may be, there is likely already a plugin to help you. It's written in java and fully open source. If you have a complex system, it's almost a certainty that jenkins will be able to handle it. On the downside, jenkins can be a pretty complicated beast to configure, and the user interface is pretty clunky and ugly. A few of the key pieces of software and plugins I always use with jenkins to build deployment pipelines are the [swarm plugin](https://wiki.jenkins-ci.org/display/JENKINS/Swarm%20Plugin), [jenkins job builder](http://docs.openstack.org/infra/jenkins-job-builder/) and the [cloudbees flow plugin](https://wiki.jenkins-ci.org/display/JENKINS/Build%20Flow%20Plugin) (a small jenkins job builder example follows after this list).
* [Thoughtworks Go](https://www.go.cd/): I think it's a bit surprising that not many people seem to have heard of Go, considering that the guys who wrote the book on [Continuous Delivery](http://www.amazon.com/dp/0321601912?tag=contindelive-20), [Jez Humble](https://www.thoughtworks.com/profiles/jez-humble) and [Martin Fowler](http://www.martinfowler.com/), both work for Thoughtworks. No surprise that Go is designed out of the box to be suitable for Continuous Delivery and building Deployment Pipelines. Since they [open sourced the product in 2014](https://www.thoughtworks.com/news/go-continuous-delivery-now-available-as-free-open-source), you can see how healthy the [community is on github](https://github.com/gocd/gocd). It has probably one of the nicest interfaces of any CI tool out there, however the rate of issues being created versus resolved is something to keep an eye on.
* [Team City](https://www.jetbrains.com/teamcity/): If you have used one of [Jetbrains' IDEs](https://www.jetbrains.com/products.html#type=ide), then you're probably also familiar with Team City. They produce good tools that are popular with many developers. It's no surprise that Team City is a solid CI server as well, which of course integrates seamlessly with your IDE. You can create dependencies via build chaining between individual jobs to set up a deployment pipeline. Team City is a capable tool and free for a small number of builds and agents, however it is closed source, and if you're expecting to run at large scale it's going to become pretty [expensive](https://www.jetbrains.com/teamcity/buy/).
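One of the pieces mentioned in the Jenkins bullet above, jenkins job builder, deserves a tiny illustration, since it is what lets you keep job definitions in version control instead of click-configuring them in the UI. A rough sketch, where `jenkins.ini` (server credentials) and the `jobs/` directory of YAML definitions are assumed names:

```
# Install the tool (it comes from the OpenStack infra project)
pip install jenkins-job-builder

# Dry run: render the job XML locally without touching the server
jenkins-jobs --conf jenkins.ini test jobs/

# Create or update the jobs on the Jenkins server to match the YAML
jenkins-jobs --conf jenkins.ini update jobs/
```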
**Artifact Repository:**

Probably one of the least exciting topics is the storing and retrieving of build artifacts, however it is important, especially for compiled languages. If releasing to production means pulling the latest changes and then compiling, I'm sorry, but you're just doing it wrong. [Each time you compile your source, you're more likely to end up with a different binary](http://superuser.com/questions/639351/does-recompiling-a-program-produce-a-bit-for-bit-identical-binary) than not, even if the source has not changed. That means it is possible for the runtime execution of your program to be different, and thus any [testing and verification can really only be guaranteed for a certain binary](http://blog.arungupta.me/build-binaries-only-once-continuous-deployment/), and not the source commit.

If you're using something like php then this is potentially less of an issue, however since [Facebook started turning PHP into Java](http://hhvm.com/), even that is probably not true in all cases.

* [Artifactory](https://www.jfrog.com/open-source/#os-arti): is a flexible repository that in its free version can store java packages such as jar, war and ear, but in its paid-for version can also mirror node npm, python pypi and ruby gems, as well as OS packages such as rpm and deb! It also integrates with CI servers such as Jenkins, Team City and Go. It's open source, which is nice, but to get all the goodies you will need to [fork over the cash](https://www.jfrog.com/artifactory/buy-now/).
* [Nexus](http://www.sonatype.com/nexus/solution-overview/nexus-repository): will do pretty much all the same things that Artifactory will do, however the additional language support for other package types comes in the open source version. It will also integrate with all the major CI servers and is actually a bit cheaper than Artifactory.
* [rpm](https://wiki.centos.org/HowTos/CreateLocalMirror)/[deb](http://www.aptly.info/) mirrors (and other OS packages): I mention this separately because, just like controlling your application dependencies is important, so is controlling your OS dependencies. We've probably all been in the situation where the dependency we were downloading somewhere off the internet went missing, or when we got an update that unexpectedly broke the build or brought down production (because we didn't test it - oops!).
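As a rough sketch of what running your own deb mirror can look like with aptly (linked above): the repository name and snapshot name here are illustrative, and signing is skipped to keep the example short:

```
# Mirror an upstream repository (illustrative name and distribution)
aptly mirror create myapp-deps http://ftp.debian.org/debian jessie main
aptly mirror update myapp-deps

# Freeze the current state so builds stay reproducible
aptly snapshot create myapp-deps-2015-12-23 from mirror myapp-deps

# Publish the frozen snapshot for your servers to consume
aptly publish snapshot -skip-signing myapp-deps-2015-12-23
```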
**Workflow Visualisation:**

Perhaps workflow tools might seem like an afterthought in the context of a Deployment Pipeline, but unfortunately this is not so. When the topic comes up of how to manage releases, what is the definition of "done"? These tools are a necessary link in the chain.

* [Jira](https://www.atlassian.com/software/jira): Atlassian Jira is a popular and powerful issue management and workflow visualisation tool. It is highly configurable, which means it's great for handling all sorts of agile and ITIL-style processes to fit your organisation, but that is also often [where it goes wrong](https://jira.atlassian.com/secure/attachment/67504/Workflow%20with%20labels.png), resulting in a configuration nightmare. Jira's power is also its curse; however, when used correctly it is a fine and effective tool with a lot of in-built features and reports. It supports both scrum and kanban, but is unfortunately opinionated in these areas, so if you are using some kind of blended "scrumban" then you might run into trouble. Jira can be integrated with quite a few different tools, but of course works best if you stick to the Atlassian suite.
* [Trello](https://trello.com/): is a lightweight cloud service for Kanban-style workflow management. If you don't want the hassle of complex workflows and just want to get stuff done, then Trello could be a good fit for you, if you can live without customizations. Through other services such as [Zapier](https://zapier.com/), you can integrate different services with Trello so that you can get a high-level overview of progress. You can also upgrade the service to business class to get access to [power-ups](https://trello.com/power-ups).
* [Kanban vs Scrum and DevOps](https://www.scriptrock.com/blog/devops-kanban-match-heaven): I feel it's worth noting that, in my experience, Scrum is not ideal for doing Continuous Delivery and DevOps. Things like time-boxed sprints, backlog grooming, sprint planning and stakeholder demos all start to feel quite restrictive in their format and routine, especially when you want the flexibility of releasing every day. [Kanban is better suited for Continuous Delivery](http://www.swiftkanban.com/kanban/kanban-for-devops-continuous-delivery/), and I'd go out on a limb to say that I think the DevOps community as a whole is moving towards support of Kanban over Scrum.

**Monitoring and metrics:**

There are basically two forms of monitoring and metrics that are important. You have real-time monitoring, which you need in order to react to incidents from production events, and then you have metrics for analytical and statistical purposes (aka [Business Intelligence](https://en.wikipedia.org/wiki/Business_intelligence)) that can come from log files, database reports, or both.

* [Prometheus](http://prometheus.io/): is an open source monitoring tool built by Soundcloud. I first learned about Prometheus from my friend [Matthias Grüter](https://www.youtube.com/watch?v=Z0LlilNpX1U&feature=youtu.be) at a Stockholm DevOps meetup and thought it looked quite impressive. It seemed like it actually offered something new and better than a lot of the other monitoring tools that had been around for a while, like [Nagios](https://www.nagios.org/) and [Graphite](http://graphite.wikidot.com/start). It has instrumentation for lots of different languages, support for different frontends and backends, and is easy to set up. Maybe it won't do everything you want, but it certainly should be a good start.
* [ELK](https://www.elastic.co/products): meaning [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html), [Logstash](https://www.elastic.co/guide/en/logstash/current/introduction.html) and [Kibana](https://www.elastic.co/guide/en/kibana/current/introduction.html), which together are a powerful set of tools for logfile analysis. ELK is gaining wide acceptance because the tools work well and are open source with a vibrant community. Logstash will handle almost any log you can ship to it, such as web logs, database logs, syslogs and windows event logs, which can then be stored and indexed by elasticsearch and finally displayed by Kibana. Even though it's 3 separate components, they are all designed to integrate seamlessly with each other. Compare this to a paid, closed source service like Splunk, and it's hard to imagine that they will survive too much longer without doing something drastic. (A small query example follows after this list.)
* [Pentaho](http://www.pentaho.com/): is an open source BI platform that offers a [free community edition](http://community.pentaho.com/) as well as an enterprise product with [lots of heavy stuff](http://www.pentaho.com/product/product-overview). If your needs aren't met by the free version, then at least you'll get to try and feel the product to see if you need all the power that's offered in the paid version. I'm not sure what their pricing and licensing is like, but there aren't too many companies in this space that offer products that look this good, are open source, and have free community editions.
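Because Elasticsearch exposes everything over plain HTTP, inspecting what Logstash has indexed needs nothing more than curl. A small sketch, assuming a default local install, the conventional `logstash-*` index naming, and a `response` field as produced by a typical web-access-log filter:

```
# Search the Logstash indices for recent HTTP 500s
# (localhost:9200 is the Elasticsearch default)
curl 'http://localhost:9200/logstash-*/_search?q=response:500&size=5&pretty'
```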
If you made it this far, I hope it has been a worthwhile read. At some point in the near future I hope to be able to open source some code to show how all these pieces can be assembled, but we'll see how that goes. Obviously the amount of work involved to get the basics up and running is not really something that you can whip up in just a weekend.

Anyway, if there are certain areas where you wished to have more information and options about tooling, maybe [this list will help you](https://github.com/kahun/awesome-sysadmin). Otherwise, if you have questions or comments you can shoot me an email.

---

# Building a portable, scalable, reusable Deployment Pipeline for an arbitrarily complex environment (Part 1)

*2015-12-18*

This is the first of three posts about building an advanced deployment pipeline.

Back in May I did a presentation about ["The search for the Holy Grail"](https://www.youtube.com/watch?v=1I7zyaQ3Dak) at a DevOps meetup in Stockholm. The alternative name for the presentation could have been the title of this blog post, but that wouldn't have allowed me to make gratuitous monty python references.

**Let's sort through the buzzwords and really clarify exactly what we're talking about:**

* Portable: meaning portable between cloud providers (AWS and Elastx's Openstack platform, for example)
* Scalable: meaning that adding more people/teams/components to the system should have no worse than a linear relationship between the mean time to commit/push and deploy to production
* Reusable: meaning the software architecture to build the Deployment Pipeline should be designed in such a way as to make the code generic and adaptable to other systems with minimal modification.

It's fair to ask: is it really possible, or even worthwhile, to build such a Deployment Pipeline?

Obviously my answer to that is a resounding "yes"! And the explanation is simple, really. In the almost 4 years I have been working with Continuous Delivery and DevOps, the basic problems that need solving have been essentially the same every time, in the following order of importance:

1. Fix branching and merging so there is a clear and simple model with a clean master and feature branches, with only a single path to merge changes to master
2. Setup CI and code reviews as a gateway through which all branches must pass before merging to master
3. Introduce tools for configuration management and automated deployment as a means to reproduce the system
4. Continue to hammer points 1, 2, and 3 into everyone's heads while improving automated testing and configuration management until you can rebuild the system from scratch

So if it's basically the same thing every time, then why not simplify the whole process and build a pipeline with interchangeable parts that will save you a whole load of work down the line?

That's what I thought.

This is going to be a tools and workflow-focussed post, so I want to begin with an overview of the tools available to us to build our pipeline. At the end of the article I'll explain how to look out for some of the anti-patterns I've encountered in these types of discussions.

Probably one of the most complex things in the DevOps space is keeping track of the explosion of tooling in the last few years. An interesting overview of this is portrayed in [Xebialabs periodic table of devops tools](https://xebialabs.com/periodic-table-of-devops-tools/).

Elements in a periodic table, where everything sits in a neat little box, paint an overly simplistic picture however, because the reality is that the tools in the DevOps landscape require a [venn diagram](https://en.wikipedia.org/wiki/Venn_diagram), something more like this…

![Venn diagram of DevOps tooling](/assets/uploads/deployment-pipeline-blog-pt1.png)

Unfortunately this diagram is only for illustrative purposes. I do think that it would be a worthy exercise to invest the time to properly classify the categories and the overlap between them (at the very least you'd probably be on the front page of hacker news for a couple of days).

But that will have to wait for another day.

Let's go through what I think are the critical elements of a Deployment Pipeline:

1. Source Control (including code review)
2. Orchestration (of cloud resources and service discovery)
3. Configuration Management (for consistent and reproducible environments)
4. Continuous Integration (including automated testing)
5. Artifact repository (for storing and fetching build artifacts)
6. Workflow Visualisation (a dashboard where "done" means released to production)
7. Monitoring and metrics (to ensure both speed and quality are improving)

That is a big list of stuff to cover, so we're going to have to sharpen our knife and fork before we [eat this elephant](http://www.pickthebrain.com/blog/how-to-eat-an-elephant/).

Before we get into specific tooling, I want to take a moment to note how others are tackling this problem. There are many degrees of freedom, so in case you don't find my approach useful, hopefully one of these other methods might be more suitable for your use case.

(I have no affiliations with any of the companies or services listed below.)

1. The first project requiring mention is [software-factory](https://github.com/redhat-cip/software-factory). It is based on the very sophisticated tooling used in the [Openstack](http://www.openstack.org/software/) project, where a lot of my ideas and inspiration come from (Elastx runs Openstack). Software Factory gets full marks for scalability, however it is not portable because it's tied to Openstack. It is packaged in such a way as to make it reusable, but the tools are tightly integrated, so if you want to swap one of them for something in your existing toolchain, you're going to have some work to do.
2. [Codeship](https://codeship.com/features) have created a company around supplying "Deployment Pipelines as a Service", which I think is a pretty good business model - it clearly demonstrates that there is a lot of similarity between one deployment pipeline and another. They also have a great technical blog with lots of good tips focussed on CI/CD and automation. They definitely earn points for flexibility and reusability, but they are not very portable because you're tied to AWS (or Heroku, which also runs on AWS).
3. [Distelli](https://www.distelli.com/features) are a competitor to Codeship with another "DPaaS" that appears more flexible in your options around [endpoint portability](https://www.distelli.com/deployment). They look like they have a fairly flexible "plug and play" architecture, but I don't know how scalable their solution is for building complex environments.
4. I always find myself especially attracted to [Travis-ci](http://docs.travis-ci.com/user/getting-started/) and [Github](https://github.com/features) because of their sweet integration and flexibility, and because they're free for open-source projects. However, if you have a complex application you're going to run into problems.

That's just a few alternative approaches to solving this problem, and one of them might be more suitable depending on your situation.

If your business is primarily focussed on a [CMS](http://www.rackspace.com/knowledge_center/article/cms-comparison-drupal-joomla-and-wordpress) to run a website, and you don't really have to worry about more complexity than [scaling](http://www.virtualizationadmin.com/blogs/lowe/news/scale-up-vs-scale-out-the-key-differences-333.html) and pushing content to a caching layer or [CDN](http://www.cdnreviews.com/cdn-comparison/), then your use case may fit better into one of the options listed above. That is, unless you're doing something really crazy like orchestrating [Docker containers with Kubernetes on Apache Mesos](http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html) and running some Big Data business analytics on your [Hadoop-Elasticsearch backend](https://www.elastic.co/guide/en/elasticsearch/hadoop/current/arch.html). And if that's your use case then that is totally badass, but it would be pretty weird if you didn't already have a deployment pipeline working pretty smoothly at that scale.

Another way to phrase the above: if you're building your services on a PaaS and you're living in a happy place, then there's probably no reason to make trouble for yourself.
However, if you have to worry about IaaS, network segmentation, patching operating systems, a mixed environment with fixed hardware and virtualised services, data protection or legal compliance like [PCI-DSS](https://www.pcisecuritystandards.org/security_standards/), [EFPIA](http://www.efpia.eu/), or [HIPAA](http://www.hhs.gov/ocr/privacy/hipaa/understanding/index.html), then you should continue reading.

In Part 2 we'll get into tool selection with pros and cons…

---

# Simple, yet performant Wordpress sites on Virtuozzo PaaS

*2015-06-23*

We've created a simple one-click Wordpress install for Virtuozzo (formerly Jelastic) that's prepared with some basic fine tuning for Nginx, PHP and Wordpress itself. It comes preinstalled with the w3 total cache plugin and the Hannari Blue theme.

Please [log in](https://app.jelastic.elastx.net/) or [sign up](https://app.jelastic.elastx.net/?signup=true) to Virtuozzo (formerly Jelastic) and find the ELASTX Wordpress install in the marketplace; search for ELASTX and you will find it. If you're a new user, there will be a few steps to register for our service. Don't worry, you can try this for free by contacting our support and asking for a Development account. No strings attached.

While the environment is created in the background, follow the [video](https://youtu.be/FxK-1H0xqyM) for the simple steps to take full advantage of the caching capabilities Wordpress has.

If you want to join the collaboration on making this one-click install of Wordpress better, don't hesitate to check out the code for it and contribute!

---

# Get Drupal 8 up and running within 5 minutes

*2015-06-15*

Drupal 8 is about to reach its first release candidate. We are very excited and would like to celebrate it, therefore we have created a one-click installer so you can test Drupal 8 on our Virtuozzo (formerly Jelastic) platform!

Please [log in](https://app.jelastic.elastx.net/) or [sign up](https://app.jelastic.elastx.net/?signup=true) to Virtuozzo (formerly Jelastic) and find the ELASTX Drupal 8 install in the marketplace; search for ELASTX and you will find it. If you don't have an account already, you will get instructions on how to apply for a free Development account. Just tell us that you would like to try Drupal 8 and we will get you started. You can upgrade to a full version of our Virtuozzo PaaS at any given time and pay with your credit card.
Enjoy."}],"slug":"get-drupal-8-up-and-running-within-5-minutes","url":"/se/news/get-drupal-8-up-and-running-within-5-minutes","lang":"se","filename":"get-drupal-8-up-and-running-within-5-minutes"},{"date":"2015-03-31","title":"Backups in OpenStack cloud","case_slug":"Backups in OpenStack cloud","image":"/assets/uploads/server_detail.jpg","tags":["backup"],"body":[{"type":"body_text","text":"\n\n## 1. Do not use a traditional backup system\n\nYou do not need it, It will save you time and money not to use it.\n\n## 2. Make servers stateless\n\nStore no data that you cannot recreate on the server if you do not have to, especially not the root disk. A web application server should be stateless but a database server is stateful. When you have a stateful server make the root drive stateless and store you stateful data on an Openstack Volume. If your logs are important to you, do it right from the beginning and ship them to a log server instead of backing them up. \n\n## 3. Automate server provisioning\n\nIf you automate the server provisioning:\n\na. you do not need to backup the OS, application and configuration files.\n\nb. you get the best documentation you can have of a server and you should store the recipe in source code repo.\n\nc. it will take just minutes to provision new servers for recovery, horizontal scaling and test.\n\n## 4. Use native application backup tools\n\nMy experience is that you will get the most reliable and efficient backups by using the applications native backup feature. Another advantage is also that the application owner will have control over both the backup and recovery procedures. Schedule the backups locally in the application or in the OS with unix/linux cron or windows task scheduler. Don’t forget to copy the data to a storage that is in a different availability zone or region. The best solution is normally to use an object storage solution like Swift or similar. Modern applications often have plugins to support object storage natively.\n\n## 5. Monitor your backups\n\nThe challenge with locally scheduled backups is that you do not have a central location where you can monitor that all backups are successfully running. But there is an easy solution for that as well. Use an event monitoring solution that you provide your self or you can use a SaaS like Pushmon, Opsgenie Heartbeat or similar. \n\n## 6. Automated recovery tests\n\nWhen you automate the provisioning it is very easy to test recovery as well. And if you can use it in your daily work, that's even better. One example is to build test environments using the production backup data. If you automate and schedule it to get it done, and then use event monitoring to make sure it works.\n\n## Provisioning\n\nThere are many ways of provisioning a server but I will just show you how you can use the native OpenStack tools to do it. If it is a single server you can just use Nova and if there are multiple servers and/or other infrastructure services required you can use Heat.\n\nWe will use a single server and nova in this example to keep it simple.\n\nWe create a simple nginx web server on CentOS. Below is the nova command used and the user_data input file.\n\n`nova boot webserver01 --flavor m1.small --image centos-6.5-20140117.0 --key-name my-keypair --user-data ./WebServer.user_data`\n\n## Server (Instance) Backup and recovery\n\nYes there are occasions where you cannot or its not practical to automate server provisioning or to do native application backups. 
## Server (Instance) Backup and recovery

Yes, there are occasions where you cannot, or where it's not practical to, automate server provisioning or do native application backups. No worries, there is a very easy way to do server backups in OpenStack. In this example I am using the cli client, but you can do it with the API as well.

If you have a stateless server and you need to recover the node, you can just do a rebuild and the server will be up and running again with the same IP, name, config etc.

With the nova rebuild command you just enter the server name and image name. If you have verified that the user_data is compatible with another OS, you can even use this to change or upgrade the OS version:

`nova rebuild webserver01 centos-6.5-20140117.0`

If you want to create a backup you use the nova backup command:

`usage: nova backup <server> <name> <backup-type> <rotation>`

If we want to make daily backups with a 1 week retention and a weekly backup with a 4 week retention, this is what we would schedule in cron, for example:

``0 2 * * 1,2,3,4,5,7 nova backup webserver01 webserver01-daily-`date +%F` daily 6``

``0 2 * * 6 nova backup webserver01 webserver01-weekly-`date +%F` weekly 4``

(The day-of-week fields make the daily backup run six days a week and the weekly backup run once a week, matching the rotation counts of 6 and 4.)

When running a backup, a snapshot is made of the server, and that snapshot is then stored as an image in Openstack. The backup image can then be used to boot new servers or to recover the current one.

Here is how to recover the server from a backup image:

`nova rebuild webserver01 webserver01-daily-2014-12-12`

## Volume backup and recovery

There are volume snapshot and backup functions available as well. If you have stateful data on a Volume and there is no advantage in using the native application backup, then you should look at the OpenStack Volume backup functionality.

**Create a volume**

In this example we will create a 10GB volume and attach it to our database server.

`usage: cinder create [--display-name <display-name>] <size>`

`cinder create --display-name dbserver-vol01 10`

`usage: nova volume-attach <server> <volume> [<device>]`

`nova volume-attach dbserver01 c0e9e951-e33f-488e-a605-95d0ecc728e1 /dev/vdb`

Create a filesystem, mount it and add some data.

**Snapshot a volume**

A snapshot is great for creating fast point-in-time copies of volumes. It will help you protect your data from logical errors, but it will probably not help you if the physical disk system breaks.

To create a snapshot of a volume that is attached to a server you need to add the "--force True" option.

`usage: cinder snapshot-create [--force <True|False>] [--display-name <display-name>] <volume>`

`cinder snapshot-create --force True --display-name dbserver-vol01-snapshot dbserver-vol01`

To attach a snapshot to a server you first need to create a volume from the snapshot.

`cinder create --snapshot-id 4e87739b-4369-430d-bcdb-034b0f0ff861 --display-name vol-from-snapshot 10`

And then you can attach it as a new device or replace the current device.

`nova volume-attach dbserver01 de75c58b-f311-4e7b-acc7-ae9c2cfeb18b /dev/vdc`
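One caveat worth knowing when snapshotting attached volumes: the snapshot is taken while the filesystem is live, so it is only crash-consistent. If the volume's filesystem supports it, you can freeze it around the snapshot call for a cleaner copy. A sketch, assuming the volume is mounted on /data:

```
# Flush and freeze the filesystem so the snapshot is consistent
fsfreeze --freeze /data

cinder snapshot-create --force True \
  --display-name dbserver-vol01-snapshot dbserver-vol01

# Unfreeze again; in practice you may want to wait until the
# snapshot status is "available" before doing so
fsfreeze --unfreeze /data
```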
**Backup a volume**

The backup feature is a lot like the server (instance) root disk backup. It will make a copy of the volume and store it as a file, but now directly on Swift object storage.

The problem with the current backup feature is that you cannot make a backup of an attached volume. But if you first take a snapshot, then you can make a backup from that snapshot, although in the process you need to convert the snapshot to a volume first. Not very straightforward, but it works.

Here is the procedure:

Create a snapshot

`cinder snapshot-create dbserver-vol01 --force True --display-name dbserver-vol01-backup`

Create a volume from the snapshot

`cinder create --snapshot-id 0ac15385-6fdb-42df-b779-2869c4518ed5 --display-name dbserver-vol01-backup 10`

Make a volume backup

`cinder backup-create --container volume-backup --display-name dbserver-vol01-2015-03-30 dbserver-vol01-backup`

Here is how you can list your backups:

`cinder backup-list`

To recover a volume you need to detach the volume from the server, recover the data and then reattach the volume again.

`nova volume-detach dbserver c0e9e951-e33f-488e-a605-95d0ecc728e1`

`cinder backup-restore --volume-id dbserver-vol01 e8ec1eee-a9cb-4135-b8a9-21173bd965db`

`nova volume-attach dbserver c0e9e951-e33f-488e-a605-95d0ecc728e1`

For nova you need to specify the volume id, and you need the backup id in the cinder backup-restore command.
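The snapshot-to-volume-to-backup dance above lends itself to scripting. Here is a rough sketch of how it could be chained together for a nightly cron job; the awk ID extraction from the CLI's table output and the naming scheme are illustrative, and the status polling you would want in practice is reduced to comments:

```
#!/bin/bash
# Illustrative nightly volume backup: snapshot -> temp volume -> backup
set -e
TODAY=$(date +%F)

SNAP_ID=$(cinder snapshot-create dbserver-vol01 --force True \
  --display-name tmp-snap-$TODAY | awk '$2 == "id" {print $4}')
# ...wait here until the snapshot status is "available"...

VOL_ID=$(cinder create --snapshot-id $SNAP_ID \
  --display-name tmp-vol-$TODAY 10 | awk '$2 == "id" {print $4}')
# ...wait here until the volume status is "available"...

cinder backup-create --container volume-backup \
  --display-name dbserver-vol01-$TODAY $VOL_ID
```

Hooked up to cron and an event monitor (rule 5 above), this would also cover the "monitor your backups" requirement.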
In addition you have plugin-specific command line additions:\n\n```\n$ vagrant openstack --help\nUsage: vagrant openstack <command>\nAvailable subcommands:\n image-list List available images\n flavor-list List available flavors\n network-list List private networks in project\n floatingip-list List floating IP and floating IP pools\n volume-list List existing volumes\n```\n\n## Installing the plugin\n\nThere are a few different Vagrant OpenStack plugins available, such as vagrant-openstack and vagrant-openstack-plugin. The one to use is, however, [vagrant-openstack-provider](https://github.com/ggiamarchi/vagrant-openstack-provider).\n\nTo install:\n\n`$ vagrant plugin install vagrant-openstack-provider`\n\nTo run just the plugin’s command line additions, all you need is a Vagrantfile with OpenStack connection configuration like the one below:\n\n```\nrequire 'vagrant-openstack-provider'\n\nVagrant.configure('2') do |config|\n  config.vm.box = 'openstack'\n  config.vm.provider :openstack do |os|\n    os.openstack_auth_url = 'https://ops.elastx.net:5000/v2.0/tokens'\n    os.username = ''\n    os.password = ''\n    os.tenant_name = ''\n  end\nend\n```\n\n## Provisioning\n\nIn order to provision a real CoreOS node we need more preparation and a more complete Vagrantfile. We need to:\n\n* generate a key pair and import it into OpenStack, for use with vagrant ssh\n* know the username for logging in to the image\n* figure out the floating IP pool, so that we can connect to the provisioned instance at all\n\nLet’s go to work:\n\n1. Generate a key pair. \n\n```\n $ ssh-keygen -t rsa -f test.key\n $ ssh-keygen -y -f test.key > test.key.pub\n```\n\n Put your private and public key files next to your Vagrantfile.\n\n2. Import the public key into OpenStack. Log in to OpenStack and import the public key under Access & Security, Key Pairs.
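\n\n If you prefer the command line over the dashboard, the same import can be done with the nova client. A sketch, assuming python-novaclient is installed and your OpenStack credentials are sourced (the key name 'test' matches os.keypair_name in the Vagrantfile below):\n\n```\n $ nova keypair-add --pub-key test.key.pub test\n```\n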
3. Figuring out the floating IP pool might actually be easier via Vagrant than through the OpenStack GUI.\n\n```\n$ vagrant openstack floatingip-list\n+-------------------+\n| Floating IP pools |\n+-------------------+\n| test-net-01       |\n+-------------------+\n```\n\n**So let’s look at a full sample Vagrantfile:**\n\n```\n# -*- mode: ruby -*-\n# vi: set ft=ruby :\n\nrequire 'vagrant-openstack-provider'\n\nVagrant.require_version \">= 1.6.0\"\n\nVagrant.configure(\"2\") do |config|\n  config.vm.box = \"doesnt_really_matter_but_is_required\"\n  config.vm.box_version = \">= 308.0.1\"\n  config.ssh.username = 'core'\n  config.ssh.private_key_path = 'test.key'\n\n  config.vm.provider :openstack do |os|\n    os.openstack_auth_url = 'https://ops.elastx.net:5000/v2.0/tokens'\n    os.username = ''\n    os.password = ''\n    os.tenant_name = ''\n    os.flavor = 'm1.small'\n    os.image = 'coreos-444.4.0'\n    os.floating_ip_pool = 'test-net-01'\n    os.keypair_name = 'test'\n    os.public_key_path = 'test.key.pub'\n  end\n\n  config.vm.define vm_name = \"coreos-vagrant-01\" do |config|\n    config.vm.hostname = vm_name\n    config.vm.network :private_network, ip: \"172.17.8.101\"\n  end\nend\n```\n\nYou’re now good to go.\n\n```\n$ vagrant up --provider=openstack\nBringing machine 'coreos-vagrant-01' up with 'openstack' provider...\n==> coreos-vagrant-01: Finding flavor for server...\n==> coreos-vagrant-01: Finding image for server...\n==> coreos-vagrant-01: Launching a server with the following settings...\n==> coreos-vagrant-01: -- Tenant : elastx.se\n==> coreos-vagrant-01: -- Name : coreos-vagrant-01\n==> coreos-vagrant-01: -- Flavor : m1.small\n==> coreos-vagrant-01: -- FlavorRef : bff4c362-1a64-4895-bcbe-89b437815934\n==> coreos-vagrant-01: -- Image : coreos-444.4.0\n==> coreos-vagrant-01: -- ImageRef : be5c10e9-80c5-4646-80bf-c0123243a4ef\n==> coreos-vagrant-01: -- KeyPair : test\n==> coreos-vagrant-01: Waiting for the server to be built...\n==> coreos-vagrant-01: Using floating IP 88.80.174.175\n==> coreos-vagrant-01: Waiting for SSH to become available...\n/Users/gungus/.vagrant.d/gems/gems/vagrant-openstack-provider-0.4.1/lib/vagrant-openstack-provider/action/create_server.rb:340:in `initialize': Network is unreachable - connect(2) (Errno::ENETUNREACH)\n```\n\n… I ignore the error; ssh seems to work fine.\n\n```\n$ vagrant ssh coreos-vagrant-01\nCoreOS (beta)\ncore@coreos-vagrant-01 ~ $\n```\n\nThat’s all for now. Feel free to send us an email if you have any questions. If you are interested in trying our ops:IaaS please [contact support](https://support.elastx.se/). Good luck with spinning up your CoreOS instances. \n"}],"slug":"spin-up-coreos-on-openstack-using-vagrant","url":"/se/news/spin-up-coreos-on-openstack-using-vagrant","lang":"se","filename":"spin-up-coreos-on-openstack-using-vagrant"},{"date":"2013-12-16","title":"Easy High Availability","case_slug":"Easy High Availability","intro":"If your application is business critical and you think high availability and disaster protection are too complex and expensive, then this is for you. We will show you how to set up two sites and use a couple of SaaS services for site fail-over and monitoring. These powerful tools will cost you the massive amount of about $10 per month.","body":[{"type":"body_image","src":"/assets/uploads/easy-high-availability-1.png"},{"type":"body_text","text":"So this is how it will work. By using a CDN (Content Delivery Network) and a monitoring system that tells the CDN where to send the traffic, we get a powerful and easy to use solution.\n\n
1. Create sites\n\nYou will create two sites, preferably at two different vendors or in two different regions, to make sure both sites are never down at the same time. A good example would be two Virtuozzo (formerly Jelastic) sites: Elastx in Sweden and one of the other 14 sites available around the world. Deploy your application on both sites and set up data replication between the sites.\n\nYou can choose to have an Active/Active or an Active/Passive setup.\n\nI will not cover database replication now, but I will get back to it in a new blog post discussing some options and recommendations. Please contact us if you want to discuss options for data synchronization, as it depends a lot on the application and database you use.\n\nIn this example we created two sites with the following URLs and IPs.\n\n[http://site1.jelastic.elastx.net](http://site1.jelastic.elastx.net/) 88.80.172.123\n\n[http://site2.jelastic.elastx.net](http://site2.jelastic.elastx.net/) 88.80.172.124\n\nAnd the main site address we will use is [http://jelastic.se](http://jelastic.se/)\n\n2. Configure CDN\n\nWe need an external service to redirect traffic to the sites that are available. By using a CDN you will at the same time also get better response times from other geographic regions. In this example we are using Cloudflare. They have a free plan and also a very nice price model for their other plans. They also have good geographic coverage, with a local site here in Sweden.\n\n1. Register an account at Cloudflare.\n2. Sign in and follow the instructions on how to transfer your domain to Cloudflare; it can take up to 24h. You have to transfer your domain to give Cloudflare control of your records and also to protect it from DNS attacks.\n3. Save the account API key for later use; you will find it under “Account”."},{"type":"body_image","src":"/assets/uploads/easy-high-availability-2.png"},{"type":"body_text","text":"We are using public IP addresses in this example, but you can use a shared IP and add CNAME records instead of A records.\n\n\\- If you want to configure an Active/Active solution, add two A records with the same name.\n\n\\- If you want an Active/Passive solution, add a single A record.\n\nYou can add a CNAME for www or other names."},{"type":"body_image","src":"/assets/uploads/easy-high-availability-3.png"},{"type":"body_text","text":"Now you need to get the IDs of your DNS entries using the Cloudflare API. Here is how you do it using cURL, which is available for every OS. Replace tkn with your API key, email with your account email, and z with your domain name.\n\n`curl https://www.cloudflare.com/api_json.html -d \"a=rec_load_all\" -d \"tkn=1f2eedac1c9bd7071f459effd8ad313bb38da\" -d \"email=joakim.ohman@elastx.se\" -d \"z=jelastic.se\"`\n\nYou can pipe the result to “python -mjson.tool” to get pretty output.\n\nFind the “rec_id” for the domain record(s).\n\n3. Configure Monitoring\n\nWe want to monitor the sites, and we also need a way to take action when a site goes down. In this example we are using Site24x7, which is the only SaaS monitoring service I have found where you can configure custom URL alert actions, and it is a good and stable service.\n\n1. Register an account at Site24x7.\n\nYou need at least a Professional account to get custom alerts. You can start with the free 30 day trial.\n\n
2. Configure custom alert actions that will modify the DNS entries at Cloudflare, so that traffic is only directed to available sites.\n\nSelect the “Alerts” tab and then “Add Action”.\n\nFor an A/A solution, create 4 actions with the following configuration.\n\nConfigure all actions with:\n\nURL: https://www.cloudflare.com/api_json.html\n\nForm Submission Method: POST"},{"type":"body_image","src":"/assets/uploads/easy-high-availability-4.png"},{"type":"body_text","text":"For an A/P solution, create 2 actions with the following configuration."},{"type":"body_image","src":"/assets/uploads/easy-high-availability-5.png"},{"type":"body_text","text":"You can also try this using curl, just to make sure it works before we try the whole solution. Here is one example.\n\n`curl https://www.cloudflare.com/api_json.html -d \"a=rec_edit\" -d \"tkn=1f2eedac1c9bd7071f459effd8ad313bb38da\" -d \"id=101849947\" -d \"email=joakim.ohman@elastx.se\" -d \"z=jelastic.se\" -d \"type=A\" -d \"name=failed\" -d \"content=88.80.172.123\" -d \"service_mode=1\" -d \"ttl=1\"`\n\nThen check the Cloudflare records and make sure they were updated as expected.\n\n3. Create monitors for both sites.\n\nSelect the “Home” tab, “New Monitor” and then “Website”.\n\nFor an A/A solution, create 2 monitors with the following configuration."},{"type":"body_image","src":"/assets/uploads/easy-high-availability-6.png"},{"type":"body_text","text":"4. Now test it.\n\nTake down site 1, wait for the alert and check in Cloudflare that the DNS record was updated as expected.\n\nBring site 1 up again and check the DNS record once more.\n\nIf everything worked as planned, then you are done!\n\nNow you have High Availability and Disaster Protection!\n\nJoakim Öhman"}],"image":"/assets/uploads/article-hero-13.jpg","slug":"easy-high-availability","url":"/se/news/easy-high-availability","lang":"se","filename":"easy-high-availability"},{"date":"2013-03-18","title":"Now in Sweden - True autoscaling and volume discounts","case_slug":"Now in Sweden True autoscaling and volume discounts","intro":"“With these changes Virtuozzo (formerly Jelastic) gets even further ahead of its competition and becomes extremely cost effective for small and large-scale production deployments alike.” - Ruslan Synytsky, COO, Virtuozzo","tags":["wordpress"],"body":[{"type":"body_text","text":"“Fair pricing is vital to make any business sustainable yet cost-effective for customers,” says Ruslan Synytsky, COO, Virtuozzo (formerly Jelastic). “Last year we became the first PaaS to launch vertical scaling of servers and true utility-based pricing. With our latest release, we are adding the ability to set both minimal and maximal scaling limits, get volume discounts, and immediately see the projected costs the moment you create or change Virtuozzo environments.”\n\nThe new model contrasts sharply with conventional industry pricing practices. Developers will find it more attractive than existing models for a number of reasons:\n\n**Free is never free**: Many supposedly competitive cloud hosting vendors attract developers by making it easy to run new apps for free. But as apps start growing, they need more and more resources. And those additional resources are far from free. In fact, at that point the formerly “free” pricing usually becomes very expensive, as pricing is rarely adapted to large-scale production use.\n\n**With Virtuozzo, automatic volume discounts make scaling cheaper**: Virtuozzo can give automatic discounts depending on the amount of resources that the application consumes.
So unlike with the “free to start, expensive to scale” platforms, higher-volume usage is automatically cheaper. Larger applications with higher resource consumption and commitment levels automatically get a lower price, making Virtuozzo perfect for large-scale production use.\n\n**Per-machine pricing is bad for customers**: The computer-based (per-machine) pricing model commonly used in the industry today means that customers often have to overpay, choosing bigger machines than they actually need because they have no other option.\n\n**With Virtuozzo, the only true consumption-based PaaS, customers pay only for the resources they need:** Virtuozzo is the only Platform-as-a-Service on the market today that has built-in automated vertical scaling. This means customers do not have to guess the machine size they need. Unlike with per-machine or other models, they can let Virtuozzo add memory, CPU and disk space on the fly. And they pay only for their actual hourly consumption.\n\n**Virtuozzo customers can now set lower and upper limits to get the best price for their usage patterns**: Beginning with version 1.9, Virtuozzo lets customers specify the lower and upper scaling limits for all the servers they use within the platform. The higher customers set their minimum resource-use level, the bigger the discount they get. At the same time, the maximum level still allows for overage in case the application needs to scale up to serve increased usage. This lets customers set limits that optimize price and operational efficiency.\n\n**Virtuozzo provides cost projections each time the customer creates a new environment or makes changes**: Any time customers change their configurations, Virtuozzo immediately displays both the minimum and the maximum expected monthly bill. This makes the pricing both fair and predictable.\n\nBelow is our new volume discount price model:"},{"type":"body_image","src":"/assets/uploads/pricemodel-jelastic.png"},{"type":"body_text","text":"## Other features in the new Virtuozzo 1.9 platform\n\n**Easier access to application files with FTP/FTPS**: The new version of our platform offers support for the FTP/FTPS protocols. This provides a fast, easy and reliable way of sharing files and transferring data to and from your servers in Virtuozzo environments.\n\n**Higher performance through file caching**: Virtuozzo 1.9 supports caching in NGINX (for both the NGINX balancer and the NGINX PHP server). This improves performance by reducing resource access times and increasing the speed at which resources are built.\n\n**More enterprise-ready servers with built-in TomEE**: We’ve added Apache TomEE, the enterprise edition of Apache Tomcat, to meet demand for a more enterprise-ready server. This provides you with the best technology stack that can be deployed to a simple Java EE container. TomEE is a full-blown app server, but retains the simplicity of Tomcat and avoids architectural overhead.\n\n**More databases with MariaDB 10.0**: Virtuozzo 1.9 is the first PaaS offering MariaDB 10.0, the most advanced database in the SQL world. This database offers developers the advantages of multi-source replication, dynamic columns and MariaDB Galera Cluster. It is even faster, more fault tolerant and has more flexible database replication than previous MariaDB versions.\n\nAs you can see, we always consider our customers’ opinions and try to satisfy their needs.
Thanks to our users, we keep getting better!\n\nTry the free tier of our platform: sign up [now](https://app.jelastic.elastx.net/?signup=true)."}],"image":"/assets/uploads/switch_exhaust.jpeg","slug":"now-in-sweden-true-autoscaling-and-volume-discounts","url":"/se/news/now-in-sweden-true-autoscaling-and-volume-discounts","lang":"se","filename":"now-in-sweden-true-autoscaling-and-volume-discounts"}],"footerData":{"se":{"contact":{"title":"Kontakt","address_1":"Katarinavägen 15","address_2":"116 45 Stockholm","rows":[{"row":"VAT: SE556906561701"},{"row":"Telefon: [+46 8 557 728 10](tel:+46855772810)"},{"row":"Epost: [info@elastx.se](mailto:info@elastx.se)"}]},"links":{"title":"Länkar","extra_footer_links":[{"targetBlank":true,"label":"Karriär","link":"https://career.elastx.se/"}]},"social":{"title":"Sociala kanaler","links":[{"label":"LinkedIn","url":"https://www.linkedin.com/company/elastx"},{"label":"YouTube","url":"https://www.youtube.com/@Elastx"}]},"cert_logos":{"logos":[{"logo":"/assets/uploads/iso-27001-1.png","link":"/information-security-policy"},{"logo":"/assets/uploads/iso-14001-1.png","link":"/environment-policy"}]},"external_cert_logos":{"logos":[{"logo":"/assets/uploads/green-web-foundation.png","link":"https://www.thegreenwebfoundation.org/green-web-check/?url=elastx.se"}]},"menulinks":[{"label":"Hem","link":"/"},{"label":"Om oss","link":"/about-elastx"},{"label":"Kontakt","link":"/contact-us"},{"label":"Tjänster","link":"/overview"},{"label":"Priser","link":"/pricing"},{"label":"Partners","link":"/partners"},{"label":"Nyheter","link":"/news"},{"label":"Events","link":"/events"}]},"en":{"contact":{"title":"Contact","rows":[{"row":"VAT: SE556906561701"},{"row":"Phone: [+46 8 557 728 10](tel:+46855772810)"},{"row":"Email: [info@elastx.se](mailto:info@elastx.se)"}],"address_1":"Katarinavägen 15","address_2":"116 45 Stockholm"},"links":{"title":"Links","extra_footer_links":[{"targetBlank":true,"label":"Career","link":"https://career.elastx.se/"}]},"social":{"title":"Social channels","links":[{"label":"LinkedIn","url":"https://www.linkedin.com/company/elastx"},{"label":"YouTube","url":"https://www.youtube.com/@Elastx"}]},"external_cert_logos":{"logos":[{"logo":"/assets/uploads/green-web-foundation.png","link":"https://www.thegreenwebfoundation.org/green-web-check/?url=elastx.se"}]},"cert_logos":{"logos":[{"logo":"/assets/uploads/iso-27001-1.png","link":"/information-security-policy"},{"logo":"/assets/uploads/iso-14001-1.png","link":"/environment-policy"}]},"menulinks":[{"label":"Home","link":"/"},{"label":"About us","link":"/about-elastx"},{"label":"Contact","link":"/contact-us"},{"label":"Services","link":"/overview"},{"label":"Pricing","link":"/pricing"},{"label":"Partners","link":"/partners"},{"label":"News","link":"/news"},{"label":"Events","link":"/events"}]}}}