A private openQRM Cloud use-case for a developer team
In my post about Devopsdays I left out Matt Rechenburg's presentation on openQRM, a tool for provisioning appliances on any kind of virtualisation platform, or just on bare metal. The company that created openQRM stopped developing it just after Matt had convinced the owners to make the product open source; Matt stayed on as coordinator of the open source project. openQRM currently has no commercial support, but the developers are available on a time and materials basis.
2009-11-09
Devopsdays 2009
I returned home from Devopsdays late Saturday night, and having been digitally challenged during the trip, I will try to summarise my impressions of the weekend over the next couple of days.
Devopsdays was a small conference about a couple of emerging themes combining Development and Operations:
The first theme is the realisation that if you want to build a scalable infrastructure, you need to automate the deployment and administration of that infrastructure and of the applications that run on it. System configuration becomes just another type of code to be developed, tested, integrated and deployed. Deployment becomes release, configuration becomes development, the ITIL processes for Incident and Problem Management become debugging, and Change Management becomes release management.
The second theme is the advent of Agile over the last ten years. I've only recently encountered the Agile development movement, and although it's far from a silver bullet, it does appear to address some of the essential issues in software engineering, in the sense of Fred Brooks' original analysis.
Originally intended as a lightweight alternative to the waterfall model of software development, Agile transposes the stages of the waterfall model into concurrent processes, introducing feedback everywhere. Requirements analysis continues long after coding starts. Rapid prototyping, user stories, continuous integration and test-first design are just a few of the methods Agile uses to shorten the feedback loop for developers. But while methods and processes are important, the real focus of the Agile movement is on communication and collaboration, in the end making developers and users jointly responsible for the result.
The Devops concept (for want of a better name) merges the two approaches: how to apply Agile principles to system administration, and how to get people in Operations and Development to collaborate on deployment.
At the conference, the two-day programme was split in two: talks and presentations in the morning, and free-form discussions and presentations in Open Space format in the afternoon.
I would say that the talks and discussions focused on three themes:
- (Open Source) tools for automating IT Operations
- Collaboration between Development and Operations
- Agile methods and principles for Operations.
Lindsay Holmwood explained his work on cucumber-nagios, which combines Cucumber, a tool and language for expressing tests in almost human-readable scripts, with the Nagios monitoring tool, resulting in behaviour-driven monitoring. It was a very fast-paced presentation.
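Under the hood, tools in this space produce Nagios plugins, which follow a simple convention: print one status line and exit with 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). A minimal hand-rolled check in that behaviour-driven style might look like the sketch below (Python rather than the Ruby/Cucumber stack the talk covered; the URL and messages are illustrative):

```python
import sys
import urllib.request

# Nagios plugin convention: exit code 0 = OK, 1 = WARNING, 2 = CRITICAL,
# 3 = UNKNOWN, plus a one-line status message on stdout.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_page(url, expected_text):
    """Behaviour-style check: 'When I fetch <url>, then I should see <text>'."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read().decode("utf-8", errors="replace")
    except Exception as exc:
        return CRITICAL, f"CRITICAL - could not fetch {url}: {exc}"
    if expected_text in body:
        return OK, f"OK - {url} contains {expected_text!r}"
    return CRITICAL, f"CRITICAL - {expected_text!r} not found on {url}"

# As a Nagios plugin, the script would end with something like:
#   status, message = check_page("http://example.com/", "Example Domain")
#   print(message)
#   sys.exit(status)
```

The point of the behaviour-driven framing is that the check reads like a user story ("when I visit the page, I should see the text") rather than a low-level probe.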
Teyo Tyree of Reductive Labs talked about the principles behind practical infrastructure automation, referring to the "James White" Manifesto on Infrastructure (now up at GitHub). Of course he focused on tools like Cfengine, Chef, and Reductive Labs' own Puppet, but he also sketched the challenges for the big enterprise with a multitude of services, commercial application stacks, and many platforms. He strongly suggested starting with baby steps: implement configuration tools like Puppet in a reporting state first, and use the reporting mechanism to build a history of change from within the system. Leverage the legacy CMDB, and work within established change control policies.
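The "reporting state first" advice corresponds to what Puppet calls no-op mode: compute the drift between desired and actual state and report it, without changing anything. A toy sketch of that idea, assuming made-up resource keys that are not Puppet's actual format:

```python
# A minimal sketch of "report before you enforce": compare desired state to
# actual state and log the drift, changing nothing. The resource keys and
# function names here are illustrative, not Puppet's API.

def drift_report(desired, actual):
    """Return a list of human-readable drift entries, one per mismatch."""
    report = []
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            report.append(f"{key}: have {have!r}, want {want!r}")
    return report

desired = {"/etc/motd:mode": "0644", "ntp:ensure": "running", "ssh:port": 22}
actual  = {"/etc/motd:mode": "0600", "ntp:ensure": "running"}

for line in drift_report(desired, actual):
    print("NOOP would change ->", line)
```

Accumulating such reports over time gives exactly the history of change Teyo described, and makes the eventual switch to enforcing mode a much smaller leap of faith.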
Agile coach Rachel Davies focused on Agile principles, methods and tools, in particular on user stories and how to use them to identify non-functional requirements (requirements that do not add measurable value to the product, but improve it by reducing risk).
Mattias Skarin presented a case study of using Kanban to get Operations and Development to collaborate more closely. The key to Kanban is twofold: visualise task planning, and put a hard limit on the amount of work in progress. Importantly, there is no single best design for a Kanban board; the team has to create what works best for them. After a couple of iterations, or sprints, the team may decide to add a category of work, or drop a phase from the progress axis.
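The two mechanics can be sketched in a few lines: a board as named columns, plus a start operation that refuses to exceed the work-in-progress limit. The column names, limit, and tasks below are invented for illustration; as noted above, each team shapes its own board:

```python
# Toy Kanban board: visualised columns plus a hard work-in-progress limit.

class KanbanBoard:
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.columns = {"todo": [], "in_progress": [], "done": []}

    def add(self, task):
        self.columns["todo"].append(task)

    def start(self, task):
        """Pull a task into 'in_progress', refusing to exceed the WIP limit."""
        if len(self.columns["in_progress"]) >= self.wip_limit:
            raise RuntimeError("WIP limit reached: finish something first")
        self.columns["todo"].remove(task)
        self.columns["in_progress"].append(task)

    def finish(self, task):
        self.columns["in_progress"].remove(task)
        self.columns["done"].append(task)

board = KanbanBoard(wip_limit=2)
for t in ("deploy script", "monitoring", "backup job"):
    board.add(t)
board.start("deploy script")
board.start("monitoring")
# board.start("backup job") would now raise: the limit forces finishing first.
board.finish("deploy script")
board.start("backup job")
```

The hard limit is what creates the collaboration pressure: when the in-progress column is full, the only way to start new work is to help finish something already on the board.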
Chris Read of ThoughtWorks told us about build pipelines, and how to take continuous integration several steps further into continuous deployment.
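The build-pipeline idea can be sketched as a list of ordered stages: a commit is promoted to the next stage only when the current one passes, and the first failure stops the pipeline. The stage names and checks below are invented placeholders for real compile, test, and deploy steps:

```python
# Minimal build-pipeline sketch: each commit flows through ordered stages,
# and a failure stops promotion to the next stage.

def run_pipeline(commit, stages):
    """Run stages in order; return (passed_stages, failed_stage)."""
    passed = []
    for name, check in stages:
        if not check(commit):
            return passed, name   # stop the pipeline at the first failure
        passed.append(name)
    return passed, None           # commit is fit for deployment

stages = [
    ("compile", lambda c: c["compiles"]),
    ("unit tests", lambda c: c["tests_pass"]),
    ("deploy to staging", lambda c: c["staging_ok"]),
]

good = {"compiles": True, "tests_pass": True, "staging_ok": True}
bad = {"compiles": True, "tests_pass": False, "staging_ok": True}
print(run_pipeline(good, stages))  # every stage passes, ready to deploy
print(run_pipeline(bad, stages))   # stops at the unit-test stage
```

Continuous deployment then amounts to adding production deployment as the final stage, so that any commit surviving the whole pipeline goes live.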
During Open Space, Jochen Maes started a hot discussion about the merits of distributed version control, and how to minimise the risk of branching: even when developing in their own copy of the repository, he expects his developers to check in frequently, and to rebase against the main repository at least once an hour. This way individual developers run their own tests frequently, and merge updates from the main source tree into their own copy without polluting the upstream. This requires strict discipline, but the result is that any time a change breaks the build, you can always fall back to the previous build. Also, because merging in a distributed version control system like Git or Mercurial involves merging the complete history (and not just the current state), it is easy to identify which change was responsible.
UPDATE: It so happens that George Neville-Neil just posted an article about this in his Kode Vicious column at ACM's Queue: Merge Early, Merge Often.
[edited]