At the time of this post, the UK is taking steps to exit from unprecedented Coronavirus lockdown measures. Much of the UK workforce is still working from home, with mainly key workers operating – at risk – in public. Many industries have shut down completely. Consequently, many businesses are reflecting on what happens next and how they can better mitigate future pandemic events.
In recent years, Terraform has emerged as the de facto standard for defining and managing cloud infrastructure. It is one of four primary tools offered by HashiCorp (Terraform, Vault, Consul and Nomad) and underpins the workflows that make up their Cloud Operating Model.
Since its first release in 2014, the wider Terraform community has embraced frequent releases, and this past year has been no exception. HashiCorp announced Terraform 0.12 in May 2019 and, as of writing this post, the official release is 0.12.9.
Creating and managing a Public Key Infrastructure (PKI) can be a very straightforward task if you use the appropriate tools. In this blog post, I’ll cover the steps to easily set up a PKI with HashiCorp Vault, and use it to secure a Kafka cluster.
While Prometheus has fast become the standard for monitoring in the cloud, making Prometheus highly available can be tricky. This blog post will walk you through how to do this using the open source tool Thanos.
As traditional operations has embraced the concept of code, it has benefited from ideas already prevalent in developer circles, such as version control. Version control means that not only can you see what the infrastructure was, but you can also have changes reviewed by your peers before they go live – a practice known to most developers as Pull Request (PR) reviews.
Google Cloud Functions is the Google Cloud Platform (GCP) function-as-a-service offering. It allows you to execute your code in response to event triggers – HTTP, PubSub and Storage. While it currently only supports Node.js code for execution, it has proved very useful for running low-frequency operational tasks and other batch jobs in GCP.
The recent 0.10.0 release of HashiCorp Terraform saw a significant change to the way providers are managed. Specifically, the single open source code repository for Terraform has been split into a core repository and multiple provider repositories.
DevOps has swept the tech landscape. Now, many are discovering the benefits of programmable infrastructure. I have been lucky to work on many projects where we’ve taken advantage of tools such as Terraform, Ansible, or Chef.
Sometimes, it can be difficult to write automated tests for parts of your application due to complexities introduced by an external dependency. The dependency may be flaky, may impose rate limits, or may require sensitive information that you don’t want to expose outside of your production environment. To get around this, teams might manually stub the service or use mocks – but the former is tedious and error prone, while the latter doesn’t test collaboration at all.
Several of us from the OpenCredo team were in attendance at the inaugural EU edition of the DevOps Enterprise Summit conference. We have been big fans of the two previous US editions, and have watched the video recordings of the talks (2014, 2015) with keen interest, as many of our DevOps transformation clients are very much operating in the ‘enterprise’ space.
Businesses exist to make money; their purpose isn’t just to generate revenue, but to create profits, now and in the future. Generating profits means delivering products or services that people want to buy. The creation of what people want is the entire purpose of delivery pipelines. (NB: The rest of this article will use ‘product’ to refer to both products and services.)
Many of our clients are currently implementing applications using a ‘microservice’-based architecture. Increasingly we are hearing from organisations that are part way through a migration to microservices and want our help with validating and improving their current solution. These ‘microservices checkup’ projects have revealed some interesting patterns, and because we have experience of working in a wide range of industries (and bring ‘fresh eyes’ to a project), we are often able to work alongside teams to make significant improvements and create a strategic roadmap for the future.
DevOps is 2016’s tech holy grail – unified development and operations, both working to deliver what the business needs, quickly, reliably, and adaptably. Done well, DevOps transforms the way organisations work; it helps break down barriers between tech teams, and between technology and the rest of the business. Good DevOps is the antidote to increasing segmentation and specialisation within companies. With the promised benefits, is it any wonder that senior managers are pushing for it in organisations spanning all sizes and industries?
Good consulting is, by its nature, an act of collaboration. We recently helped a company with a variety of challenges – some architecture, some coding, some systems, some people, some process (normal consultancy challenges) – unique to this client. During the project, we formalised some things we had thought before but which had never crystallised: all the work we did was transformative. Whether it’s a code review, process review, DevOps implementation, or outright transformation, the primary goal is the same – improving flow. Flow (sometimes known as throughput) is the movement of raw materials through a system to become finished goods. Its analogue in the service industry is the movement of customer requirements through to a usable solution. And we help improve it.
In the rush to embrace DevOps, many organisations seek out tools to help them achieve DevOps nirvana; the magical tools that will unify Development and Operations, stop the infighting, and ensure collaboration. This search for tools to solve problems exists in many domains, but seems particularly prevalent in IT (it may be real, or a reflection of my exposure to IT). The temptation to embrace new tools as a panacea is high, because the problems in IT seem so pervasive and persistent.
I was privileged to be invited to speak on the topic of “Defining DevOps” by the London Technology Transformation Network, alongside long-time friend and Devoxx conference contributor Dan Hardiker. I had a great time presenting at the event, and the questions and feedback we received after the main talks were superb – I took away lots to think about, and I believe the audience did too.
It was once again a privilege to present at the annual ‘muCon 2015’ microservices conference held in London (at the shiny new Skillsmatter CodeNode venue). Based on feedback from talks I gave earlier in the year, I presented a completely new version of my ‘The Business Behind Microservices’ talk, which focuses on the organisational and people side of implementing a microservice-based application.
In some companies, the inevitable rapidly became accepted as the way to do things, and both development and IT operations worked together to figure out how to collaborate on building systems that satisfied development’s desire for change and operations’ desire for stability. Outsourcing infrastructure, and all it implied, gave rise to DevOps – the unification of business needs, developer delivery, and operational capacity – but in companies where the operations teams weren’t quite as quick to move, it also gave rise to something else: Shadow IT.
DevOps, Cloud and Microservices: “All Hail the Developer King/Queen”
Last week Steve Poole and I were once again back at the always informative JAX London conference, talking about DevOps and the Cloud. This presentation built upon our previous DevOps talk from last year, and focused on the experiences that Steve and I have encountered over the past year (the slides for our 2014 “Moving to a DevOps Mode” talk can be found on SlideShare, and the video on Parleys).
Microservices, Debugging Containers and Software Development Methodologies
Once again I’m privileged to be speaking at the premier Java conference, JavaOne in San Francisco. This year I will be presenting (at least) three conference sessions: “Building a Microservice Ecosystem”, “Debugging Java Apps in Containers” and “Thinking, Fast and Slow, with Software Development”. I say ‘at least’ three talks as I usually get press-ganged volunteered into helping out at other talks and BoF sessions, but this is simply a sign of the great community spirit and a large group of friends involved with this conference!
DevOps is transformative. This (hopefully) won’t be true forever, but it is for now. While the modern management practice of separating development and operations (and, to a lesser extent, everyone else) prevails, tearing down the walls that separate them will remain transformative. In company after company, management and front-line staff are coming to realise that keeping inherently interdependent functions separate is a recipe for blame, shifted responsibility, and acrimony. It’s easy to divvy up a company based on function. To many people, it seems the most logical way to do it. Ops does operations, Dev does development, Marketing markets, etc. It seems much harder to do it any other way. So why do it?
The drive for change comes from everywhere. Within IT departments, developers are asking for more control, and want faster response times than traditional gatekeeper IT services can provide. Management hears about new technologies promising more rapid deployments, better ROI, reduced costs, increased efficiency, and greater scalability, and wants them for its customers. On top of that, systems age, technologies lose their lustre, and technical debt builds up, while it simultaneously gets harder to recruit people from a shrinking talent pool, all of which creates a case for change from within the technology itself.
Recently I was working on a project that used SaltStack for configuration management and Consul for service discovery. It occurred to me that Consul’s key/value store would be a great place to keep the data needed for my Salt runs, but unfortunately Consul was not supported as an official SaltStack data store at that point in time. Being an open source project, however, this provided an excellent opportunity to contribute back, and this blog post provides some details on how this works, as well as a practical demo of how you can take advantage of Consul as an external data store.
If you are operating in the programmable infrastructure space, you will hopefully have come across Terraform, a tool from HashiCorp which is primarily used to manage infrastructure resources such as virtual machines, DNS names and firewall settings across a number of public and private providers (AWS, GCP, Azure, …).
Last week I was privileged to be able to present my “Thinking Fast and Slow with Software Development” talk at the inaugural Software Circus conference in Amsterdam. The conference was amazing, and I’ll write more about this later, but in this post I was keen to share the presentation slides and the thinking behind this talk…
For years, OpenCredo has been working with organisations to help them introduce new technologies, and more effective development practices, to their IT teams. This has met with a great deal of success across companies of various sizes. During these projects, we have consistently noticed that the changes we make reach beyond IT in their impact and effects.
Working with OpenCredo clients, I’ve noticed that even if you are one of the few organisations that can boast ‘Infrastructure as Code’, perhaps it’s only true of your VMs, and you likely have ‘bootstrap problems’. What I mean by this is that you require some cloud infrastructure to already be in place before your VM automation can go to work.
Recently I have started looking into SaltStack as a solution that does both configuration management and orchestration. It is a relatively new project, started in 2011, but it has a growing fanbase among sysadmins and DevOps engineers. In this blog post I will look into Salt as a promising alternative, comparing it to Puppet as a way of exploring its basic set of features.
A common issue with Puppet manifests is a clash of resource definitions that appears in the Puppet log file as: ‘Duplicate definition: File[resource-name] is already defined; cannot redefine at…’
This issue is likely to occur when you build a service stack from reusable Puppet modules and try to alter a resource that has already been defined in one of the modules the service depends on.
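As a minimal sketch of how this clash arises (the module and file names here are hypothetical), consider a shared module and a dependent service module that both declare the same resource:

```puppet
# modules/base/manifests/init.pp
# A shared, reusable module that manages the application's config file.
class base {
  file { '/etc/app/app.conf':
    ensure => file,
    owner  => 'app',
  }
}

# modules/myservice/manifests/init.pp
# A service module that depends on 'base' but tries to alter the same file.
class myservice {
  include base

  # This second declaration fails catalog compilation with:
  # Duplicate definition: File[/etc/app/app.conf] is already defined
  file { '/etc/app/app.conf':
    ensure => file,
    mode   => '0600',
  }
}
```

Common guards include wrapping the second declaration in a defined() check or using ensure_resource() from puppetlabs-stdlib, although parameterising the shared module so that dependants can pass in the attributes they need is usually the cleaner fix.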
Building a service from reusable Puppet modules has a couple of benefits. It saves you time, as you don’t have to write new modules every time you build a new service, and it improves quality, as you build your service from modules that are trusted, tried, and tested.