92 items found: Search results for "github" in all categories
June 1, 2022 | Open Source, Software Consultancy
Watch ‘The Zen School of GitHub Actions’, a talk given by our Lead Consultant Jonathan Ruckwood at Devoxx UK 2022.
September 21, 2016 | DevOps
Sometimes, it can be difficult to write automated tests for parts of your application due to complexities introduced by an external dependency. It may be flaky, have some sort of rate limiting, or require sensitive information which we don’t want to expose outside of our production environment. To get around this, teams might take the approach of manually stubbing the service or using mocks – but the former is tedious and error-prone, whereas the latter doesn’t test collaboration at all.
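To make the stubbing approach concrete, here is a minimal sketch using WireMock as an in-process stand-in for an external HTTP dependency; the library choice, endpoint and payload are illustrative assumptions rather than what the post itself uses.

```java
// Minimal sketch: stubbing a flaky external HTTP API with WireMock.
// The /rates endpoint and its payload are hypothetical.
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.configureFor;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.stubFor;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class RateServiceStub {
    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(8089); // stand-in for the external dependency
        server.start();
        configureFor("localhost", 8089);

        // Tests now hit http://localhost:8089/rates and get a deterministic response,
        // with no rate limits and no production credentials involved.
        stubFor(get(urlEqualTo("/rates"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"gbpUsd\": 1.27}")));

        // ... run the code under test against the stub, then:
        server.stop();
    }
}
```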
May 18, 2023 | Blog, Kubernetes
Check out Matthew Revell-Gordon’s latest blog as he explores building a local Kubernetes test cluster to better mimic cloud-based deployments, using Colima, Kind, and MetalLB.
Read Matt Farrow’s blog as he explores the potential for using Open Policy Agent to filter and mask data being sent to and read from Apache Kafka.
January 31, 2022 | Blog, Data Engineering
Graph databases fall into two camps. On one side is RDF, which is strict about its format and somewhat limited in its extensibility. On the other is LPG, which lets you attach labels to relationships. With its recent extension, RDF now allows users to add properties too, becoming RDF*. In this blog, Ebru explores the structural and performance differences between LPG and RDF*.
December 5, 2021 | Cloud, Kubernetes
Kubernetes’ second release in 2021, version 1.22, has been out for a little while now and with 1.23 on its way, we thought we’d take a look back. Kubernetes 1.22 was a highly comprehensive release with 53 enhancements in all three graduation levels: 13 features have graduated to stable, 24 enhancements reached beta status, and 16 new features have been accepted into the alpha stage.
The latest version has some noteworthy security features such as running Kubelet without root access, pod security policies, and seccomp. There are also a couple of deprecated and removed APIs. In this blog, we’ll discuss the significant changes in v1.22, as well as how to handle the removed APIs.
November 4, 2021 | Kubernetes
We always read that ‘security is everyone’s responsibility’. For any organisation, big or small, security should always be the primary concern—not a mere afterthought. In terms of Kubernetes, securing a cluster is challenging because it has so many moving parts and, apart from securing our Kubernetes environment, we also want to control what an end-user can do in our cluster.
To achieve these goals, we can start with the built-in features provided by Kubernetes like Role-Based Access Control (RBAC), Network Policies, Secrets Management, and Pod Security Policies (PSP). But we know these features are not enough. For example, we may want specific policies like ‘all pods must have specific labels’. And even if we have the policies in place, the next big question is how to enforce them on our Kubernetes cluster in an easy and repeatable manner.
In this blog post, we’ll address this challenge and other questions pertaining to Open Policy Agent (OPA) and how it can integrate with Kubernetes.
September 2, 2021 | Blog, Cloud, Kubernetes
July 20, 2021 | Blog, Data Engineering, Kafka
Message and event-driven systems provide an array of benefits to organisations of all shapes and sizes. At their core, they help decouple producers and consumers so that each can work at their own pace without having to wait for the other – asynchronous processing at its best.
In fact, such systems enable a whole range of messaging patterns, offering varying levels of guarantees surrounding the processing and consumption options for clients. Take for example the publish/subscribe pattern, which enables one message to be broadcast and consumed by multiple consumers; or the competing consumer pattern, which enables a message to be processed once but with multiple concurrent consumers vying for the honour – essentially providing a way to distribute the load. The manner in which these patterns are actually realised, however, depends a lot on the technology used, as each has its own approach and unique trade-offs.
In this article we will explore how this all applies to RabbitMQ and Apache Kafka, and how these two technologies differ, specifically from a message consumer’s perspective.
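As a rough illustration of the competing consumer pattern on the Kafka side, the sketch below shows a plain Java consumer: every process started with the same group.id competes for partitions of the topic, whereas giving each process its own group.id yields publish/subscribe behaviour. The broker address, topic and group names are assumptions for illustration.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class CompetingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed local broker
        props.put("group.id", "order-processors");         // same group.id => competing consumers
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));          // hypothetical topic name
            while (true) {
                // Each partition is assigned to exactly one member of the group,
                // so running several copies of this process spreads the load.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```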
May 19, 2021 | DevOps, Hashicorp, Open Source, Terraform Provider
Developing a Terraform provider is a great thing for a company to do as it allows customers to quickly integrate a product with their existing systems with very little friction. During development, occasionally there might be bugs and issues to fix, and it can be quite difficult to work out what is causing them. In this post, I outline how you can attach a debugger such as Delve to a Terraform provider to save time when solving these issues.
April 20, 2021 | Data Engineering, Machine Learning, Software Consultancy
Our recent client was a Fintech who had ambitions to build a Machine Learning platform for real-time decision making. The client had significant Kubernetes proficiency, ran on the cloud, and had a strong preference for using free, open-source software over cloud-native offerings that come with lock-in. Several components were spiked with success (feature preparation with Apache Beam and Seldon for model serving performed particularly strongly). Kubeflow was one of the next technologies on our list of spikes, showing significant promise at the research stage and seemingly a good match for our client’s priorities and skills.
The platform slipped down the client’s priority list before the Kubeflow research was completed, so I wanted to see how that project might have turned out. Would Kubeflow have made the cut?
December 11, 2020 | Cloud, Cloud Native, Kubernetes, Microservices
“WebAssembly is a safe, portable, low-level code format designed for efficient execution and compact representation.” – W3C
In this blog, I’ll cover the different applications of Wasm and WASI, some of the projects that are making headway, and the implications for modern architectures and distributed systems.
October 29, 2020 | Cloud, Kubernetes, Open Source
While working with a client recently, we experienced some issues when attempting to make use of NLB external load balancer services on AWS EKS. I wanted to investigate whether these issues had been fixed in the upstream Kubernetes GitHub repository, or whether I could fix them myself, contributing back to the community in the process.
September 22, 2020 | AWS, Blog, Cassandra, Cloud, DevOps, Open Source
With the upcoming Cassandra 4.0 release, there is a lot to look forward to. Most excitingly, and following a refreshing realignment of the Open Source community around Cassandra, the next release promises to focus on fundamentals: stability, repair, observability, performance and scaling.
We must set this against the fact that Cassandra ranks pretty highly in the Stack Overflow most dreaded databases list and the reality that Cassandra is expensive to configure, operate and maintain. Finding people who have the prerequisite skills to do so is challenging.
Writing your own Kafka source connectors with Kafka Connect: in this blog, Rufus takes you on a code walk through the Gold Verified Venafi Connector, pointing out the common pitfalls along the way.
February 20, 2019 | DevOps, Hashicorp, Kafka, Open Source
Creating and managing a Public Key Infrastructure (PKI) could be a very straightforward task if you use appropriate tools. In this blog post, I’ll cover the steps to easily set up a PKI with Vault from HashiCorp, and use it to secure a Kafka Cluster.
February 5, 2019 | Cloud Native, DevOps, Kubernetes, Microservices, Open Source
While Prometheus has fast become the standard for monitoring in the cloud, making Prometheus highly available can be tricky. This blog post will walk you through how to do this using the open source tool Thanos.
May 31, 2018 | DevOps
As traditional operations has embraced the concept of code, it has benefited from ideas already prevalent in developer circles, such as version control. Version control brings the benefit that not only can you see what the infrastructure was, you can also have changes reviewed by your peers before they are made live – known to most developers as pull request (PR) reviews.
February 14, 2018 | Cloud
AWS announced a few new products for use with containers at re:Invent 2017, and of particular interest to me was a new Elastic Container Service (ECS) launch type called Fargate.
Prior to Fargate, when it came to creating a continuous delivery pipeline in AWS, using containers through ECS in its standard form was the closest you could get to an always-up, hands-off, managed style of setup. Traditionally ECS has allowed you to create a configured pool of “worker” instances, with it then acting as a scheduler, provisioning containers on those instances.
January 23, 2018 | Data Engineering, DevOps
Machine Learning is a hot topic these days, as can be seen from search trends. It was the success of DeepMind and AlphaGo in 2016 that really brought machine learning to the attention of the wider community and the world at large.
January 11, 2018 | Data Engineering
The last few years have seen Python emerge as a lingua franca for data scientists. Alongside Python, we have also witnessed the rise of Jupyter Notebooks, now considered a de facto data science productivity tool, especially in the Python community. Jupyter Notebooks started life as a university side project known as IPython, begun circa 2001 at UC Berkeley.
October 24, 2017 | Data Engineering
Cockroach Labs, the creators of CockroachDB, are coming to London for the first time since their 1.0 GA release in May 2017. They will be talking about “The Hows & Whys of a Distributed SQL Database” at the Applied Data Engineering meetup, hosted and run by us here at OpenCredo.
We have been interested in CockroachDB for a while now, including publishing our initial impressions of the release on our blog. We thought this would be the perfect time to do a bit of a Q&A before the event, so I posed a few questions to Raphael Poss, a core software engineer at Cockroach Labs.
August 9, 2017 | Cloud, DevOps, Terraform Provider
The recent 0.10.0 release of HashiCorp Terraform saw a significant change to the way providers are managed. Specifically, the single open source code repository for Terraform has been divided into core and multiple provider repositories.
August 8, 2017 | Cassandra
Recently, the sad news has emerged that Basho, which developed the Riak distributed database, has gone into receivership. This would appear to present a problem for those who have adopted the commercial version of the Riak database (Riak KV) supported by Basho.
June 15, 2017 | Data Engineering
CockroachDB is a distributed SQL (“NewSQL”) database developed by Cockroach Labs, and it has recently reached a major milestone: the first production-ready, 1.0 release. We at OpenCredo have been following the progress of CockroachDB for a while, and we think it has great potential to become the go-to solution for a general-purpose database in the cloud.
May 9, 2017 | Cassandra
Data analytics isn’t a field commonly associated with testing, but there’s no reason we can’t treat it like any other application. Data analytics services are often deployed in production, and production services should be properly tested. This post covers some basic approaches for the testing of Cassandra/Spark code. There will be some code examples, but the focus is on how to structure your code to ensure it is testable!
May 2, 2017 | Cassandra, Data Engineering
In my recent blog post I explored a few cases where using Cassandra and Spark together can be useful. My focus was on the functional behaviour of such a stack and what you need to do as a developer to interact with it. However, it did not describe any details about the infrastructure setup capable of running such Spark code, or any deployment considerations. In this post, I will explore this in more detail and offer some practical advice on how to deploy Spark and Apache Cassandra.
April 13, 2017 | Terraform Provider
Recently I’ve been doing a lot with Terraform; having briefly flirted with it in the past, it’s only now with v0.8.x that I’ve been seriously stepping out with it (and Azure, since you asked). In the main I think it’s great, especially as it means I don’t have to yak-shave with the AWS and Azure CLIs. However, I have started to bang my head against some of Terraform’s limitations, specifically around HCL (HashiCorp Configuration Language), which is used to define infrastructure in Terraform .tf files.
March 23, 2017 | Cassandra, Data Analysis, Data Engineering
In recent years, Cassandra has become one of the most widely used NoSQL databases: many of our clients use Cassandra for a variety of different purposes. This is no accident as it is a great datastore with nice scalability and performance characteristics.
However, adopting Cassandra as a single, one-size-fits-all database has several downsides. The partitioned/distributed data storage model makes it difficult (and often very inefficient) to perform certain types of queries or data analytics that are much more straightforward in a relational database.
March 23, 2017 | Data Engineering, Machine Learning
In previous blog posts we have provided examples of different types of acceptance test coverage: UI, API and performance. One area where automation is often lacking is validating the security of the application under test. This has been discussed in our post on non-functional testing, You Are Ignoring Non-functional Testing. In this post we will enhance the automation framework to quickly check for some common security flaws.
March 20, 2017 | DevOps
DevOps has swept the tech landscape. Now, many are discovering the benefits of programmable infrastructure. I have been lucky to work on many projects where we’ve taken advantage of tools such as Terraform, Ansible, or Chef.
January 26, 2017 | Data Engineering
Suppose you are given the task of writing code that fulfils the following contract:
January 24, 2017 | Cloud
This blog aims to provide an end to end example of how you can automatically request, generate and install a free HTTPS/TLS/SSL certificate from Let’s Encrypt using Terraform. Let’s Encrypt is a free, automated, and open certificate authority (CA) aiming to make it super easy (and free – did I say free!) for people to obtain HTTPS (SSL/TLS) certificates for their websites and infrastructure. Under the hood, Let’s Encrypt implements and leverages an emerging protocol called ACME to make all this magic happen, and it is this ACME protocol that powers the Terraform provider we will be using. For more information on how Let’s Encrypt and the ACME protocol actually work, please see how Let’s Encrypt works.
January 23, 2017 | Data Analysis
More often than not, people who write Go have some sort of opinion on its error handling model. Depending on your experience with other languages, you may be used to different approaches. That’s why I’ve decided to write this article, as despite being relatively opinionated, I think drawing on my experiences can be useful in the debate. The main issues I wanted to cover are that it is difficult to force good error handling practice, that errors don’t have stack traces, and that error handling itself is too verbose.
October 13, 2016 | Data Analysis
In Lisp, you don’t just write your program down toward the language, you also build the language up toward your program. As you’re writing a program you may think “I wish Lisp had such-and-such an operator.” So you go and write it. Afterward you realize that using the new operator would simplify the design of another part of the program, and so on. Language and program evolve together…In the end your program will look as if the language had been designed for it. And when language and program fit one another well, you end up with code which is clear, small, and efficient – Paul Graham, Programming Bottom-Up
September 27, 2016 | Cassandra, Data Engineering
If there is one thing to understand about Cassandra, it is the fact that it is optimised for writes. In Cassandra everything is a write including logical deletion of data which results in tombstones – special deletion records. We have noticed that lack of understanding of tombstones is often the root cause of production issues our clients experience with Cassandra. We have decided to share a compilation of the most common problems with Cassandra tombstones and some practical advice on solving them.
August 26, 2016 | Kubernetes
This post is the last in a series of three tutorial articles introducing a sample project that demonstrates how to provision Kubernetes on AWS from scratch, using Terraform and Ansible. To understand the goal of the project, it’s best to start with the first part.
August 26, 2016 | Kubernetes
This post is the second in a series of three tutorial articles introducing a sample project that demonstrates how to provision Kubernetes on AWS from scratch, using Terraform and Ansible. To understand the goal of the project, it’s best to start with the first part.
August 26, 2016 | Kubernetes
This post is the first in a series of three tutorial articles introducing a sample project that demonstrates how to provision Kubernetes on AWS from scratch, using Terraform and Ansible.
August 24, 2016 | Cassandra
At OpenCredo we are seeing an increase in adoption of Apache Cassandra as a leading NoSQL database for managing large data volumes, but we have also seen many clients experiencing difficulty converting their high expectations into operational Cassandra performance. Here we present a high-level technical overview of the major strengths and limitations of Cassandra that we have observed over the last few years while helping our clients resolve the real-world issues that they have experienced.
July 3, 2016 | DevOps
Several of us from the OpenCredo team were in attendance at the inaugural EU edition of the DevOps Enterprise Summit conference. We have been big fans of the two previous US versions, and have watched the video recordings of talks (2014, 2015) with keen interest as many of our DevOps transformation clients are very much operating in the ‘enterprise’ space.
June 24, 2016 | Software Consultancy
Akka has been designed with a Java API from the very first version. Though widely adopted, as a Java developer I think Akka has been mainly a Scala thing… until recently. Things are changing, and Akka is moving towards proper Java 8 support.
June 3, 2016 | Software Consultancy
In this post, I’m going to take something extremely simple, unfold it into something disconcertingly complex, and then fold it back into something relatively simple again. The exercise isn’t entirely empty: in the process, we’ll derive a more powerful (because more generic) version of the extremely simple thing we started with. I’m describing the overall shape of the journey now, because programmers who don’t love complexity for its own sake often find the initial “unfolding” stage objectionable, and then have trouble regarding the eventual increase in fanciness as worth the struggle.
May 31, 2016 | Kubernetes
Do you ever wake up and think to yourself: oh geez, Kubernetes is awesome, but I wish I could browse and edit my services and replication controllers using the file system? No? Well, in any case, this is now possible.
April 29, 2016 | Software Consultancy
In this post, I’ll demonstrate an alternative API which uses some of the advanced language features of the new Kotlin language from JetBrains. As Kotlin is a JVM-based language, it interoperates seamlessly with Concursus’s Java 8 classes; however, it also offers powerful ways to extend their functionality.
April 28, 2016 | Software Consultancy
In a conventional RDBMS-with-ORM system, we are used to thinking of domain objects as mapped to rows in database tables, and of the database as a repository where the current state of every object exists simultaneously, so that what we get when we query for an object is the state that object was in at the time the query was issued. To perform an update, we can start a transaction, retrieve the current state of the object, modify it, save it back again and commit. Transactions move the global state of the system from one consistent state to another, so that the database transaction log represents a single, linear history of updates. We are therefore able to have a very stable, intuitive sense of what it means to talk about the “current state” of any domain object.
April 5, 2016 | Software Consultancy
This post is part of a series which introduces key concepts in successful test automation. Each post contains sample code using the test-automation-quickstart project, a sample Java test automation framework available from GitHub.
April 2, 2016 | Terraform Provider
When it comes to automating the creation of infrastructure in cloud providers, Terraform (version 0.6.14 at the time of writing) has become one of my core go-to tools in this space. It provides a fantastic declarative approach to describing the resources you want, and then takes care of making it so, keeping track of the state in either a local file or a remote store of some sort. Various bits of sensitive data are often provided as input to Terraform.
March 29, 2016 | Software Consultancy
Acceptance test suites are generally used for UI and API testing, and we have covered both these approaches in our Test Automation Quickstart project. However, an application may, for example, send registration or expiration warning emails. Often, tests related to this are left to manual testing instead of being put into an automated test suite.
However, there’s no need to check emails manually: it suffers from all the same problems as other manual testing. It’s slow, expensive, and inconsistent. There are many libraries available for interacting with email through code – this post will focus on how to use them within an automated test suite.
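As a hedged sketch of what that can look like, the example below uses GreenMail, an in-memory SMTP server, to capture and inspect an email; the library choice and addresses are assumptions, and in a real suite the application under test would be the sender.

```java
// Minimal sketch: capturing and inspecting email in a test with GreenMail.
// Addresses and subject are hypothetical.
import com.icegreen.greenmail.util.GreenMail;
import com.icegreen.greenmail.util.GreenMailUtil;
import com.icegreen.greenmail.util.ServerSetupTest;

import javax.mail.internet.MimeMessage;

public class RegistrationEmailCheck {
    public static void main(String[] args) throws Exception {
        GreenMail greenMail = new GreenMail(ServerSetupTest.SMTP); // in-memory SMTP on a test port
        greenMail.start();

        // In a real suite the application under test would send this; here we send
        // a message directly so the example is self-contained.
        GreenMailUtil.sendTextEmailTest("new.user@example.com", "noreply@example.com",
                "Welcome!", "Thanks for registering.");

        MimeMessage[] received = greenMail.getReceivedMessages();
        System.out.println("received " + received.length
                + " message(s), subject: " + received[0].getSubject());

        greenMail.stop();
    }
}
```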
March 17, 2016 | Software Consultancy
In order to be able to regularly release an application, your automated tests must be set up to give you a fast and reliable feedback loop. If bugs are only found during a long and expensive multi-service or end-to-end test run, it can be a hindrance to fast delivery. Unfortunately, I have often seen this problem in development environments: a huge suite of clunky, flaky and slow end-to-end tests which test the full functionality of the application, as opposed to being more lightweight and reflecting basic user journeys. This produces the “ice cream cone” anti-pattern of test coverage, where unit tests aren’t providing the kind of coverage and feedback they need to.
March 14, 2016 | Software Consultancy
Test automation provides fast feedback on regressions. In order to achieve this, tests need to execute quickly, something which becomes more of a problem as test suites grow. This is especially true of tests which exercise a user interface, where interaction with the system is slower.
A good way to address this is to have your tests execute in parallel rather than consecutively. Given sufficient resources, this allows your execution time to remain low almost indefinitely as more scenarios are added to the suite.
March 3, 2016 | Software Consultancy
In this post, I’ll be sharing some React/Redux boilerplate code that Vince Martinez and I have been developing recently. It’s primarily aimed at developers who are familiar with the React ecosystem, so if you are new to React and/or Redux, you might like to have a look at Getting Started with React and Getting Started with Redux.
March 3, 2016 | Software Consultancy
JetBrains (the people behind IntelliJ IDEA) have recently announced the first RC for version 1.0 of Kotlin, a new programming language for the JVM. I say ‘new’, but Kotlin has been in the making for a few years now, and has been used by JetBrains to develop several of their products, including IntelliJ IDEA. The company open-sourced Kotlin in 2011, and has worked with the community since then to make the language what it is today.
January 26, 2016 | Data Engineering
In this second post about Hazelcast and Spring, I’m integrating Hazelcast with Spring-managed transactions for a specific use case: a transactional queue. More specifically, I want to make the message polling of my sample chat application transactional.
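As a rough, standalone sketch of the underlying Hazelcast primitive (assuming Hazelcast 3.x APIs, and leaving the Spring transaction integration the post covers to one side), a transactional queue can be used like this:

```java
// Rough sketch: polling a Hazelcast transactional queue (Hazelcast 3.x APIs assumed).
// The queue name is hypothetical.
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.TransactionalQueue;
import com.hazelcast.transaction.TransactionContext;

public class TransactionalQueueSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        TransactionContext context = hz.newTransactionContext();
        context.beginTransaction();
        try {
            TransactionalQueue<String> queue = context.getQueue("chat-messages");
            String message = queue.poll();   // only visible to this transaction until commit
            // ... process the message; if processing throws, the poll is rolled back
            System.out.println("processed: " + message);
            context.commitTransaction();
        } catch (Exception e) {
            context.rollbackTransaction();   // the message goes back on the queue
        } finally {
            hz.shutdown();
        }
    }
}
```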
January 18, 2016 | Software Consultancy
Last time in this series I summarised all the Akka Persistence related improvements in Akka 2.4. Since then Akka 2.4.1 has been released with some additional bug fixes and improvements so perhaps now is a perfect time to pick up this mini-series and introduce some other new features included in Akka 2.4.x.
January 15, 2016 | Software Consultancy
So, you’ve started to hear a lot about React, the Javascript library developed by Facebook, but is it something you need to investigate? It’s time to distil the signal from the noise, position React amongst its rivals, and provide an indication of where it currently would – and would not – be a suitable fit.
January 8, 2016 | Microservices
Many of our clients are in the process of investigating or implementing ‘microservices’, and a popular question we often get asked is “what’s the most common mistake you see when moving towards a microservice architecture?”. We’ve seen plenty of good things with this architectural pattern, but we have also seen a few recurring issues and anti-patterns, which I’m keen to share here.
December 1, 2015 | Software Consultancy
This post introduces some of the basic features of Hazelcast, some of its limitations, how to embed it in a Spring Boot application, and how to write integration tests. It is intended to be the first in a series about Hazelcast and its integration with Spring (Boot). Let’s start with the basics.
November 25, 2015 | Microservices
In May, the 1.0 release of RAML (RESTful API Modeling Language) was announced, delivering a few much-welcome additions to the RAML 1.0 specification. This major release marks an important milestone in the evolution of RAML and indicates the team behind the specification is confident it delivers a comprehensive set of tools for developing RESTful APIs. I’ve been using RAML 0.8 for several months now and have enjoyed the simplicity and productivity it offers for designing and documenting APIs. I must say I’m quite pleased with the changes introduced in the new release and would like to review those I consider particularly useful.
November 23, 2015 | Software Consultancy
Over a year ago, my colleague Tristan posted on the OpenCredo blog about a test automation quick start framework. It’s a prepackaged framework you can clone and get going with testing instantly, rather than wasting time rebuilding a framework on every single project. We have used this framework successfully on many of our internal projects, and it relies upon a Java, Cucumber-JVM and Selenium stack.
November 4, 2015 | Software Consultancy
Writing reusable roles for Ansible is not an easy task but one that’s worth doing. This post should walk you through the basics of writing reusable roles with dependencies backed by public and private git repositories.
November 3, 2015 | Software Consultancy
My JavaOne experience was rather busy this year, what with three talks presented in a single day! The first of these talks “Debugging Java Apps in Containers: No Heavy Welding Gear Required” was delivered with my regular co-presenter Steve Poole, from IBM, and we shared our combined experiences of working with Java and Docker over the past year.
November 1, 2015 | Microservices
To use or not to use hypermedia (HATEOAS) in a REST API, attaining Level 3 of the famous Richardson Maturity Model: this is one of the most discussed subjects in API design.
The many objections make sense (“Why I hate HATEOAS“, “More objections to HATEOAS“…). The goal of having fully dynamic, auto-discovering clients is still unrealistic (…waiting for AI client libraries).
However, there are good examples of successful HATEOAS APIs. Among them, PayPal.
October 31, 2015 | Microservices
Over the past few weeks I’ve been writing an OpenCredo blog series on the topic of “Building a Microservice Development Ecosystem”, but my JavaOne talk of the same title crept up on me before I managed to finish the remaining posts. I’m still planning to finish the full blog series, but in the meantime I thought it would be beneficial to share the video and slides associated with the talk, alongside some of my related thinking. I’ve been fortunate to work on several interesting microservice projects at OpenCredo, and we’re always keen to share our knowledge or offer advice, and so please do get in touch if we can help you or your organisation.
September 24, 2015 | Microservices
Unless you’ve been living under a (COBOL-based) rock for the last few years, you will no doubt have heard of the emerging trend of microservices. This approach to developing ‘loosely coupled service-oriented architecture with bounded contexts’ has captured the hearts and minds of many developers. The promise of easier enforcement of good architectural and design principles, such as encapsulation and interface segregation, combined with the ability to experiment with different languages and platforms for each service, is a (developer) match made in heaven.
September 20, 2015 | Microservices
Over the past five years I have worked within several projects that used a ‘microservice’-based architecture, and one constant issue I have encountered is the absence of standardised patterns for local development and ‘off the shelf’ development tooling that support this. When working with monoliths we have become quite adept at streamlining the development, build, test and deploy cycles. Development tooling to help with these processes is also readily available (and often integrated with our IDEs). For example, many platforms provide ‘hot reloading’ for viewing the effects of code changes in near-real time, automated execution of tests, regular local feedback from continuous integration servers, and tooling to enable the creation of a local environment that mimics the production stack.
September 14, 2015 | DevOps, Hashicorp, Open Source
Recently I was working on a project that was using SaltStack for configuration management and Consul for service discovery. It occurred to me that Consul’s key/value store would be a great place to keep the data needed for my Salt runs, but unfortunately Consul was not supported as an official SaltStack data store at that point in time. As SaltStack is an open source project, however, this provided an excellent opportunity to contribute back, and this blog post provides some details on how this works, as well as a practical demo of how you can take advantage of Consul as an external data store.
August 26, 2015 | Cloud
Unless you’ve been living under a rock for the last year, you’ll undoubtedly know that microservices are the new hotness. An emerging trend that I’ve observed is that the people who are actually using microservices in production tend to be the larger well-funded companies, such as Netflix, Gilt, Yelp, Hailo etc., and each organisation has their own way of developing, building and deploying.
August 19, 2015 | Software Consultancy
Memoization is a technique whereby we trade memory for execution speed. Suppose you have a function which is expensive to execute and which, for any given input, always returns the same output.
In this scenario, it may make sense to “remember” the output returned for each distinct input in a hash map, and replace function execution with a lookup in the hash map.
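A minimal sketch of that idea in Java, using a concurrent map as the cache, might look like this (the slowSquare function is just a stand-in for an expensive, deterministic computation):

```java
// Minimal memoization sketch: cache each distinct input's result in a map and
// short-circuit repeat calls with a lookup.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class Memoizer {
    public static <T, R> Function<T, R> memoize(Function<T, R> fn) {
        Map<T, R> cache = new ConcurrentHashMap<>();
        // computeIfAbsent runs fn only on a cache miss; subsequent calls are lookups.
        return input -> cache.computeIfAbsent(input, fn);
    }

    public static void main(String[] args) {
        Function<Integer, Integer> slowSquare = n -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
            return n * n;
        };
        Function<Integer, Integer> fastSquare = memoize(slowSquare);
        System.out.println(fastSquare.apply(12)); // slow: computed once
        System.out.println(fastSquare.apply(12)); // fast: served from the cache
    }
}
```

Note that this only makes sense for pure functions: if the function has side effects or its result changes over time, the cached value will be stale.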
August 18, 2015 | Software Consultancy
In this post, the last in the New Tricks With Dynamic Proxies series (see part 1 and part 2), I’m going to look at using dynamic proxies to create bean-like value objects to represent records. The basic idea here is to have some untyped storage for a collection of property values, such as an array of Objects, and a typed wrapper around that storage which provides a convenient and type-safe access mechanism. A dynamic proxy is used to convert calls on getter and setter methods in the wrapper interface into calls which read and write values in the store.
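A stripped-down sketch of that mechanism might look like the following; the Person interface and its properties are hypothetical stand-ins, and the real series builds something considerably richer:

```java
// Sketch: a dynamic proxy turning getter/setter calls into reads and writes
// against an untyped property store. The Person interface is hypothetical.
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class RecordProxy {
    public interface Person {
        String getName();
        void setName(String name);
        int getAge();
        void setAge(int age);
    }

    @SuppressWarnings("unchecked")
    public static <T> T recordFor(Class<T> iface) {
        Map<String, Object> store = new HashMap<>();      // the untyped storage
        InvocationHandler handler = (proxy, method, args) -> {
            String name = method.getName();
            if (name.startsWith("set")) {
                store.put(name.substring(3), args[0]);     // write into the store
                return null;
            }
            if (name.startsWith("get")) {
                return store.get(name.substring(3));       // read back from the store
            }
            throw new UnsupportedOperationException(name);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    public static void main(String[] args) {
        Person p = recordFor(Person.class);
        p.setName("Ada");
        p.setAge(36);
        System.out.println(p.getName() + " is " + p.getAge());
    }
}
```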
August 10, 2015 | Cloud, Software Consultancy
As a company, we at OpenCredo are heavily involved in automation and DevOps based work, with a keen focus on making this a seamless experience, especially in cloud-based environments. We are currently working within HMRC, a UK government department, to help make this a reality as part of a broader cloud broker ecosystem project. In this blog post, I look to provide some initial insight into some of the tools and techniques employed to achieve this for one particular use case, namely:
With pretty much zero human intervention, bar initiating a process and providing some inputs, a development team from any location should be able to run “something” which, in the end, results in an isolated, secure set of fully configured VMs being provisioned within a cloud provider (or providers) of choice.
August 5, 2015 | Data Analysis, Data Engineering
A few weeks ago, we thought about building a Google Analytics dashboard to give us easy access to certain elements of our Google Analytics web traffic. We saw some custom dashboards for bloggers, but nothing quite right for our goal, since we wanted the data on a big screen for everyone in the office to view.
July 28, 2015 | Microservices
Recently I co-presented a talk at GOTO Amsterdam on lessons learnt whilst developing with a microservices architecture; one being the importance of defining and documenting your API contracts as early as possible in the development cycle. During the talk I mentioned a few API documentation tools that I’d used and, based on feedback and questions from attendees, I realised that this topic merited a blog post. So, the purpose of this post is to introduce five tools which help with designing, testing and documenting APIs.
July 14, 2015 | Software Consultancy
Building simple proxies
In the previous post I introduced Java dynamic proxies, and sketched out a way they could be used in testing to simplify the generation of custom Hamcrest matchers. In this post, I’m going to dive into some techniques for implementing proxies in Java 8. We’ll start with a simple case, and build up towards something more complex and full-featured.
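For a flavour of the simple end of that spectrum, here is a minimal tracing proxy that logs each call before delegating to a real implementation; the Greeter interface is a hypothetical example, not code from the series:

```java
// Sketch: the simplest useful dynamic proxy, tracing calls and delegating to a target.
import java.lang.reflect.Proxy;

public class TracingProxy {
    public interface Greeter {                 // hypothetical interface for illustration
        String greet(String name);
    }

    @SuppressWarnings("unchecked")
    public static <T> T trace(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                (proxy, method, args) -> {
                    System.out.println("-> " + method.getName());
                    return method.invoke(target, args);   // delegate to the real implementation
                });
    }

    public static void main(String[] args) {
        Greeter real = name -> "Hello, " + name;
        Greeter traced = trace(real, Greeter.class);
        System.out.println(traced.greet("world"));        // logs the call, then greets
    }
}
```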
July 13, 2015 | Software Consultancy
Why Use Dynamic Proxies?
Dynamic proxies have been a feature of Java since version 1.3. They were widely used in J2EE for remoting. Given an abstract interface, and a concrete implementation of that interface, a call to some method on the interface can be made “remote” (i.e. cross-JVM) by creating two additional classes. The first, a “marshalling” implementation of the interface, captures the details of the call in the source JVM and serializes them over the network. The second, an “unmarshalling” endpoint, receives the serialized call details and dispatches the call to an instance of the concrete class on the target JVM.
July 8, 2015 | Software Consultancy
I was quite excited around autumn last year when Google started to work on a new version of Angular (Angular 2.0), which promised to revolutionise web development. There were rumours that Angular 2.0 wouldn’t be backwards compatible with its predecessor, and would be written in Google’s AtScript, a JavaScript-based language built on top of Microsoft’s TypeScript. The lack of backwards compatibility raised some concerns, especially for the clients where we had used Angular. But let’s not get ahead of ourselves here…
June 23, 2015 | Cloud, DevOps, Terraform Provider
Working with OpenCredo clients, I’ve noticed that even if you are one of the few organisations that can boast ‘Infrastructure as Code’, perhaps it’s only true of your VMs, and you likely have ‘bootstrap problems’. What I mean by this is that you require some cloud infrastructure to already be in place before your VM automation can go to work.
February 16, 2015 | Software Consultancy
Apache Mesos is often explained as being a kernel for the data-centre; meaning that cluster resources (CPU, RAM, …) are tracked and offered to “user space” programs (i.e. frameworks) to do computations on the cluster.
November 4, 2014 | Software Consultancy
When starting a project, teams often spend their time re-inventing the ‘automated testing wheel’. While every project has its own challenges and every team its own needs, many things exist as common requirements of a flexible test automation framework.
This post introduces an effective Java test framework that can be used to quickly get started with test automation on a Java project.
October 23, 2014 | Cassandra
Spring Data Cassandra (SDC) is a community project under the Spring Data (SD) umbrella that provides convenient and familiar APIs to work with Apache Cassandra.
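As a hedged sketch of the style of API this gives you, the snippet below shows an annotated entity and a repository interface; the entity, table and field names are hypothetical, and the package names reflect recent Spring Data Cassandra releases (older 1.x versions used slightly different packages):

```java
// Sketch of the Spring Data Cassandra style: an annotated entity plus a repository
// interface whose implementation Spring generates. Names are hypothetical.
import org.springframework.data.cassandra.core.mapping.PrimaryKey;
import org.springframework.data.cassandra.core.mapping.Table;
import org.springframework.data.cassandra.repository.CassandraRepository;

import java.util.UUID;

@Table("customers")
class Customer {
    @PrimaryKey
    private UUID id;
    private String name;

    Customer(UUID id, String name) {
        this.id = id;
        this.name = name;
    }
    // getters/setters omitted for brevity
}

// Spring provides the implementation: save(), findById(), findAll(), delete(), etc.
interface CustomerRepository extends CassandraRepository<Customer, UUID> {
}
```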
February 24, 2014 | Cloud Native, Microservices
Last year some of us attended the London Spring eXchange, where we encountered a new and interesting tool that Pivotal was working on: Spring Boot. Since then we have had the opportunity to see what it’s capable of on a live project, and we were deeply impressed.
January 28, 2014 | Software Consultancy
A while ago I published this blog post about writing tests for mobile applications using Appium and cucumber-jvm.
In this post, I will look at an alternative approach to testing an Android native application using Cucumber-Android.
Throughout the post I will draw comparisons between Appium and Cucumber-Android, the goal being to determine the best approach for testing an Android application using Cucumber. I will focus on ease of configuration and use, speed of test runs, and quality of reporting.
September 19, 2013 | Software Consultancy
This post will give an overview of mobile testing using Appium. We will integrate tests for a native Android application into an existing Cucumber-JVM based set of acceptance tests and demonstrate multi platform testing from a single set of BDD scenarios. The sample code for this can be found here.
July 2, 2013 | Software Consultancy
This blog post will address the issue of slow test runs when using Cucumber JVM with web automation tools such as WebDriver to perform acceptance testing on a web application.
If your team is using continuous integration this becomes especially noticeable, forcing teams to either wait for acceptance tests to complete before deploying or having to ignore the bulk of the tests.
The sample code and description in this post will show you how to convert your suite to run tests in parallel, something which has historically been problematic, with unanswered questions and outstanding Cucumber-JVM bugs on the subject. Tying the described approach in with Selenium Grid 2 will allow you to distribute your testing across several machines if your suite is especially large or slow.
June 30, 2013 | Software Consultancy
Spring Data Hadoop (SDH) is a Spring offshoot project that allows the invocation and configuration of Hadoop tasks within a Spring application context. It offers support for Hadoop jobs, HBase, Pig, Hive, Cascading and additionally JSR-223 scripting for job preparation and tidy-up.
It is most suited for use in organisations with existing Spring applications or investment in Spring expertise. Some SDH features replicate functionality of tools in the Hadoop ecosystem that DevOps engineers who maintain a Hadoop cluster will be more familiar with.
January 10, 2013 | DevOps
Recently I have started looking into SaltStack as a solution that does both config management and orchestration. It is a relatively new project, started in 2011, but it has a growing fanbase among sysadmins and DevOps engineers. In this blog post I will look into Salt as a promising alternative, comparing it to Puppet as a way of exploring its basic set of features.
August 16, 2012 | Neo4j
It’s been more than a year now since I rolled out a Neo4j Graph Database Server image in Amazon EC2.
In May 2011 the version of Neo4j was 1.3; just recently the team at Neo Technology published version 1.7.2, so I thought now was the time to revisit this exercise and make fresh AMIs available.
Last year I created the Neo4j AMI manually in one region and then copied it across to the remaining AWS regions. Due to the size of the AMI and the latency between regions, this process was slow.
March 21, 2012 | Software Consultancy
Event Processing Language (EPL) enables us to write complex queries to get the most out of our event stream in real time, using SQL-like syntax.
EPL allows us to use the full power of aggregation over a high-volume event stream to get the required results with minimal latency. In this blog we are going to explore some aspects of numerical aggregation of data with high-precision BigDecimal values. We will also demonstrate how you can add your own aggregation functions to the Esper engine and use them in EPL statements.
February 13, 2012 | Software Consultancy
Recently we were approached by a client to do some performance testing of the web application they had written. The budget allowed two days for this task. OK. No problem. Yes, we can. Naturally, I had one or two questions though…
February 8, 2012 | Data Analysis, Data Engineering
Most of the important players in this space are large IT corporations like Oracle and IBM, with their commercial (read: expensive) offerings.
While most CEP products offer some great features, their licence models and closed-code policies don’t allow developers to play with them on pet projects, which would otherwise drive adoption and usage of CEP in everyday programming.