Search results for "webinar" in all categories (23 items found)
May 22, 2017 | Data Analysis
As the final piece of our recent blog series about Apache Spark, on 16 May we presented details of a use case: using Spark Structured Streaming to generate real-time alerts of suspicious activity in an AWS-based infrastructure.
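As a rough illustration of the kind of pipeline discussed (not the code presented in the webinar), a Structured Streaming job can read a stream of events, parse them, and filter out the suspicious ones. The socket source, event format and threshold below are assumptions made purely for the sketch; a real AWS deployment would more likely read from Kinesis or Kafka.

import org.apache.spark.sql.SparkSession

object SuspiciousActivityAlerts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("suspicious-activity-alerts")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical input: one comma-separated event per line on a local socket.
    val events = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()
      .as[String]

    // Assumed event format: "<user>,<action>,<bytesTransferred>"
    val parsed = events
      .map(_.split(","))
      .map(f => (f(0), f(1), f(2).toLong))
      .toDF("user", "action", "bytes")

    // Flag transfers above an arbitrary threshold as suspicious.
    val alerts = parsed.filter($"bytes" > 100000000L)

    alerts.writeStream
      .outputMode("append")
      .format("console")
      .start()
      .awaitTermination()
  }
}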
October 10, 2016 | Cassandra
In the culmination of our blog series on the topic, on October 6th 2016 OpenCredo Consultants Dominic Fox, Alla Babkina and Guy Richardson, hosted by Marco Cullen, presented the common design and implementation issues that they have come across in real-world Apache Cassandra deployments.
November 25, 2015 | News
September 24, 2015 | Microservices
Unless you’ve been living under a (COBOL-based) rock for the last few years, you will have no doubt heard of the emerging trend of microservices. This approach to developing ‘loosely coupled service-oriented architecture with bounded contexts’ has captured the hearts and minds of many developers. The promise of easier enforcement of good architectural and design principles, such as encapsulation and interface segregation, combined with the ability to experiment with different languages and platforms for each service, is a (developer) match made in heaven.
September 18, 2015 | Microservices
We’re pleased to begin our series of OpenCredo webinars with “The Business Behind Microservices”, which takes a look at some of the business and organisational challenges that come along with the decision to implement microservices.
May 9, 2017 | Cassandra
Data analytics isn’t a field commonly associated with testing, but there’s no reason we can’t treat it like any other application. Data analytics services are often deployed in production, and production services should be properly tested. This post covers some basic approaches to testing Cassandra/Spark code. There will be some code examples, but the focus is on how to structure your code to ensure it is testable!
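As a minimal sketch of the structuring idea (not code taken from the post; the column names and data are invented), keeping transformations as pure functions over DataFrames lets you exercise them against a local SparkSession and in-memory data, without needing a Cassandra cluster at test time.

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Pure transformation: no Cassandra read/write here, so it can be unit tested.
object Transformations {
  def dailyTotals(readings: DataFrame): DataFrame =
    readings
      .groupBy(col("sensor_id"), to_date(col("recorded_at")).as("day"))
      .agg(sum("value").as("total"))
}

// A simple test harness using a local SparkSession and in-memory data.
object TransformationsTest extends App {
  val spark = SparkSession.builder()
    .master("local[*]")
    .appName("transformations-test")
    .getOrCreate()
  import spark.implicits._

  val input = Seq(
    ("s1", "2017-05-01 10:00:00", 1.5),
    ("s1", "2017-05-01 11:00:00", 2.5)
  ).toDF("sensor_id", "recorded_at", "value")

  val result = Transformations.dailyTotals(input).collect()
  assert(result.length == 1 && result(0).getDouble(2) == 4.0)
  spark.stop()
}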
May 2, 2017 | Cassandra, Data Engineering
In my recent blog post I explored a few cases where using Cassandra and Spark together can be useful. My focus was on the functional behaviour of such a stack and what you need to do as a developer to interact with it. However, it did not describe any details of the infrastructure setup capable of running such Spark code, or any deployment considerations. In this post, I will explore this in more detail and offer some practical advice on how to deploy Spark and Apache Cassandra.
April 25, 2017 | Cassandra, Data Analysis, Data Engineering
Apache Spark is a powerful open source processing engine which is fast becoming our technology of choice for data analytics projects here at OpenCredo. For many years now we have been helping our clients to practically implement and take advantage of various big data technologies, including the likes of Apache Cassandra.
March 23, 2017 | Cassandra, Data Analysis, Data Engineering
In recent years, Cassandra has become one of the most widely used NoSQL databases: many of our clients use Cassandra for a variety of different purposes. This is no accident, as it is a great datastore with strong scalability and performance characteristics.
However, adopting Cassandra as a single, one-size-fits-all database has several downsides. The partitioned/distributed data storage model makes it difficult (and often very inefficient) to perform certain types of queries or data analytics that are much more straightforward in a relational database.
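To make the contrast concrete, here is an illustrative sketch only, with a made-up keyspace and table, assuming the DataStax spark-cassandra-connector is on the classpath and Cassandra is reachable at the configured host. A question such as “total order value per customer across all partitions” is awkward to answer in plain CQL, but can be pushed into a Spark job over the same data.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CrossPartitionAggregation extends App {
  val spark = SparkSession.builder()
    .appName("cross-partition-aggregation")
    .config("spark.cassandra.connection.host", "127.0.0.1") // assumed contact point
    .getOrCreate()

  // Read a hypothetical orders table from Cassandra via the
  // spark-cassandra-connector's DataFrame source.
  val orders = spark.read
    .format("org.apache.spark.sql.cassandra")
    .options(Map("keyspace" -> "shop", "table" -> "orders"))
    .load()

  // An aggregation that spans every partition -- trivial in Spark (or SQL),
  // but not something CQL is designed to answer efficiently.
  orders
    .groupBy("customer_id")
    .agg(sum("order_value").as("total_value"))
    .show()

  spark.stop()
}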
October 13, 2016 | Data Analysis
In Lisp, you don’t just write your program down toward the language, you also build the language up toward your program. As you’re writing a program you may think “I wish Lisp had such-and-such an operator.” So you go and write it. Afterward you realize that using the new operator would simplify the design of another part of the program, and so on. Language and program evolve together…In the end your program will look as if the language had been designed for it. And when language and program fit one another well, you end up with code which is clear, small, and efficient – Paul Graham, Programming Bottom-Up
September 27, 2016 | Cassandra, Data Engineering
If there is one thing to understand about Cassandra, it is the fact that it is optimised for writes. In Cassandra everything is a write, including the logical deletion of data, which results in tombstones – special deletion records. We have noticed that a lack of understanding of tombstones is often the root cause of production issues our clients experience with Cassandra. We have decided to share a compilation of the most common problems with Cassandra tombstones and some practical advice on solving them.
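To ground the point, here is a hedged sketch with a made-up table, using the DataStax Java driver from Scala against an assumed local node and existing keyspace: a DELETE does not remove data in place, it writes a tombstone that shadows the earlier data until compaction (after gc_grace_seconds) purges both, and null writes or expiring TTLs behave the same way.

import com.datastax.driver.core.Cluster

object TombstoneExample extends App {
  // Assumed local single-node cluster and an existing keyspace "demo".
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
  val session = cluster.connect("demo")

  session.execute(
    "CREATE TABLE IF NOT EXISTS demo.events (id uuid PRIMARY KEY, payload text)")

  // Both statements below are writes. The DELETE adds a tombstone record
  // rather than removing the inserted row from disk.
  val id = java.util.UUID.randomUUID()
  session.execute("INSERT INTO demo.events (id, payload) VALUES (?, 'hello')", id)
  session.execute("DELETE FROM demo.events WHERE id = ?", id)

  // Writing a null value also produces a tombstone for that cell -- another
  // common surprise in production.
  session.execute("INSERT INTO demo.events (id, payload) VALUES (uuid(), null)")

  cluster.close()
}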
September 15, 2016 | Cassandra
Cassandra isn’t a relational database management system, but it has some features that make it look a bit like one. Chief among these is CQL, a query language with an SQL-like syntax. CQL isn’t a bad thing in itself – in fact it’s very convenient – but it can be misleading since it gives developers the illusion that they are working with a familiar data model, when things are really very different under the hood.
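A small illustration of where the illusion breaks down (a hypothetical schema, again using the DataStax Java driver from Scala against an assumed local node and keyspace): the table looks relational, but the primary key's partition/clustering split dictates which WHERE clauses Cassandra will actually accept.

import com.datastax.driver.core.Cluster
import com.datastax.driver.core.exceptions.InvalidQueryException

object CqlIsNotSql extends App {
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build() // assumed local node
  val session = cluster.connect("demo") // assumed existing keyspace

  // Looks like a relational table, but user_id is the partition key and
  // event_time is a clustering column that orders rows within a partition.
  session.execute(
    """CREATE TABLE IF NOT EXISTS demo.user_events (
      |  user_id    uuid,
      |  event_time timestamp,
      |  event_type text,
      |  PRIMARY KEY ((user_id), event_time)
      |)""".stripMargin)

  // Fine: restricted by the full partition key.
  session.execute(
    "SELECT * FROM demo.user_events WHERE user_id = ?", java.util.UUID.randomUUID())

  // Rejected: in SQL this would simply be a (slow) table scan, but Cassandra
  // refuses it outright unless you add ALLOW FILTERING, because it would have
  // to touch every partition.
  try {
    session.execute("SELECT * FROM demo.user_events WHERE event_type = 'login'")
  } catch {
    case e: InvalidQueryException => println(s"Rejected as expected: ${e.getMessage}")
  }

  cluster.close()
}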
September 6, 2016 | Cassandra
A growing number of clients are asking OpenCredo for help with using Apache Cassandra and solving specific problems they encounter. Clients have different use cases, requirements, implementations and teams, but experience similar issues. We have noticed that Cassandra data modelling problems are the most consistent cause of Cassandra failing to meet their expectations. Data modelling is one of the most complex areas of using Cassandra and has many considerations.
August 26, 2016 | Cassandra
At OpenCredo we have been working with Cassandra since 2012 and we are big fans of both open source Apache Cassandra and the capabilities of DataStax Enterprise. Over the years we have collected a great deal of experience throughout the company on how to deliver the benefits of Cassandra in real-world projects, and have also seen some common pitfalls that businesses have fallen into.
August 24, 2016 | Cassandra
At OpenCredo we are seeing an increase in adoption of Apache Cassandra as a leading NoSQL database for managing large data volumes, but we have also seen many clients experiencing difficulty converting their high expectations into operational Cassandra performance. Here we present a high-level technical overview of the major strengths and limitations of Cassandra that we have observed over the last few years while helping our clients resolve the real-world issues that they have experienced.
October 31, 2015 | Microservices
Over the past few weeks I’ve been writing an OpenCredo blog series on the topic of “Building a Microservice Development Ecosystem”, but my JavaOne talk of the same title crept up on me before I managed to finish the remaining posts. I’m still planning to finish the full blog series, but in the meantime I thought it would be beneficial to share the video and slides associated with the talk, alongside some of my related thinking. I’ve been fortunate to work on several interesting microservice projects at OpenCredo, and we’re always keen to share our knowledge or offer advice, so please do get in touch if we can help you or your organisation.
September 20, 2015 | Microservices
Over the past five years I have worked within several projects that used a ‘microservice’-based architecture, and one constant issue I have encountered is the absence of standardised patterns for local development and ‘off the shelf’ development tooling that support this. When working with monoliths we have become quite adept at streamlining the development, build, test and deploy cycles. Development tooling to help with these processes is also readily available (and often integrated with our IDEs). For example, many platforms provide ‘hot reloading’ for viewing the effects of code changes in near-real time, automated execution of tests, regular local feedback from continuous integration servers, and tooling to enable the creation of a local environment that mimics the production stack.