The microservices approach has gained popularity recently. Some time ago the service-oriented architecture (SOA) approach was very popular. So what is the difference?
Category: DATA PROCESSING
Here we discuss solutions for data processing and distributed computing, such as HDFS, Spark, and Kubernetes.
Schema evolution and backward and forward compatibility for data in data lakes
We have previously discussed formats for clean and derived data in data lakes. One of the popular formats for this purpose is Avro. Here we will look at why schema evolution is needed and how to achieve backward and forward compatibility when designing Avro schemas.
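As a taste of what the post covers, here is a minimal sketch of a compatible schema change. The `UserEvent` record and its fields are hypothetical examples, not from the post; the key point is that a newly added field carries a default value, which is what makes the evolved schema both backward and forward compatible.

```python
# Hypothetical Avro record schema, version 1.
schema_v1 = {
    "type": "record",
    "name": "UserEvent",
    "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "event", "type": "string"},
    ],
}

# Version 2 adds an optional field WITH a default: new readers can still
# read old data (backward compatible), and old readers simply ignore the
# extra field in new data (forward compatible).
schema_v2 = {
    "type": "record",
    "name": "UserEvent",
    "fields": schema_v1["fields"] + [
        {"name": "country", "type": ["null", "string"], "default": None},
    ],
}

added = [f for f in schema_v2["fields"] if f["name"] == "country"]
assert "default" in added[0]  # a safe, compatible evolution
```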
Read more
HBase is the next step in your big data technology stack
Authentication and authorization for XMLA Connect and Mondrian
If you would like to turn on basic authentication for Mondrian cubes accessed from Excel, you need to implement the steps below.
Read more
How to implement Kylin dialect for Mondrian
In this post I will explain how to implement a Kylin dialect in Mondrian.
Improving performance when reading data from HDFS with Hive using subfolders (partitioning)
In our previous article we discussed the root structure for HDFS. In this article we discuss the next level of the file structure, which helps improve the speed of reading data.
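The idea behind the speed-up can be sketched without Hive itself. The layout below is an assumed example (base path, table name, and partition columns are illustrative): when data is laid out in `key=value` subfolders and a query filters on those columns, Hive can prune whole directories instead of scanning every file.

```python
from datetime import date

def partition_path(base: str, table: str, day: date) -> str:
    """Build a Hive-style partitioned subfolder path for one day of data.

    With this layout, a query filtering on year/month/day only has to
    read the matching subfolders, not the whole table directory.
    """
    return (
        f"{base}/{table}"
        f"/year={day.year}/month={day.month:02d}/day={day.day:02d}"
    )

p = partition_path("/data/clean", "events", date(2020, 3, 7))
assert p == "/data/clean/events/year=2020/month=03/day=07"
```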
Raw, clean, and derived data in data lakes based on HDFS
You may think that there is no need to structure data in HDFS and that you can systemize it later. But I think this is the wrong way. We should always keep in mind: there is no free lunch. Therefore it is better to make decisions at the beginning.
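As a rough sketch of such an up-front decision, the snippet below lays out the three zones the post title names (the `events` dataset name is a made-up example): raw keeps data exactly as ingested, clean holds validated data, and derived holds datasets built from clean.

```python
import pathlib
import tempfile

# Create a toy data-lake skeleton in a temporary directory; on a real
# cluster the same structure would live under an HDFS root instead.
root = pathlib.Path(tempfile.mkdtemp())
for zone in ("raw", "clean", "derived"):
    (root / zone / "events").mkdir(parents=True)

zones = sorted(p.name for p in root.iterdir())
assert zones == ["clean", "derived", "raw"]
```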
Read more
Thoughts about schema-on-write and schema-on-read
There are two approaches we can choose between when designing data storage: schema-on-read and schema-on-write.
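The contrast can be shown with a toy example (the record shape and helper names are illustrative, not from the post): schema-on-write validates a record before storing it, so bad data is rejected at ingestion; schema-on-read stores whatever arrives and applies the schema only when the data is queried.

```python
import json

REQUIRED = {"user_id", "event"}  # toy schema: required field names

def write_validated(store: list, record: dict) -> None:
    """Schema-on-write: reject the record at write time if it violates the schema."""
    if not REQUIRED <= record.keys():
        raise ValueError("record violates schema")
    store.append(json.dumps(record))

def read_with_schema(store: list) -> list:
    """Schema-on-read: store raw strings; apply the schema only when reading."""
    records = [json.loads(raw) for raw in store]
    return [r for r in records if REQUIRED <= r.keys()]

validated_store = []
write_validated(validated_store, {"user_id": "2", "event": "view"})

# Raw store accepted both records; the schema filters them at read time.
raw_store = ['{"user_id": "1", "event": "click"}', '{"foo": 1}']
good = read_with_schema(raw_store)
assert len(good) == 1
```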
How to integrate Apache Kylin OLAP In Excel (pivot) [XMLA Connect and Mondrian]
Apache Kylin is a very powerful OLAP engine. It provides an ODBC driver to move data into Excel, but this driver is not user friendly: users have to write SQL queries themselves.
Short note about HDFS, or why you need a distributed file system
Why do you need HDFS (Hadoop Distributed File System)? If the amount of data is small and fits on your computer, you do not need a distributed file system. But if you want to process a large amount of data that cannot be stored on a single computer, then you need to think about a distributed file system.