Tech Talk: "Microservice-based System Integration" mit Georg Pfeiffer von Emakina CEE

Hey guys, welcome to my presentation! Today I'm going to talk about our flavor of microservice-based system integration here at EMAKINA CEE.

I'm Georg Pfeiffer, I'm 34 years old, and I've been working at EMAKINA CEE for 8 years now. I started right after my business informatics studies at TU Vienna. For the last three years I've been working as a CRM developer, mainly with Microsoft CRM.

And what we do a lot here is integrating data from other systems.

Generally I'm a huge fan of Microsoft technologies, especially .NET. I think they have been heading in the right direction for the last couple of years.

I like to make things as simple as possible. I think our industry is already complicated enough.

Okay, so let's start with the classic scenario that a lot of companies face at some point:

You have your typical IT landscape: there are lots of different systems, which all have a specific purpose. You have your webshops, you have your CMSs which manage your content, you have your ERP systems which handle your business processes, you have your CRMs which manage your customer relationships. But they all have one thing in common – they do a poor job on their own. So, a webshop, for example, needs to know the product stock from the ERP system. Or the ERP system needs the orders from the webshops, so they can actually be processed and shipped. The CRM is pretty much useless without all of the customer data. The same is true for marketing automation tools.

The bottom line is: the better these systems are interconnected, the better they perform. And usually, the faster they get the data, the better they perform. So we don't want to exchange all the data once a night from midnight to 4 am. That's just not good enough.

So, what do we want? We want a fast, performant integration solution. Ideally, it's event-triggered. It should be simple, extendable, future-proof and scalable. Serverless and microservices would be a good approach here. But what we really want is a single solution for all integrations. We don't want a specific solution for the orders from the webshop to the ERP system, or to the CRM – that way we would end up with a spaghetti integration architecture, which would be really hard to maintain. So basically we want all integrations to use the same data format, a common data format, and the same architecture. For the architecture we go for a serverless microservice approach. So what does that actually mean? Let me present our implementation.

A microservice-based system integration. This is not an enterprise service bus. We use dumb pipes and smart endpoints, as opposed to the smart pipes of the enterprise service bus. What does that mean? We have publisher and subscriber microservices, which are responsible for getting data out of the systems and into the message bus, or getting data out of the message bus and into the target systems. The only thing the message bus does here is routing. We can implement all of this serverless with Azure Functions and Azure Service Bus.

So first of all, we have to create a new common data language – the common data format – which is universal throughout our whole system landscape. It contains mappings from and to all the proprietary data formats of our sub systems. For example, an account could have firstname and lastname, while in the sub system the technical names for them could be name 1 and name 2. You have these mappings in there.

This is the core of our implementation. Only messages in this language will enter the data bus. So it's really important that it's versioned, preferably in Git.
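To make this a bit more concrete, here is a minimal sketch of what such a common data format could look like – all names and shapes are illustrative assumptions, not the actual EMAKINA schema:

```typescript
// Illustrative sketch of a common data format entity – not the real schema.
interface CommonEntity {
  globalId: string;          // universal ID across the whole system landscape
  entityType: "account" | "contact" | "order" | "orderItem";
  source: string;            // originating system, e.g. "webshop" or "erp"
  lastModified: string;      // ISO 8601 timestamp, used for concurrency
  attributes: Record<string, unknown>; // canonical field names
}

// Example mapping for an account: the canonical fields "firstName" and
// "lastName" map to a sub system's proprietary "name1" / "name2" columns.
const erpAccountMapping: Record<string, string> = {
  firstName: "name1",
  lastName: "name2",
};

function toErpFormat(entity: CommonEntity): Record<string, unknown> {
  const erpRecord: Record<string, unknown> = {};
  for (const [canonical, proprietary] of Object.entries(erpAccountMapping)) {
    erpRecord[proprietary] = entity.attributes[canonical];
  }
  return erpRecord;
}
```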

So, for the messaging: our messages can contain multiple objects, but they should only contain related objects. That means, if you send a contact, you should also send the related account. Or, if you send an order, you should send the order items and its connected accounts. But we do not send multiple contacts in one message; we rather send multiple messages. It's then the responsibility of the target systems to integrate the data, update the records, do everything in one operation, and handle all the references. Of course it's important that each object gets a global ID, which is universal throughout our whole system landscape.
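As a hedged illustration, a single order message could then look like this – one root object plus its related objects, each with its own global ID (all values invented):

```typescript
// Illustrative order message: the order, its related account, and one
// order item travel together; a second contact would be a second message.
const orderMessage = {
  entityType: "order",
  globalId: "order-7f3a",
  source: "webshop",
  lastModified: "2019-11-29T14:03:11Z",
  attributes: { total: 119.9, currency: "EUR" },
  related: [
    {
      entityType: "account",
      globalId: "account-19c2",
      source: "webshop",
      lastModified: "2019-11-29T14:03:11Z",
      attributes: { firstName: "Anna", lastName: "Huber" },
    },
    {
      entityType: "orderItem",
      globalId: "orderItem-0001",
      source: "webshop",
      lastModified: "2019-11-29T14:03:11Z",
      attributes: { sku: "SKU-123", quantity: 2 },
    },
  ],
};
```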

Then, we have a source field for each object to track its origin. And of course we have a last-modified timestamp to solve all the concurrency issues. We could give a whole separate presentation about different strategies for solving concurrency issues, but for now we just assume the most recent timestamp always wins.
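A minimal sketch of that last-write-wins rule, as a subscriber might apply it before updating a record – assuming ISO 8601 timestamps; this is an assumption for illustration, not production code:

```typescript
// "Most recent timestamp wins": only apply an update if the incoming
// object is newer than what the target system already stores.
function shouldApplyUpdate(
  incomingLastModified: string,
  storedLastModified: string | undefined
): boolean {
  if (storedLastModified === undefined) return true; // record is new
  return Date.parse(incomingLastModified) > Date.parse(storedLastModified);
}
```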

So, for our API gateway we usually use Azure API Management. The API gateway is the gatekeeper into the message bus. There's no way a message enters the message bus without going through the API gateway. The API gateway enforces the common data format and ensures that the message is well formatted. That means the burden of verification is on a single component, so subscribers can always assume that their messages are well formatted. Our message bus is the Azure Service Bus. It's important that there is no hidden logic; it should just route the messages. It follows a publish-subscribe pattern, which means messages get published to so-called topics, and various systems can subscribe to them and process them. Here, it makes sense to split topics up by entity type, so you can have topics for accounts, for orders, or for stock updates.
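In practice, Azure API Management would validate incoming messages against a schema; purely as an illustration, the gatekeeper check boils down to something like this (field names follow the illustrative shape from above):

```typescript
// Simplified sketch of the gateway's well-formedness check: reject
// anything that doesn't carry the mandatory common-data-format fields.
function isWellFormed(message: any): boolean {
  return (
    typeof message?.globalId === "string" &&
    typeof message?.entityType === "string" &&
    typeof message?.source === "string" &&
    !Number.isNaN(Date.parse(message?.lastModified))
  );
}

// Routing is then trivial: the entity type doubles as the topic name,
// e.g. the "account", "order", or "stock" topic on the Service Bus.
const topicName = (message: { entityType: string }) => message.entityType;
```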

Now we come to our microservices. They are implemented serverless as Azure Functions, and we differentiate between two types of microservices. We have publishers, which are responsible for getting data out of the system and into the message bus. Typically we prefer an event-based approach, because it's simply faster. But if there is really no other option available, we can also go with a poll-based approach – here we have to make sure that we are using small intervals. It's just not good enough if we only poll the data once every night.

Either way, the publisher processes the message: it gets the data from the system, puts it into the common data format, and sends it to the message bus via the API gateway.
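Sketched with the @azure/service-bus SDK, a webshop order publisher could look roughly like this – the connection string variable, topic name, and field mapping are assumptions for illustration, not our actual code:

```typescript
import { ServiceBusClient } from "@azure/service-bus";

// Hypothetical publisher: map a proprietary webshop order to the common
// data format and publish it to the "order" topic. In production the
// message would go through the API gateway; here we talk to the Service
// Bus directly to keep the sketch short.
async function publishOrder(rawWebshopOrder: Record<string, unknown>) {
  const message = {
    entityType: "order",
    globalId: `order-${rawWebshopOrder["id"]}`,
    source: "webshop",
    lastModified: new Date().toISOString(),
    attributes: rawWebshopOrder,
  };

  const client = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION!);
  const sender = client.createSender("order");
  try {
    await sender.sendMessages({ body: message });
  } finally {
    await sender.close();
    await client.close();
  }
}
```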

Subscribers – they subscribe to messages from the message bus. Each subscriber is responsible for a topic-and-system combination. So, for example, you have your ERP and CRM subscribers, which subscribe to orders and accounts, and then you have your CRM subscriber which only subscribes to the leads – we don't need the leads in the ERP system.

This is the place to handle business logic and concurrency issues. Depending on the data load, you can scale the subscribers. You have two options for Azure Functions: you could host them on a dedicated machine in an App Service Plan – there you can scale them vertically by making the machine more performant, going up to a higher tier, or you could use multiple machines and scale horizontally. The other option is to host them on a Consumption plan. Here the cloud decides for you – based on your load – how many instances you need. The advantage is that you only pay for what you actually use. But a disadvantage, for example, is the cold start-up time, so if you are relying on speed, you would have to use the App Service Plan.
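To make the subscriber side tangible, here is a rough sketch of an order subscriber as a Service-Bus-triggered Azure Function (v3 Node.js programming model) – everything here is illustrative, not actual project code:

```typescript
import { AzureFunction, Context } from "@azure/functions";

// Hypothetical CRM order subscriber. The Service Bus topic and
// subscription are configured in function.json via a serviceBusTrigger
// binding; the function body is where business logic and the
// last-write-wins concurrency check sketched earlier would live.
const orderSubscriber: AzureFunction = async function (
  context: Context,
  message: any
): Promise<void> {
  // Subscribers may assume a well-formed common-data-format message,
  // because the API gateway has already validated it.
  context.log(`Order ${message.globalId} from ${message.source}`);

  // ...apply shouldApplyUpdate(), then upsert the record into the CRM.
};

export default orderSubscriber;
```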

Let's go and see some benchmarks of systems we actually integrated, so you can see that it works. We have two flagship projects here at EMAKINA CEE where we use this kind of system integration.

In one, we used large parts of the Salesforce suite: Salesforce Service Cloud, the Sales Cloud, the Commerce Cloud for the webshop, and the Marketing Cloud, as well as other systems like the PIM and DAM to manage products and digital assets. And we integrated all this with already existing client infrastructure like their ERP system. For the other project, we implemented a Microsoft Dynamics 365 service application, which forms the central workplace for one of the largest service centers in Austria. So that the CRM can perform at its best and provide a full 360-degree view for each service request, it's important that we integrate data from seven different sub systems – and that provides the ultimate service experience.

So much for buzzwords.

Let's take a look at the actual numbers: surprisingly, the key figures are pretty similar for both projects. We have a typical round-trip time of four to six seconds. So basically, if you order something in the webshop, it takes at most six seconds until it arrives in the ERP system and can be processed.

A typical load for our central systems, which are usually the CRMs, is one million API requests per hour. This can even go up at crunch times – for example, if you are doing an initial load – to four or five million API requests per hour.

Actually, I have a funny story to tell. Last Christmas our Microsoft CRM system was acting really slow, and we discovered that Microsoft had actually throttled our API, because they thought there was some kind of DDoS attack going on, or some malfunction. We then had to convince them that everything was okay, and after a few calls with the main guys in San Francisco, they finally unthrottled the system and marked our CRM instance as something special in their monitoring, so that our system will not be throttled again. Actually, they were quite surprised at what we could do with their system, so I think we built something pretty unique here.

With that being said, I now want to finish my presentation with the awesome team that achieved all this:

I am really proud of everyone in that team and what we accomplished together!

System Integration #Done

Learn more about the DevTeam at EMAKINA Central & Eastern Europe GmbH