Conf42 Cloud Native 2025 - Online

- premiere 5PM GMT

Orchestrating the Future: Decentralized Automation & RESTful Integration with Dimensigon DM

Abstract

Dimensigon is an open‐source Python platform that decentralizes and orchestrates complex inter‐server automation. It integrates diverse IT infrastructures via RESTful APIs, enabling continuous automation, secure distributed management, and real‐time log federation—even across disconnected networks.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hi, everyone. Daniel Moya here. In this session we will talk about Dimensigon. It is a decentralized automation tool that enables a RESTful API on all your servers, and it has a command-line interface that is very convenient because it has tab completion: you can quickly distribute commands, pull software from anywhere to anywhere else, and create orchestrations very fast with just a few commands. Dimensigon is our company name, but it is also the name of this technology; we also call it DM. It helps very much when you have two or more subnets or data centers that are connected through very specific points and you want to manage them as a single group of servers. We call that group a dimension. With it we can easily do inter-server operations: run one operation on a server over here and another one over there, or coordinate things so that when some event triggers an automation, something is executed on the other side of the network. We can also execute ad-hoc, standalone automations, and we can run distributed commands, a functionality we will see further on.
About the architecture: we have one repository on GitHub that at the moment contains these three components. dm core is a Python Flask application; it enables the RESTful API. The elevator upgrades a node when it detects that there is a new version inside the network and then spreads the new version to the others, if of course you configure it that way. dshell is a command-line interface that translates all your commands into RESTful API calls to interact with dm core. We also plan to add a web manager, a graphical user interface to browse the metadata held in the distributed database that we implement.
In Docker, we can create the subnets with a couple of commands. After that, we create our bridge container, which is connected to both subnets, and later we create the containers for each subnet, each one representing a data center or a separate network that you have in your company. Over SSH we can execute all of this and quickly create the five containers; then we connect to the first one to create the dimension.
In Dimensigon, we create the dimension with dimensigon new. A dimension is like a cluster, as we said, a group of servers. Our recommendation is usually to create one per life cycle, so production, pre-production and development; that is the best approach for organization and for security. We run docker exec -it dm0 bash -l to load the environment in there. It will ask for a root password, but it is not the root of the operating system; it is the root user inside the command line, and for this test we just put something simple. It then provides us a join token that we can use on any other machine. For example, we go to the next one with docker exec -it dm1 bash -l and run dimensigon join, passing the address of that first node on the first subnet, copy-pasted, plus the token. At this point dm1 is not yet listening, so first we start Dimensigon in the background with nohup. There you go; then we can join the dimension. Once we have joined, I will repeat this operation on the other servers, and when I am done we will continue. You receive a message confirming that the node joined the dimension. The other joins can of course be executed from the other nodes, but in this case, as we have one node acting as a bridge, I will execute them all from dm0.
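For reference, here is a rough sketch of the bootstrap flow narrated above. It is reconstructed from the demo rather than from the official documentation: the image name, network names and the exact arguments of dimensigon new and dimensigon join are approximations, so check the built-in help of each command before relying on them.

    # two isolated Docker networks, one per simulated data center (names are illustrative)
    docker network create net_dc1
    docker network create net_dc2

    # dm0 is the bridge node, attached to both networks
    docker run -d --name dm0 --network net_dc1 <dimensigon-image>
    docker network connect net_dc2 dm0

    # the remaining nodes live in only one subnet each
    docker run -d --name dm1 --network net_dc1 <dimensigon-image>
    docker run -d --name dm2 --network net_dc1 <dimensigon-image>
    docker run -d --name dm3 --network net_dc2 <dimensigon-image>
    docker run -d --name dm4 --network net_dc2 <dimensigon-image>

    # inside dm0: start the dm core and create the dimension
    docker exec -it dm0 bash -l
    nohup dimensigon &             # dm core must be listening before anything else
    dimensigon new                 # prompts for the dshell root password and prints a join token

    # inside every other node: start the dm core and join with dm0's address and the token
    docker exec -it dm1 bash -l
    nohup dimensigon &
    dimensigon join <dm0-address> <join-token>    # argument order as heard in the demo; verify with --help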
For this training we recommend using Docker, where it is very easy to simulate two networks with one server in the middle acting as a bridge. We can imagine one network as, for example, one cloud provider or one data center, and the other one as another data center, on-prem or in a cloud; it does not matter, as long as you understand that these servers have no direct communication with those servers and the communication goes through DM for our automations and everything else. So we create the two networks in Docker and then create our machines. The image is publicly available; you can pull it at any time.
For each new server you have to generate a Dimensigon token. There are three ways of doing this: through the RESTful API, from the command line, or from dm core itself. It generates a token for a new server, and the token can be used only once, so we have to generate one token for each new server from any other server that is already part of the group. For adding the last one we copy our token as before; this time we are using the second subnet.
Now a word about decentralization and distributed management. As we add each of these servers, we can administer, from any of them, any other one or all of the others as a group. That group is called a dimension. The cluster already comes with some logic to handle split-brain in case of network instability, and it also has a mechanism to keep the metadata consistent across the surviving group. All of this is a distributed database built on top of SQLAlchemy.
Coming back to our command line, we now have all the servers up and running. You can see that there is a dshell configuration that at the moment contains just the username and the default server to connect to, because you can download the software and connect to another server where Dimensigon is running, not necessarily localhost. In this case we are not yet connected, but we can do a login with the save option; when we enter our password, our token is saved, and next time we can authenticate directly. With tab we can execute any of these commands; we are already logged in. First of all we run status. You see that tab autocompletes. The output shows the version of the catalog, which is the metadata of the distributed database, and the server we are on. We can keep querying the others by tab. The important thing is to look at the catalog and check that it is at the same version everywhere. If it is not, we have manager catalog refresh to execute: it detects whether there is a newer catalog in the network and pulls it to our location. You have to execute manager catalog refresh from a server that is on a lower or older version, and it will retrieve the latest catalog from anywhere in the network. Additionally, there is a mechanism that auto-updates the catalog when it finds a new version; you will see it in the nohup output or the log of the Dimensigon process.
Once you have two or more nodes it gets more interesting, because some commands only make sense when you have more than one server to administer. dshell was developed on top of prompt_toolkit, with other components that you can see in the repository or on PyPI.
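A rough sketch of that first dshell session, again reconstructed from the narration rather than the documentation; the exact command and flag names (the token generation and the save option of login in particular) should be checked against the built-in help:

    # on any node already in the dimension: issue a one-time token for the next server to join
    dimensigon token                    # approximate; the demo also mentions a RESTful way to do this

    # first interactive session on a node
    dshell
    > login --save                      # saves the token locally so later sessions log in automatically
    > status                            # shows the catalog version and whether the neighbours are alive
    > manager catalog refresh           # run on a node whose catalog is behind to pull the latest version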
And we will go through all the different commands that we have here, but first I want to explain our units. The hierarchy is: one orchestration is one automation. Inside it we create steps, and in each step we define actions, to have some sort of reusability, so we can reuse those actions in many other orchestrations. We can also reuse an orchestration as one step inside another orchestration. A step can be a do step or an undo step: we define under which condition we consider that the orchestration is failing, and then the undo steps are executed, in the order that we wish.
We have a repository that we call the orchestration library. We would like everyone to share their JSON definitions there, because they can easily be imported into the command-line interface, and once you import the JSON you can navigate through the steps and modify them. If you modify an orchestration, a versioning mechanism creates a new version instead of overwriting the current one. This is done for consistency between your automations: you know there is some immutability in the version you were using, and if someone makes a modification, it becomes a new version inside the catalog. As explained before, the catalog spreads over the network of servers you manage with Dimensigon. Whenever you create something new, be it an action, an orchestration, a log federation or a software added to the library, the operation raises the watermark of the catalog, and over time it gets distributed. If something needs immediate distribution, for example because of an orchestration execution, it is fetched right away in the background.
Let's jump into our first demo: the typical hello world, very simple, just with the command-line interface. This is a summary of what you have to type; let's go into the SSH session to test it with dshell. In dshell, first of all we have to create the action that we will use to populate one step of the orchestration, so we just type action create. You see, I can tab and I can also use -h on any sub-option. action create needs a name, which can be hello_world to match the example, and then a type: shell. A comment here: besides shell there are python and ansible, so we can easily integrate any Ansible playbook, there is a special type for requests, and there are some other internal types that I will show you later. In any command we can just type -h to see our options, and we can preview what we are doing. The -h output is interesting because you can see all the parameters you can modify apart from the name, the version and the type we initially configured: the expected standard output, the expected return code, and some other more complex parameters that we won't discuss in a hello world. Then from here we simply set the code. It opens vi, and we type an echo of hello from, where we can directly use variables; there is also a vault. Let me add an exclamation mark. We save it, then we can preview and also submit. Submit creates a new item in the repository.
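As a compact sketch of the action just created; the flag names and the editor flow are approximations taken from the demo, so use tab completion and -h to see the real parameters:

    dshell
    > action create --name hello_world --type shell   # types mentioned: shell, python, ansible, request
    > set code                                         # opens vi to edit the action's code
        echo "Hello from {{server}}!"                  # {{...}} resolves variables (and vault secrets) at run time; name illustrative
    > preview
    > submit                                           # stores the action and raises the catalog watermark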
Submitting also raises the catalog watermark a bit more. When we submit, we get an ID; pay attention to this 880. Then, for example, if we do action list by name, we will see that there are some other internal actions, which we will explain later, but our hello world is there, and we can of course list it and see its contents. You can also do action load from a JSON file, giving the absolute path, to upload actions defined by any other colleague.
Once we have our action, we can go into orch and look at the options. We can list, and we may not have anything at the moment, but we are about to create a hello world, also with the same name, which is no problem because it is a different type of entity. Oh yes, I did a list hello world first, but there is no list hello world; we have to do a create hello world. There is nothing more we need to modify here besides the name, hello_world. In here there are a few more options, among them the step option. A step has to start as do or undo; we can define them in any order, undo first or whatever, and we define the ordering later in case we need an undo. For now let's do our first do step to move forward; we see a 1 on the command line. Then we select the action: set action_template_id, and when we are there I can just tab and we see our ID together with the name we gave in the action definition, version 1. We select it, we can preview, and submit; oh yes, of course, save first, then submit. With this operation we have created our first orchestration.
Then orch list by name hello_world shows the orchestration, including the detail of the steps: the action we defined, type shell, and what it does. In this regard, when we need to create more, we can do an orch copy, which creates version 2, and when we need to load from a JSON file we do a load. We also have an option called dump: dump -h, then dump hello_world to hello.json. If we exit here, you will see hello.json; this is the definition that we could later import with orch load. We never had to write the JSON by hand: we generate JSON, but from the command line it is easy to navigate.
Then we enter the shell again and we can simply run it: orch run. There are options to pass parameters and options for the vault where we keep the secrets. Of course there is the scope, so you can create secrets that are usable only in production or in other environments in case you have several similar environments inside the same dimension; this way you can select secrets from different dictionaries. In this case we just run: we give our orchestration and the target. We can specify exactly one, two or three nodes, for example these three, and with enter it runs on those three servers. The important thing you quickly see is each server with success equal true. The output looks like this.
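Putting the orchestration part together, roughly as typed in the demo; this is an approximation, so the real flag names, the step prompt and the target syntax may differ, and tab completion is the authoritative guide:

    > orch create --name hello_world
    > step do                                # first "do" step; the prompt shows its number
    > set action_template_id <tab>           # tab completes to the hello_world action, version 1
    > preview
    > save
    > submit

    > orch list --name hello_world           # shows the orchestration and its steps
    > orch dump hello_world hello.json       # export the definition; orch load imports such a file
    > orch run hello_world --target dm1 dm2 dm3   # watch for success=true on every target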
Apart from that, I will show you another way of executing very simple statements at once on all the servers you wish. From the presentation I recorded an output for your review: you see the output, and there is also interesting information like the return code, the parameters that were passed, and the elapsed time of the execution. At the end you have a summary with success equal true for the orchestration on all the targets where it succeeded. Is it similar to Ansible? Coming back, the next topic is distributed commands. You can do command shell, which enables a shell inside dshell; you select the target servers, completing them with tab, and any upcoming command will execute on all of them. Let's try it. Here in the shell we tab all the time; using the help on any command or sub-command gives you a very nice overview. In command we use shell and we set the target, again abusing the tab. We will use just two servers to keep the output short. From there it is as easy as typing any command. We can use this to query everywhere, or to double-check that our systems are ready for an upcoming automation we are about to run, as some sort of preparation. It also helps very much with troubleshooting across many servers. I first found this kind of distributed shell useful in Exadata, for those who have some experience with Oracle, and from that idea, which is also implemented in Python, we came up with these distributed commands. There is a rough sketch of the command shell flow right after this paragraph.
Coming back to the presentation, the next topic is the software library. We can add a software on any host in our network and use it anywhere else; it will pull the software either one-off or on demand from an orchestration. As you can check with the help, you can do software add, software list and also software send. In this demo we will take a software that we place on dm4, which is isolated from dm1, and send it to dm1 through a proxy on dm0. So let's do a quick one-off transfer from server dm4 to server dm1. As we do not have the software on dm4 yet, we place it there first: we copy the software to dm4 using our alias for the machine, about 200 megabytes, and then we still have to fix the ownership of the file. In Docker, that means first doing the docker cp of the software installer into dm4 and then, inside the container, changing the ownership of the file. Once we have that, we quickly connect to the node that has our software: docker exec -it dm4 bash -l. With the correct permissions in place we can go into dshell. Let me check: we do not have the token here yet, so we do a login with the save option for auto-login, which is very comfortable. Now that we have the token we can do the software add. Instead of going directly, I will use -h to show you the help for software. We have nothing in the list yet, so we do the add: family test, name dummy_software, version 1.0, and the absolute path of the file. Just like that we add the software, and the dictionary, the catalog, on this node raises its watermark.
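A minimal sketch of that distributed shell, assuming flag names similar to what was typed in the demo; the target selection syntax in particular is an approximation:

    > command shell --target dm1 dm2    # tab completes the server names
    # from here on, every line you type runs on all selected servers, for example:
    uptime
    df -h /tmp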
We can actually see that here: we are on the node ending in .65, so I have to be a bit quick. Its catalog now shows something like the 23rd of February at 7:12, while the other node has another dictionary that is not yet up to date. Anyway, we can look at the software: software list, either by ID or by name, and we see something nice, dummy_software. Our dummy software has its size in bytes and a checksum, and we can refer to the software by this ID.
As you can see, there is also an orchestration for this. If we list by name, we have a send software orchestration that we can execute on its own or inside any other orchestration; it is a native type. The schema is the YAML that you build with a few parameters, but the required ones are just the software and the server it has to go to, and internally it more or less executes this same flow.
So with the software in place we can do software send, which is a one-off. Let's look at -h first: we have the destination path (let's use /tmp), the software ID (which it translates for us, very easy), and then the destination server. We have the software on server dm4, which, if you remember, is the container whose address ends in .65; with containers the names are not as meaningful as the server names in your company, we just get these automated names. The destination is another node, not the .65 one. Before we transfer it we can also go to that server; actually we were on the one ending in .40, and there we see that the software is not in /tmp. When we run the send, we can add the foreground option, so the command waits until the software has been successfully transferred, and the force option, so it overwrites the file if it is already there. Let's go.
This first attempt fails because the software does not exist at the destination, which means the destination's catalog is outdated and we just have to refresh it; sometimes we have to do this. We go to the destination, and in software list we see nothing, but we can do manager catalog refresh. This pulls the catalog from the other node. Now software list shows it, so, having pulled the new version, we can retry the send, and now we have a transfer ID. That was so fast that I am repeating the test. When you have a software send, you get a transfer ID; with transfer list and that ID we have no info here, so let me look from the other side. On the destination, transfer list shows the last one in progress, with 101 chunks. Let's check what we have there: in /tmp we see all these chunks; they are transferred one by one, then joined, and when they are joined they form the final binary file. You see? If I list again, the file is finally assembled.
What has happened here is very interesting for understanding the possibilities of this technology. Using it one-off with the software is just a matter of convenience, but if we use it inside an automation, it can be something like: in one step we pull the software to however many distant servers we wish, install it, and do the post-installation steps in other actions.
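As a sketch, the software flow from the demo looks roughly like this; the flag names (family, version, destination path, foreground, force) are approximations of what was narrated, and the docker commands assume a file name and owner chosen purely for illustration:

    # stage the installer on dm4 from the Docker host
    docker cp dummy_software.bin dm4:/opt/dummy_software.bin
    docker exec dm4 chown dimensigon:dimensigon /opt/dummy_software.bin   # owner name illustrative

    # on dm4: register it in the software library
    > software add --family test --name dummy_software --version 1.0 --file /opt/dummy_software.bin

    # one-off transfer to a node in the other subnet; the traffic is proxied through dm0
    > software send --dest-path /tmp --software <software-id> --server dm1 --foreground --force
    > transfer list          # on the destination: shows the chunks until the file is reassembled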
Yes, what has happened here is that we have transferred a software sitting in one location that has no direct communication with another location. This is interesting for many companies that have some software in one part of the organization, where it is sometimes difficult to mount an NFS from one place to another. With the proxy we can skip all those network difficulties and pass the software from one place to another on the fly, also as part of an orchestration.
Before we try more complex orchestrations, let me do a quick catch-up on all the commands we have. action, as we already said, is a part of a step, which is inside an orchestration. command is the distributed command. env lets us list the environment and also set or get variables; these can be used inside orchestrations as well, but for secrets we have the vault, which we will come to in a moment. exec is executions: we can list executions, and in this case we have the hello world that we executed earlier, which was a success on those servers. logfed is log federation; we will come to this topic of subscribing to a log or to a directory so that its content is replicated from many directories anywhere to one or more other locations. login is just the login inside the application: inside login we still have the save option, to log in only once, and we can also log in as another username; creating users is a more advanced topic. In manager we have the catalog refresh, and we can also manage the locking mechanism: we can ignore one server and unignore it to include it back into the clustering and locking, and we can show its status, for example that it is unlocked for catalog operations, for upgrades or for orchestration executions; when we decide to lock it and not allow concurrency because a very critical orchestration is running, we can see it here. Then we have orch, the orchestrations, where we list, create, copy, load from JSON or just run. The interesting thing at the moment is just to list: I see that we have only the hello world; the other entries are the internal actions that we saw earlier.
How about those internal actions? Looking at the list by name, I want to explain send software: it executes inside an orchestration, so in one step we define an action as send software and pass as parameters the destination directory and the server the software should be sent to. wait servers is for when you want an orchestration to pause in the middle while another server is being provisioned, until that server has been added to Dimensigon. orchestration is for when you want to run an orchestration from inside an action; you can specify it there. wait route to server: when you add a server, there has to be an existing route to reach it; this is a second level of adding a server, once it is already inside the catalog. With delete servers you can wait until a server is no longer part of the dimension, which means you can continue because whatever was executing there has finished. There is also our own action, the one we used in the orchestration of the same name. Then there is cancel, and after this we have the ping command.
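Before moving on to ping, a loose sketch of a few of those housekeeping commands as they were narrated; treat every subcommand and flag here as an approximation to be verified with -h inside dshell:

    > env list                          # environment variables; env set / env get were also mentioned
    > exec list                         # past orchestration executions and their per-server result
    > manager catalog refresh           # pull a newer catalog from the network
    > manager ignore <server>           # exclude an unstable node from the locking mechanism
    > manager unignore <server>         # bring it back into the cluster locking
    > manager show                      # lock status for catalog, upgrade and orchestration scopes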
With ping we can ping any number of nodes to check whether Dimensigon is up and running on them, not the server itself. In server there is some interesting stuff: we can list the servers, we can delete a server from the dimension, and we can see its routes, which is interesting. We are right now on dm0, so we can see the routes from here; they are not that interesting because this node can reach all the servers. The cost is the number of hops. So let's go to another server and look at the routes from there: that one is isolated, like in one corner of one of the two networks we have. If we list the routes there, we see that two servers are reachable through a proxy server, which is the server we were connected to before, and another two servers are reachable directly: one is, of course, the proxy server and the other is the neighbour in the same network. That is what is interesting about server. Still on the list, we saw that we can get some details, so listing by name with details we can see the gates, that is, each IP the server is listening on for incoming connections. This is interesting as well. Then let's clear the screen again.
About software, we can see what we have in the software library, for example by ID. We have just one, so when you tab it resolves to that single ID; if we had more we would see the options in tabular form. Then there is status, which we used at the beginning of this tutorial. We can query the status of the others, and we can also include the detail. If we query a couple of them, you see, just by querying, that the catalog is in sync with the others and that the other nodes are registered as alive, not in coma; a node goes into coma when you stop the Dimensigon service on it, and its neighbours notice. We also see, in this case, the two networks with the ID and the server name, while the other node has another network. It is also kind of interesting to see when it was executed, and the version of the software that is running, which is managed by the elevator. Apart from that we have sync; to be honest, I do not remember what this one is for, I would have to double-check the documentation. transfer we already saw with transfer list: in this case we have two completed ones, one that was too fast for me to show you the chunks being transferred, and the other one that we watched until it completed.
Apart from the transfers, I like the vault very much. With double curly braces inside the code of a Python or shell action we can reference secrets, and these secrets are accessible through the vault dictionary: vault dot the variable name. Of course we can also define a scope. Here we have nothing yet: vault list shows nothing. So let's write into the vault; with -h we see the options, and then we can just write, scope global, a variable name, say my_username from a database, and the password. As you see, global is the default scope, and the node is already connecting with the other nodes, spreading the word about this secret.
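A small sketch of the vault usage described above; the write and list flags and the placeholder syntax are approximations from the demo, and the credential values are obviously just illustrative:

    > vault write --scope global --name my_username --value app_user
    > vault write --scope global --name my_password --value s3cret
    > vault list --scope global          # without an explicit scope nothing is listed, by design

    # inside an action's shell (or python) code, secrets are referenced with double curly braces:
    mysql -u {{vault.my_username}} -p{{vault.my_password}} -e "select 1"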
Actually, if we now look from this node to another one, we see a new version, and within a matter of about five minutes the catalog will spread to the other machines. If we do vault list without indicating the scope, nothing is listed, because for security we have configured it that way, but if we include the scope we see the secret directly, with its value as well; that behaviour can be changed in the configuration.
That is all, so far, for the main commands we can use. Of course, around orchestrations there are many more options in the command line for defining them and for copying, which is what we would use to generate version two of our hello world and continue the tutorial. But to keep it short, this time it is better to just refresh some of the concepts we explained, so that you come away with a clear idea of what we have, instead of complicating it with more examples.
Remember that an orchestration is made of steps, and we define the actions beforehand. We have shown only the shell action, but there is also ansible, so you can integrate playbooks, python directly, another orchestration made of whatever steps you like, and also the internal methods we saw, like send software or wait for servers; those also require some testing on your side if you want to use them. There is some complex logic that can execute as parallel processes: if two or more steps depend on the same step, they can be executed in parallel, and the same works for the undo. So if step one fails, it executes step seven as its undo; steps two and ten are executed in parallel if step one is successful; if step two fails, it executes step eleven as its undo; and step three has a more complex undo, since it first executes undo step four, and if four is successful it then executes steps five and six, and so on. As you can see, step five branches off and also has step eight, and after that step nine. This kind of situation can easily be defined with the command line, as we mentioned.
This is a polyglot technology, so you can combine steps written in different languages, or even interfaces to different technologies like Chef or Terraform on another machine, and so on; you can combine it all and have something defined very easily and fast.
Thanks for joining this presentation of Dimensigon. If you have any questions, please feel free to contact me; I have the LinkedIn link in my profile on Conf42. Thanks and goodbye.

Daniel Moya

Oracle DBA & Specialist for Cloud migrations @ Dimensigon

Daniel Moya's LinkedIn account


