The “baseline” concept is firmly anchored in the IT Infrastructure Library (ITIL), but it is not self-explanatory, and paraphrasing it as a “measurement basis” does not get us much further. That is why we look at the concept of ITIL baselines directly. We will show you what it is all about and how you can implement it with i-doit and network discovery.
The term baselining was first used in Great Britain in the late 1980s. At that time it was written down in version 1 of the IT Infrastructure Library (ITIL). Even today, the term still appears in several places in the ITIL literature.
The ITIL documents describe not just one type of baseline, but three.
Up to this point everything still sounds simple. In the following explanation, we will mainly deal with the Configuration Baseline and its practical application in IT operations.
Before we do this, it’s useful to take a look at the term “Configuration”. Once you understand what this is really about, the further examples will become even clearer.
What at first glance looks like a simple translation of the German word “Konfiguration” has much more to it. It is not simply a defined combination of hardware and software components.
The ITIL glossary has the following definition:
“A generic term for a group of Configuration Items that are used together to deliver an IT Service or a more comprehensive part of an IT Service. ‘Configuration’ also refers to the parameter settings for one or more CIs.”
At first glance, this definition seems more confusing than enlightening. But it reads as more complicated than it actually is. What it really says is that several Configuration Items, used together along with their parameter settings, represent a configuration. After all, a Configuration Item (CI) is the smallest possible documentation unit, the atom in the universe of IT asset management, so to speak. i-doit refers to these CIs as “objects”.
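To make this tangible, here is a minimal sketch in Python of such a documentation “atom”. The field names and sample values are our own illustration, not i-doit’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """Smallest documentation unit: one CI (an "object" in i-doit terms)."""
    name: str
    ci_type: str                                    # e.g. "virtual server"
    attributes: dict = field(default_factory=dict)  # parameter settings
    relations: list = field(default_factory=list)   # links to other CIs

# A configuration is a group of CIs used together to deliver an IT service:
app_server = ConfigurationItem("crm-app-01", "virtual server", {"ram_gb": 8})
crm_app = ConfigurationItem("CRM", "application", {"version": "1.0"},
                            relations=["runs on crm-app-01"])
configuration = [app_server, crm_app]
```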
If you want to introduce a CMDB in your company and enter your data for the first time, you should clarify a few things beforehand. First and foremost, you must answer the question of whether the current state of the IT infrastructure is actually “right”. This question should be easy to answer. However, if you ask around in different parts of the company, the answer is no longer as clear as you might have thought.
Here is an example:
A company introduces new CRM software. The project required an application server with 8 GB of RAM. The IT department provided a virtual machine as agreed and charged it to the business department.
The server’s original configuration is described in the project documentation, which an external service provider delivered as a PDF document; nobody has changed anything since. This is the target documentation at the time the project began. All project participants were satisfied: TARGET and ACTUAL were the same.
After the project was completed, a software update was installed. As it turned out, the memory of the application server had to be upgraded for this. Your friendly administrator did this quickly, with just a few mouse clicks. The virtual server is now allocated 16 GB of RAM.
The system runs to the users’ satisfaction. But the documented information starts to unravel. Strictly speaking, the TARGET is still defined as 8 GB. The business department may not even be aware of the difficulties that occurred during the update. So what do the other processes refer to?
The administrator will say: “To the ACTUAL!”
Ideally, in the event of an error, they will remember that at the time they increased the memory of the application server to 16 GB.
The business department (and perhaps the external service provider) will say that the target documentation is decisive. After all, this configuration was the status everyone was aware of.
So, you see that even a relatively small thing can have a great impact.
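In data terms, the dispute boils down to a simple comparison. Here is a hedged sketch with the numbers from the example; the structures are invented for illustration, not taken from i-doit or JDisc:

```python
# TARGET: what the project documentation says; ACTUAL: what is really running.
target = {"host": "crm-app-01", "ram_gb": 8}
actual = {"host": "crm-app-01", "ram_gb": 16}

# Compare attribute by attribute and report every deviation.
deviations = {key: (target[key], actual.get(key))
              for key in target if target[key] != actual.get(key)}

for attr, (planned, found) in deviations.items():
    print(f"{attr}: TARGET={planned}, ACTUAL={found}")
# ram_gb: TARGET=8, ACTUAL=16
```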
What happens in the event of a break(down)?
Does the error correction process refer to the old (and only) document? Then, in a formally correct manner, an old target configuration is restored after the failure. However, the application will not work with it. The result is unnecessary research and lost time until the CRM service is restored.
The reason is clear: there is insufficient documentation of the target status.
A discrepancy in the main memory of a server can already create a significant problem. However, if small changes to configurations affect licence management, they can become even more cost-intensive for the company. Once security-related aspects enter the equation, the consequences of an incorrect TARGET configuration can be disastrous.
An admin is usually responsible for the setup and ongoing maintenance of a CMDB. As a technician, they tend to assume that the current status in the network is also the desired status. After all, the system is running. The problem with this is that neither ACTUAL nor TARGET is actually defined. There is only a functional status, which is assumed to be the basis for recovery in case of an error.
The simplest way to create such a status is with snapshots of virtual environments or with backups at fixed times. These snapshots are supposed to be functional. We have become accustomed to not documenting them further.
The same applies to the configuration of a network. The software installed on the clients is also usually neglected; how it got there doesn’t matter. In the best case, the backup log contains a one-line entry: “Full backup from 29.02.2020”.
But how do the people in charge of the various departments see it? How do those responsible for budgets, accounting or processes deal with non-existent documentation?
If we only document the ACTUAL state by means of a snapshot, we may miss something important. In the example above, someone has to pay for the additional 8 GB that were allocated. In other cases, licences have to be adjusted. The target documentation must be updated and distributed to the right people. And we may want to find out why our change process allows the ACTUAL to suddenly deviate from the TARGET in the first place.
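To make the billing aspect tangible with deliberately invented numbers (the chargeback rate is a pure assumption for illustration):

```python
# Hypothetical internal chargeback: the department pays per allocated GB.
rate_per_gb_month = 2.50   # assumed rate, purely illustrative

billed_gb = 8     # what the TARGET documentation still says
actual_gb = 16    # what the hypervisor has really allocated

# The undocumented upgrade silently changes what should be invoiced.
monthly_gap = (actual_gb - billed_gb) * rate_per_gb_month
print(f"Unbilled allocation: {monthly_gap:.2f} per month")   # 20.00
```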
All of this is reason enough to talk about how to deal with past omissions and inconsistencies in a project. But it is also reason enough to set out a baseline that is clearly defined for everyone involved. A clear basis must be created, which can be worked towards and from which work can continue.
One thing will quickly become clear: the technical snapshot alone is insufficient. We also need to be able to rework, adapt and enrich it with additional information, and we want to be able to plan with it in the future. This is exactly where the baseline comes in.
We draw a line and say: all parties involved, including accounting, technology, licensing and documentation, have agreed on this one truth. That is the new TARGET. There must be no deviations from it, for any reason. If there are, we want to discover them and take them into consideration!
A baseline can be triggered by many things. In our example, the new release of the CRM software was the reason to expand the memory of the server. This means that we also need more storage for the backup and more time for a restore. Perhaps a new version of the web server was installed during the update, or PHP was updated.
Has the database also been updated? If so, this automatically leads to rework for all other applications running on the database cluster. All in all, our baseline “CRM Update Spring 2020” therefore comprises many individual points that are directly related to each other and together form one coherent baseline. This baseline is now our new target plan: for invoicing, for backup concepts, for licence management, in the case of malfunction and, finally, for our entire IT documentation.
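Conceptually, such a baseline is little more than a named, frozen set of CI states that everyone has agreed on. Here is a minimal sketch of that idea, with illustrative values and a structure of our own invention (not i-doit’s data model):

```python
from datetime import date

# A baseline freezes the agreed state of a group of related CIs.
baseline = {
    "name": "CRM Update Spring 2020",
    "frozen_on": date(2020, 4, 1),      # illustrative date
    "items": {
        "crm-app-01": {"ram_gb": 16, "role": "application server"},
        "crm-db-01":  {"engine": "MS SQL", "role": "database server"},
        "crm-web-01": {"php": "updated", "role": "web server"},
    },
}

# From now on, billing, backup concepts, licence management and incident
# handling all measure against baseline["items"] as the new TARGET.
```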
You define the first baseline the moment you first enter data into the CMDB. It is irrelevant whether this happens officially or not. All other information and documentation becomes outdated and may, from now on, only be used as a historical reference. You could, for example, attach it to the corresponding configuration item as a hyperlink or document.
From now on, your CMDB is the leading system for all ITSM processes. Keep in mind that the ACTUAL recorded in the CMDB becomes the TARGET for the follow-up processes. The exception: you verify it immediately and adjust it if necessary.
This exception is important. It forces us to think about how much information we transfer to the CMDB at once. Do we transfer all the data directly or only what we can verify and correct within a reasonable time?
With this consideration in mind, note that from the time of their inclusion in the CMDB, all CIs in it are under “Configuration Control”.
For our aforementioned example, this means:
The change management process must ensure that the same situation does not occur again the next time the memory is upgraded. This applies to the revision of the target documentation, the triggering of billing and also to the ACTUAL documentation. Which data may be overwritten “just like that” must be determined in advance. We will return to this later.
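What such an advance agreement could look like in practice: a sketch of a simple overwrite policy that an import routine might enforce. The field names and the split between the two sets are entirely our own assumptions:

```python
# Fields discovery may overwrite automatically vs. fields that first
# require an approved change record (illustrative policy, not i-doit's).
AUTO_OVERWRITE  = {"last_seen", "uptime", "ip_address"}
CHANGE_REQUIRED = {"ram_gb", "cpu_cores", "installed_software"}

def apply_discovered(ci: dict, discovered: dict, approved: set) -> list:
    """Apply discovered values; collect everything lacking a change record."""
    conflicts = []
    for field_name, value in discovered.items():
        if field_name in AUTO_OVERWRITE or field_name in approved:
            ci[field_name] = value
        elif ci.get(field_name) != value:
            conflicts.append(field_name)   # deviation without approval
    return conflicts

ci = {"ram_gb": 8, "ip_address": "10.0.0.5"}
print(apply_discovered(ci, {"ram_gb": 16, "ip_address": "10.0.0.7"}, set()))
# ['ram_gb'] -> must go through change management first
```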
Incidentally, ITIL has its own term for the process of initial inclusion in the CMDB and defines it as follows:
Configuration Identification
The activity responsible for collecting information about configuration items and their relationships and for loading this information into the CMDB. In addition, configuration identification involves assigning labels to the CIs themselves to enable a search for the corresponding configuration records.
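In data terms, configuration identification boils down to collecting, labelling and loading. A toy sketch of the labelling part; the label scheme is invented for illustration:

```python
import itertools

label_counter = itertools.count(1)

def identify(raw_device: dict) -> dict:
    """Wrap a discovered device as a CI with a unique, searchable label."""
    return {
        "label": f"CI-{next(label_counter):05d}",  # invented label scheme
        "attributes": raw_device,
        "relations": [],   # to be filled with links to other CIs
    }

print(identify({"host": "crm-app-01", "ram_gb": 16})["label"])   # CI-00001
```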
It’s now time for the next task: taking a snapshot of the configuration you have found. This task is not to be underestimated, even in a small network. Within a relatively short time frame, the configurations of the participating network nodes must be read out and entered into a database. No changes should be made during this initial recording.
Keeping this information up to date, even for just a few days, is a demanding challenge. Fortunately, this is where automation in the form of network discovery comes into play. We use JDisc as the tool of choice here. This puts us in an excellent position, because the manufacturer takes care of supporting the latest devices and standards in a timely manner.
When we let JDisc loose on our network, every network node found via DNS, directories or ARP caches is verified. Various methods are then used to read out the device configurations stored in the firmware or operating system.
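Reduced to a few lines, the very first step of such a run looks roughly like this. A real product like JDisc uses many more protocols (SNMP, WMI, SSH and others), so treat this purely as a conceptual sketch with made-up host names:

```python
import socket

# Candidate nodes as they might come out of DNS, a directory or ARP caches.
candidates = ["crm-app-01.example.com", "crm-db-01.example.com"]

for host in candidates:
    try:
        ip = socket.gethostbyname(host)         # does the node resolve?
        name, _, _ = socket.gethostbyaddr(ip)   # reverse lookup as cross-check
        print(f"{host} -> {ip} ({name})")
    except OSError:
        print(f"{host}: could not be verified")
```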
Primarily, this saves us time. However, don’t underestimate the quality factor either: nobody types that fast and error-free. The discovered devices are then listed in tabular form and can be analysed, grouped and sorted according to certain properties and configurations.
We have a habit of searching for similar characteristics in things and of grouping them together. We can put this habit to good use with the data obtained through JDisc. In this way we are able to deal with the huge amount of data in the network.
We group similar assets such as notebooks, PCs or IP phones. We also form separate scan groups from IP networks that are structured geographically. The combination of both gives us dynamic groups: for example, we could group clients in France and telephones in the USA.
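The grouping idea, sketched with invented sample data; JDisc’s own group definitions look different, but the principle is the same:

```python
from collections import defaultdict

# Discovered assets: (name, asset type, location derived from the IP network).
assets = [
    ("nb-0142",  "notebook", "France"),
    ("pc-0007",  "PC",       "USA"),
    ("tel-0301", "IP phone", "USA"),
    ("nb-0201",  "notebook", "France"),
]

# Dynamic groups emerge from the combination of type and geography.
groups = defaultdict(list)
for name, asset_type, location in assets:
    groups[(asset_type, location)].append(name)

print(groups[("notebook", "France")])   # ['nb-0142', 'nb-0201']
print(groups[("IP phone", "USA")])      # ['tel-0301']
```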
In this way, we structure the data into processable blocks while still in the discovery tool and pre-qualify it before it is entered into the CMDB. In JDisc, these blocks can be checked for completeness in advance and then transferred to the CMDB together, at the same time. The JDisc interface integrated in i-doit can handle these groups.
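A sketch of what such a pre-qualification step could check; the set of required fields is our own assumption:

```python
# Minimal completeness check before anything is handed over to the CMDB.
REQUIRED_FIELDS = {"name", "serial", "ip_address", "asset_type"}

def prequalify(records: list) -> tuple:
    """Split records into transfer-ready and incomplete ones."""
    ready, incomplete = [], []
    for rec in records:
        (ready if REQUIRED_FIELDS <= rec.keys() else incomplete).append(rec)
    return ready, incomplete

records = [
    {"name": "nb-0142", "serial": "X1", "ip_address": "10.0.1.5",
     "asset_type": "notebook"},
    {"name": "pc-0007", "serial": "X2"},   # incomplete: stays behind
]
ready, incomplete = prequalify(records)
print(len(ready), len(incomplete))   # 1 1
```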
Now the post-processing in the CMDB begins. This mainly concerns enrichment with data that could not be identified via the discovery. This includes contracts, accounting values and unique labels. This work can be distributed within the team and handled by appropriate regional supervisors. Important: TARGET and ACTUAL should be verified.
Once TARGET and ACTUAL have been verified, completed and approved for all subsequent processes, this should be made known. After all, how else are your colleagues supposed to know that the data is verified?
This information belongs in the database. We need a note in the lifecycle, and the keyword here is “logbook”.

The workflows that i-doit brings along are perfect for such “notes”. With a suitable workflow type, a baseline is quickly created: the data imported from JDisc into the CMDB is added to the baseline and supplemented by a meaningful text.

Once the agreed process is completed, the task is set to “done”. Each action results in a logbook entry for the objects in question. The screenshot shows the log entry of a single workstation computer, which is easy to understand for anyone who looks at it later.
If data is to be transferred to the CMDB, we should follow a defined process. Based on the steps described above, this process can look like the following:

1. Read out the configurations of the network nodes with the discovery tool.
2. Group and pre-qualify the data while it is still in the discovery tool.
3. Transfer the verified groups to the CMDB together.
4. Enrich the imported objects with data that discovery cannot identify, such as contracts, accounting values and labels.
5. Verify TARGET and ACTUAL.
6. Document the approval for all subsequent processes in the logbook via a workflow.
This all sounds very simple so far. You might get the impression that the interaction of CMDB and discovery solves all conceivable problems. However, this is not yet the case.
Let’s return to the example of our CRM system. At first glance, we see that we have now transferred all related data to the CMDB. The system works and the users are satisfied.
However, the CMDB cannot do the thinking for you. If we want to know what belongs to the IT network of the CRM service, the CMDB can only give us a limited answer. In this case, human thinking is much more important than what the machine can do.
Even the definition of the configuration indicates that it is not only about the properties of individual CIs; the dependencies between them are just as important. What is paramount is the interplay of several technical and human services. Only this network makes the service used by the users possible.
So how can discovery help us to determine a service configuration? And where are the limits of this technology?
The data provided by the discovery system also includes the three servers we are looking at. As humans, we know (or can deduce) that they fill the roles of database server, application server and web server for our CRM service.
The discovery also picks up the corresponding indications. The system can detect running processes, services and daemons. However, this does not mean that a service running on a server is also desired or necessary. Nor is it apparent to whom these services are offered.
The systems need to be told that the superordinate service is the CRM service. Machines cannot recognise this on their own, and the discovery solution is no exception. It can examine and analyse the connections between the devices, and it can record the network topology. However, interpreting this information is left to us humans.
There is now a tempting opportunity: we could document the CRM service with the help of an automatically executed analysis of the network topology. However, there is some fuzziness that needs to be taken into account. If two machines are not connected at the time of the discovery run, this does not automatically mean that they are never connected. If two servers communicate via port 1433, we can assume that an MS SQL Server is doing its job here, but we do not know for certain. And even when a port is open, this does not mean that it should be open or that it is needed at all.
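What such a cautious interpretation might look like in code: a small heuristic that turns observed ports into hedged hints rather than facts. The port table is a plain assumption for illustration:

```python
# Heuristic mapping of observed ports to *probable* roles only.
PORT_HINTS = {
    1433: "MS SQL Server (probable, verify manually)",
    443:  "HTTPS web server (probable, verify manually)",
    3306: "MySQL/MariaDB (probable, verify manually)",
}

def interpret(observed_ports: list) -> list:
    """Return hedged hints; an open port is a clue, never a certainty."""
    return [PORT_HINTS.get(port, f"port {port}: unknown service")
            for port in observed_ports]

print(interpret([1433, 8080]))
# ['MS SQL Server (probable, verify manually)', 'port 8080: unknown service']
```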
As a result, the discovery saves us an enormous amount of analysis time. The information we receive gives us solid clues and enables further investigation. The problem is that we cannot treat the data as the truth: is it showing us the ACTUAL state or the TARGET state? It’s advisable to proceed with caution here. What we are doing is connecting patterns, but we are still far from complete knowledge.
In order to transfer all this information to a common database like the CMDB, we need a model for the documentation. This model must be one that everyone involved can follow; ideally, it should be self-explanatory. Several such models are described in the ITIL literature.
We separate the functional description and target documentation from the ACTUAL configuration identified with JDisc. And we define the following rules:
The following model could therefore be easily mapped in a CMDB:
Configuration changes such as the following are also possible and self-explanatory:
There are several advantages achieved by using this model:
However, there are also some drawbacks that must not be hidden or ignored. For example:
The model we have presented can be applied very well to servers, applications or services: TARGET and ACTUAL are documented in separate objects. However, when we look at the clients, we can no longer apply the model in this way.
Nobody documents a matching target object for each individual client. Instead, it’s useful to take a pragmatic approach and work with references: we define individual configuration items as references for the target configuration. These reference configuration items can then be pointed to, for example, with the help of an additional attribute such as “Link to the target CI”.
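A sketch of this reference approach: one reference CI defines the client TARGET, and each client only carries a link to it. The attribute name comes from the text above; everything else is our own illustration:

```python
# One reference CI holds the TARGET configuration for a whole client class.
reference_cis = {
    "REF-NOTEBOOK-2020": {"ram_gb": 16, "os": "Windows 10"},
}

clients = [
    {"name": "nb-0142", "link_to_target_ci": "REF-NOTEBOOK-2020",
     "actual": {"ram_gb": 16, "os": "Windows 10"}},
    {"name": "nb-0201", "link_to_target_ci": "REF-NOTEBOOK-2020",
     "actual": {"ram_gb": 8, "os": "Windows 10"}},
]

# Every client is compared against its referenced TARGET, not an own object.
for client in clients:
    target = reference_cis[client["link_to_target_ci"]]
    drift = {k: v for k, v in client["actual"].items() if target.get(k) != v}
    if drift:
        print(f"{client['name']} deviates from the reference: {drift}")
# nb-0201 deviates from the reference: {'ram_gb': 8}
```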
Are you ready for your first baseline? We are confident that the combination of JDisc and i-doit will give you an easy start in the world of baselining. You can test both systems free of charge.