Virtual Db2 User Group Sponsors

IntelliMagic

Virtual Db2 User Group | March 2024

A Day in the Life of a Db2 for z/OS Schema

Toine Michielse, Solutions Architect
Broadcom Software

There is a lot of talk about DevOps and agile development, and with good reason. But what does this mean for a Db2 for z/OS database administrator? In this presentation I will provide an overview of how the process of schema management can be organized with well-accepted processes and tools. This will help to optimize the time of scarce DBA resources while safeguarding or even improving the quality of deliverables.


Toine Michielse

I have been working with Db2 for z/OS ever since V1. In my career I have worked as a COBOL/IMS/Db2 programmer, an IMS/Db2 DBA, and a Db2 systems engineer. I have also worked a number of years as a Db2 Lab advocate, supporting IBM customers worldwide.

Before joining Broadcom I worked at SwissRe. Besides my function as their lead mainframe architect, I also managed the Mainframe Capacity team as well as a team of Db2 specialists providing education and consultancy to SwissRe development units.

I currently work at Broadcom as one of the Db2 Evangelists and Solutions Architects. I am also proud to have been an IBM Champion since 2022. My main focus is mainframe modernization in all its aspects.

When I am not working I enjoy paragliding and I play drums in a rock band.

Read the Transcription

Amanda Hendley (00:00)
Welcome, everyone. My name is Amanda Hendley. I am your host for today's virtual user group. We are here today to talk about Db2, so I'm really excited for today's session. Pleased to have you here. As you noticed, I just turned on the recording. So after today's session, we will have an audio recording, or, I'm sorry, a full video recording available, as well as the PowerPoint presentation in PDF. And then we'll also provide you all with a transcript, in case you can't or don't want to listen to the whole thing, maybe you're at the office; you can read the transcript as well. If you're not already subscribed to our newsletter, please go do that, because when we don't have meetings in a month, you can get the newsletter, and we'll include in there articles, a recap of the last session, and important links to check out things in the world of Db2.

Amanda Hendley (01:00)
So our agenda for today, quick and simple. We've already accomplished our introductory remarks. We're going to have our presentation. We'll have time for Q and A, and talk about some news and articles that are out there, and then we will announce when the next date is.

Amanda Hendley (01:21)
So I do want to thank our partner for today's session. IntelliMagic is our sponsor. They're a supporter of this and a couple of other groups. Please go check them out on their website. And I do have a resource for you from them to share a little bit later. So I am just organizing my screen a little bit, getting rid of some of these open windows.

Amanda Hendley (01:48)
After today's session, we have a very quick and simple exit survey. It's just going to ping you as you're leaving. And it's one question. So I hope you'll take a moment before you exit out completely and let us know if you learned something today.

Amanda Hendley (02:05)
And then the only other thing I want to mention is that we do have our Most Influential People in Mainframe program running at Planet Mainframe, and we're doing a call for nominations. It is not a ranked competition or anything. We're just recognizing the most influential people in mainframe all next month as a part of the anniversary birthday of the mainframe. So our deadline for nominations is this Friday, the 23rd. You can scan this QR code or go to Planet Mainframe, and you'll see the banners. It's really easy: give us the person's name and why you're nominating them. Everyone will get recognized, but we'll be doing some feature articles and giving shout-outs to all the people that have made mainframe what it is today.

Amanda Hendley (03:00)
And now I'm pleased to welcome Toine for "A Day in the Life of a Db2 for z/OS Schema". He's a solutions architect for Broadcom. Let me stop my share so he can start his share while I give you a little bit more intro. He's a Db2 evangelist and solutions architect over at Broadcom. He's been working with Db2 ever since version one, has had a variety of roles, and has been an IBM Champion since 2022. Some fun facts: when he's not working and doing the amazing work he does, he enjoys paragliding and plays drums in a rock band. So we're excited to have you. Thanks for joining us.

Toine Michielse (03:42)
Thank you very much, Amanda. I hope everyone can hear me, and it's really good to be here. I see some names that I recognize. Of course there's Craig Mullins, but there's also Adrian Hohenstein, who I used to work with in Switzerland. So I know there's folks from Europe joining. I saw a name that I recognized to be in Australia, another good friend of mine. So I was really excited to see this worldwide population joining here through Amanda. So thank you for joining in for the session. I hope to make it a little bit entertaining. Yes, it's the day in the life of a Db2 for z/OS schema, at least the way that I see it, the way that I would like to see it going forward.

Toine Michielse (04:25)
Here's what I'm going to be talking about. I'm going to introduce myself a little bit, and then I'm going to talk a little bit about Bob Dylan. Why? That will become clear in a second. And then I would like to take you through schema development as we did it years ago, as we do it now, and as we hopefully will be doing it tomorrow. Then I'll jump into the technology that we can use to implement that future vision, if you like.

Toine Michielse (04:52)
And then finally I'll give you my impression, a sketchbook of what a day could look like from the perspective of a developer, a DBA, and automation, if you will. And of course, as Amanda said, there's room for questions. So to quickly introduce myself, my name is Toine Michielse. I was born in the Netherlands, and that's why my last name is probably difficult to pronounce for at least some of you. And I've done everything you can possibly do with Db2. I've been a programmer, doing, well, COBOL and IMS to begin with, but then later against Db2 when it became available. I've been a DBA on IMS and Db2. I've been a systems engineer. I've worked as an architect. I was also very lucky to work for Db2 development itself for a number of years as a lab advocate. In fact, while I was a systems engineer, a young systems engineer, I got invited to spend nearly a year in the lab and work in development on data sharing. That was really, really cool. And recently I've moved. Well, recently is now four years ago: I moved to Madrid in Spain from Switzerland.

Toine Michielse (06:11)
And so I'm still busy learning Spanish. I'm getting older, so it's a bit of. Yeah, it's not the easiest language for me to learn. And my passions. Yes, Amanda touched upon it already. I love paragliding and still do that. And I do play in a rock band. The band is called Ciencia Urbana. That's the Spanish pronunciation. We play pretty much our own songs. We have four CDs out on Spotify; a fifth one is on its way. So if you like rock, then give us a listen. But most of all, I love Db2. I love data. And my true passion for a fairly large number of years now is mainframe modernization. And of course, I don't mean modernizing the mainframe. The mainframe is modern as it is. What I mean with that is modernizing the way that we interact with the mainframe. I grew up, of course, on dumb terminals, terminals that were hooked up directly into the network. Green screen. Of course, they were red, yellow, and green. And I still meet people that don't use a dumb terminal anymore. They use a terminal emulator, but they still work in the same ISPF environment. On the other hand, there's a whole new generation of folks that use Visual Studio Code as their entry point into the mainframe. And I think both are valid, and we'll need both, especially if we want to keep the mainframe alive, which is certainly something that I want to do.

Toine Michielse (07:50)
Why? And this is where it starts getting interesting, moving towards Bob Dylan and this song. And I'm going to go a little bit into that song on the next slide. "The Times They Are A-Changin'", maybe you know it. I think at least the older folks in the room will know it, because it was released in 1964. And I see that the colors are a bit difficult to read. Now, here's a question for you. I'm not sure, Amanda. Can they raise their hands, or can they shout out?

Amanda Hendley (08:23)
They should be able to raise their hands. All right, unmute and shout out, whatever you want to do.

Toine Michielse (08:27)
Okay, just. Just unmute yourself and shout. Does anyone know what else happened in the year that this song was released? 1964. New mainframe, I think. Yes, exactly. Yes. That's when the mainframe was released. And, oh, by the way, coincidentally, five days later, on April 17 of 1964, I was released. So I'm not sure if that bridges me to the mainframe, but, yeah, I'm very much associated with the year 64. And so when I started looking into the artwork for this presentation, I came across this song, and I thought it was extremely, but extremely, appropriate for this presentation.

Toine Michielse (09:20)
And here's why. Here's the lyrics. It's the first verse of this song. Now, these are so important that I think I want to read them to you. However, my body just protests against reading the slides. So instead of that, I'm just going to sing it to you. So bear with me. Here we go. So, Bob Dylan, 1964, and he's singing. Come gather 'round, people, wherever you roam, and admit that the waters around you have grown, and accept that soon you'll be drenched to the bone. If your time to you is worth saving, then you better start swimming, or you'll sink like a stone, for the times they are a-changin'. That is it. And that's the message. You know, why is that the message? For me, for years and years and years, for decades, we've been working with the mainframe pretty much in the same way. However, you know, the waters around us have grown, I think, in pretty much every installation. But for sure in the vast majority, we've come under pressure, under pressure of new development methodologies that sort of force us to look at things like DevOps and automating IT processes. And the reason for that is that we are a shrinking workforce. I recognize some of the names here, and I know these people have very, very deep skills. And those deep skills, they are rare nowadays, and so they are very valuable. And it takes time to build new skills. So I think it's important that in the way that we do business, we try to make the most of these highly trained, highly skilled, but highly scarce resources. And that is reflected in "if your time to you is worth saving", because I think, at least for me, but also for those people that do have those deep skills, their time is worth saving.

Toine Michielse (11:38)
And what I mean with that is I would like to see if we can do things in such a way that these deeply skilled and valuable resources can focus their time on where they can shine the most, where they can make money for the company by tuning that difficult piece of SQL to the max, reducing the software bill, that kind of stuff. Right? And so for me, that song text, whether you liked it or not, and whether you like me singing or not, I don't care, reflects, you know, why I wrote this presentation.

Toine Michielse (12:14)
Okay, so now let's go closer to Db2. What I would like to do is go through something that I call the circle of life of a Db2 schema. And this is the ancient way of doing business. In the ancient way of doing business, and I was part of that, certainly when you were doing IMS designs, but also in the early days of Db2, we would spend an enormous amount of time coming up with the initial schema. Before applications could start writing code, we would have design sessions, this and the other.

Toine Michielse (12:51)
We would spend an enormous amount of time. It was so difficult to tweak afterwards. Oops, that's the wrong button. Come on, here we go. And once that initial schema design was done, you get into a flow, right? You get into a cycle, a development cycle where developers and DBAs work together on deploying changes. And those deployed changes were the result of new requirements entering the stage. A developer would get a request to work on new business function, or maybe even the initial delivery. But if new business function was required and that would have to be reflected in the schema, then she or he would have to go out, reach out to the DBA and say, look, I need a new column, or I need a new segment in my IMS database. And before that developer could continue with her work, with writing the code against it, she would have to wait until that was delivered. So the DBA will do his work; he will provision stuff in development. And once that is done, once it's in the environment where the developer does her work, she or he will then code the new function, test against it, and finally deploy. And the circle will repeat with some frequency.

Toine Michielse (14:35)
I keep doing that. I have to be patient. Right, here we go. So now what happens in today's world? In today's world, we're looking at a whole army of developers. We're looking at typically a smaller initial schema phase, but we look at many, many more iterations of that circle of life. We see many more requirements. One of the reasons is because we have now adopted agile development methods. So you start small and you build on top of that in smaller increments. That does mean that that circle gets executed far more frequently. And again, it's typically a much larger group of developers that work and that interrupt that poor DBA. And this is something that I would like to focus on, because that DBA is one of those individuals that nowadays is becoming rare. Especially those DBAs that have the golden fingertips, that can tune an application or a database schema just like that. They can make so much money for the company by spending time, investing time and their skills, into looking at the environment and working on it. However, if they get interrupted all the time, that is not good for that productivity.

Toine Michielse (16:07)
So as far as I'm concerned, what I would really, really like to do is to make sure that we still have that small initial schema design phase, and we still, of course, have that circle of life. However, if you look at the circle of life, you will see that the DBA being interrupted and doing his work has gone out of the picture. And the reason I say that is because what I would like to do is take those actions that can be done by a developer and actually, what we call, shift them left to the developer. So think about that initial schema and the work that's being done. Most of those requests that are being generated are for simple things like adding a column to a table. Well, if you have the right means, if you have the right processes, then that itself does not require the deep skills of the DBA, because typically a developer knows exactly what column she wants. She knows exactly what the characteristics should be and in which table that column belongs. In fact, in many organizations you will find a developer either sends a Jira ticket or a ServiceNow ticket and says, please add this column to this table with these characteristics.

Toine Michielse (17:29)
Well, in that case, the added value of the DBA doing the work is not that great. However, he does get interrupted. When we started talking to our customers, we found that somewhere between 70 and 80% of the changes were as simple as that. So what I would like to see is that we don't make the DBA disappear, but we make him disappear out of most of the iterations of the circle of life. And if we do that, then there is a huge opportunity for automation. So what I would really like is to optimize the speed of this circle as well, by automating the things that can be automated, like the deployment phase, maybe part of the coding and testing phase, and maybe also the provisioning phase itself, using modern technologies like, well, for instance, Jenkins. I know this is the symbol for Jenkins, but it could be other things like Ansible or whatever, right? So this is really where I would like to go, and the rest of my presentation is going to serve that purpose. Hopefully. If there's any questions or remarks, then please don't hesitate to interrupt me.

Toine Michielse (18:53)
Okay, here we go. So if we think about especially the part where things get automated to the max, then of course there's a few technologies that I would like you to be aware of, and probably you already are aware of them. The first one I would like to draw to your attention is some form of source code management. I've used Git here in this example, and that's also the symbol for Git, but other source code management systems provide pretty much the same functions. Why do I think source code management, or modern source code management, is so important? I believe it's important because, apart from the fact that it provides version control, allowing teams to work together on the same code space easily, the architecture itself is very cool. It allows you to work off platform, using the comfortable environment of Visual Studio Code, in a distributed architecture. But it also has capabilities for release building. And what's more, it also provides excellent ways to integrate with other DevOps-related tools, like, for instance, Jenkins. I'm not going to go into all the details of the interactions with Git. If you're interested, there's plenty of opportunities to play with it or read about it, because it's all open source.

Toine Michielse (20:40)
You can start making your own repository of source code material and then use all these commands to get it into your own working directory on your workstation and, through the staging in your local repo, make sure that it gets into the controllable area. Now, people tend to believe that this is only for program source. I would argue that DDL could be seen as source code as well. In fact, it is source code, right? I mean, if you have DDL that describes your schema, nothing would stop you from storing that in a source code management system like Subversion or Git or whatever your source code management is. So this is certainly, as far as I'm concerned, not purely focused on code that developers write, but also on code that the DBA writes: the DDL. And in fact, if we want to bring the roles of the developer and the DBA closer together for certain aspects, then there's even a lot of value in observing it that way. Then, if you think about automation, of course you're going to get into stuff like process orchestration. There's a whole bunch of process orchestrators.
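
To make that concrete, here is a minimal sketch of treating DDL as source code in Git. This is an illustration only; the repository layout, file names, and table definition are hypothetical, not anything from the presentation.

```bash
# Hypothetical example: version a Db2 schema's DDL exactly as you
# would version program source.
git init db2-schema && cd db2-schema
mkdir -p ddl

# The DDL that describes the schema lives in the repository.
cat > ddl/customer.ddl <<'SQL'
CREATE TABLE CUSTOMER
  (CUST_ID   INTEGER      NOT NULL,
   CUST_NAME VARCHAR(100) NOT NULL,
   PRIMARY KEY (CUST_ID));
SQL

# Stage and commit: the schema definition is now versioned,
# diffable, and reviewable like any other piece of code.
git add ddl/customer.ddl
git commit -m "Initial CUSTOMER table definition"

# A developer works on a schema change on a branch, like program code.
git checkout -b feature/add-email-column
```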

Toine Michielse (22:14)
I think when you speak about DevOps, one of the names that always pops up is Jenkins, right? What is Jenkins? Jenkins is a system that allows you to develop what's called a pipeline, a sequence of actions that Jenkins will execute. And you can make that be based on variables that you pass. But what's more, because it's an automated solution, it's a fixed script: the actions are repeatable and robustly executed. Plus, you have a lot of additional benefits. You get a lot of insights on what processes are run, what steps have been executed, what the state of the execution is, et cetera, et cetera. What's really nice is that, just like Git, it's open source, it's widely used, with a distributed architecture off platform, meaning off the mainframe platform. But of course, nothing stops it from working with mainframe resources, because it contains a wealth of extensions. And also, think about the work that's being done in Zowe. Zowe opens up services on the mainframe by making them available through either RESTful APIs or even Zowe CLIs. And Jenkins pipelines, the steps in a Jenkins pipeline, have access to a command environment, like a terminal or a bash shell.

Toine Michielse (24:05)
So Jenkins is perfectly capable of executing actions against the mainframe. And there's Zowe extensions to do a wealth of things, like define a CICS transaction or kick off a z/OSMF workflow, stuff like that. The other thing that is really cool is that Jenkins very strongly integrates with Git, the technology I introduced in the previous slide, but also with stuff that is important when it comes to serious enterprise-level project management, like, you know, change management with ServiceNow or Jira. You can use it for interaction with the rest of the world through email, through Slack, et cetera. You name it. You name it, right? So that's a really cool piece of technology that, if you're not familiar with it, start having a look at it. And one of the things I particularly like is the insight, the observability, that Jenkins gives almost out of the box. So here's a few examples where you can say, look, here's a workload that runs maybe overnight, or what have you, in a series of runs, and in one blink of an eye, you see whether it's green, okay, or whether it's red, there has been some issue with it. And there's a lot more that you can see.
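
As a concrete illustration, here is a minimal sketch of the kind of shell commands a Jenkins pipeline step could run through the Zowe CLI. The data set and member names are made up, a configured Zowe profile with host and credentials is assumed, and the exact flags should be verified against your Zowe CLI version.

```bash
# Upload the DDL that came out of Git to a data set member on z/OS.
zowe zos-files upload file-to-data-set ddl/customer.ddl "DEV.SCHEMA.DDL(CUSTOMER)"

# Submit a batch job (for example, one that executes the DDL) and
# show the spool output so the pipeline log captures the result.
zowe zos-jobs submit data-set "DEV.SCHEMA.JCL(RUNDDL)" --view-all-spool-content
```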

Toine Michielse (25:31)
So here what you see is a typical Jenkins dashboard where you see those pipelines that run, and you see, well, maybe you cannot read it, but you see statistics of how long certain steps of the pipeline typically execute. You see again which ones were successful, which ones were not successful, et cetera, et cetera. So it has a lot of stuff out of the box. Another option for process orchestration could be z/OSMF. z/OSMF is more mainframe-based, but you can drive it off platform. So, for instance, thinking about that developer that sits in a modern Visual Studio Code environment: provided he has the proper Zowe extensions, he can kick off a z/OSMF workflow and control it. But it is mainframe only. So where Jenkins can integrate both distributed artifacts as well as mainframe-based artifacts, z/OSMF is really, really mainframe only. And while not impossible, it would be far more difficult to drive actions on non-mainframe platforms. However, it does have a Jenkins-like process control with workflows. You can also see the stage of different steps. It can have manual steps built in.
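
For the z/OSMF route, a hedged sketch of what that push-button could look like from a VS Code terminal or a pipeline step, again via the Zowe CLI; the workflow key is a placeholder you would obtain from the list command, and the command group syntax should be checked against your CLI version:

```bash
# See which z/OSMF workflows are active on the system.
zowe zos-workflows list active-workflows

# Start an existing workflow by its key and let z/OSMF drive the steps.
zowe zos-workflows start workflow-full --workflow-key "$WORKFLOW_KEY"
```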

Toine Michielse (27:09)
Yeah. Again, integrating non-mainframe processes, that's not really what it's designed for. Then there's Ansible, maybe a bit of a new kid on the block, but you hear it pop up nowadays in discussions as well when it comes to DevOps or IT process automation. A colleague of mine, Dennis Tronin, did a session, "Get Cozy with Ansible", explaining how to interact with it, at IDUG and other user groups and conferences. And my personal opinion is that it's far more geared towards infrastructure maintenance and infrastructure processing than program builds. So yes, you can do all the tasks that would come up in automating a process like schema management. And Broadcom, the company I work for, has an Ansible collection that allows you to do schema management. However, I think the true power of Ansible is, for instance, where you have to apply certain maintenance or install a certain package to, let's say, 100 servers that you have provisioned in the cloud. But you know, I'm always open to being proven wrong. So if you have a different opinion, please let me know, now or after the session. I'm always willing to learn.

Toine Michielse (28:54)
So I've said it a few times throughout the presentation, in different parts. For me, integration is the magic word, right? What I gear towards is making sure that, as I said, we modernize the way that we interact with the mainframe. Now, the mainframe, I love that box. I love it. I love it for its robustness, I love it for its capabilities. It's the most powerful general-purpose box that I know. And the availability characteristics, the security characteristics, they're unparalleled in the world, right? So I really love to have that. However, we do have these next-gen developers, right? We do have a whole new army of people that will be working in the enterprise, that will be working on implementing business function, and hopefully they will be, in that process, reaching out to the mainframe, but not necessarily so. I mean, they're very much at home in Visual Studio Code. That's where they're productive. And all those tools, and far more than I just mentioned, the modern DevOps tools, at some point they will all require access to services that are housed on the mainframe. Whether that's a service provided by tools, whether that's data that lives in Db2, I don't really care.

Toine Michielse (30:29)
But at some point it comes. Now, integrating that, that becomes very important if we want to have, for instance, a Jenkins pipeline invoke services or manipulate data and stuff like that. For me, the best way to integrate is over Zowe, the Open Mainframe Project initiative that really was, and is, the big enabler to integrate both the next-gen developer as well as all those modern DevOps tools. So, for instance, what we at Broadcom have been doing, but I know that other vendors like Rocket and IBM, and even BMC to some extent, have also been working towards, is making sure that all those services that were traditionally consumed on the mainframe are now consumable over RESTful APIs. I'm not sure if you've ever coded against a RESTful API. It's doable, but it's a bit tedious. We take it one step further: we harness those RESTful APIs. We make them available in the shape of a command line command by providing a Zowe extension. So now you have a CLI that you can use to invoke those services. And if you have a command that you can issue over a CLI, then of course any product has the capability to issue commands over a bash shell or what have you.

Toine Michielse (32:00)
Like, for instance, Jenkins. Like, for instance, the environment that Visual Studio Code lives in, whether that's in the cloud, or on a private Windows or a private Mac workstation. They all have the capability to interact with mainframe services through RESTful APIs, but with the ease of invocation of a command line interface, by using Zowe, the Zowe CLI, and the Zowe extenders specific to the function. That's how I see we can bring those things together, the purely distributed DevOps tools like Jenkins, like Git, et cetera, as well as the workplace for the next-gen developer, and make that all work together, even seamlessly, with stuff that's on the mainframe. So one more thing before I get to the meat. I've used the word API quite a bit now, and I use it very liberally. I like RESTful APIs for a number of reasons, but one of the reasons is that you can consume them from everywhere. So it doesn't really matter where the service is provided, in our case on the mainframe, and where the service is consumed, in our case Visual Studio Code, or maybe a Jenkins pipeline that runs in some VM or in Docker on your workstation. I don't really care.
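
To illustrate the difference, here is a sketch of consuming the same mainframe service both ways: raw REST against the standard z/OSMF jobs endpoint, and the harnessed Zowe CLI equivalent. The host name is hypothetical, and `-k` is only there to keep the sketch short.

```bash
# The raw way: call the z/OSMF REST jobs API directly.
curl -k -u "$USER:$PASS" \
  "https://zosmf.example.com/zosmf/restjobs/jobs?owner=$USER&prefix=*"

# The harnessed way: the same service as a simple CLI command.
zowe zos-jobs list jobs --owner "$USER" --prefix "*"
```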

Toine Michielse (33:54)
RESTful APIs actually are the central component that makes that happen. However, I use the term very liberally. So as far as I'm concerned, you can see z/OSMF itself as an API as well. It's an interface between one process and the other. So now, coming back to, let's say, the title of the presentation, a day in the life of the schema. And here there's a question, and the question I've already answered way, way back in this presentation, partially at least: is a schema really that different from a program? Now, you can argue both sides. There's a huge difference between a schema and a program in the sense that schemas tend to be persistent, right? With programs, you can create a new version of a program and make it available whenever you like. However, that's not the same with a Db2 table. If you want to have a new version of the table, you don't throw away the old one and build up a new one. That's not how it works. You work on changing the shape of the schema, hopefully in a non-disruptive manner, from the old state to the new state. That's how you do things, how you work with a schema.

Toine Michielse (35:33)
So in that sense it's different. However, from an IT process point of view, is it really that different? I don't think so. So, you know, as I already said earlier in the presentation, your schema is defined by DDL. You can see that as a programming language, right? You manipulate the contents of a schema by using SQL; you can see that as a language as well. But staying with the DDL: if you look at the versions of DDL throughout your enterprise, there will be multiple versions. Very likely the version in production is going to look different than the version that you have in your quality assurance environment. At some points they will be synchronized. When the release that you bring into production out of your quality assurance is deployed, then at that point in time the schemas will look the same. But in the larger scheme of things, throughout the life of a schema, the different lifecycle environments will have different versions, and those different versions can be reflected very nicely in a source code repository like Git, by having maybe different branches or different releases of a certain project. So from that point of view it is the same.
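
A small sketch of what that could look like in practice, with one hypothetical branch per lifecycle environment:

```bash
# Hypothetical branches: one per lifecycle environment.
git branch
#   production
#   qa
# * development

# What will change in QA when this release is promoted?
git diff qa..development -- ddl/customer.ddl

# At deployment time, the environments synchronize.
git checkout qa && git merge development
```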

Toine Michielse (37:03)
There's another point of view, and that is what you do with a program: once you've written the source code, you compile it, and once you've compiled it, it's sort of instantiated, right? That's when you have your load module. Compile, link, and then you have your load module that you can do something with. DDL is no different. You write it and then you execute it, right? You use, I don't know, SPUFI or, say, DSNTEP2 or whatever. At some point you execute the DDL, and then it comes to life. Then it becomes a table that you can store data in. Just like a program source that is built, you have the build phase of a DDL. We typically don't see it that way, but it is very valid. But what is valid is the fact that if we make changes to the schema, just like when a developer makes changes to a program, an initial creation or a modification of an existing function, then one of the first things, hopefully, that the developer does is test. Hopefully, when a DBA says I'm going to tweak this index, he will test it as well. Of course, the test will look different in the case of the DBA tweaking or creating an index; it is in the shape of a performance test.
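
Continuing the analogy, the "build" of DDL can itself be a push-button, automatable step. A hedged sketch, assuming a pre-existing batch job (the data set and member names are made up) that runs the DDL, submitted through the Zowe CLI:

```bash
# "Build" the DDL: submit the batch job that executes it, and scan the
# spool output for the return code and any SQL errors.
zowe zos-jobs submit data-set "DEV.SCHEMA.JCL(DDLBUILD)" \
  --view-all-spool-content | grep -E "RETCODE|SQLCODE"
```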

Toine Michielse (38:32)
But he still goes through the process of testing. From that aspect, from source management, from building, from testing, and of course deployment across lifecycle environments, as far as I'm concerned, schemas are not that different from a program at all. Good. So now let's look at imagining the way I envision the day would look for a developer, if I would have my way. If we get into that situation where we have deployed all those nice tools and automated whatever we can, then my calendar could look like this. I just represented this as a very simple table with three columns. We have time, and "MoL" stands for more or less. It's not meant to be precise, it's not meant to represent reality; it's just to start a discussion. So at 08:00, what I think the developer should start with is a nice good cup of coffee. If you don't like coffee, have tea, but at least have some time to enjoy and wake up and relax. The next thing that will typically happen in agile environments is there would be some kind of stand-up meeting, a daily stand-up to discuss what the day is going to look like.

Toine Michielse (40:10)
Then he goes into analyze time, and this is where it gets interesting, because during the analysis, the analysis that he needs to do for one of his activities, he says, I'm going to need a new column and an index. Now, he may not know how to do that. He may not know how to add a new column, he may not know how to do an index, or maybe he's not sure of parameters, what have you. So I think it's always good to have these daily interactions with the DBA. Now mind you, I label this specifically a daily consultancy meeting. Why? I believe that the DBA is far more productive when he consults for the developers rather than when he does all the boring work for the developer, if you will. That gives more productivity, and that allows us to shift left, but without losing control over the quality, because that would be a point in time where the DBA has an opportunity to educate the developer and to control what the shape should look like. But then, after that is done, and that's a very valuable meeting, maybe it's an hour, maybe it just doesn't exist every day.

Toine Michielse (41:34)
I don't really care; again, it's only to showcase. Then at that point in time the developer will do his self-provisioning of schema and data. So he will use maybe his CLI commands out of his Visual Studio Code environment, or maybe he will push the button and say, Mister Jenkins pipeline, please provide me a copy of this schema with the data. Now, as far as I'm concerned, that's an automated action. So that's really pushing the button. You let the system do it. There's no need for a DBA to do that work for a developer. The developer knows exactly which tables he needs; he knows exactly what data he wants. Once that is done, and that's maybe a minute later, he can do the pre-change test. He knows what he's going to have to do with the new column that was the result of his analyze time. He knows how to do it from the DBA. Then he gets to work: do the self-provisioning, do the pre-change test, and hopefully, I don't care, but hopefully that's automated as well, because Jenkins could just as well be implementing functional tests for all I care.
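
What could that push-button self-provisioning look like? One hedged possibility, using Jenkins' standard parameterized-build endpoint; the job name and parameters are invented for the sketch:

```bash
# Trigger a hypothetical Jenkins job that clones a schema plus its
# test data into the developer's sandbox.
curl -X POST -u "$USER:$JENKINS_API_TOKEN" \
  "https://jenkins.example.com/job/provision-dev-schema/buildWithParameters" \
  --data "OBJECT_LIST=TB1,TB5,TB7" \
  --data "TARGET_ENV=sandbox-dev1"
```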

Toine Michielse (42:49)
At that point in time he will implement the schema changes as discussed with the DBA, and deploy them into his own sandbox environment, at which time he can continue with coding the change and testing, create new test data, and do the post-change test. And hopefully everything's okay, so he can commit today's work. Commit is one of those Git terms. He can say, okay, I'm done with all this, let's move on to the next day. So that's how I could envision what a day in the life of a developer would look like. From a DBA point of view, again, remember, my goal is to free up as much of his valuable time as possible. He will also have that cup of coffee. Hopefully he will also talk with his team about upcoming activities. Maybe there is an urgent performance problem that he needs to work on, et cetera. But hopefully he will have a lot more time for coffee and donuts. And of course that time will be filled with intelligent work. So here's the touch point. The DBA and the developers will have a touch point. And again, I don't care whether it's 10:00, whether it's daily, but at least a regular touch point where you have that exchange of DBA knowledge flowing towards the development team.

Toine Michielse (44:22)
The developers can ask their questions, and so everyone can move forward at the most optimal speed. But then there's a new point from the DBA point of view. As will become clear in the next slide, the DBA has used his skills to analyze what I recorded here as system tests. I truly believe that you should do performance testing left, right, and center, continuously. Maybe there are important production processes that from a performance point of view need to be safeguarded, and the DBAs would be looking at and analyzing them at fixed points of their workday. And if you look at that daily consultancy meeting, that also has a remark there: to discuss the results of the previous day's analysis. So the DBA will use that meeting with the developers not only to help the developers with their questions, but also to push information that they've learned, information that requires changes from the developer, back into the organization. And from that point onward, he is free to be creative and design those solutions where his deep skills actually come to fruition. That's where the DBA will shine. In that part, he has time to design, create solutions, optimize, et cetera, et cetera, et cetera.

Toine Michielse (46:20)
And while he's doing that, maybe he will also be generating DDL. Maybe he will design an index that he would like to see flow into the schema change process. However, those will be things that don't necessarily have an interaction with development. Maybe he will discuss that with the developers to show, look, I've done this, maybe you can run this test again, but that's purely his business, et cetera. I'm not going to go into all these actions, because I want to go into how we automate it. But before we go there, I would also have a look at what it looks like from the system view, because I think here is where these two fields come together from the automation point of view. First of all, it starts very late in the day, as you can see. By the way, the system, the automation, the Jenkins pipelines, they don't drink coffee. They will start with their automated tests. They will start with, for instance, the integration tests. So everything that the developer has committed, where he says this is unit tested, can go into integration. And once it's in integration, of course it needs to be tested.

Toine Michielse (47:46)
So there's testing time. In order to do that, it will have to look at which programs have been committed, what test cases can be executed with them, et cetera, et cetera. So once that is done: prepare the test environment, make sure that the data is available, make sure that the correct schema level has been applied. Right? So if a new program is moved into integration and that new program is dependent on the column, that column better be there. So that change must have been integrated as well, right? So that all goes into integration. Test environment preparation, and everything that is okay can automatically flow into the next environment. Hopefully there will be a performance test environment, because the performance test environment would be the workplace for the DBA to work on performance problems and prevent performance problems from going into production. The system can then drive the performance test, et cetera, et cetera. Prepare the result dashboard, distribute reports, so that the DBA can start working on those for the next day. So that's more or less how you could envision those three days, from the system perspective, the DBA perspective, and the developer perspective, working in conjunction.

Toine Michielse (49:23)
Now, if I would take a much more detailed look into that, if I try to start, let's say, designing my new process, here's what it could potentially look like. Again, this is not a rule. This is how I would envision a design could work. But of course, in your environment, in your installation, you probably have established processes that would not fit completely in there. However, this would be a way to implement what we just saw. So the first part. And oh, by the way, those actions refer back to the entry points in the calendar. So if you're interested in this and you want to look at the slides again, you will have the handout. That's where you find the information. So if I think about that first action, the self-provisioning of schema and data, the way that would work from the developer point of view is that he would have a list of objects that he needs. He knows exactly: I'm going to work with program ABC, and program ABC needs table one, table five, and table seven of this particular schema. He does have the list of objects; he just puts it somewhere in a document, I don't care.

Toine Michielse (50:44)
But somewhere where the API can pick it up. And I mean API liberally, in the sense that it could be a Jenkins pipeline, for instance. It could be something else, maybe a z/OSMF workflow, but something that will invoke the schema management software. So the schema, and hopefully also the data that's contained in the test database, will be made available. Once that is done, he goes into action two, the pre-change test. So he will pick test cases out of a repository of test cases that have been well defined, that are there because, of course, it's a pre-change test. So that's assuming that the module has already been tested; it was already in existence previously, and we're now changing it. So he will pick up some of the old test cases and use, again, an API, a CLI, or a Jenkins pipeline to prepare the test environment. He says, here's my list of test cases, my test identifiers; please, Mister Jenkins, execute the prepare-test-environment pipeline. And Jenkins will make sure that the data gets provisioned out of, let's say, the test database into his environment, if that wasn't already done by the schema management software, and make other artifacts like test scripts available, et cetera.
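
The "document" with the object list can be as plain as a text file, versioned next to the DDL so the pipeline can pick it up. A sketch with invented names:

```bash
# The developer records which objects program ABC needs...
cat > objects-abc.txt <<'EOF'
SCHEMA1.TABLE1
SCHEMA1.TABLE5
SCHEMA1.TABLE7
EOF

# ...and versions it where the provisioning pipeline will find it.
git add objects-abc.txt
git commit -m "Objects needed for program ABC"
```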

Toine Michielse (52:14)
And the next step will be to actually execute the unit test and, of course, report the results. Those are all automatable actions. And so, come to think of it, this is all push-of-the-button work. It's all a matter of seconds, at max minutes, to execute. Now the developer needs to prepare the schema, so he knows the program is working. He needs to prepare the schema according to the instructions that he received from the DBA, or because he already knows, if it's really, really simple. So what he will do is he will take that DDL code, which maybe is also stored in Git, he will take that, change it to his liking, and invoke the schema management software to make sure that new version of the DDL will actually get applied to his schema. We're still talking seconds to minutes here. That's where his real work starts: coding the program and generating new test data that will cover the new function that he just coded in his program. And of course after that we'll have the post-change test, which looks pretty much the same, right? Except that the list of tests will now include the new test data and new test scripts.

Toine Michielse (53:41)
So again: prepare test environment, execute, and report results. In fact, if you look at it, action four and action two are exactly the same, except the object version: both the code version as well as the schema version have changed. Okay, makes sense so far. And then, you know, when that's done, he commits. And as you can see, there is no interaction with the DBA. There's no interrupt of the DBA, because the prep work has already been done in the consultancy meeting, or because it's so simple he doesn't need it. And if you would look at it in Jenkins pipelines, this is what it could potentially look like. That object list that the developer needs, the tables: he identifies it, feeds it into the Jenkins pipeline, and Jenkins will then do all the creates and alters as needed. It will execute the pipeline, and the result will be DDL. Right? The next part, the pre-test: he will get the test list, same thing. He will instruct Jenkins to execute the prepare-test pipeline, and the prepare-test pipeline will then maybe also invoke the execution of the tests.

Toine Michielse (55:18)
You know, there's no need for two different interactions. The DDL file that has been generated, of course, still needs to be executed, right? And that will be done after the pre-change test has been executed. So after action two, that's when the DDL will be executed against the schema. Now, in some cases, maybe that DBA, represented by the smiling guy there, he's very happy because he had his morning coffee, maybe he will have a look at the DDL. Maybe not per se in the sandbox environment, but when it goes to higher-level environments where you have a similar schema, he would be having a look for sure. He would be having a last look before it goes into production. And nothing would stop you from doing that. You can have a Jenkins pipeline that does it, even within the pipeline. The pipeline can interrupt and send an email to the DBA or the DBA group, or send a Slack message to the DBA group, and say, look, this deployment is underway, please have a look, see if you agree, and if not, blah blah blah. However, nonetheless, there's always the influence of the DBA in the DDL development, right?

Toine Michielse (56:44)
Once the DDL is ready for execution, Jenkins will just execute the create/alter dev schema pipeline, and that will create the objects, at which point in time the developer will now code. He will create a new program, new test data, and that's when the next stage comes into play. The new test data will go into the test case pipeline, and that will be prepared and executed. Voila, you're done. Now, that sounds like a lot, but the heart of the matter is there's only a handful of different pipelines that come into play. You just feed them with different sources. So the object list will be different every time you invoke the create/alter dev schema pipeline. But the heart of the matter is, you know, you will use that same pipeline for each and every possible constellation of object lists, right? So with a handful of well-designed pipelines, you can do all of this in automated fashion. From the system view. I'll skip the DBA view, because it's not that different from the system point of view. That's where things get really, really nicely automated. Because I have a list of programs that have been committed.

Toine Michielse (58:21)
I just need to make that available, maybe in a useful fashion, by, for instance, a generate-program-list pipeline, which will interrogate my repository and see which are the new programs that have been committed into my repository, which are the new programs that need to undergo testing. And, oh, by the way, what's the DDL that's associated with that? And you can make that as complex or as simple as you like, right? And of course you need the test cases. Once you have all that, then first of all the DDL will go into the create/alter int schema pipeline. Now, that sounds different, but the only thing that's different is actually the environment. In the previous slide we had dev there; now we have int. Is that different? No, it's just a different parameter. It's just a different schema, and a different subsystem probably, but the content of the pipeline is going to be exactly the same. That will again go into the prepare-test pipeline; again, that building block is there. And yes, it will be more involved, there will be more things being tested, but it can be the same stuff.
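
The point that dev versus int is "just a different parameter" is easy to see in a sketch: one building block, invoked twice with different targets. Everything here, names and data sets alike, is hypothetical:

```bash
# One reusable deployment building block; only the environment differs.
deploy_schema() {
  local env="$1"   # e.g. "DEV" or "INT"
  local ddl="$2"   # the generated DDL file
  zowe zos-files upload file-to-data-set "$ddl" "${env}.SCHEMA.DDL(CHANGE)"
  zowe zos-jobs submit data-set "${env}.SCHEMA.JCL(APPLYDDL)"
}

deploy_schema DEV ddl/change.ddl   # sandbox deployment
deploy_schema INT ddl/change.ddl   # same pipeline, different parameter
```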

Toine Michielse (59:38)
It will execute the tests, and then when you execute the integration test, it's either okay or not okay. If it's not okay, it just goes back. They will be taken out of the integration pipeline. All the changes that have been made will be undone, and we go back to the developer. The developer can fix whatever needs to be fixed. Everything that's okay will go into the next stage. I'm going into, let's say, my next part, which could be, and I would strongly recommend it, the performance schema environment. The actions are very much the same, right? I need the DDL to flow, I need the schema to be created, and maybe I need the program preparations. But ultimately, you know, the actions are going to be the same, and they're going to be followed by tests. However, here it's really, really important that, you know, somehow the DBA will have a way to say, you know, this is getting so close to my critical environments, I want to make sure that I stop problems before they become a problem. Right? So maybe that's where he will have a last look, a manual inspection, a manual sign-off.

Toine Michielse (01:01:00)
Yes, this is okay. This can go. Maybe there are additional rules that need to be applied: sizing of data sets, maybe assignment of buffer pools, all those things that are getting more important between, let's say, a sandbox environment and ultimate production. So that's where he would do things like an approval flow. But once that is done, things are going to be more or less the same as what you saw in the previous slide, right? So you perform your test pipeline, and again, that will generate OKs and not-OKs. The reporting will potentially be different; maybe the reporting will go to the DBA. I really don't care where it goes, as long as it's automated, as long as it's documented, as long as it flows nicely back into the process, the calendars, and the meetings that are being held there. And ultimately, finally, everything that is okay can move into the final stage. And now that I see this, I notice that I have a slight slip on the slide. But that performance schema pipeline, that will be, let's say, the next integration level, whether that's pre-production, where you assemble until you're ready to do a release, or whether it's production directly, I don't know.

Toine Michielse (01:02:32)
But this is how I envision doing this. And with that, I think I've come to the end of my allotted time. So, do you have any questions? And, oh, by the way, that's my daughter, and she has always been very inquisitive, always had lots of questions, and that never changed; she still is. She's now 18, but that was her a few years later, talking to some carabinieri somewhere in Italy. She's always asking questions. So I hope you have a lot of questions as well.

Amanda Hendley (01:03:15)
And you're welcome to chat those questions, or if you want, come off mute and ask them.

Toine Michielse (01:03:32)
So then I have a question and I hope someone will raise a hand or just shout out, am I crazy? Or do you think there's actually value in looking at the future like this? Anyone?

Amanda Hendley (01:04:06)
There's some action in chat.

Toine Michielse (01:04:09)
I don't see the chat. All right then, with that, let me give it back to you, Amanda.

Amanda Hendley (01:04:38)
Thank you. Thank you so much for your presentation. We had a couple of early inquiries about the video and everything, and it will be available; just give us about a week. As far as some news and other information: IntelliMagic posted a release just last month about what's new for IntelliMagic Vision. I thought you might want to check that out. There is a Db2 job out there on the Planet Mainframe job board, for John Deere in Illinois, if that is of interest, here in the States. And we're always looking for contributors at Planet Mainframe. I'd love to feature your thoughts and opinions in an article or series within Planet Mainframe. You can reach out to us on our social channels. We are on X, Facebook, and LinkedIn, and videos also pop up on YouTube, but I do think they're toggled and delayed. So the virtual user group site is going to be your best first place to go for the video content. Again, I want to thank IntelliMagic for their sponsorship, and have you all save the date of May 21 for our next session. That's two months out, on the third Tuesday of the month, at the same time; it will also be on Zoom. And with that:

Amanda Hendley (01:06:09)
Thank you so much for presenting today. Great session, and I think you've answered everyone's questions, so I look forward to hosting you again sometime.

Toine Michielse (01:06:20)
Thank you very much for the opportunity, Amanda. And yeah, you're welcome.

Amanda Hendley (01:06:24)
Thank you. Have a great rest of the day.

 

Upcoming Virtual Db2 Meetings

May 21, 2024

Virtual Db2 User Group Meeting

Can Db2 for z/OS be hacked?
Emil Kotrc, IBM Champion, Software Architect
Broadcom Software

Register here

July 16, 2024

Virtual Db2 User Group Meeting

Next Virtual Db2 Meeting

May 21, 2024

Can Db2 for z/OS be hacked?

In an era where cybersecurity threats loom large, safeguarding your Db2 for z/OS system is paramount. This session delves into the vulnerabilities that might be inherent in Db2 environments and provides actionable insights to fortify your defenses.

The presenter opens with a resounding affirmation: “Can Db2 be hacked? Yes, sure it can.” However, the answer doesn’t stop there; it’s contingent on the adequacy of security measures in place. This comprehensive exploration aims to equip attendees with a deeper understanding of potential breaches and proactive strategies to mitigate risks.

Register Here!

Emil Kotrc

IBM Champion,
Solutions Architect

Broadcom Software

Virtual Db2 Newsletter

Newsletter #08 | February 2024

Db2_Newsletter_008

Upcoming Meetings

July 16, 2024

Virtual Db2 User Group Meeting

September 17, 2024

Virtual Db2 User Group Meeting

 

November 19, 2024

Virtual Db2 User Group Meeting