Virtually Trained and Ready 

 The mainframe industry must constantly train and educate new professionals. To make this possible, trainers, mentors, team leaders, and department managers need access to mainframe resources. But when mainframe access is in short supply, what can you do?  

More, more, more, mainframe! 

Countless market surveys and press articles point to a continued—and growing—reliance on the mainframe environment to support big business. Organizations today may have a plethora of choices in terms of IT platforms, yet the IBM mainframe remains the go-to platform for business-critical workloads. And the trajectory is upwards, as more customers and greater functionality place ever-heavier demands on busy mainframe environments.

Which means, upstream, there is also continued demand for additional skilled mainframers to get that work done. No surprise, then, that demand for mainframe training and skills development is also on the rise. Simple laws of supply and demand are at play here—not to mention the murky reality of the much-documented skills crisis.

You get what you’re given 

Continued business growth drives additional resource needs; this is a fact of business life. Yet such benign realities are not without their difficulties. The mainframe is a complex environment to administer, and it carries a significant cost.

Generally speaking, the mainframe’s expenses are seen as acceptable because mainframes act as the execution engine for a lot of revenue-generating activities. Yet cost justification is harder in less tangible areas such as research or training. With such a heavy emphasis on production workloads, mainframe time for skill-building and research is—in some organizations—severely limited and extremely hard to commission.

Understandably so, of course, but this makes meeting training demands all the more difficult.  

As if by magic, a mainframe appeared 

If only there were a way for a mainframe environment to magically appear whenever it was needed. We enjoyed this article by Planet Mainframe, which introduces the concept of the virtual mainframe, outlining its potential benefits as a training and research platform that can support a variety of scenarios.

The article expresses the value of virtualized mainframe training environments in two important areas:

“Skills: With veteran developers retiring, training new talent is needed. In particular, experienced employees. Anyone with access to virtual mainframes can build new skills or expand existing skills – all in the same space.  

Modernization: Virtual mainframes allow students and developers to adopt DevOps and CI/CD pipelines while working within a mainframe context”. 

And the need is real; both the skills question and the drive towards DevOps remain high on the agenda for mainframe teams, yet the environment sometimes struggles to support the ambition.

The same article explains the challenge of mainframe access: “In addition to the expense, traditional mainframes [are] often shared, limiting access… As technology continually evolves, so should training, specifically for mainframe developers.” 

Access to an easy-to-use ‘sandpit’ enables junior or trainee developers to receive coaching from experienced mentors, learn at their own pace with tutorials, and get to grips with modern mainframe DevOps toolchain technology.  

The same environment is available to training departments or third parties looking to provide a comprehensive yet accessible mainframe training space and a ‘safe’ location for R&D activities. Onboarding new skills and providing a familiar, comprehensive platform for modernization efforts is an obvious benefit of a virtual mainframe training facility. 

A case in point 

The Planet Mainframe article further explores the situation at PopUp Mainframe customer Legal & General (L&G). As part of its mainframe modernization program, the company implemented a ‘virtual mainframe environment’ to satisfy various training, development, and collaboration needs.

Essentially, the PopUp Mainframe solution that L&G used provided the flexibility to use mainframe resources without impacting regular mainframe production cycles or schedules.

Fine-tune your own mainframe training program 

If you want to learn more about how PopUp Mainframe could help add flexibility and availability to your mainframe training and development activities, get in touch.

Show and Tell – CEO Insights from PopUp Mainframe

At the GSE UK Conference, Gary Thornhill, founder and CEO of PopUp Mainframe, shared the journey behind his company and how it emerged as a response to the challenges faced during the COVID-19 pandemic. In this interview, he discusses how PopUp Mainframe is revolutionizing mainframe accessibility, addressing industry pain points like environment bottlenecks, skill gaps, and innovation barriers while embracing sustainability and hybrid computing solutions.  


Hello, my name is Gary Thornhill, and I am the founder and CEO of PopUp Mainframe. My career has been a bit of a journey. I started out in mainframe operations, which later expanded into middleware, and from there I moved on to lead a company that focused on DevOps. The idea for PopUp Mainframe actually came about during the COVID-19 pandemic. At the time, I was the CEO of Sandhata Technologies, a DevOps consultancy. Like many others, we faced the challenge of senior, highly skilled consultants being let go from key client accounts as businesses tightened their budgets. This gave us an opportunity to think differently. One of our clients needed a way to quickly create environments for their work, and we realized there was a broader problem to solve. That’s when PopUp Mainframe was born. It was a solution designed to address the growing need for speed, accessibility, and innovation in the mainframe space.

What are some of the current mainframe challenges that PopUp Mainframe addresses?  

The challenges I see in mainframe organizations are often more about the way they’re set up than the technology itself. A lot of organizations still operate with siloed teams, and many rely heavily on outsourcing. This makes it difficult for businesses to innovate quickly. On top of that, mainframes often have static environments—ones that can’t be easily spun up or down—which limits flexibility.  

PopUp Mainframe directly addresses these issues. For example, we allow organizations to create temporary environments in less than ten minutes, whether on-premises or in the cloud. This eliminates the bottleneck of waiting for new environments to be set up. Our platform is also designed to be user-friendly; you don’t need to be an expert in green-screen interfaces to be productive. This accessibility opens up the mainframe to more people, solving both the shortage of environments and the skills gap in the process.  

How has the mainframe evolved, and what role do innovative technologies play in this space?  

When I started in the industry, roles within mainframe teams were very specialized. You’d have one group managing Db2, another handling IMS, and others focused on operations or automation. Today, that has shifted. People are expected to wear multiple hats, often blending mainframe and distributed skills. For example, a developer might now work across mainframe and Linux environments.  

This evolution has been driven in part by initiatives like the Open Mainframe Project, which fosters collaboration and innovation in the community. A great example of this is automated testing. Many organizations are still relying on manual testing, or worse, skipping unit testing altogether. Through PopUp Mainframe, we’ve been working with the Open Mainframe Project to advance frameworks like Galasa, which allows distributed testing tools—such as JUnit or Selenium—to be used on the mainframe.  

This kind of innovation is critical. Automated testing not only speeds up development but also reduces the cost of change, making mainframes more competitive and easier to maintain.  

Do companies without mainframes benefit from adopting the technology? 

It’s a fascinating question. While mainframes are often associated with legacy systems, they’re incredibly relevant in today’s world of big data and high-performance processing. In fact, I’ve seen new clients—organizations that have never used mainframes before—embracing the technology.  

Mainframes are unmatched when it comes to reliability and processing power. Parallel Sysplex, for instance, has been around for 30 years and remains the only true hot failover system. If you have high processing needs, there’s simply no better platform. Plus, mainframes are incredibly sustainable, with the lowest cost per transaction compared to other technologies.  

PopUp Mainframe offers a way for organizations to explore the benefits of mainframe technology without making massive upfront investments. For businesses looking to test new approaches or handle large volumes of data, the mainframe is still the gold standard.  

How does PopUp Mainframe support green tech initiatives?  

PopUp Mainframe supports green tech in a couple of ways. First, our platform is literally “on-demand”: systems can be turned on when needed and switched off when they’re not in use. Traditionally, mainframe environments tend to sit idle, gathering dust and accumulating technical debt. By adopting a “switch it off” mentality, organizations can dramatically reduce energy consumption and optimize their resources.

On our end, we’ve taken steps to ensure our own operations are as sustainable as possible. For instance, we source data centers with green initiatives like solar-powered servers. It’s a small but meaningful step, and it aligns with our commitment to greener IT practices.  

It’s also worth noting that every digital action has a carbon footprint. Checking your bank balance, for example, uses energy. Most people don’t think about this, but by making IT systems more efficient, we can help reduce the overall environmental impact. PopUp Mainframe is part of the Sustainable IT organization, and we’re pushing for broader changes across the industry.  

What’s the biggest misconception about mainframes?  

The biggest misconception is that mainframes are outdated dinosaurs. In reality, they’re anything but. Mainframes can do just about everything distributed systems can—and often better.  

Take z/OS Connect, for example. It allows mainframes to host web services. Db2 is another great example. It’s an incredibly powerful database, but many organizations aren’t taking full advantage of its capabilities.  

 The issue isn’t with the technology itself but with how it’s perceived. Organizations need to focus on cultural change—encouraging teams to embrace the smart, innovative tools that are already available on the mainframe.  

Can you share one of your clients’ success stories?  

 We’ve had the privilege of working with some incredible clients. One that stands out is a UK insurance company. They’ve completely transformed their developer experience by using PopUp Mainframe. With tools like VS Code and modern CI/CD pipelines, their developers can now work faster and more efficiently. It’s made the mainframe an attractive platform for innovation, especially for cutting-edge work.  

Another client, who runs their mainframe alongside Windows applications in Azure, has seen similar success. Thanks to our partnership with Delphix (Perforce), they can perform end-to-end testing with referential integrity. This level of quality testing has significantly improved their release cycles, allowing them to deliver changes much faster and with greater confidence.  

 What emerging industry trend excites you the most?  

Artificial Intelligence (AI) is incredibly exciting. Tools like Watson Code Assistant and BMC’s Code Insights have the potential to transform how we work. Imagine being able to query vast amounts of documentation and instantly find answers—it’s a game-changer for productivity.  

That said, I think it’s important for organizations to approach AI thoughtfully. There’s a lot of hype right now, and it reminds me of the early days of cloud computing. Companies need to clearly define what they hope to achieve with AI, rather than jumping on the bandwagon. Used correctly, AI can solve significant challenges, but it’s not a one-size-fits-all solution.   

What’s next for PopUp Mainframe?  

We’re incredibly excited about what’s on the horizon for PopUp Mainframe. At the moment, our platform runs on x86 architecture—this includes environments like AWS, Azure, and on-premises virtual machines. But we’re taking things a step further by working on running PopUp Mainframe directly on IBM’s Integrated Facility for Linux (IFL) and LinuxONE.  

This development will be a game-changer, especially for larger enterprises. It means organizations will have the flexibility to deploy PopUp Mainframe either on traditional x86 setups or directly on the physical mainframe. For example, they’ll be able to leverage the agility of Delphix (Perforce) virtualization, which allows for forward and rewind capabilities on multiple PopUp Mainframes. This creates an ideal hybrid scenario, where businesses can experiment with cloud-based solutions while still maintaining the reliability and power of their physical mainframes.  

 In essence, our goal is to offer clients the freedom to operate in a mixed environment, balancing the best of both worlds while keeping their infrastructure modern and adaptable.  

How can PopUp Mainframe solve the industry’s talent challenges?  

The skills gap in the mainframe industry is a pressing concern, and I firmly believe PopUp Mainframe plays a crucial role in addressing this challenge. We’ve already started working with a few North American universities to introduce students to the platform.  

The key lies in making the technology approachable. With PopUp Mainframe, you can do everything you would on a traditional mainframe, but with tools that younger generations are already familiar with—like Eclipse-based GUIs and VS Code. This lowers the learning curve and removes the intimidation factor often associated with mainframes.  

Today’s graduates care deeply about making an impact. They’re less focused on the technology itself and more interested in what it can achieve. PopUp Mainframe aligns with that mindset by allowing them to quickly implement ideas, make code changes, and bring new functionality to life. Imagine telling a young developer they’ll have to wait a week for a Db2 update—they’d be pulling their hair out! By contrast, our platform enables near-instantaneous changes, which keeps the momentum going and fosters creativity.

The idea of the “big, scary mainframe” from sci-fi films of the 70s and 80s is outdated. With PopUp Mainframe, we’re helping to reframe that perception and show that mainframes can be just as user-friendly and exciting as any other modern technology.  

What does it take to get started with PopUp Mainframe?  

Getting started with PopUp Mainframe is remarkably straightforward. You can either download our compressed image—now just 60 GB—or access it through Azure. Once downloaded, the setup process takes about ten minutes.  

We’ve also created the PopUp Manual, a detailed guide that walks users through every step of the process. It covers everything from connecting PopUp Mainframe to your physical mainframe to migrating data and configurations.  

The biggest hurdle isn’t the platform itself—it’s navigating organizational processes to gain access to infrastructure. That’s often where delays occur. Once PopUp Mainframe is in place, however, you can hit the ground running.

Our platform also offers flexibility when it comes to security. For example, in certain scenarios, you can start without full RACF or ACF2 profiles. This allows you to quickly set things up, make changes, and save them to disk. Later, if necessary, you can provide a more secure copy for broader organizational use.  

Ultimately, PopUp Mainframe is just another mainframe—only faster, more agile, and easier to use. It allows teams to utilize their existing skills while bringing in distributed expertise, particularly in areas like testing and automation. It’s the perfect balance of familiarity and innovation.  

This transcript is from an interview with Gary Thornhill, conducted by Planet Mainframe at the GSE Conference 2024. Watch the full interview here. 

Getting to Git

Mainframe DevOps tooling offers a new era of productivity – with Git leading the charge. But source code migrations are far from straightforward. Fortunately, help is at hand, as expert Stuart Ashby explains.

Introduction

The mainframe development community has benefited from a bewildering array of modern technology releases over the last couple of decades that have almost completely changed how mainframe systems are built. This has coincided with the now-widespread adoption of DevOps as a procedural discipline for application delivery across the IBM mainframe and other platforms.

One example of innovative technology that has found its way into the mainframe world is the Git source repository and tooling. Outside of the mainframe, Git has proven to be a simple, sensible addition to any DevOps toolchain. But is it just as straightforward for the mainframe?

First Things First

Let me start with an assertion: in my opinion, the discussion around “Can Git be used for mainframe code?” is over. Vendors have demonstrated the technology integrating with the mainframe, and innovative organizations have run successful pilots. It is now difficult to argue that Git is not ready for the mainframe.

Using the Same Vocabulary

Traditionally, the mainframe development community has used terms like compile, assemble, linkedit, bind, newcopy, and phasein to label SCM and lifecycle phases. To outsiders, these terms can seem overly complex. A more contemporary approach describes compile, assemble, and linkedit as the build, or CI, activity, and bind, newcopy, and phasein as the deploy, or CD, activity. There are grey areas, of course, but these simplified concepts of build and deploy use terminology that is more familiar outside the mainframe community. Agreeing on this taxonomy for the future is a critical planning component of any SCM repository or DevOps process change.
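The mapping above can be sketched as a simple grouping: the traditional phase names (taken from the paragraph above) sorted into the two contemporary stages. The shell functions themselves are purely illustrative, not a real build system.

```shell
#!/bin/sh
# Illustrative grouping of traditional mainframe lifecycle phases
# into the contemporary CI/CD vocabulary. Phase names come from the
# article; the script structure is a hypothetical sketch.

build() {           # the CI activity
  echo "compile"    # translate source into object code
  echo "assemble"   # assemble any HLASM components
  echo "linkedit"   # link-edit the executable load module
}

deploy() {          # the CD activity
  echo "bind"       # bind Db2 packages and plans
  echo "newcopy"    # refresh the program copy in CICS
  echo "phasein"    # phase the new version into service
}

build
deploy
```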

Drilling Down into Mainframe DevOps with Git

The next question to address is “What is my branching strategy going to be?” The popular strategies to consider are GitFlow, GitHub Flow, GitLab Flow, and Trunk-based development.

Some months ago, I would have instantly replied that it had to be Trunk-based. This strategy has merits because it closely resembles the way that many mainframe developers work: once unit testing is completed, code is not rebuilt but is only deployed into test environments above unit testing.

Having been involved with a couple of organizations on the Git migration pathway, I also see the value of GitHub Flow, where a distinct branch hierarchy reminds me a lot of the traditional SCM lifecycle. In this branching strategy, for example, the developer has a feature branch to implement code changes based on specifications, and then a pull request populates the release branch. This pull request triggers a CI pipeline to rebuild, optimizing the binaries associated with the branch and removing debugging options.
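As a minimal sketch of that feature-branch flow, the Git commands below create a feature branch, commit a change, and merge it back to main. The branch and file names are hypothetical, and in practice the pull request and CI trigger happen in the Git hosting platform rather than locally.

```shell
#!/bin/sh
# Hypothetical walk-through of the feature-branch flow described above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main                    # name the initial branch explicitly
git config user.email "dev@example.com"    # local identity for the demo only
git config user.name "Demo Developer"
git commit -q --allow-empty -m "initial commit on main"

# The developer implements the change on a feature branch
git checkout -q -b feature/update-payroll
printf "       DISPLAY 'PAYROLL UPDATED'.\n" > PAYROLL.cbl
git add PAYROLL.cbl
git commit -q -m "Update payroll display logic"

# A pull request would populate the release branch and trigger the CI
# pipeline; locally, the equivalent merge looks like this:
git checkout -q main
git merge -q --no-ff feature/update-payroll -m "Merge feature/update-payroll"
git log --oneline
```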

So, there are choices to be made, but they are not necessarily irreversible. You must choose whether to replicate existing branching and build strategies, as these choices will influence your implementation. And remember, there will almost certainly be a cultural shift – so be prepared to embrace that, too.

Migrating Source Code

Once the branching strategy is established, the next step is to migrate the source code (and the vital audit history) from its current system into the main branch of a Git repository.

There is potentially a lot involved here, with numerous factors to consider, including the source location and system, the transferal process, and the target setup. Without sensible planning and preparation, this step can be extremely complex, time-consuming, and error-prone.

Utilities designed to extract the complete source code and history will have to be written and tested, with any defects corrected and enhancements made. Trial migrations must also be validated. Only when the Git repository migrations are complete and fully validated can the migration utilities be considered redundant.
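To give a flavour of what such a utility does, the sketch below replays a single extracted revision into Git while preserving the original author and timestamp. In a real migration, the utility would loop over every revision exported from the legacy SCM; all names and dates here are hypothetical.

```shell
#!/bin/sh
# Hypothetical sketch: committing one migrated source member with its
# original who-and-when preserved, using Git's author and date overrides.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main
git config user.email "migration@example.com"  # the committer is the utility
git config user.name "Migration Utility"

# One revision extracted from the legacy SCM: member, author, timestamp
printf "       MOVE WS-IN TO WS-OUT.\n" > ACCTUPD.cbl
git add ACCTUPD.cbl
GIT_AUTHOR_DATE="2014-06-03T10:15:00" \
GIT_COMMITTER_DATE="2014-06-03T10:15:00" \
git commit -q --author="J. Smith <j.smith@example.com>" \
  -m "ACCTUPD promoted to PROD (migrated from legacy SCM)"

# The preserved author and date are visible in the migrated history
git log --format='%an %ad' --date=short
```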

Here to Help

At PopUp Mainframe, we have experience helping organizations modernize their mainframe delivery practices, including moving traditional SCM repositories into Git as part of a DevOps toolchain. Our tools and services streamline this otherwise arduous process, dramatically reducing the effort involved.

Our migration tools and utilities are proven against customer SCM repository migrations, preserving the full change history, including who made changes and when. We can support ChangeMan, Endevor, and Code Pipeline (formerly ISPW). These repository migrations typically take only a few weeks, depending on the repository structure and the amount of source code and history.

After the main Git repository is fully migrated, it is plain sailing to clone, edit, commit, and manage pull requests.

PopUp Mainframe can partner with your organization to create a statement of work (SOW) that outlines the efforts and timelines involved in getting from SCM to Git repository.

Other Considerations

I would suggest addressing a whole area of culture change to support this new way of working. Any change involves people, process, and technology – in that order.

All of the most successful transformations I’ve seen have started with a focus on people and the desired outcome.

The repository migration project should begin with a pioneering “pilot team” to adopt Git. First, we understand their needs, create processes to meet those needs, and get the team onto the new tooling. Feedback from this pilot team helps refine the process and builds momentum for subsequent phases.

This incremental approach ensures broad approval and acceptance within the organization.

It is also essential to measure the pilot team’s productivity with the new tooling. Results typically follow a “J-curve,” with a temporary dip as developers unlearn old processes and adopt new ones, before realizing the tangible benefits. Tracking and projecting outcomes are critical elements of this process.

If your DevOps team is exploring Git as a viable mainframe version control solution and needs guidance, reach out to us.

 

By Stuart Ashby