The Mainframe Delivery Revolution Continues

Last year, the PopUp Mainframe team was delighted to attend, present, and exhibit at the GSE UK Conference 2024 – the region’s biggest and best mainframe community event. Rubbing shoulders with industry luminaries, technical experts, and household-name organizations made it a fantastic experience. And this year, under the fresh new title of GS UK 2025, the conference looks set to be one of the best ever – we look forward to another great event!

Here’s a quick recap of why the PopUp Mainframe team values the conference so much.

The need for speed – a mainframe market requirement

Mainframes remain the mainstay of enterprise computing. Industry reports (including our own from earlier this year) indicate continued reliance upon IBM mainframes across a variety of sectors, at some of the world’s largest and most successful organizations. Often part of a hybrid IT strategy nowadays, the IBM mainframe remains a central component of organizational infrastructure, and the applications it houses are business-critical in nature.

As important as it is, the mainframe environment is blighted by issues, none more so than bottlenecks (also reported in our survey). Mainframe delivery teams often suffer from a shortage of non-production environments for development, test, research, and training, which hampers their ability to deliver as fast as the business would like.

Even with the advent of DevOps-style tooling on the mainframe, these resource availability restrictions make genuine acceleration of delivery very tough.

Flexible mainframe delivery to match your imagination

We spoke last year of PopUp Mainframe’s breakthrough approach to providing readily available, virtual mainframe environments to anyone who needs access, whenever they need it. PopUp Mainframe can figuratively pop up in minutes to provide ready access for situations that demand it:

  • Scaling up test environment availability to run resource-intensive or multi-user testing
  • Enabling dev teams to collaborate on complex merges
  • Enabling system administrators to validate fixes across different versions of the same sub-system
  • Offering ready access to new trainees before they ‘go live’ with their own LPAR access
  • Supporting an urgent application fix while the usual LPAR is down for maintenance

The list goes on.

We were grateful for being invited to speak and for the lively discussion from the audience.

Increased PopUp Capability On Show

Since last year, we’ve released an updated product, widened the scope of our deployment offering to take in the IFL and LinuxONE, and we’ve added some key new capabilities to the product line.

And we’re pleased to return to the conference (now renamed GS UK) to widen the scope of our discussions based on these recent innovations. Join us to learn more about our latest work on Linux, with Ansible playbooks, with our new “FastTrack” facility, using mainframe open-source technology, and more.

Our busy conference speaking agenda looks like this.

Monday 3rd November

11am – More Horses! How Linux on Z speeds up mainframe change

2.30pm – Open-Source Mainframe DevOps demonstration

3.45pm – Automate the heck out of every LPAR (using Ansible playbooks)

Tuesday 4th November

3.30pm – Mainframe IT Skills Overview – part of the WAVEZ 101 Track

4.45pm – Platform Engineering – a real-world use case

Wednesday 5th November

Midday – Market Survey Findings RoundTable (Executive Track – Invitation Only)

The full conference agenda is here.

Team Time

This is a real community show, and the attendee list reads like a who’s who of the UK mainframe world. From the major mainframe vendors such as IBM, Broadcom and BMC, to the crucial resellers, consultancies and service providers like TES Enterprise Solutions and Vertali, to the press presence of Planet Mainframe, to the notable end user organizations present, to the fantastic volunteers at the Open Mainframe Project, there’s an insightful, informative conversation to be had every hour of the day.

Join Us

The mainframe community has redoubled its efforts in the last few years to engage more proactively and open its doors to the curious, and the Whittlebury Park conference is a real fixture of the mainframe community calendar.

Please make time to join us at one of our sessions or come and chat to us on the Expo floor (we’ll be hanging out with our partners, TES Enterprise Solutions at their booth).

We hope to see you there. If you haven’t yet registered – go here.

Further Reading

For more information on how PopUp Mainframe can help revolutionize mainframe delivery in your organization, take a look at the web site, including this list of recent press articles.

The Modern Mainframe: A Ready-Made Hybrid? 

The popularity of the Linux platform continues to rise—but does that apply equally to mainframe users? This blog examines the growing hybrid trend in a mainframe context. 

Wait – Linux on a Mainframe? 

Absolutely! Since its inception a generation ago, Linux on IBM mainframes – such as IBM Z and LinuxONE – has gained significant traction, particularly in enterprise environments.

By 2006, Linux on IBM Z had already been embraced by over 1,700 customers. Its promise to combine the flexibility of Linux with the reliability and scalability of IBM’s mainframe systems met with wide acclaim. Organizations adopted it to support a variety of use cases – including modern applications, cloud computing, databases, and containerized workloads – benefiting from features like advanced data security, high performance, and sustainability.

For example, the LinuxONE Emperor 4 is reportedly popular among financial services organizations such as Citibank.

Facilitating Innovation 

More recently, IBM introduced the Integrated Facility for Linux (IFL), a specialized processor designed specifically to run Linux workloads on IBM Z and LinuxONE systems. The IFL provides high server density, reduced operational costs, and enhanced performance capabilities, such as Simultaneous Multi-Threading (SMT) and Single Instruction Multiple Data (SIMD) technologies.

Additionally, IFLs can be added dynamically to systems and support various capacity-on-demand options. That flexibility means organizations use IFLs in a cloud computing context to achieve scalability, security, and efficiency. Beyond cloud computing, IFLs are also leveraged to house additional development and testing environments: there’s understandable merit in using a flexible, robust, and secure environment to simulate critical workloads, accelerating delivery while keeping vital z/OS cycles devoted to production.

From an IT sustainability perspective, there are genuine green credentials associated with the platform. According to IBM, running Linux workloads on an IBM z16 single frame or rack mount – instead of on comparable x86 servers in similar conditions – can reduce energy consumption by 75% and space by 67%. Who knew blue was so green? 

Measuring Success  

In 2020, IBM reported “Linux capacity increasing 55 percent year-over-year,” cementing the strategic nature of its investment commitment.  

Industry analyst, Brent Ellis, of Forrester, more recently explained, “[IBM] has a strategy to enable more modern workloads to run on the hardware and the number of people acquiring mainframe hardware to run on Linux is increasing. Over the next few years, it is likely there will be more capability … coming to [mainframe] Linux to ensure a steady and non-disruptive transition to modern environments within the mainframe.”  

Further quantifying that trend, however, is harder than you might expect. While there is plenty of information about the mainframe market – in the form of reports and surveys – details surrounding mainframe Linux are only lightly pencilled in. How Linux supports a modern mainframe-centric strategy is not well-defined, despite the uniformly accepted wisdom of its huge potential.   

To learn more, PopUp Mainframe included a couple of questions about attitudes towards mainframe usage—and specifically Linux environments—as part of the market survey we commissioned this year with research experts Vanson Bourne. We wanted to hear how Linux matters to mainframe decision-makers.  

The survey is still ongoing, so we’re not going to share the numbers yet. But early indications are worth mentioning. Firstly—no great surprise for a mainframe market survey, perhaps—we see an overwhelming loyalty to the mainframe platform, both today and into the foreseeable future.  

What’s perhaps more illuminating, however, is that the initial findings suggest a huge appetite – an overwhelming majority of respondents (we will give you the precise number when the survey is closed) – for using Linux-based mainframe environments in their various forms. Simply put, many who hold the mainframe in high regard see the potential and value of Linux as part of that equation.  

Make the Most of your Modern Mainframe 

Of course, potential is one thing and practical solutions are quite another. Our study therefore also digs into the bottlenecks facing mainframe teams today, and where they most want to improve their capabilities to support the business. 

We look forward to being able to report the full results in the coming weeks – revealing key findings, spotlighting challenges, and offering practical steps towards an even more efficient mainframe environment.  

Update – we are pleased to announce the survey results are now live – visit this page for more details.   

Can the best get any better?

The z16 mainframe was hailed as the very best, and customers agreed. As we look ahead to its successor, it’s time to ask whether it can get even better and, if so, what it needs to achieve. 


Virtually Trained and Ready 

 The mainframe industry must constantly train and educate new professionals. To make this possible, trainers, mentors, team leaders, and department managers need access to mainframe resources. But when mainframe access is in short supply, what can you do?  

More, more, more, mainframe! 

Countless market surveys and press articles point at a continued—and growing—reliance on the mainframe environment to support big business. Organizations today may have a plethora of choices in terms of IT platforms, yet the IBM mainframe remains the go-to platform for business-critical workloads. And the trajectory is upwards, as more customers and greater functionality increase demands for busy mainframe environments.  

Which means, upstream, there is also continued demand for additional skilled mainframers to get that work done. No surprise, then, that demand for mainframe training and skills development is also on the rise. Simple laws of supply and demand are at play here—not to mention the murky reality of the much-documented skills crisis.

You get what you’re given 

Continued business growth drives additional resource needs; this is a fact of business life. Yet such benign realities are not without their difficulties. The mainframe is a complex environment to administer, and it carries a significant cost.

Generally speaking, the mainframe’s expenses are seen as acceptable because mainframes act as the execution engine for a lot of revenue-generating activities. Yet cost justification struggles in less tangible areas such as research or training. With a heavy emphasis on production workload, allocating mainframe time to skill-building and research is—for some organizations—severely limited and extremely hard to commission. 

Understandably so, of course, but this makes meeting training demands all the more difficult.  

As if by magic, a mainframe appeared 

If only there were a way for a mainframe environment to magically appear whenever it was needed. We enjoyed this article by Planet Mainframe, which introduces the concept of the virtual mainframe, outlining its potential benefits as a training and research platform that can support a variety of scenarios.

The article expresses the value of virtualized mainframe training environments in two important areas –  

“Skills: With veteran developers retiring, training new talent is needed. In particular, experienced employees. Anyone with access to virtual mainframes can build new skills or expand existing skills – all in the same space.  

Modernization: Virtual mainframes allow students and developers to adopt DevOps and CI/CD pipelines while working within a mainframe context”. 

And the need is real; both the skills question and the drive towards DevOps remain high on the agenda for mainframe teams, yet the environment sometimes struggles to support the ambition.

The same article explains the challenge of mainframe access: “In addition to the expense, traditional mainframes [are] often shared, limiting access… As technology continually evolves, so should training, specifically for mainframe developers.” 

Access to an easy-to-use ‘sandpit’ enables junior or trainee developers to receive coaching from experienced mentors, learn at their own pace with tutorials, and get to grips with modern mainframe DevOps toolchain technology.  

The same environment is available to training departments or third parties looking to provide a comprehensive yet accessible mainframe training space and a ‘safe’ location for R&D activities. Onboarding new skills and providing a familiar, comprehensive platform for modernization efforts is an obvious benefit of a virtual mainframe training facility. 

A case in point 

The Planet Mainframe article further explores the situation at PopUp Mainframe customer, Legal & General (L&G). As part of its mainframe modernization program, the company implemented a ‘virtual mainframe environment’ to satisfy various training, development, and collaboration needs.  

Essentially, the PopUp Mainframe solution that L&G used provided the flexibility to enable mainframe resources to be used without impacting the regular mainframe production cycles or schedules.  

Fine-tune your own mainframe training program 

If you want to learn more about how PopUp Mainframe could help add flexibility and availability to your mainframe training and development activities, get in touch.

Getting to Git

Mainframe DevOps tooling offers a new era of productivity – with Git leading the charge. But source code migrations are far from straightforward. Fortunately, help is at hand, as expert Stuart Ashby explains.

Introduction

The mainframe development community has benefited from a bewildering array of modern technology releases over the last couple of decades that have almost completely changed how mainframe systems are built. This has coincided with the now-widespread adoption of DevOps as a procedural discipline for application delivery across the IBM mainframe and other platforms.

One example of innovative technology that has found its way into the mainframe world is the Git source repository and tooling. Outside of the mainframe, Git has proven to be a simple, sensible addition to any DevOps toolchain. But is it just as straightforward for the mainframe?

First Things First

Let me start with an assertion: in my opinion, the discussion around “Can Git be used for mainframe code?” is over. Vendors have demonstrated the technology integration with mainframes successfully, and some innovative organizations have run successful pilots. It is now difficult to argue that Git is not ready for the mainframe.

Using the Same Vocabulary

Traditionally, the mainframe development community has used terms like compile, assemble, linkedit, bind, newcopy, and phasein to label SCM and lifecycle phases. To outsiders, these terms can seem overly complex. A more contemporary approach describes compile, assemble, and linkedit as the build or the CI activity, and bind, newcopy, and phasein as the deploy or the CD activity. There are grey areas, of course, but these simplified concepts of build and deploy use terminology that is more familiar outside the mainframe community. Crucially, agreeing on the taxonomy for the future is a key planning component of any SCM repository or DevOps process change.
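To make the mapping concrete, here is a minimal sketch of that taxonomy expressed as two pipeline stages. The commands are placeholders (the member name PAYCALC is purely illustrative), not real z/OS tooling invocations – the point is only how the traditional steps group into build and deploy.

```shell
# build = the CI activity: compile, assemble, linkedit
build() {
  echo "compile PAYCALC"
  echo "linkedit PAYCALC"
}

# deploy = the CD activity: bind, newcopy, phasein
deploy() {
  echo "bind PAYCALC plan"
  echo "newcopy PAYCALC in CICS"
}

build && deploy
```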

Drilling Down into Mainframe DevOps with Git

The next question to address is “What is my branching strategy going to be?” The popular strategies to consider are GitFlow, GitHub Flow, GitLab Flow, and Trunk-based development.

Some months ago, I would have instantly replied that it had to be Trunk-based. This strategy has merits because it closely resembles the way many mainframe developers work: once unit testing is complete, code is not rebuilt but is promoted, unchanged, into the test environments above unit testing.

Having been involved with a couple of organizations on the Git migration pathway, I also see the value of GitHub Flow, where a distinct branch hierarchy reminds me a lot of the traditional SCM lifecycle. In this branching strategy, for example, the developer has a feature branch to implement code changes based on specifications, and then a pull request populates the release branch. This pull request triggers a CI pipeline to rebuild, optimizing the binaries associated with the branch and removing debugging options.
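The flow described above can be sketched with a handful of Git commands. This is an illustrative walk-through, not a prescription: the repository, branch, and member names are invented, and a throwaway local repository stands in for the hosted one so the commands run end to end (the merge to main stands in for the pull request and its CI rebuild).

```shell
set -e
# Throwaway repository standing in for an already-migrated mainframe repo
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"
echo "IDENTIFICATION DIVISION." > PAYCALC.cbl
git add PAYCALC.cbl
git commit -q -m "Initial migration of PAYCALC"

# GitHub Flow: a feature branch for the change, based on the specification
git switch -q -c feature/fix-abend-s0c7
echo "PROCEDURE DIVISION." >> PAYCALC.cbl
git add PAYCALC.cbl
git commit -q -m "Correct packed-decimal overflow in PAYCALC"

# A pull request into the release branch would now trigger the CI pipeline
# to rebuild the binaries without debugging options; locally, a no-ff merge
# back to main stands in for that step.
git switch -q main
git merge -q --no-ff -m "Merge feature/fix-abend-s0c7" feature/fix-abend-s0c7
git log --oneline   # three commits: initial, fix, merge
```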

So, there are choices to be made, but they are not necessarily irreversible. You must choose whether to replicate existing branching and build strategies, as these choices will influence your implementation. And remember, there will almost certainly be a cultural shift – so be prepared to embrace that, too.

Migrating Source Code

Once the branching strategy is established, the next step is to migrate the source code (and the vital audit history) from its current system into the main branch of a Git repository.

There is potentially a lot involved here, with numerous factors to consider, including the source location and system, the transferal process, and the target setup. Without sensible planning and preparation, this step can be extremely complex, time-consuming, and error-prone.

Utilities designed to extract the complete source code and history will have to be written and tested, with any defects corrected and enhancements made. Trial migrations must also be validated. Only when the Git repository migrations are complete and fully validated can the migration utilities be considered redundant.
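The heart of such a utility is replaying each extracted change as a Git commit while preserving who made it and when. The sketch below assumes a hypothetical pipe-delimited extract format (member|author|date|text) – a real ChangeMan or Endevor extract would be far richer – and uses Git's author/committer environment variables to carry the original audit history across.

```shell
set -e
# Throwaway target repository for the trial migration
work=$(mktemp -d)
cd "$work"
git init -q -b main
git config user.email "migration@example.com"
git config user.name "Migration Utility"

# Stand-in for an extract produced from the legacy SCM (format is invented)
cat > history.txt <<'EOF'
PAYCALC|J.SMITH|2021-03-01T10:00:00|v1 source
PAYCALC|A.JONES|2022-07-15T09:30:00|v2 source
EOF

# Replay each historical change as a commit, preserving author and date
while IFS='|' read -r member author date text; do
  echo "$text" > "$member.cbl"
  git add "$member.cbl"
  GIT_AUTHOR_NAME="$author" GIT_AUTHOR_EMAIL="$author@example.com" \
  GIT_AUTHOR_DATE="$date" GIT_COMMITTER_DATE="$date" \
  git commit -q -m "Import $member ($date) by $author"
done < history.txt

git log --pretty='%an %ad' --date=short   # both authors, original dates
```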

Here to Help

At PopUp Mainframe, we have experience helping organizations modernize their mainframe delivery practices, including moving traditional SCM repositories into Git as part of a DevOps toolchain. Our tools and services streamline this otherwise arduous process, dramatically reducing the effort involved.

Our migration tools and utilities are proven against customer SCM repository migrations, preserving the full change history, including who made changes and when. We can support ChangeMan, Endevor, and Code Pipeline (formerly ISPW). These repository migrations typically take only a few weeks, depending on the repository structure and the amount of source code and history.

After the main Git repository is fully migrated, it is plain sailing to clone, edit, commit, and manage pull requests.

PopUp Mainframe can partner with your organization to create a statement of work (SOW) that outlines the efforts and timelines involved in getting from SCM to Git repository.

Other Considerations

I would suggest addressing a whole area of culture change to support this new way of working. Any change involves people, process, and technology – in that order.

All of the most successful transformations I’ve seen have started with a focus on people and the desired outcome.

The repository migration project should begin with a pioneering “pilot team” to adopt Git. First, we understand their needs, create processes to meet those needs, and get the team onto the new tooling. Feedback from this pilot team helps refine the process and builds momentum for subsequent phases.

This incremental approach ensures broad approval and acceptance within the organization.

It is also essential to measure the pilot team’s productivity with the new tooling. Results typically follow a “J-curve,” with a temporary dip as developers unlearn old processes and adopt new ones, before realizing the tangible benefits. Tracking and projecting outcomes are critical elements of this process.

If your DevOps team is exploring Git as a viable mainframe version control solution and needs guidance, reach out to us.

 

By Stuart Ashby

Sustainable IT Strategies

Integrating the ESG Imperative 

According to the Global Sustainability Barometer survey – commissioned by Microsoft, conducted by Ecosystem, and released by Kyndryl, “Although 84 percent of organizations consider sustainability goals to be of high strategic importance, only 21 percent actively leverage technology to minimize their environmental impact and guide their broader sustainability strategy.”  

The days when technology solutions were purely and simply about improving profits and reducing costs are over; implementing IT initiatives to actively reduce resource usage and emissions is taking its rightful place on today’s strategic agenda.

Innovating Towards Sustainable IT 

At the arrowhead of using technology to support a more sustainable business model are pioneering organizations and vendors driving the agenda toward a cleaner, more responsible, and sustainable business environment.

The work of SustainableIT.org in championing and recognizing IT’s vital role in ESG and sustainability imperatives is noteworthy. Their list of award winners reads like a who’s who in sustainable IT innovation.  

PopUp Mainframe was delighted to be among this year’s recipients in recognition of our work in supporting our clients’ ESG and sustainability objectives, as well as our internal efforts towards a cleaner, greener provision of technology.  

As the organizers explained, “In August 2024, after a rigorous judging process, 30 companies were named winners, based on their ambitious ESG goals and measurable results. These winners demonstrated the power of technology to impact not just the environment but also social equity and governance. Key Selection Criteria were:

  • Ambitious ESG targets & commitment
  • Proven impact with clear metrics (e.g., CO2 reduction, cost savings)
  • Leadership in best practices & innovation
  • Cross-functional collaboration”

We were delighted to be among the winners of the Environment category award.  

PopUp Mainframe – helping customers deliver on Green IT objectives

While each customer case will differ, PopUp Mainframe’s solution offers breakthrough possibilities for an organization’s sustainable IT strategy –  

  • Removing the need for additional infrastructure investment and energy consumption by reusing existing data center or cloud resources 
  • Maximizing energy-efficient mainframe resources and deferring/removing additional energy requirements through efficient scheduling of production workload 
  • Reducing IT’s emissions and energy footprint by only running testing and development environments when needed  

We are actively engaged in supporting our customers’ efforts towards a more sustainable IT provision as part of their operational objectives.  

PopUp Mainframe and Green IT 

PopUp Mainframe provides a sustainable, low-cost, and rapid solution for organizations looking to modernize IT. The solution enables on-demand availability of virtual mainframe environments, dramatically reducing the need for physical hardware and thereby lowering energy and emissions. By enabling mainframe test environments to run in solar-powered cloud data centers, PopUp Mainframe offers mainframe access to anyone who needs it while serving as a catalyst for sustainable transformation.  

 

See SustainableIT.org’s 2024 Impact Award brochure here.