The Modern Mainframe: A Ready-Made Hybrid? 

The popularity of the Linux platform continues to rise—but does that apply equally to mainframe users? This blog examines the growing hybrid trend in a mainframe context. 

Wait – Linux on a Mainframe? 

Absolutely! Since its inception a generation ago, Linux on IBM mainframes, such as IBM Z and LinuxONE, has gained significant traction, particularly in enterprise environments.

By 2006, Linux on IBM Z had already been embraced by over 1,700 customers. Its promise to combine the flexibility of Linux with the reliability and scalability of IBM’s mainframe systems met with wide acclaim. Organizations adopted it to support a variety of use cases, including modern applications, cloud computing, databases, and containerized workloads, benefiting from features like advanced data security, high performance, and sustainability.

For example, the LinuxONE Emperor 4 is reportedly popular among financial services organizations such as Citibank.

Facilitating Innovation 

IBM also offers the Integrated Facility for Linux (IFL), a specialized processor designed specifically to run Linux workloads on IBM Z and LinuxONE systems. The IFL provides high server density, reduced operational costs, and enhanced performance capabilities, such as Simultaneous Multi-Threading (SMT) and Single Instruction, Multiple Data (SIMD) technologies.

Additionally, IFLs can be added to systems dynamically and support various capacity-on-demand options. That flexibility means organizations use IFLs in a cloud computing context to achieve scalability, security, and efficiency. Beyond cloud computing, IFLs are also used to house additional development and testing environments: there is understandable merit in using a flexible, robust, and secure environment to simulate critical workloads, accelerating delivery while keeping vital z/OS cycles devoted to production.

From an IT sustainability perspective, there are genuine green credentials associated with the platform. According to IBM, running Linux workloads on an IBM z16 single frame or rack mount – instead of on comparable x86 servers in similar conditions – can reduce energy consumption by 75% and space by 67%. Who knew blue was so green? 

Measuring Success  

In 2020, IBM reported “Linux capacity increasing 55 percent year-over-year,” cementing the strategic nature of its investment commitment.  

Forrester industry analyst Brent Ellis more recently explained, “[IBM] has a strategy to enable more modern workloads to run on the hardware and the number of people acquiring mainframe hardware to run on Linux is increasing. Over the next few years, it is likely there will be more capability … coming to [mainframe] Linux to ensure a steady and non-disruptive transition to modern environments within the mainframe.”

Further quantifying that trend, however, is harder than you might expect. While there is plenty of information about the mainframe market – in the form of reports and surveys – details surrounding mainframe Linux are only lightly pencilled in. How Linux supports a modern mainframe-centric strategy is not well-defined, despite the uniformly accepted wisdom of its huge potential.   

To learn more, PopUp Mainframe included a couple of questions about attitudes towards mainframe usage—and specifically Linux environments—as part of the market survey we commissioned this year with research experts Vanson Bourne. We wanted to hear how Linux matters to mainframe decision-makers.  

The survey is still ongoing, so we’re not going to share the numbers yet. But early indications are worth mentioning. Firstly—no great surprise for a mainframe market survey, perhaps—we see an overwhelming loyalty to the mainframe platform, both today and into the foreseeable future.  

What’s perhaps more illuminating, however, is that the initial findings suggest a huge appetite – an overwhelming majority of respondents (we will give you the precise number when the survey is closed) – for using Linux-based mainframe environments in their various forms. Simply put, many who hold the mainframe in high regard see the potential and value of Linux as part of that equation.  

Make the Most of your Modern Mainframe 

Of course, potential is one thing and practical solutions are quite another. Our study therefore also digs into the bottlenecks facing mainframe teams today, and where they most want to improve their capabilities to support the business. 

We look forward to being able to report the full results in the coming weeks – revealing key findings, spotlighting challenges, and offering practical steps towards an even more efficient mainframe environment.  

Update – we are pleased to announce the survey results are now live – visit this page for more details.   


Virtually Trained and Ready 

 The mainframe industry must constantly train and educate new professionals. To make this possible, trainers, mentors, team leaders, and department managers need access to mainframe resources. But when mainframe access is in short supply, what can you do?  

More, more, more, mainframe! 

Countless market surveys and press articles point at a continued—and growing—reliance on the mainframe environment to support big business. Organizations today may have a plethora of choices in terms of IT platforms, yet the IBM mainframe remains the go-to platform for business-critical workloads. And the trajectory is upwards, as more customers and greater functionality increase demands for busy mainframe environments.  

Which means, upstream, there is also continued demand for additional skilled mainframers to get that work done. No surprise, then, that demand for mainframe training and skills development is also on the rise. Simple laws of supply and demand are at play here—not to mention the murky reality of the much-documented skills crisis.

You get what you’re given 

Continued business growth drives additional resource needs; this is a fact of business life. Yet such benign realities are not without their difficulties. The mainframe is a complex environment to administer, and it carries a significant cost.

Generally speaking, the mainframe’s expenses are seen as acceptable because mainframes act as the execution engine for a lot of revenue-generating activities. Yet cost justification struggles in less tangible areas such as research or training. With a heavy emphasis on production workload, allocating mainframe time to skill-building and research is—for some organizations—severely limited and extremely hard to commission. 

Understandably so, of course, but this makes meeting training demands all the more difficult.  

As if by magic, a mainframe appeared 

If only there were a way for a mainframe environment to magically appear whenever it was needed. We enjoyed this article by Planet Mainframe, which introduces the concept of the virtual mainframe, outlining its potential benefits as a training and research platform that can support a variety of scenarios.

The article expresses the value of virtualized mainframe training environments in two important areas –  

“Skills: With veteran developers retiring, training new talent is needed. In particular, experienced employees. Anyone with access to virtual mainframes can build new skills or expand existing skills – all in the same space.  

Modernization: Virtual mainframes allow students and developers to adopt DevOps and CI/CD pipelines while working within a mainframe context”. 

And the need is real; both the skills question and the drive towards DevOps remain high on the agenda for mainframe teams, yet the environment sometimes struggles to support the ambition.

The same article explains the challenge of mainframe access: “In addition to the expense, traditional mainframes [are] often shared, limiting access… As technology continually evolves, so should training, specifically for mainframe developers.” 

Access to an easy-to-use ‘sandpit’ enables junior or trainee developers to receive coaching from experienced mentors, learn at their own pace with tutorials, and get to grips with modern mainframe DevOps toolchain technology.  

The same environment is available to training departments or third parties looking to provide a comprehensive yet accessible mainframe training space and a ‘safe’ location for R&D activities. Onboarding new skills and providing a familiar, comprehensive platform for modernization efforts is an obvious benefit of a virtual mainframe training facility. 

A case in point 

The Planet Mainframe article further explores the situation at PopUp Mainframe customer Legal & General (L&G). As part of its mainframe modernization program, the company implemented a ‘virtual mainframe environment’ to satisfy various training, development, and collaboration needs.

Essentially, the PopUp Mainframe solution that L&G used enabled mainframe resources to be deployed without impacting the regular mainframe production cycles or schedules.

Fine-tune your own mainframe training program 

If you want to learn more about how PopUp Mainframe could help add flexibility and availability to your mainframe training and development activities, get in touch.

Getting to Git

Mainframe DevOps tooling offers a new era of productivity – with Git leading the charge. But source code migrations are far from straightforward. Fortunately, help is at hand, as expert Stuart Ashby explains.

Introduction

The mainframe development community has benefited from a bewildering array of modern technology releases over the last couple of decades that have almost completely changed how mainframe systems are built. This has coincided with the now-widespread adoption of DevOps as a procedural discipline for application delivery across the IBM mainframe and other platforms.

One example of innovative technology that has found its way into the mainframe world is the Git source repository and tooling. Outside of the mainframe, Git has proven to be a simple, sensible addition to any DevOps toolchain. But is it just as straightforward for the mainframe?

First Things First

Let me start with some assertions. First, in my opinion, the discussion around “Can Git be used for mainframe code?” is over. Vendors have demonstrated the technology integration with mainframes successfully, and some innovative organizations have run successful pilots. It is now difficult to argue that Git is not ready for the mainframe.

Using the Same Vocabulary

Traditionally, the mainframe development community has used terms like compile, assemble, linkedit, bind, newcopy, and phasein to label SCM and lifecycle phases. To outsiders, these terms can seem overly complex. A more contemporary approach describes compile, assemble, and linkedit as the build or the CI activity, and bind, newcopy, and phasein as the deploy or the CD activity. There are grey areas, of course, but these simplified concepts of build and deploy use terminology that is more familiar outside the mainframe community. Crucially, agreeing on the taxonomy for the future is a critical planning component of any SCM repository or DevOps process change.
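The mapping above can be sketched as two pipeline stages. This is a purely illustrative shell sketch of the taxonomy, not a real pipeline: the stage bodies are labels standing in for the actual compile/bind tooling, which will vary by site.

```shell
# CI stage: what the mainframe world has traditionally called
# compile, assemble, and linkedit collapses into "build".
build() {
  echo "build: compile assemble linkedit"
}

# CD stage: bind, newcopy, and phasein collapse into "deploy".
deploy() {
  echo "deploy: bind newcopy phasein"
}

build && deploy
```

Agreeing on these two stage names up front gives mainframe and distributed teams a shared vocabulary before any tooling decisions are made.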

Drilling Down into Mainframe DevOps with Git

The next question to address is “What is my branching strategy going to be?” The popular strategies to consider are GitFlow, GitHub Flow, GitLab Flow, and Trunk-based development.

Some months ago, I would have instantly replied that it had to be Trunk-based. This strategy has merits because it closely resembles the way that many mainframe developers work: once unit testing is completed, code is not rebuilt but is only deployed into test environments above unit testing.

Having been involved with a couple of organizations on the Git migration pathway, I also see the value of GitHub Flow, where a distinct branch hierarchy reminds me a lot of the traditional SCM lifecycle. In this branching strategy, for example, the developer has a feature branch to implement code changes based on specifications, and then a pull request populates the release branch. This pull request triggers a CI pipeline to rebuild, optimizing the binaries associated with the branch and removing debugging options.
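The feature-branch flow described above can be walked through with plain git commands. This is a hypothetical, minimal sketch in a throwaway repository; the branch name, file name, and commit messages are illustrative only, and the pull request is simulated locally as a merge.

```shell
# Create a throwaway repository and a helper that supplies an identity.
set -e
repo="$(mktemp -d)"
git init -q "$repo" && cd "$repo"
gitc() { git -c user.name=Dev -c user.email=dev@example.com "$@"; }

# Seed the repository and remember the default (release) branch name.
printf 'v1\n' > program.cbl
git add program.cbl && gitc commit -q -m "Initial source"
base="$(git symbolic-ref --short HEAD)"

# The developer implements a change on a feature branch...
git checkout -q -b feature/abc-123
printf 'v2\n' > program.cbl
git add program.cbl && gitc commit -q -m "Implement change ABC-123"

# ...and a pull request (simulated here as a --no-ff merge) brings it
# back to the release branch -- the point where a CI pipeline would
# trigger the optimized rebuild of the associated binaries.
git checkout -q "$base"
gitc merge -q --no-ff -m "Merge PR: ABC-123" feature/abc-123
```

The merge commit is the natural hook for the CI pipeline that rebuilds the release binaries without debugging options.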

So, there are choices to be made, but they are not necessarily irreversible. You must choose whether to replicate existing branching and build strategies, as these choices will influence your implementation. And remember, there will almost certainly be a cultural shift – so be prepared to embrace that, too.

Migrating Source Code

Once the branching strategy is established, the next step is to migrate the source code (and the vital audit history) from its current system into the main branch of a Git repository.

There is potentially a lot involved here, with numerous factors to consider, including the source location and system, the transferal process, and the target setup. Without sensible planning and preparation, this step can be extremely complex, time-consuming, and error-prone.

Utilities designed to extract the complete source code and history will have to be written and tested, with any defects corrected and enhancements made. Trial migrations must also be validated. Only when the Git repository migrations are complete and fully validated can the migration utilities be considered redundant.
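One detail worth seeing concretely is how the audit history survives the move: git lets a migration utility replay each extracted change with its original author and timestamp. The sketch below is a hypothetical single-change replay; the author, date, and member name are invented examples, and a real migration would loop over every change set extracted from the legacy SCM.

```shell
# Throwaway repository to receive the migrated history.
set -e
repo="$(mktemp -d)"
git init -q "$repo" && cd "$repo"

# One migrated change: write the member, then commit it with the
# original author and date carried over from the legacy audit trail.
# (Values below are illustrative, not from any real repository.)
AUTHOR="JSMITH <jsmith@example.com>"
WHEN="2019-03-14T09:26:53"
printf 'IDENTIFICATION DIVISION.\n' > member.cbl
git add member.cbl
git -c user.name="Migration" -c user.email="mig@example.com" \
    commit -q -m "Migrated change for member.cbl" \
    --author="$AUTHOR" --date="$WHEN"

# The replayed commit now shows who made the change, and when.
git log -1 --format='%an %ad' --date=format:'%Y-%m-%d'
```

Because `--author` and `--date` are standard git options, the resulting repository answers "who changed this, and when" exactly as the original SCM did.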

Here to Help

At PopUp Mainframe, we have experience helping organizations modernize their mainframe delivery practices, including moving traditional SCM repositories into Git as part of a DevOps toolchain. Our tools and services streamline this otherwise arduous process, dramatically reducing the effort involved.

Our migration tools and utilities are proven against customer SCM repository migrations, preserving the full change history, including who made changes and when. We can support ChangeMan, Endevor, and Code Pipeline (formerly ISPW). These repository migrations typically take only a few weeks, depending on the repository structure and the amount of source code and history.

After the main Git repository is fully migrated, it is plain sailing to clone, edit, commit, and manage pull requests.

PopUp Mainframe can partner with your organization to create a statement of work (SOW) that outlines the efforts and timelines involved in getting from SCM to Git repository.

Other Considerations

I would suggest addressing a whole area of culture change to support this new way of working. Any change involves people, process, and technology – in that order.

All of the most successful transformations I’ve seen have started with a focus on people and the desired outcome.

The repository migration project should begin with a pioneering “pilot team” to adopt Git. First, we understand their needs, create processes to meet those needs, and get the team onto the new tooling. Feedback from this pilot team helps refine the process and builds momentum for subsequent phases.

This incremental approach ensures broad approval and acceptance within the organization.

It is also essential to measure the pilot team’s productivity with the new tooling. Results typically follow a “J-curve,” with a temporary dip as developers unlearn old processes and adopt new ones, before realizing the tangible benefits. Tracking and projecting outcomes are critical elements of this process.

If your DevOps team is exploring Git as a viable mainframe version control solution and needs guidance, reach out to us.

 

By Stuart Ashby

Sustainable IT Strategies

Integrating the ESG Imperative 

According to the Global Sustainability Barometer survey – commissioned by Microsoft, conducted by Ecosystem, and released by Kyndryl, “Although 84 percent of organizations consider sustainability goals to be of high strategic importance, only 21 percent actively leverage technology to minimize their environmental impact and guide their broader sustainability strategy.”  

The days when technology solutions were purely about improving profits and reducing costs are over; implementing IT initiatives to actively reduce resource usage and emissions is taking its rightful place on today’s strategic agenda.

Innovating Towards Sustainable IT 

At the arrowhead of using technology to support a more sustainable business model are pioneering organizations and vendors driving the agenda toward a cleaner, responsible, and sustainable business environment.

The work of SustainableIT.org in championing and recognizing IT’s vital role in ESG and sustainability imperatives is noteworthy. Their list of award winners reads like a who’s who in sustainable IT innovation.  

PopUp Mainframe was delighted to be among this year’s recipients in recognition of our work in supporting our clients’ ESG and sustainability objectives, as well as our internal efforts towards a cleaner, greener provision of technology.  

As the organizers explained, “In August 2024, after a rigorous judging process, 30 companies were named winners, based on their ambitious ESG goals and measurable results. These winners demonstrated the power of technology to impact not just the environment but also social equity and governance. Key Selection Criteria were:
  • Ambitious ESG targets & commitment
  • Proven impact with clear metrics (e.g., CO2 reduction, cost savings)
  • Leadership in best practices & innovation
  • Cross-functional collaboration”

We were delighted to be among the winners of the Environment category award.  

PopUp Mainframe – helping customers deliver on Green IT objectives

While each customer case will differ, PopUp Mainframe’s solution offers breakthrough possibilities for an organization’s sustainable IT strategy –  

  • Removing the need for additional infrastructure investment and energy consumption by reusing existing data center or cloud resources 
  • Maximizing energy-efficient mainframe resources and deferring/removing additional energy requirements through efficient scheduling of production workload 
  • Reducing IT’s emissions and energy footprint by only running testing and development environments when needed  

We are actively engaged in supporting our customers’ efforts towards a more sustainable IT provision as part of their operational objectives.  

PopUp Mainframe and Green IT 

PopUp Mainframe provides a sustainable, low-cost, and rapid solution for organizations looking to modernize IT. The solution enables on-demand availability of virtual mainframe environments, dramatically reducing the need for physical hardware and thereby lowering energy and emissions. By enabling mainframe test environments to run in solar-powered cloud data centers, PopUp Mainframe offers mainframe access to anyone who needs it while serving as a catalyst for sustainable transformation.  

 

See SustainableIT.org’s 2024 Impact Award brochure here.  

Revolutionising mainframe delivery. Done.

The PopUp Mainframe team was delighted to attend, present, and exhibit at the GSE UK Conference 2024 recently – the region’s biggest and best mainframe community event. Rubbing shoulders with industry luminaries, technical experts, household-name organizations – it was a fantastic experience. As arguably the youngest company in the room, many of our conversations were introductory in nature.  

“Just what does PopUp Mainframe do?” It was an opening line we heard more than a few times.  

Summarizing several conversations into one – in case you didn’t get a chance to ask – here’s our answer. 

The need for speed – a mainframe market requirement 

Mainframes remain the mainstay of enterprise computing. As the platform celebrates its 60th birthday, reports reveal a continued reliance upon – and investment in – IBM mainframes across a variety of sectors, at some of the world’s largest and most successful organizations. Now often part of a hybrid IT strategy, the IBM mainframe remains a central component of organizational infrastructure, housing the business’s most critical applications.

As important as it is, issues blight the mainframe. Mainframe environments have very regimented and restricted time periods (and LPAR capacity) for dev and test activities – and for good reason. Regulatory pressures, internal audit requirements, extensive system and functional testing cycles, and the need for efficient resource management of mainframe LPARs mean much of the mainframe world lives by stringent rules on availability, timeslots, and schedules. In such environments, a request for ad-hoc or out-of-cycle resources – to test something new, for example – is handled as an exception, and often refused. Simply put, the mainframe is too busy, and every minute (and every associated dollar) is already accounted for.

Mainframe delivery teams often suffer the most, with the shortage of non-production environments for development and testing impacting their ability to deliver as fast as the business would like. Application and system teams must plan new releases well in advance. Delivery and testing slots are sacrosanct, making more creative, agile initiatives impractical to include. There’s just no extra time, and no additional budget, for anything that hasn’t already been scheduled.

Even with the advent of DevOps-style tooling on the mainframe, the resource availability restrictions make genuine acceleration of delivery very tough.  

Flexible mainframe delivery to match your imagination 

Those who manage mainframe environments wish they could do more, without increasing costs, while those who own the applications wish they could deliver faster.  

But is that wish just a pipe dream?  

Imagine a world where you can have instant access to fully functioning mainframe resources but without the associated cost and effort involved. Imagine snapping your fingers and having your own mainframe sandpit to play in. What extra innovations would you work on? What new capabilities or integrations might you test? What could a personal virtual mainframe help you deliver? What could you achieve that you’ve never attempted before? 

This imagined reality has been brought to life by the team at PopUp Mainframe. Their solution directly addresses the need for more instantaneous access to mainframe resources with an immediately available, fully functioning mainframe in the form of its ‘PopUp’ product. The PopUp comes to life in minutes and behaves the same as physical mainframe environments. You can install any mainframe subsystem or bespoke application there – enabling you to use it for new dev and test purposes. The personal virtual mainframe is here. 

Where it can help you 

Because PopUp Mainframe – as the name suggests – can appear anywhere you need it in the mainframe dev and test process, it adds the flexibility you were probably missing before.  

Rapid application testing, intensive spikes of regression or performance assessments, interactive CI/CD build and test activities, code reviews, training exercises, hackathons, and even a ‘safe sandpit’ area for junior developers to work on new ideas – you name it. PopUp Mainframe revolutionises the mainframe SDLC by accelerating delivery, reducing hardware costs, and helping teams deliver on the promise of mainframe DevOps.  

PopUp is fully compatible with standard DevOps toolchains (Git, Jenkins, BMC, IBM, open source) and can be deployed on-prem or in the cloud. PopUp makes delivering projects on a mainframe identical to delivering projects on distributed environments. 

Traditionally constrained by waterfall methodologies, siloed teams, limited time slots, and rigid schedules, developers have long imagined a more flexible environment for mainframe delivery. Now it is finally here: true mainframe delivery agility is within reach.

 On the shoulders of giants 

PopUp Mainframe tackles typical mainframe delivery bottlenecks through the provision of on-demand mainframe environments, to accelerate flexible and low-cost mainframe delivery.  

Using IBM’s mainframe test environment – ZD&T – as the underpinnings for its solution, PopUp Mainframe can be commissioned within minutes, with applications and data transferred (and masked) as required. Your very own mainframe sandbox is just minutes away. PopUp Mainframe supports a variety of popular mainframe DevOps toolchains and cloud or distributed platforms, ensuring it is available wherever needed, with whatever tools are required.

One of our customers said that, previously, it took six months to create a new z/OS environment; now – with PopUp Mainframe – it takes just 30 minutes. Learn more about this story here.

Learn More 

Want to revolutionize your mainframe delivery? Review our GSE Presentation Slides and then contact us.