
Evolution of Human-Machine Interaction
The last 50+ years have witnessed a sea change in the technology space — including in how humans interact with machines. We started with punched cards in the 1970s, graduated to tapes and disks, then navigated through syntax-constrained low-level languages like IBM Assembler, before progressing to higher-level languages such as COBOL, C, Java, and Python.
Interestingly, the seeds of natural language interaction were sown much earlier than we might think. COBOL was deliberately crafted to be “English-like” — approachable enough for even non-programmers to instruct a computer. In that sense, it was an early ancestor of the very concept we now call natural language processing (NLP).
Today, we can interact with machines in natural language, such as English.
This evolution has been nothing short of phenomenal. Human intent itself has become the new syntax. This fundamental shift in the way we interact has given rise to a new discipline, one that requires no compiler and no rigid syntax rules, just the deliberate and thoughtful use of language. It is called Prompt Engineering: the craft of designing precise, effective instructions to get the best out of Large Language Models (LLMs). It is not just about getting the right answer; it is about ensuring that the AI understands the context, the nuances, and the intent behind every interaction.
Breaking the Mainframe Myth: PopUp Mainframe Meets AI
One of the most persistent challenges in the mainframe world has been the pace and cost of change. Development, testing, and environment provisioning have historically demanded significant time, resources, and careful planning, making agility a luxury rather than a standard. PopUp Mainframe has already shattered the myth that this must be so, by enabling instantaneous provisioning of new mainframe environments, seamless pipeline execution, and, with our FastTrack feature, full environment rollback capabilities.
AI is now mainframe-ready, empowering any individual to harness the power of AI within the mainframe space. PopUp Mainframe is compatible with AI and can be integrated with a variety of LLMs in minutes. This opens exciting new avenues for exploration and the opportunity to enable many more people (including non-mainframers) to start interacting confidently with a mainframe.
In the PopUp Mainframe lab, we have been exploring the possibilities of AI for the mainframe over the last few months. While AI is doubtlessly a potentially powerful technology, when used badly it is misleading, time-wasting and even dangerous. We have had a steep learning curve to properly harness the huge power of AI, and the emerging discipline of prompt engineering has been a large part of that.
Top Tips: How to Interact Effectively with AI
Through hands-on exploration of AI integration in the PopUp lab, here are the most valuable lessons we have learned:
1. The More You Give, the More You Get: Specificity is everything. The more detailed and precise your prompts are, the faster and more accurate the responses. Well-crafted prompts also significantly reduce the risk of AI hallucination.
Always try to include as much technical detail as possible in your prompt:
- Instead of “Create a Db2 table by the name EMPLOYEE”, say “Create a Db2 table EMPLOYEE in the DBCG Db2 subsystem, in database DB1 and tablespace TS1”
- Instead of “List all tablespaces that are huge”, say “List all the tablespaces in database DB1 that have exceeded 75% utilization”
- Instead of “Show all the long-running jobs in the batch queue”, say “List the jobs that ran yesterday and exceeded 30 minutes of elapsed time”
- Instead of “List all the failed jobs”, say “List all the jobs that ended with a non-zero return code in the last 24 hours”
- Instead of “List all programs in CICS”, say “List all the application programs (not system-related) that are active in region CICSTSTA”
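When prompts are generated programmatically, the same specificity can be enforced in code. Below is a minimal sketch of a prompt template that refuses to render until every technical parameter is supplied; the field names and template text are illustrative, not a PopUp Mainframe API:

```python
# Sketch: a prompt template that forces specificity by requiring
# the technical parameters a vague prompt would omit.

REQUIRED_FIELDS = ["table", "subsystem", "database", "tablespace"]

TEMPLATE = (
    "Create a Db2 table {table} in {subsystem} Db2 subsystem "
    "in database {database} and tablespace {tablespace}."
)

def build_prompt(**params: str) -> str:
    """Render the prompt, failing loudly if any detail is missing."""
    missing = [f for f in REQUIRED_FIELDS if not params.get(f)]
    if missing:
        raise ValueError(f"Vague prompt rejected; missing: {missing}")
    return TEMPLATE.format(**params)

print(build_prompt(table="EMPLOYEE", subsystem="DBCG",
                   database="DB1", tablespace="TS1"))
```

Rejecting an incomplete prompt at build time is cheaper than discovering, several AI round-trips later, that the model guessed the wrong subsystem.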
2. Provide Context Upfront: If you already possess relevant background information, include it in your prompt. This helps the LLM identify the most direct path to the right answer, improving both accuracy and response time. Structure your prompts using clear, logical sections (such as Role, Goal, Execution Rules, Output Format, and Examples) to give the model a clear framework to work within.
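The sectioned structure described above can be assembled mechanically. Here is a small sketch that joins named sections into one clearly delimited prompt; the section names follow this article, and the example content is hypothetical:

```python
# Sketch: assembling a prompt from labelled sections
# (Role, Goal, Execution Rules, Output Format, Examples).
# Section names and contents are illustrative only.

def structured_prompt(sections: dict) -> str:
    """Join named sections into one clearly delimited prompt."""
    parts = [f"## {name}\n{body.strip()}" for name, body in sections.items()]
    return "\n\n".join(parts)

prompt = structured_prompt({
    "Role": "You are a Db2 for z/OS administrator.",
    "Goal": "List tablespaces in database DB1 above 75% utilization.",
    "Execution Rules": "Read-only queries only; never run DDL.",
    "Output Format": "A two-column table: tablespace name, utilization %.",
    "Examples": "TS1 | 81%",
})
print(prompt)
```

Explicit delimiters between sections help the model distinguish instructions from data, which is one of the cheapest defences against the model conflating the two.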
3. Never Share Sensitive or Proprietary Data: Treat AI interactions like any other external-facing channel. Never share user credentials, passwords, real IP addresses, ports, or any personally identifiable information (PII). Equally important: avoid exposing in-house source code in plain text; convert it to binaries to safeguard your intellectual assets. This includes giving an AI assistant access to a directory that contains source code anywhere within it; be particularly careful with cloned repositories.
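A simple guardrail is to scrub obvious secrets from every prompt before it leaves your network. The sketch below shows the idea with a few illustrative patterns; a real filter would need rules agreed with your own security team:

```python
# Sketch: scrubbing obvious secrets from a prompt before submission.
# The patterns below are deliberately simple examples, not a
# complete or production-grade redaction policy.

import re

PATTERNS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP-REDACTED>"),
    (re.compile(r"(?i)password\s*[:=]\s*\S+"), "password=<REDACTED>"),
    (re.compile(r"(?i)\buser(?:id|name)?\s*[:=]\s*\S+"), "user=<REDACTED>"),
]

def scrub(prompt: str) -> str:
    """Replace anything matching a known secret pattern."""
    for pattern, replacement in PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

clean = scrub("Connect to 10.1.2.3 with userid: IBMUSER password: secret123")
print(clean)
```

Running prompts through a scrubber like this by default means a moment of carelessness does not become a data leak.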
4. Leverage Prompt Engineering Tools: Tools like Promptist and Prompt Genie can meaningfully elevate the quality of your prompts while reducing the time and effort required to build them from scratch. They generate structured, high-impact prompts tailored to your requirements — making your AI interactions more efficient and effective.
5. Review Before You Submit: A prompt submitted in haste is rarely a prompt that delivers. Always review your prompt carefully before submitting. Building a library of organized, task-specific prompt templates can streamline this process and bring consistency to your AI workflows. This also results in more efficient use of AI, saving costly credits and reducing environmental impact.
6. Understand AI’s Capabilities, and Its Limits: AI is a powerful tool, but not an all-purpose, one-stop solution. It is essential to identify which tasks are genuinely well suited to AI and to remain aware of its boundaries. For instance, you cannot ask AI to stand up a new Db2 subsystem, but you can absolutely use it to execute Db2 operations, query databases, or generate meaningful summaries.
7. Break Tasks Down; Don’t Overwhelm: When dealing with large volumes of data or multi-step tasks, divide and conquer. Submit focused, bite-sized prompts rather than one overwhelming request. Oversized prompts can lead to long wait times, incomplete responses, or hitting usage limits — all of which slow down your workflow and add unnecessary complexity.
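The divide-and-conquer approach can be sketched in a few lines: split the input into slices and emit one focused prompt per slice. The batch size and prompt wording here are assumptions to be tuned to your model's context limits:

```python
# Sketch: splitting one oversized request into focused batches so
# each prompt stays within comfortable context limits. The batch
# size of 25 is an illustrative assumption, not a recommendation.

from typing import Iterator, List

def batch_prompts(items: List[str], batch_size: int = 25) -> Iterator[str]:
    """Yield one focused prompt per slice of the input list."""
    for start in range(0, len(items), batch_size):
        chunk = items[start:start + batch_size]
        yield ("Summarise the return codes of these jobs only:\n"
               + "\n".join(chunk))

jobs = [f"JOB{n:05d}" for n in range(60)]
prompts = list(batch_prompts(jobs))
print(len(prompts))  # 60 jobs split into batches of 25 -> 3 prompts
```

Three small prompts that each complete cleanly beat one giant prompt that times out halfway through.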
8. Embrace the Iterative Nature of AI: Interacting with AI is rarely a one-shot exercise — and that’s perfectly fine. Treat every interaction as an opportunity to refine and improve. Iterate on your prompts, learn from each response, and progressively close in on your desired output.
9. Consider the Power of Local LLMs: It’s worth exploring the option of deploying local LLMs within your environment. While it requires an initial investment, the long-term benefits are compelling: stronger data privacy, reduced recurring costs, freedom from third-party dependencies, and full offline availability.
Conclusion: The Future Is a Well-Crafted Prompt
From punched cards to natural language, the journey of human-machine interaction has been one of relentless reinvention. The integration of AI with products like PopUp Mainframe is proof that even the most established technologies, like the z/OS mainframe, are not just keeping pace with change; they are leading it.
Prompt engineering is not merely a technical skill. It is a new literacy for the modern technologist. As AI continues to mature, those who learn to communicate with it clearly, responsibly, and creatively will be best positioned to unlock its full potential, whether that takes place in the cloud, on premises, or on the mainframe. At PopUp Mainframe, this is just the beginning of our AI journey. We are committed to continuously harnessing the power of AI, pushing the boundaries of mainframe innovation, and exploring new possibilities that deliver real, tangible value to our customers. The intersection of AI and mainframe is rich with potential, and we intend to help lead the way.



