Mainframe Interview Questions and Answers for 2024

In a mainframe system, workstations and terminals are computers with varying processor and memory configurations that process workloads and carry out essential tasks. Whether you are a beginner, an intermediate, or an advanced user of the system, these mainframe interview questions and answers can help you build both your confidence and your knowledge of the system. The questions have been organized into a number of categories, including CICS, DB2, COBOL, JCL, VSAM, and IDMS, so that they are accessible to those with a broad range of skill levels. Mainframes can execute billions of instructions per second, making them ideal for large-scale data processing and analysis. While mainframes are now considered a vintage technology, the need for mainframe developers persists due to the strong processing capabilities of these systems. So, before you sit for mainframe developer interview questions, you should have hands-on training on databases.



While mainframes are used for processing massive quantities of data, supercomputers are used for solving large-scale, complex mathematical problems.

  • Powerful performance capabilities 
  • Extremely dependable 
  • Extremely scalable
  • They call for a specialized computer operating system (OS). 
  • They come at a high cost. 
  • They take up a large amount of physical space. 

Distributed Relational Database Architecture is a group of protocols that makes it possible for applications and database systems running on different platforms to communicate with one another. It also makes it possible for relational data to be spread across various platforms.

Any combination of relational database management products that make use of DRDA can be connected to form a distributed relational database management system. DRDA also coordinates communication across different systems by specifying both what is communicated and how it is transferred.

A must-know for anyone heading into a Mainframe interview, this question is frequently asked in Mainframe interview questions with answers. A secondary index serves as an alternate entry point into an IMS database. It is possible to retrieve the necessary data by processing it as a file.

A mainframe computer combines multiple CPUs with large amounts of RAM (memory). In this role, it serves as a central processing unit for a network of connected workstations and terminals. Mainframes are used to process requests from thousands of users, which requires massive data operations in the petabyte range.

Mainframe refers to the housing for the primary memory and the central processing units of a computer system. Requests in e-commerce, finance, education, government, and other sectors are processed in real-time by these computers.

Common Business Oriented Language is the full form of COBOL. To standardise data processing across disparate mainframes, the United States Department of Defense funded the research and development of this procedural, imperative language in the late 1950s (object-oriented features were added much later, in the COBOL 2002 standard). COBOL has the following characteristics: 

  • Because of its English-like syntax, COBOL was one of the first high-level programming languages designed with the user in mind. 
  • It is a popular choice since it can document itself. 
  • It can take in, process, and output massive amounts of data with ease. 
  • It includes strong error message support, which makes it simple to fix problems. 

JCL is an acronym that stands for Job Control Language. JCL is the name of a scripting language that is used for the purpose of giving necessary requirements for the processing of a job. It performs the job of acting as an interface between the IBM Mainframe Operating System and COBOL applications, and it is made up of a collection of control statements. 

JCL statements are responsible for alerting the operating system of the needed input data, providing instructions on the operations that must be performed on that data, and determining what actions must be taken with the outcome of those operations.

The acronym DRDA stands for Distributed Relational Database Architecture. This architecture operates as a connection protocol that is designed for the processing of distributed databases. It was developed by IBM and is also implemented by third-party vendors. The architecture is made up of several protocols that facilitate the connection between different databases and applications.

Expect to come across this popular question in Mainframe Interview Questions. Self-referencing constraints limit the changes that may be made to a primary key that is referenced by a foreign key. To make this possible, the foreign key needs to specify the rule known as DELETE CASCADE. This rule stipulates that if the DELETE rule for the relationship is specified as CASCADE, then when a row is deleted, the dependent rows that reference it are recursively deleted as well.
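The DELETE CASCADE behaviour can be demonstrated outside DB2 as well. The following is a minimal sketch using SQLite from Python; the table and column names are invented for illustration, and SQLite needs foreign-key enforcement switched on explicitly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FK rules only when enabled
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE emp (
    id INTEGER PRIMARY KEY,
    dept_id INTEGER REFERENCES dept(id) ON DELETE CASCADE)""")
con.execute("INSERT INTO dept VALUES (1)")
con.execute("INSERT INTO emp VALUES (10, 1), (11, 1)")

con.execute("DELETE FROM dept WHERE id = 1")  # delete the parent row
remaining = con.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
print(remaining)  # 0 -- the dependent rows were deleted automatically
```

Without the CASCADE rule, the same DELETE would be rejected because dependent rows still reference the parent key.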

The term "spool" refers to the buffering technique known as Simultaneous Peripheral Operations On-Line. This mechanism is used to temporarily store data so that it may be processed and carried out at a later time.

It is possible to determine whether or not a record exists in a table by using the SEARCH and SEARCH ALL commands.


The records in the table are searched for using a linear search method. A sequential search is another name for this method.

The sorted availability of the data is not a prerequisite for using the table in this instance.

The following is the syntax for SEARCH:

SEARCH TABLE-NAME [VARYING {identifier1 | index1}]
    [AT END Statement]
    WHEN Condition Statement
END-SEARCH.

SEARCH ALL utilizes binary search in order to locate the record(s) you are looking for in the table.

The data included in the table must be stored in sorted order (either ascending or descending).

The following is the syntax for SEARCH ALL: 

SEARCH ALL TABLE-NAME
    [AT END Statement]
    WHEN Condition Statement
END-SEARCH.
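The two lookups can be sketched in Python (rather than COBOL) to show why SEARCH ALL demands sorted data while SEARCH does not; the table contents and function names here are made up for illustration:

```python
import bisect

table = [3, 9, 14, 27, 52]  # a sorted table, as SEARCH ALL requires

# SEARCH: a linear (sequential) scan -- no ordering required
def linear_search(items, target):
    for i, value in enumerate(items):
        if value == target:
            return i          # the WHEN branch
    return None               # the AT END branch

# SEARCH ALL: binary search -- the data must be sorted
def binary_search(items, target):
    i = bisect.bisect_left(items, target)
    if i < len(items) and items[i] == target:
        return i
    return None

print(linear_search(table, 27))  # 3
print(binary_search(table, 27))  # 3
```

Binary search halves the candidate range on each probe, which is why SEARCH ALL is faster on large tables but silently wrong if the table is unsorted.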

A portion of COBOL code that specifies the data structures of COBOL programmes is referred to as a COBOL copybook. Before we begin developing business rules, we must first determine data structures that will serve as the basis for the rules that will be written and managed outside the COBOL programme.

The UPDATE cursor is a pointer that enables us to modify or delete the row that was most recently fetched. The FOR UPDATE keyword informs the database server that every row it retrieves through the cursor may be subject to modification or deletion.

SYNC is a Keyword in COBOL that is used for aligning the storage area (data) to a word boundary. A word boundary represents any address that is a multiple of 4. This is done to ensure that calculations are as effective as possible whenever the mainframe server reads data from the word boundary.

IBM is responsible for the development of the DB2 series of data management technologies. If we narrow our focus to databases, IBM's DB2 is a relational database that was first made available to the public in 1983. C, C++, Java, and even Assembly Language were among the numerous programming languages used in the creation of the DB2 database. It runs on several operating systems, including Linux, Unix, and Windows.

The distinction lies in the range as well as the level of precision. The SMALLINT data type stores integers with a precision of up to 15 bits, within the range of -32,768 to +32,767, whereas the INTEGER data type stores integers with a precision of up to 31 bits, within the range of -2,147,483,648 to +2,147,483,647. There is a further data type that may be used to hold integer data: BIGINT, whose range is considerably greater than that of INTEGER.
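These ranges follow directly from the bit widths (one bit is reserved for the sign), as a quick Python check shows:

```python
# Range of a two's-complement signed integer of the given width
def signed_range(bits):
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

print(signed_range(16))  # SMALLINT: (-32768, 32767)
print(signed_range(32))  # INTEGER:  (-2147483648, 2147483647)
print(signed_range(64))  # BIGINT
```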

The primary distinction between the two is that CHAR is fixed-length while VARCHAR is variable-length. A CHAR column always occupies its declared length to store the text (shorter values are padded with blanks), whereas VARCHAR stores only the actual length of the text being stored, which helps save storage. In addition, the CHAR data type has a limit of 254 bytes for its maximum size, whereas the VARCHAR data type has a limit of 4046 bytes for its maximum size.
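A rough Python sketch of the storage difference; the 2-byte length prefix mirrors DB2's VARCHAR layout, and the function names are invented for illustration:

```python
# CHAR(n): always occupies n bytes, blank padded on the right
def char_storage(text, size):
    return len(text.ljust(size))

# VARCHAR: a 2-byte length field plus the actual text
def varchar_storage(text):
    return 2 + len(text)

print(char_storage("BOB", 10))   # 10 -- fixed, regardless of the value
print(varchar_storage("BOB"))    # 5  -- shrinks with shorter values
```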

The UNION command is used to combine two or more SELECT queries, and each SELECT statement may operate on a single table or on several tables at the same time. The primary difference between UNION and UNION ALL is that the former eliminates duplicate rows from the result when it is applied, whilst the latter keeps them.
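The difference is easy to see with any SQL engine. Here is a minimal sketch using SQLite from Python; the table names and contents are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (city TEXT)")
con.execute("CREATE TABLE b (city TEXT)")
con.executemany("INSERT INTO a VALUES (?)", [("Pune",), ("Delhi",)])
con.executemany("INSERT INTO b VALUES (?)", [("Delhi",), ("Mumbai",)])

union = con.execute("SELECT city FROM a UNION SELECT city FROM b").fetchall()
union_all = con.execute("SELECT city FROM a UNION ALL SELECT city FROM b").fetchall()
print(len(union))      # 3 -- the duplicate 'Delhi' is removed
print(len(union_all))  # 4 -- duplicates are kept
```

Because UNION must sort or hash the combined result to remove duplicates, UNION ALL is usually cheaper when duplicates are impossible or acceptable.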

The aggregate function MAX() returns the value that is greater than all of the other values in a set. For instance, if we have a database full of movies, we can apply MAX(rating) to the rating column, and it will return the highest rating present (the value itself, not the rows that contain it). The same applies to any other column. A CHAR column may, in fact, make use of the MAX function; character values are compared according to the collating sequence.
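A short sketch of both cases, again using SQLite from Python with an invented movies table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE movies (title TEXT, rating INTEGER)")
con.executemany("INSERT INTO movies VALUES (?, ?)",
                [("Alpha", 7), ("Bravo", 9), ("Zulu", 5)])

top_rating = con.execute("SELECT MAX(rating) FROM movies").fetchone()[0]
top_title = con.execute("SELECT MAX(title) FROM movies").fetchone()[0]
print(top_rating)  # 9 -- the single largest value, not a set of rows
print(top_title)   # Zulu -- character values compare by collating sequence
```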

The following are the three primary categories of data that may be stored in COBOL programmes: 

  • Numeric data types are those that are used to represent numerical values ranging from 0 to 9. 
  • The letters A through Z are represented by the alphabetic data type, which may be used to store any alphabetic value. 
  • The alphanumeric data type is one that can represent both numerical and alphabetic information.

The 01 level is the record level. We are unable to repeat the record itself, but we are able to repeat the fields included inside it. The OCCURS clause specifies that the definition of a data name is repeated several times. As a result of this, we are unable to make use of the OCCURS clause at the 01 level.

The CALL command is a standard COBOL verb that executes a separate programme and then returns. The LINK command is similar to CALL but is not part of the standard COBOL verb set. While CALL is performed within a single run unit, the LINK command operates like several independent run units.

When it is necessary to REWRITE a record, the file should first be opened and the record read from the file; only then can the record be rewritten. Because of this, the file should always be opened in I-O mode.

It is necessary for us to employ scope terminators if we are working with in-line PERFORMS or EVALUATE statements. It is advised as it makes it easier to comprehend the code and is considered to be an effective coding technique.


The functionality of INCLUDE and COPY is different, although both are tools for expanding source code. INCLUDE is expanded before the compiler is run (by the precompiler), whereas COPY is expanded during the compilation process.

A common question in Mainframe Interview Questions, don't miss this one. A scenario is referred to as a deadlock when two distinct processes compete for the same resources, or for resources that have been reserved for each other. -911 and -913 are the SQL error codes indicating a deadlock.

According to the requirement for referential integrity, consistency must be maintained between primary keys and foreign keys at all times. In other words, there must be a corresponding primary key for every foreign key.

The COMMON attribute is typically used in nested COBOL programmes. If the COMMON attribute is not specified, other nested programmes will be unable to call the programme. PGMNAME is a good example of a COMMON programme.

In static SQL, the statement is complete before the program is executed, and its operational form persists after the execution of the program has been completed. Before being compiled, a program that contains static SQL statements must be processed by an SQL precompiler.

Since the operational form of a dynamic SQL statement is not fixed throughout the execution of the application, this sort of statement is said to be "prepared during execution." The statement is supplied as a character string that is passed to DB2 using either the PREPARE statement or EXECUTE IMMEDIATE. 

We can, in fact, MOVE it, and doing so won't cause any problems as long as the operation is simply a MOVE. However, if the field is used as a part of arithmetic calculations, the application may fail to function properly on occasion.

If the name of the programme is explicitly coded in the CALL statement, then the call is deemed to be static. If, on the other hand, a working-storage variable holding the programme name is referenced in the CALL statement, then the call is considered to be dynamic.

The DBD restricts access to only one object at a time. Lock contention is said to have occurred when multiple tasks concurrently request permission to use the same object.

The sign is kept in the last nibble. For instance, if our number is 100, the final byte will hold hex 0C; if our number is 101, it will hold hex 1C; if our number is 102, hex 2C; if the number is -101, hex 1D; and so on.
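The packing scheme can be reproduced in Python. This is a sketch of COMP-3 (packed decimal) layout, not IBM's actual implementation: one digit per nibble, with C for a positive sign and D for a negative sign in the final nibble:

```python
# Pack a signed integer the way COBOL COMP-3 stores it: one decimal
# digit per nibble, sign (C = plus, D = minus) in the last nibble.
def pack_comp3(value):
    digits = str(abs(value))
    if len(digits) % 2 == 0:      # pad so digits + sign fill whole bytes
        digits = "0" + digits
    sign = "d" if value < 0 else "c"
    return bytes.fromhex(digits + sign)

print(pack_comp3(100).hex())   # 100c -- the last byte is 0C
print(pack_comp3(-101).hex())  # 101d
```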

While it is possible to keep files inside a DIRECTORY in addition to another DIRECTORY that also contains files, a PDS only includes its own members and does not include any other PDSs.

One of the most frequently asked Mainframe interview questions, be ready for it.

  • Using JCL, we can remove as well as create a large number of data sets, VSAM Clusters (which stand for "Virtual Storage Access Method"), and GDGs (which stand for "Generation Data Groups"). 
  • It is possible to do file comparisons using distinct PDS (Partitioned Data Set) members using it. 
  • We can combine and organize a wide variety of data files with it. 
  • Additionally, it is able to execute and compile batch-based applications, which are pre-scheduled programmes that are assigned to run on a computer without any further intervention from the user. 
  • JCL is simple to modify and is also easy for newcomers to understand. 
  • In addition to this, it offers a variety of useful utilities, such as IEBCOPY, IDCAMS, and other similar programmes, which make it simpler and more practical to carry out a variety of activities. 

Any parameter that changes with each run of a procedure should, in general, be coded as a symbolic parameter. Symbolic parameters give the procedure more flexibility: it will not be necessary to modify the procedure each and every time there is a small but recurring change at a certain location.

A symbolic parameter is a string of 1 to 7 alphanumeric characters preceded by an ampersand (&). The character immediately after the ampersand must be alphabetic. In JCL statements, symbolic parameters are only allowed to appear in the operand field; they are not allowed anywhere else. 

JCL statements do not allow symbolic parameters to appear in the name or operation fields. If the same symbolic parameter is assigned a value more than once on a PROC or EXEC statement, only the first assignment is used.

Temporary datasets are datasets (files that store one or more records) that are required only for the duration of the job and are deleted once it finishes. They need storage only for the length of the job.

These datasets are often expressed using the notation DSN=&&name, or else they are left unspecified in terms of a DSN. Using them, we are able to transmit the results of one phase to the subsequent steps of the same project. 

By specifying TYPRUN=SCAN on the job card or by executing JSCAN, it is possible to examine JCL syntax without actually running the programme. The TYPRUN parameter is used to request special job processing, such as checking or scanning a job for syntax mistakes. 

SCAN examines the code for any syntax mistakes but does not carry out the job itself. Likewise, JSCAN is able to verify the syntax of a JCL without actually executing it. 

A job timeout happens if a programme takes more time to complete than the allotted maximum amount of time for the specific job class that was chosen. An "S322 abend" is the common name for this kind of error. Because the time limit has been exceeded, the application cannot be finished in this scenario. 

If the amount of data that has to be processed by the programme is very large and requires a longer period of time to accomplish its goals, the TIME parameter can be specified as TIME=1440, which tells the system to apply no time limit to the job. 

A Mainframe basic interview question, be prepared to answer this one. CICS stands for Customer Information Control System. All of the administration of IBM's online transactions is brought within the purview of this sort of system. CICS supports a mode of processing that is mostly launched by a single request that may influence one or more objects. 1969 was the year that saw the beginning of the CICS's establishment.

PPT is an abbreviation for the Processing Program Table in CICS. The PPT includes information such as the name of a programme, the names of its mapsets, the task usage counter, the language, the size, the primary storage location, and so on.

  • TCP
    • TCP stands for Terminal Control Program. 
    • TCP is used for terminal message reception. 
    • It maintains hardware communication needs. 
    • It requests CICS to initiate tasks. 
  • KCP
    • KCP stands for Task Control Program. 
    • KCP is used to regulate the execution of jobs as well as their associated attributes at the same time. 
    • It addresses all aspects of multitasking. 
  • PCP
    • PCP stands for Program Control Program. 
    • PCP is a software that is used to find and load programmes for execution. 
    • It transfers control across programmes and then restores control to the CICS. 
  • FCP
    • File Control Program (abbreviated as FCP) is a common acronym for this kind of software. 
    • Application developers rely on FCP to do tasks like reading, writing, and modifying files. 
    • For the sake of consistent data storage and update operations, it retains exclusive authority over the records. 
  • SCP
    • Storage Control Program is an abbreviation for this. 
    • Within a CICS area, it regulates storage allocation and deallocation. 
  • A transaction identifier is a one-of-a-kind identifier that is often used to begin or carry out a certain activity. 
  • A transaction identifier is a four-character entry, and no duplication is permitted in the names of transactions. 
  • It is possible for a transaction to be initiated simultaneously from distinct terminals, but not from the same terminal. 
  • When a transaction is performed, there will often be one or more programmes mapped to it that need to be carried out in order for the transaction to be completed successfully. 

The acronym "FCT" refers to the "File Control Table." The file control table must have entries for each and every kind of VSAM file that is used by the CICS applications. The FCP, which stands for "File Control Program," mostly refers to the FCT.

The FCT includes the following fields: ACCMETH (the data access method), DATASET (the dataset name), FILE (the file name), SERVREQ (the operations to be performed), FILSTAT (the initial file status), BUFND (the number of data buffers), and BUFNI (the number of index buffers). 

VSAM (Virtual Storage Access Method) is both a form of dataset and an access method for managing a variety of dataset types. As an access technique, VSAM provides higher functionality, performance, and adaptability than conventional disc access methods. In VSAM, records are stored in a format that is incomprehensible to other systems. VSAM arranges data on mainframe systems into files. This is one of the most prevalent high-performance file access mechanisms used in operating system variants such as MVS, z/OS, and OS/390.  

With VSAM, businesses may arrange file records in physical order (the order in which they were entered) or logical order using a key (such as the employee ID number) or based on their relative record numbers on DASD (Direct Access Storage Device). In VSAM, record lengths may be fixed or flexible. 

Integrated Data Cluster Access Method Services is what "IDCAMS" stands for in its acronym form. The IDCAMS Utility makes it simple to perform manipulations on VSAM datasets. It is possible to use it to create new VSAM datasets, delete existing ones, and change existing ones.

 In contrast to non-VSAM data sets, VSAM data sets are structured differently. The groupings of records that make up the VSAM data sets are organised into CI categories (control intervals). The control interval is a predetermined section of storage space where VSAM saves its records. This section is always the same size. 

A VSAM record may be no longer than one cylinder in length at most. Records may be organised in VSAM according to the index key, the relative record number, or the relative byte address, depending on which one is most relevant.

The order in which entries were added to the dataset is the order in which they are stored in ESDS files. The records may be located using their physical address, which is also referred to as their RBA (Relative Byte Address). Assuming that each record in an ESDS dataset has sixty bytes, the RBA of the first record would be zero, the RBA of the second record would be sixty bytes, the RBA of the third record would be one hundred twenty bytes, and so on.  

Access to records may be made in a sequential fashion using RBA, which is also known as addressed access. The data is saved in the same order as it was input into the system. The most recent records are appended to the end of the list. The ESDS dataset does not allow for the deletion of records; nevertheless, such records may be tagged as inactive. The length of records in an ESDS dataset may either be variable or fixed. 
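The RBA arithmetic described above is simple to express; this sketch assumes the fixed 60-byte records used in the example:

```python
# RBA (Relative Byte Address) of record n (0-based) in an ESDS
# dataset with fixed-length records: simply n * record_length.
def rba(record_number, record_length=60):
    return record_number * record_length

print(rba(0))  # 0   -- first record
print(rba(1))  # 60  -- second record
print(rba(2))  # 120 -- third record
```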

VSAM share options decide how VSAM datasets may be shared. Using these controls, you may provide varying degrees of access to a single VSAM dataset for various users/jobs. The parameter used in the DEFINE statement is SHAREOPTIONS(a b). The cross-region share option, denoted by a, governs sharing among two or more users/jobs on the same MVS system, whereas the cross-system share option, denoted by b, governs sharing among users/jobs on separate MVS systems.

Typically, the value is SHAREOPTIONS(2 3). If the cross-region value is 2, then numerous users may read the file at once, but only one may update it. If the cross-system value is 3, then any number of jobs or users may utilise this file for processing (VSAM does not ensure integrity).

The DBKEY is used to identify a record instance inside the database at a certain address position. It combines a page number with a line number: in a four-byte address such as 3:1, 3 represents the page number and 1 represents the line number. Page numbers are always distinct in each and every one of the areas. 

  • Direct Access: The record is persisted whenever the DBKEY is given, provided that the place in question is vacant. 
  • Sequential Access: This kind of access saves each instance of a record one after the other and line by line. 

IDMS/R scans through all the available records in an area in physical sequence when performing an area sweep. The order of record retrieval bears little relationship to any logical sequence of records. Area sweeps are utilized when all the records in an area need to be retrieved irrespective of order. Extraction jobs are commonly implemented with area sweeps.

One of a Kind is the full form of OOK. It contains IDD's meta schema, which is generated during the course of the installation process. A unique name is assigned to each database, and each OOK record has its own unique location to store the database name as well as other information.

A junction record is a member record type that allows many-to-many relationships to be established between its two owner records. For instance, in the database of a corporation, the record for DEPARTMENT serves as a junction record for the EMPLOYEE record type and the PROJECT record type.

Files are used as the primary organisational mechanism for databases. 

  • Mapping the files to the sections that have already been formatted is done. 
  • Each of the regions is broken down onto a page. 
  • The physical blocks that make up the disc are represented by each page. 
  • The database administrator (DBA) sets the records that are to be kept in each area as well as how those records are to be stored, and then allots a certain number of pages in a file for that area. 
  • In order to cut down on the amount of I/O that is needed, free space is only monitored globally across all of the pages.


A staple in Mainframe Interview Questions, be prepared to answer this one. The distinction between the index and the subscript:


  • The value of the index is the displacement of the element from the start of the table. 
  • The index does not need a separate declaration; it is declared using the INDEXED BY clause. 
  • It allows for faster access to the data in the table. 
  • The index is initialised using the SET statement. 
  • You may bring the index down by using the SET DOWN BY statement, and you can bring it up by using the SET UP BY statement. 


  • The value of the subscript is the occurrence number of the array element. 
  • A separate declaration along the lines of S9(04) COMP in the WORKING-STORAGE SECTION is necessary for a subscript. 
  • It is slower for accessing the various data items. The subscript is initialised with the help of the MOVE statement. When you want to raise the subscript, use the ADD statement, and when you want to reduce it, use the SUBTRACT statement. 

The JOBLIB DD statement determines the location of the programme that is executed by the EXEC statement. The statement applies to each and every step of the job as a whole, but it cannot be applied to cataloged procedures. 

STEPLIB operates in a manner similar to that of JOBLIB and is consulted to ascertain the dataset in which the programme is contained. It is solely relevant to the job step in which it is coded, not to the whole job. STEPLIB may be coded in any step, including steps of cataloged procedures. 

In the event that both STEPLIB and JOBLIB are being utilised, the system will give precedence to STEPLIB and will ignore JOBLIB for that step.

It's no surprise that this one pops up often in Mainframe Interview Questions. Faulty data in an uninitialized numeric item is the most common root cause of the SOC7 error. Through certain settings, such as the use of assembler routines that invoke operating system services, we are able to get dumps of run-time abends. With the help of these dumps, we are able to determine the precise location of the instruction at which the abend occurred.  

With the fault offset of the instruction, we can check the XREF output of the compile listing and get the line number as well as the verb of the source code. We can capture runtime dumps by declaring datasets such as SYSABOUT in the JCL.  

In addition to that, we are able to make advantage of the setups' included debugging capabilities. When none of the approaches seem to be working, the next step is to pinpoint the precise site of the problem by relying on sound judgement and an awareness of how the system was designed.

  • Ensure that the database has all of the required tables. 
  • Create the DCLGEN (Declaration Generator). This step is optional; do it only if it is really necessary. 
  • Precompile the programme. 
  • Compile the programme and perform the link-edit. 
  • Perform the DB2 BIND. 
  • Run the programme. 

The COMMIT statement is used to release the locks acquired for a particular unit of work, thereby making the resources available to other units of work. If COMMITs are not incorporated into the programme and it fails, processing must be redone from the very first insert made during the run, rather than from the small number of inserts since the most recent commit. This can add two to three times as much time to the total needed to rerun the programme. 

The z/OS system uses JCL to handle work by specifying which application should be run, allocating resources, etc. A job is a job description in the broad sense. The z/OS system utilizes the Job Entry Subsystem (JES) to manage the input, scheduling, processing, and output of tasks.

Input Phase: The tasks are input during the input phase using input devices such as remote terminals, card readers, job entry networks, etc. For job submission, JES2 can also read instructions and control statements from internal readers that may be utilised by other applications. 

JES2 assigns a job identifier to each job as it reads the input stream, and then writes the job's JCL, control statements, SYSIN data, and so on to spool data sets on DASD. The spooled data sets can be accessed directly, allowing for simultaneous task execution, and the spool can be used as a temporary storage space for unfinished operations. Later, JES2 selects tasks from this spool to run. 

Conversion Phase: In the conversion process, a converter software is used to examine each JCL command and validate the syntax of the resulting code. 

JES also checks for the presence of procedure calls in the JCL. The converter is a programme that takes the job control language (JCL), merges it with procedures from a procedure library such as SYS1.PROCLIB, and then transforms the resulting composite JCL into internal text that is saved in the spool. 

When JES2 detects a mistake during this transformation, it generates and stores an error message in a buffer for further processing. 

If there are no problems with the work, it will be placed in a queue according to its class priority. 

Execution Phase: Initiators are begun during this period, either automatically or with the help of an operator, when the system is first turned on. When everything is set, the initiator will send a request to JES2 to begin the work. 

Based on the priority set for the order of execution, JES2 chooses the work and sends it to the initiator. 

The initiator invokes the interpreter, which builds control blocks from the internal text produced during conversion. It also allocates the resources necessary to carry out the step. 

Once the required resources are available, the programme named in the JCL EXEC statement begins running. 

Output Phase: After the programme has executed, system messages are returned to the user during the output phase. JES2 matches the job's output characteristics against the output specifications and places the output on a queue for printing or punching. This queue, which can hold records destined for local or remote processing, is termed the output queue. 

Once output processing is finished, jobs are prepared for the purge phase and placed on the purge queue. 

Purge Phase: When JES2 selects a job from the purge queue, it releases the spool space and any other resources the job was using. When the purge is complete, JES2 notifies the operating system that the job is gone. 
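As a hypothetical illustration of the flow described above, here is a minimal job that JES2 would carry through all five phases. The job name, account field, and data set name are made up; IEFBR14 is the standard do-nothing IBM utility:

```jcl
//MYJOB    JOB (ACCT),'PHASE DEMO',CLASS=A,MSGCLASS=X
//* JES2 reads this JCL in the input phase, converts it to
//* internal text, queues it by CLASS, runs it through an
//* initiator, spools the output, and finally purges it.
//STEP1    EXEC PGM=IEFBR14
//DD1      DD DSN=MYUSER.DEMO.DATA,DISP=(NEW,CATLG,DELETE),
//            SPACE=(TRK,(1,1)),UNIT=SYSDA
```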

One way to accomplish this is with the RETURN-CODE special register, which can be used to pass information from the COBOL programme back to the JCL. We can use it to determine the outcome of any step.

In most circumstances the programme returns the value 0, 4, 8, or 12, depending on how successful it was. By setting this register, which guarantees that the information is passed from COBOL to JCL, we can take the appropriate action in subsequent steps.
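A hedged sketch of how this looks in the COBOL programme (the flag name is illustrative):

```cobol
      * Set the return code before ending the program.
       IF WS-ERROR-FOUND
           MOVE 8 TO RETURN-CODE
       ELSE
           MOVE 0 TO RETURN-CODE
       END-IF.
       GOBACK.
```

On the JCL side, a later step can test the value with COND, e.g. `//STEP2 EXEC PGM=NEXTPGM,COND=(4,LT,STEP1)`, which bypasses STEP2 if 4 is less than STEP1's return code.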

The COBOL programme itself handles an internal sort, using an input file, a work file, and an output file. Any change to the sort logic requires recompiling the COBOL programme. Internal sort uses two distinct syntaxes: 

  • The USING and GIVING forms require no further processing of the file. 

  • INPUT PROCEDURE and OUTPUT PROCEDURE sorts are able to manipulate data either before or after the sorting process. 
  • The external sort does not rely on COBOL; instead, it uses the SORT utility directly, invoked from JCL with PGM=SORT.
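For example, an external sort step of this kind might look like the following sketch (data set names and the sort field are illustrative):

```jcl
//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MYUSER.INPUT.FILE,DISP=SHR
//SORTOUT  DD DSN=MYUSER.OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(5,1)),UNIT=SYSDA
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
/*
```

Here the control card sorts the records on a 10-byte character field starting in column 1, in ascending order.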

When information is requested, the central processing unit (CPU) first checks main memory, also known as random access memory (RAM), to see whether the requested information is present. If it is absent, the CPU performs paging against secondary memory: it reads data from the hard drive in equal-sized chunks of memory referred to as pages.

In paging, a frame is a block of physical memory that holds a single page, and frames do not need to be physically contiguous with one another. This allows memory to be allocated flexibly and data to be brought in from secondary memory quickly.

When two or more mainframe programmes try to obtain an exclusive lock on the same resource at the same time, a deadlock results because neither programme can proceed until the other releases its lock. When this happens, after a predefined deadlock timeout the system can roll back the active unit of work of one of the programmes, terminating that programme so the others can continue.

The necessary operations may be carried out using the IEBEDIT utility. It provides options to either INCLUDE or EXCLUDE the steps that need to be run. If instead the goal is to skip steps within a PROC, the appropriate COND parameters must be used.
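A hedged sketch of such an IEBEDIT step (the data set names and step names are illustrative):

```jcl
//EDITSTEP EXEC PGM=IEBEDIT
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MYUSER.JOB.JCL,DISP=SHR
//SYSUT2   DD SYSOUT=(*,INTRDR)
//SYSIN    DD *
  EDIT TYPE=INCLUDE,STEPNAME=(STEP2,STEP4)
/*
```

SYSUT1 holds the original JCL, SYSUT2 routes the edited JCL to the internal reader for submission, and the EDIT statement selects only STEP2 and STEP4; TYPE=EXCLUDE would do the opposite.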

If a specific key is to be used as the starting point for the backward browse, move that key into the RIDFLD field. 

  • Issue the STARTBR (start browse) command. 
  • Issue a READNEXT command, which begins reading from the key that was supplied. 
  • Now issue the READPREV command, and place this read in a loop, continuing until the required condition is satisfied. 
  • To read the file in reverse order starting from the last record, move HIGH-VALUES into the key field and follow the same approach as described above. 
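The steps above can be sketched as follows; the file name, key value, and working-storage names are illustrative:

```cobol
      * Browse EMPFILE backwards from a given key (sketch).
           MOVE 'EMPKEY05' TO WS-RIDFLD.
           EXEC CICS STARTBR FILE('EMPFILE')
                RIDFLD(WS-RIDFLD) END-EXEC.
           EXEC CICS READNEXT FILE('EMPFILE')
                INTO(WS-REC) RIDFLD(WS-RIDFLD) END-EXEC.
           PERFORM UNTIL WS-DONE = 'Y'
               EXEC CICS READPREV FILE('EMPFILE')
                    INTO(WS-REC) RIDFLD(WS-RIDFLD)
                    RESP(WS-RESP) END-EXEC
               IF WS-RESP NOT = DFHRESP(NORMAL)
                   MOVE 'Y' TO WS-DONE
               END-IF
           END-PERFORM.
           EXEC CICS ENDBR FILE('EMPFILE') END-EXEC.
```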

In this step we can use SORTWK01, SORTWK02, and so on. A minimum of three work datasets is usually required, and in general the number of sort work datasets depends on the size of the dataset being sorted. 

During the sorting process, the following files are utilised: 

Input File: It is necessary to declare the FD entry for the input file in the FILE SECTION. 

Work-file: This is a temporary file that will be used while the sorting procedure is being carried out. It is recommended that the SD (Sort Description) entry for the file be specified in the FILE SECTION. 

Output File: The file to which the sorted records are written. 

Basic syntax:  

SORT work-file 
    ON ASCENDING KEY sd-rec-key-1 [, sd-rec-key-2]… 
    USING input-file 
    GIVING output-file. 
Format of SD entry: 
SD work-file 
[DATA RECORD IS file-rec-1] 
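Putting the pieces together, a minimal internal sort might look like this sketch (file, record, and key names are illustrative):

```cobol
       SD  WORK-FILE.
       01  WORK-REC.
           05 WS-SORT-KEY   PIC X(10).
           05 FILLER        PIC X(70).
      * ...
       PROCEDURE DIVISION.
           SORT WORK-FILE
               ON ASCENDING KEY WS-SORT-KEY
               USING  INPUT-FILE
               GIVING OUTPUT-FILE.
           STOP RUN.
```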

Yes, this is possible using REDEFINES. It is essential to bear in mind that REDEFINES does nothing more than guarantee that the fields begin at the same storage location.

01 WS-TOP PIC X(100). 
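For instance, a hedged sketch of redefining such a field (the subordinate field names are illustrative):

```cobol
       01  WS-TOP            PIC X(100).
       01  WS-TOP-RED REDEFINES WS-TOP.
           05 WS-FIRST-CODE  PIC X(2).
           05 WS-REST        PIC X(98).
```

Both layouts describe the same 100 bytes of storage, so moving data into WS-TOP makes it immediately visible through WS-FIRST-CODE and WS-REST.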

In the SORT step, we can specify the names of the input and output datasets on the SORTIN and SORTOUT DD statements. To copy data from one dataset to another, the sort card in SYSIN must contain SORT FIELDS=COPY.
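A hedged sketch of such a copy step (data set names are illustrative):

```jcl
//COPYSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MYUSER.SOURCE.DATA,DISP=SHR
//SORTOUT  DD DSN=MYUSER.TARGET.DATA,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(5,1)),UNIT=SYSDA
//SYSIN    DD *
  SORT FIELDS=COPY
/*
```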

We can make use of the RD parameter on the JOB or EXEC statement, after specifying in the SCHEDxx member of the system parmlib for which abend codes a RESTART should be performed.

The maximum amount of data that can be passed through COMMAREA is 32 kilobytes. If more data has to be transmitted, TSQs (temporary storage queues) can be used to hold the data that exceeds 32 kilobytes. Alternatively, obtain storage with the GETMAIN command and pass the address of the acquired storage in the COMMAREA. 

Channels and Containers, a newer mechanism introduced in CICS Transaction Server version 3.1, can also be used; they allow more flexibility when transmitting larger volumes of data.

COMMAREA is one of the essential facilities that CICS offers. It is used to pass data between two programmes participating in a transaction, or between two transactions taking place at the same terminal. It is a special kind of user storage that can hold at most 32 kilobytes of data, although 24 KB is the maximum suggested size.
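A hedged sketch of passing data via COMMAREA (the programme name and layout are illustrative):

```cobol
      * Calling program: pass up to 32 KB via COMMAREA.
       01  WS-COMMAREA       PIC X(200).
      * ...
           EXEC CICS LINK PROGRAM('PROGB')
                COMMAREA(WS-COMMAREA)
                LENGTH(LENGTH OF WS-COMMAREA)
           END-EXEC.
```

The called programme (PROGB here) receives the same storage addressed as DFHCOMMAREA in its LINKAGE SECTION.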

We can use the NOTIFY parameter on the JOB statement, which informs the user when the job has finished and provides the return code. 

NOTIFY=userid, where userid is the user to whom the status is to be notified. 

If my DB2 table has an identity column, we can use it as follows to update, say, the first 100 rows: 

UPDATE table_name SET column_name = 'XXX' WHERE identity_column < 101;


Top Mainframe Interview Tips and Tricks

Here are the top tips to help you prepare for Mainframe interview questions, from tricky questions for experienced candidates down to the basics for freshers. 

  • Be ready to explain why mainframes call for such broad language support; this is why mainframe production support interview questions matter. 
  • If one needs to clear a Mainframe interview, he or she must go through all the questions and clear the basics.  
  • You will be able to obtain your ideal job if you begin with the fundamental questions and devote enough time to the intermediate and advanced levels. 
  • Pay close attention to what the interviewer is attempting to ask, and maintain your composure while responding to their questions. 
  • Keep a sturdy pen and a compact notepad with you at all times. You may use it to describe the flowchart as well as the procedure. 
  • Make sure you leave the interview with a positive impression. 

How to Prepare for Mainframe Interview?

To prepare for Mainframe interview questions and answers, you need proper step-by-step guidelines. Here is a sound way to prepare for the Mainframe interview: 

  • Examine the description of the job. 
  • Think about whether or not you are qualified for the position, and research the organization. 
  • Make sure you have a list of anticipated questions for the interview. 
  • Perform simulated job interviews. 
  • Make sure that your responses are succinct and on point.

Job Roles

  • System programmers 
  • System administrators 
  • Application designers and programmers 
  • System operators 
  • Production control analysts

Top Companies

  • TCS 
  • Infosys 
  • IBM 
  • Capgemini 
  • Cognizant 
  • BNY Mellon 
  • Wipro 
  • DXC Technology 
  • Tech Mahindra 

What to Expect in Mainframe Interview?

Recruiters often have preliminary conversations with potential candidates over the phone before inviting them for a more in-depth interview that includes Mainframe testing interview questions and scenario-based Mainframe interview questions. After a series of inquiries, the person in charge of recruiting may ask whether you have any questions for them about the firm or the role. Depending on your years of experience, the interviewer asks questions ranging from basic to advanced.

You may have to answer questions of varying degrees of difficulty, depending on the organization and the job you are seeking. The interviewer will also ask Mainframe technical interview questions and DB2-related Mainframe interview questions. After the first inquiry they will question the applicant thoroughly, but you should not allow yourself to become nervous about it. Recruiters look for candidates who respond quickly and thoughtfully, so you must have your fundamentals clear.


The word "mainframe" refers to a kind of exceptionally powerful computer that can handle extraordinarily large volumes of data and transactions in a dependable and consistent way (a transaction is a discrete computer operation that must be completed in its entirety and cannot be subdivided into separate tasks). One of their key applications is managing the massive data flows in information systems used by businesses and governments, an area in which they are highly useful.

Many academic and research organizations continue to include significant numbers of mainframes in their computing infrastructure. Mainframes provide several advantages, including a large amount of processing power, strict access controls, and the ability to use thin clients in a host-terminal configuration. That said, this model dates from a period when in-house terminals were the only means of reaching the mainframe, and the situation has substantially evolved since then.

One of the core objectives of mainframe design is handling large volumes of input and output (I/O); mainframe design also emphasizes the amount of data processed. A single mainframe can carry out the work of dozens or even hundreds of separate servers at the same time. Going for the KnowledgeHut online Database course will give you real-world, job-ready skills to work with a database. 

Read More