IQAC in North-Eastern Hill University

Constitution of IQAC in NEHU
NAAC SSR 2015
Addendum SSR 2014-15 & 2015-16
Minutes of IQAC Meetings
Revised IQAC Guidelines 2019

IQAC Guidelines 2007-12
IQAC Conference 2017 Application Forms
List of Affiliated Colleges of NEHU

NAAC Accreditation

News Updates

Workshops

Formats of Questionnaires and Report Writing

Assessment/Report Formats

Some Important & Related Hyperlinks

NEHU Official Website
NAAC Official Website

Links to IQAC in Some Other Universities/Institutions

Andhra University
Assam University
Devi Ahilya Vishwavidyalaya
M.S. University, Baroda
Mizoram University
Vellore Institute of Technology
Vidyasagar University

Quality Parameters in Higher Education


What are the parameters of quality in higher education? This question is only partly answered in the UGC's document, Higher Education in India: Emerging Issues Related to Access, Inclusiveness and Quality. Beyond that document, one set of quality measures uses the ratios of the educational 'inputs' of an institution of higher education to its 'outputs' as 'indicator' ratios. Another set concentrates on the quality of the inputs and of the outputs themselves. Yet another set brings in the 'process' through which inputs are transformed into outputs. Institutions of higher education are, of course, multi-input and multi-output processing units. Some researchers on quality in higher education stress the input side, others the output side, while yet others argue for concentrating on the process. Quality Research International provides rich material on the definition and measurement of quality and performance in higher education. Some abridged and suitably compiled views on these concepts are presented below.
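
To make the idea of 'indicator' ratios concrete, the short Python sketch below computes two such input-to-output ratios. The institution, the field names and all of the figures are invented for illustration; they are not prescribed measures.

    # Hypothetical input and output figures for one institution (illustrative only).
    inputs = {
        "teaching_staff": 420,            # filled teaching posts
        "annual_expenditure_inr": 1.8e9,  # total annual expenditure (INR)
    }
    outputs = {
        "graduates": 3150,                # degrees awarded in the year
        "publications": 760,              # research papers published in the year
    }

    # Simple indicator ratios: outputs per unit of input.
    graduates_per_teacher = outputs["graduates"] / inputs["teaching_staff"]
    cost_per_graduate = inputs["annual_expenditure_inr"] / outputs["graduates"]

    print(f"Graduates per teacher: {graduates_per_teacher:.1f}")
    print(f"Expenditure per graduate (INR): {cost_per_graduate:,.0f}")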
 
Harvey and Green (1993) argued that there could be five discrete but interrelated ways of thinking about quality. Harvey (1995) provides the following brief overview of the five categories: (a) the exceptional view of quality, (b) quality as perfection, (c) quality as fitness for purpose, (d) quality as value for money, and (e) quality as transformation.
 
The exceptional view of quality sees quality as something special. Traditionally, quality refers to something distinctive and élitist and, in educational terms, is linked to notions of excellence, of 'high quality' unattainable by most.
 
Quality as perfection sees quality as a consistent or flawless outcome. In a sense it 'democratises' the notion of quality: if consistency can be achieved, then quality can be attained by all.
 
Quality as fitness for purpose sees quality in terms of fulfilling a customer's requirements, needs or desires. Theoretically, the customer specifies requirements. In education, fitness for purpose is usually based on the ability of an institution to fulfil its mission or a programme of study to fulfil its aims.
 
Quality as value for money sees quality in terms of return on investment. If the same outcome can be achieved at a lower cost, or a better outcome can be achieved at the same cost, then the 'customer' has a quality product or service. The growing tendency for governments to require accountability from higher education reflects a value-for-money approach. Increasingly, students themselves expect value for money for the rising cost of higher education that they bear.
 
Quality as transformation is a classic notion of quality that sees it in terms of change from one state to another. In educational terms, transformation refers to the enhancement and empowerment of students or the development of new knowledge.
 
Based on these five views of quality, various external review indicators, performance indicators, quality statistics and benchmarks may be defined.
 
External review indicators: Operational variables referring to specific, empirically measurable characteristics of higher education institutions or programmes on which evidence can be collected to determine whether or not standards are being met. Indicators identify performance trends and signal areas in need of action, and/or enable comparison of actual performance with established objectives. They are also used to translate theoretical aspects of quality into measurable terms, a process known as operationalization. An indicator must be distinguished from a measure, which is the data used to determine the level of performance of an attribute of interest, and from a standard, which is the level of acceptable performance in terms of a specific numeric criterion. Another distinction is made between different types of indicators: (i) indicators of economy (following and respecting budgets); (ii) indicators of efficiency (actual productivity or output per input unit); and (iii) indicators of effectiveness (degree of attainment of objectives). A third distinction is made between: (i) context indicators, which relate to the specific environment of a higher education institution or programme (social, economic, political, geographical, etc.); (ii) input indicators, which relate to the logistical, human, and financial resources used by a higher education institution; (iii) process indicators, which refer to the use of resources by a higher education institution, to the management of the inputs, and to the functioning of the organization; and (iv) output indicators, which concern the actual achievements or products of the higher education institution. This latter framework is also known as the CIPO model (Context, Inputs, Process, Outputs) and is frequently used in evaluation studies (Vlăsceanu et al., 2004, pp. 38-39).
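
The distinction drawn above between a measure, a standard and an indicator, together with the CIPO categories, can be sketched in a few lines of Python. Everything below (the class, the category labels, the numbers and the thresholds) is a hypothetical illustration of the definitions, not an official NAAC or UNESCO scheme.

    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        category: str        # one of the CIPO categories: "context", "input", "process", "output"
        measure: float       # the observed value (the "measure")
        standard: float      # the acceptable level of performance (the "standard")
        higher_is_better: bool = True

        def meets_standard(self) -> bool:
            # The indicator proper: judging the measure against the standard.
            if self.higher_is_better:
                return self.measure >= self.standard
            return self.measure <= self.standard

    indicators = [
        Indicator("Budget utilisation (economy)", "input",
                  measure=0.97, standard=1.00, higher_is_better=False),
        Indicator("Graduates per teacher (efficiency)", "output",
                  measure=7.5, standard=6.0),
        Indicator("Programme objectives attained (effectiveness)", "process",
                  measure=0.82, standard=0.80),
    ]

    for ind in indicators:
        verdict = "meets" if ind.meets_standard() else "falls short of"
        print(f"{ind.name} [{ind.category}]: {ind.measure} {verdict} the standard of {ind.standard}")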
 
Performance indicators: A range of statistical parameters representing a measure of the extent to which a higher education institution or a programme is performing in a certain quality dimension. They are qualitative and quantitative measures of the output (short-term measures of results) or of the outcome (long-term measures of outcomes and impacts) of a system or of a programme. They allow institutions to benchmark their own performance and enable comparison among higher education institutions (Vlăsceanu et al., 2004, p. 39). Performance indicators work efficiently only when they are used as part of a coherent set of input, process, and output indicators. As higher education institutions are engaged in a variety of activities and pursue a number of different objectives, it is essential to identify and implement a wide range of performance indicators in order to cover the entire field of activity. Examples of frequently used performance indicators, covering various institutional activities, include: the number of applications per place, the entry scores of candidates, the staff workload, the employability of graduates, research grants and contracts, the number of articles or studies published, the staff/student ratio, institutional income and expenditure, and institutional and departmental equipment and furniture. Performance indicators are related to benchmarking exercises and are identified through a specific piloting exercise so that they best serve their use in a comparative or profiling analysis.
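
Purely as an illustration of such a coherent set, the snippet below computes three of the performance indicators named above (applications per place, staff/student ratio and graduate employability) from invented institutional records; the data and the selection of indicators are assumptions made for the example.

    # Hypothetical institutional records for one academic year (illustrative only).
    records = {
        "applications_received": 18_500,
        "places_offered": 2_400,
        "teaching_staff": 410,
        "students_enrolled": 8_200,
        "graduates": 1_950,
        "graduates_employed_within_year": 1_480,
    }

    # A small set of frequently cited performance indicators derived from the records.
    performance_profile = {
        "applications_per_place": records["applications_received"] / records["places_offered"],
        "staff_student_ratio": records["teaching_staff"] / records["students_enrolled"],
        "graduate_employability": records["graduates_employed_within_year"] / records["graduates"],
    }

    for name, value in performance_profile.items():
        print(f"{name}: {value:.3f}")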
 
Statistical indicators may be collected on a regular and systematic basis by governments (especially where institutions of higher education are publicly funded), and these or other statistics may be included in quality review processes. 'Statistical indicators' is sometimes used synonymously with 'performance indicators' and is sometimes meant to imply a lesser evaluative status than that embodied in quantitative performance indicators. West (1999) makes the following distinction between a statistic, an indicator and a performance indicator: statistics, unlike indicators, are purely descriptive; the total number of trainees enrolled on a programme, for example, is a statistic. Indicators, on the other hand, are generally conceptualised as having some reference point; the percentage of a particular age group entering initial vocational education and training, for example, is an indicator. Because they have a common point of reference, indicators, unlike raw statistics, can assist with making a range of different sorts of comparisons. As Nuttall (1992) comments: 'An educational indicator tells us something about the performance or behaviour of an education system and can be used to inform decision-making. Not all education statistics qualify as indicators...To be an indicator, an education statistic must have a reference point against which it can be judged. Usually the reference point is some socially-agreed upon standard ..., a past value ..., or a comparison across schools, regions or nations' (Nuttall, 1992, p. 14). Further work on the concept of an indicator has been undertaken by van den Berghe (1997), who distinguishes between four types of indicators: descriptive indicators, management and policy indicators, performance indicators, and quality indicators (a subset of performance indicators). Indicators that are linked to the achievement of particular goals or objectives can be seen as a special category of performance indicators.
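
West's distinction between a statistic and an indicator can be shown in a few lines of Python; the enrolment and cohort figures below are invented purely to illustrate the role of the reference point.

    # Statistic: a purely descriptive count.
    trainees_enrolled = 4_200           # total trainees enrolled on a programme

    # Indicator: the same kind of count expressed against a reference point
    # (here, a hypothetical age cohort), which is what makes comparison possible.
    age_cohort_size = 58_000
    participation_rate = trainees_enrolled / age_cohort_size

    print(f"Statistic : {trainees_enrolled} trainees enrolled")
    print(f"Indicator : {participation_rate:.1%} of the age cohort entered the programme")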
 
A benchmark is a point of reference against which something may be measured. In the higher education context, a benchmark is usually either (1) a level of performance, resources, or outcomes against which an institution or group might be compared, or (2) the specification or codification of comparable processes. Benchmarks may be (1) defined for an institution (or sub-institutional unit) as targets, possibly on a continuous basis; (2) the basis of comparison between two or more institutions (or sub-institutional units); or (3) specifications of processes that can be compared as a basis for identifying, for example, optimum effectiveness, efficiency or transparency. The UNESCO definition of a benchmark is: a standard, a reference point, or a criterion against which the quality of something can be measured, judged, and evaluated, and against which outcomes of a specified activity can be measured. The term benchmark means a measure of best-practice performance. The existence of a benchmark is one necessary step in the overall process of benchmarking (Vlăsceanu et al., 2004).
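
As a rough illustration of benchmarking in the sense defined above, the sketch below compares an institution's indicator values with a benchmark taken as the best value observed in a hypothetical peer group; the institutions, the two indicators and all of the figures are assumptions made for the example.

    # Hypothetical peer-group indicator values (higher is treated as better here).
    peer_group = {
        "University A": {"graduate_employability": 0.74, "staff_per_student": 1 / 22},
        "University B": {"graduate_employability": 0.81, "staff_per_student": 1 / 18},
        "University C": {"graduate_employability": 0.69, "staff_per_student": 1 / 26},
    }

    # The institution's own values for the same indicators.
    own = {"graduate_employability": 0.77, "staff_per_student": 1 / 20}

    for indicator, own_value in own.items():
        # Benchmark: the best (highest) value observed among the peers.
        benchmark = max(peers[indicator] for peers in peer_group.values())
        gap = benchmark - own_value
        status = "at or above the benchmark" if gap <= 0 else f"{gap:.3f} below the benchmark"
        print(f"{indicator}: own {own_value:.3f}, benchmark {benchmark:.3f} ({status})")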
 
Operationalization of Quality Parameters: The Manual for Self-Study for Universities (NAAC, 2008) provides a detailed list of criteria that may be used for setting quality parameters: statistics, indicators and benchmarks. The criteria are grouped into seven groups, namely: (1) Curricular aspects, (2) Teaching, learning and evaluation, (3) Research, consultancy and extension, (4) Infrastructure and learning resources, (5) Student support and progression, (6) Governance and leadership, and (7) Innovative practices. More recently, the UGC has circulated its Regulations on Minimum Qualifications for Teachers and Other Academic Staff, governing the selection, appointment and promotion of teachers and the maintenance of standards in institutions of higher learning. Taken together, these documents can be very helpful in setting quality parameters for higher education.
 
References
 
Harvey, L. and Green, D., 1993, 'Defining quality', Assessment and Evaluation in Higher Education, 18(1), pp. 9-34.
 
Harvey, L., 1995, 'Editorial: The quality agenda', Quality in Higher Education, 1(1), pp. 5-12.
 
Nuttall, D., 1992, The OECD International Education Indicators (Paris, OECD).
 
Van den Berghe, W., 1998, Indicators in Perspective (Thessaloniki, Cedefop).
 
Vlăsceanu, L., Grünberg, L., and Pârlea, D., 2004, Quality Assurance and Accreditation: A Glossary of Basic Terms and Definitions, Papers on Higher Education (Bucharest, UNESCO-CEPES), ISBN 92-9069-178-6.
 
West, A., 1999, Vocational Education and Training Indicators Project: EU priorities and objectives related to VET, November (European Commission, European Centre for the Development of Vocational Training (Cedefop)).