
TO: UNH Faculty and Deans

FROM: Mike Collura, Tara L’Heureux, Co-chairs of Academic Prioritization Task Force

DATE: September 12, 2013  

RE: Summary of survey regarding ACPTF criteria and metrics

The Academic Prioritization Task Force thanks the 77 faculty who responded to the May 2013 survey that sought input on the criteria and metrics proposed for use in the Charging Forward prioritization process (view the survey here). The task force met to review and summarize the results of the survey and to identify areas of concern and clarity regarding the criteria and their associated metrics.

Criteria Set Appropriateness. The first question concerned the appropriateness of the five criteria (Centrality and Essentiality, Demand, Quality, Productivity, Revenue and Resources, and Opportunity Analyses). Specifically, we asked the extent to which application of the proposed criteria would produce a fair, holistic evaluation of academic programs. Based on the percentage of respondents who answered "moderate extent" (48.1%) or "great extent" (41.6%), it appears that the criteria were largely seen as appropriate and fair.

Criteria Set Modifications. We asked whether additional criteria should be used in the prioritization process and whether any of the five criteria should be excluded from it. In general, most respondents indicated that the proposed criteria would capture the information critical for effective program prioritization. A more detailed breakdown of the response frequencies and percentages for this and the preceding question can be found here.

Availability and Quality of Criteria and Metrics. For each of the five criteria, we asked respondents to indicate (a) the extent to which the metrics reflected the criterion and (b) their level of confidence that sufficient data would be available for each metric. The percentage of respondents who responded affirmatively (i.e., "moderate extent" or "great extent") to the first question, on criterion relevance, ranged from 87-96%, and 85-92% responded that they were very confident or somewhat confident regarding data availability. A more detailed breakdown of the quantitative survey data can be found here.

Some faculty expressed the following concerns about the criteria and/or metrics:

  • Relevance. For all criteria, we deleted some questions to reflect faculty feedback regarding relevance, but retained other questions that we believe will provide the task force with the necessary context for our prioritization task. For example, we deleted a question about when the program was established, which some respondents saw as unrelated to the Centrality and Essentiality criterion.
  • Subjective, qualitative data. Some respondents expressed concern that the narrative format for some questions will lead to embellished responses. We agree that this is a possibility; however, we are committed to providing an opportunity for all respondents to "tell their story," particularly as institutional or local data may not be available for all programs. If the task force deems it necessary, we will ask respondents to provide information to support their statements.
  • Applicability of criteria. Criteria and their associated metrics will not be equally relevant to all programs. UNH houses a diverse group of academic programs, and the differences across disciplines can be substantial. Understanding a program’s primary function and the context in which it operates, as well as using multiple quantitative and qualitative indicators as reflections of the criteria, will facilitate a balanced and holistic review of each program.
  • Availability and quality of data. Some respondents indicated that a number of programs cannot provide the requested quantitative data. We are aware that programs are at different stages in their assessment processes, and some programs will have more outcome data than others. Our intent is to provide opportunities for programs to present information that they think will be of value to the task force's prioritization task. A structured process will be created to answer any questions and reconcile discrepancies in the data.
  • Program-level data unavailable. Some data are collected and reported at the program level (e.g., enrollment and degree completion data) and others at the department level (e.g., budgets). This is the case across most universities. When disaggregation of data is not possible, we will view the data in their appropriate context.
  • Need clear definitions and explanations of key indicators. Several respondents found some of the indicators confusing or ambiguous. The task force is currently developing a data dictionary that describes each data source, as well as the data formulations used to produce the program profiles. Additionally, we will provide guidance on the relevance and meaning of the data and offer training workshops for respondents before the launch of the template.

The content of the template was revised to directly address community feedback, as well as feedback from the program prioritization consultant, Larry Goldstein. In addition, we changed the structure of the template to make it suitable for data collection and for viewing data provided by other sources (e.g., institutional research). The revised template is clearer and more concise in its layout, and it shows the format that will be used for data input in the upcoming pilot study. We anticipate that further modifications will be made to the template based on feedback from pilot participants.

We thank you for taking the time to provide the task force with your thoughtful suggestions and questions. For more information about the Charging Forward initiative, please visit and contact us through email at