In Toward Convergence: A Technical Guide for the Postsecondary Metrics Framework, the Institute for Higher Education Policy (IHEP) outlined key metrics that can inform institutional leaders and policymakers about their institution’s performance across five specific categories: access, progression, completion, cost, and post-college outcomes (Table I.1, below). These metrics aim to measure student success throughout the postsecondary pipeline as accurately and comprehensively as possible while considering data availability. Institutions should be able to calculate the metrics using data from their existing information systems, the National Student Loan Data System (NSLDS), the College Scorecard, the National Student Clearinghouse1, and national or state workforce records (if available).

Table I.1: The Postsecondary Metrics Framework

PERFORMANCE

  • Access: Enrollment
  • Progression: Credit Accumulation, Credit Completion, Gateway Course Completion, Program of Study Selection, Retention, Persistence
  • Completion: Transfer, Graduation, Success, Completers
  • Cost: Net Price, Unmet Need, Cumulative Debt
  • Post-College Outcomes: Employment, Earnings, Loan Repayment, Graduate Education, Learning Outcomes

EFFICIENCY

  • Access: Expenditures per Student
  • Progression: Cost of Uncompleted Credits, Gateway Completion Costs, Change in Revenue from Change in Retention
  • Completion: Time/Credits to Credential, Cost of Excess Credits, Completions per Student
  • Cost: Student Share of Cost, Expenditures per Completion
  • Post-College Outcomes: Earnings Threshold

EQUITY

  • Access: Enrollment by at least Preparation, Income, Age, Race/Ethnicity
  • Progression: Progression Performance by at least Preparation, Income, Age, Race/Ethnicity
  • Completion: Completion Performance and Efficiency by at least Preparation, Income, Age, Race/Ethnicity
  • Cost: Net Price and Unmet Need by at least Income; Debt by at least Income, Age, Race/Ethnicity, Completion Status
  • Post-College Outcomes: Outcomes Performance and Efficiency by at least Income, Age, Race/Ethnicity, Completion Status

Key Student Characteristics

  • Enrollment Status
  • Attendance Pattern
  • Degree-Seeking Status
  • Program of Study
  • Academic Preparation
  • Economic Status
  • Race/Ethnicity
  • Age
  • Gender
  • First-Generation Status

Key Institutional Characteristics

  • Sector
  • Level
  • Degree/Program Mix
  • Size
  • Resources
  • Selectivity
  • Diversity
  • MSI Status
  • Nontraditional Populations
  • Modality

This Guidebook will focus primarily on the first four categories: access, progression, completion, and cost. Taken together, these categories represent the postsecondary pipeline, or the hurdles that all students must clear to attain a credential. The Guidebook includes six chapters. The first four detail each of the four core components of the postsecondary pipeline, and the fifth examines the pipeline as a whole. These chapters are guided by the following primary questions:
  • Access: Who enrolls at your institution? How do your institution’s student demographics compare with those of your state or region, especially with respect to racial and socioeconomic diversity?
  • Progression: Which students (disaggregated by race and socioeconomic status) are meeting the key progression milestones—showing signs of early momentum—and which students are not? What are the outcomes of students who do and do not meet key progression milestones and benchmarks each year?
  • Completion: Who completes college and what factors help or hurt students in their efforts to earn a credential? To what extent do completion gaps exist for key underrepresented student populations?
    • Note: While post-college and workforce outcomes are important, we recognize that most institutions have limited to nonexistent access to this kind of data. For this reason, we include a brief section on post-college outcomes in the completion chapter.
  • Cost: To what extent are institutions affordable for low-income and underserved students, and is the institution making progress toward affordable higher education for those who need it?
  • Cohort Analysis: How is a cohort of students moving through the institution and where are the barriers to student success?

In each chapter, we will explore key research questions, the potential benefits of conducting each analysis, the underlying metrics and definitions, the corresponding portions of the cohort analysis, and additional questions that can inform future decision making.

Data visualization is a key component of the Guidebook. We embed examples of data visualizations throughout the chapters and, at the conclusion of each chapter, we include downloadable Excel templates with instructions for use. These template charts and graphics will highlight essential patterns and trends for anyone wishing to conduct institutional research, analyze postsecondary data, and share those insights with other stakeholders on campus.

Interviewees

Throughout the Guidebook, you’ll hear from the following experts, who have decades of experience with institutional research and postsecondary data analysis.

Colin Chellman

Colin is the University Dean for Institutional and Policy Research at the City University of New York and served as the founding director of the Office of Policy Research.

Jonathan Gagliardi

Jonathan is the Assistant Vice President for Strategy, Policy, and Analytics at Lehman College. Previously, he served as the associate director of the Center for Policy Research and Strategy at the American Council on Education (ACE) in Washington, DC.

Zun Tang

Zun is the Director of Institutional Research at the City University of New York. Previously, he served as a Senior Research Analyst in the same department.

The sixth and final chapter includes technical considerations for using the downloadable templates, as well as basic guidelines to ensure data privacy, security, and quality. When using student data, keeping data private and secure is paramount, and this section highlights ways to ensure that institutions proactively protect students from the improper disclosure of their personally identifiable information and identification in aggregate data sets. In an era where one’s digital identity is increasingly important for financial security, employment, and more, institutions must take great care in constructing their data systems to secure sensitive data. The data quality section shares best practices for data cleaning and troubleshooting data quality issues to minimize the privacy and security risks to students.

Almost all of the data we visualize throughout this Guidebook comes from an actual institution.2 In partnership with the City University of New York (CUNY), our illustrations show what an implemented version of these analyses looks like, highlighting their value in actual practice. However, we recognize that not all institutions share the same context. CUNY is a diverse urban system with its own unique successes and challenges. Even if your institution does not share these traits, the questions posed by this Guidebook are still worth asking. Indeed, institutional administrators, faculty, and staff need answers to the questions highlighted in these analyses because, without them, an accessible and equitable higher education system will remain out of reach.

Data can serve as a powerful tool for institutional evaluation and self-reflection. It should be used to help institutional leaders and administrators make decisions based on the best available information. It should be used to assess institutional practices and to create an environment that fosters student success. It should not be used to blame students or to make excuses for poor outcomes. As Jonathan Gagliardi explains, “[data is] something that helps you identify structural flaws because if there is an equity gap, if differences exist between student groups, it suggests that something is off organizationally. Something isn’t being delivered effectively, or an underlying design flaw may exist. Implementing structural solutions can lead to changes that make a real difference in student outcomes.”3 The tools included in this Guidebook are intended to act as a flashlight: by asking and answering critical questions about outcomes, particularly for low-income students, students of color, and other underrepresented students, institutions can uncover these structural flaws, including policies and procedures that act as barriers to student success.

  1. The National Student Clearinghouse is also extensively involved with the Postsecondary Data Partnership (PDP), a program designed for institutions that are ready to scale up their data efforts. For more information, see Sidebox 5.1.
  2. Because the City University of New York represents a full institutional system, as opposed to a single campus, cost and financial aid data were not easily accessible or uniform. On the one hand, this illustrates the complexity of navigating and coordinating numerous distinct offices for data collection; on the other, it shows that data and metrics are not one-size-fits-all. To address this problem, our cost analyses are based on national data representing public, 4-year institutions. The cost data illustrated are sourced from the Beginning Postsecondary Students Longitudinal Study (BPS:04/09) sample.
  3. Gagliardi, J. (2019, March). Phone interview with A.J. Roberson and K. Mugglestone.

Getting Started

Before diving into the individual components of the Framework, we recommend taking a step back and considering the questions in the following checklist:

  1. How can data use inform or further institutional goals? – There is no more important question to answer before beginning any data collection or analysis. Is your goal to change institutional priorities? Is your goal to identify existing problem areas within the institutional pipeline? Maybe you want to diagnose the root causes of racial or socioeconomic equity gaps? Or, perhaps your goal is to find individual students and provide personalized interventions to ensure academic success? Before taking on a data project, it is most important to recognize where it falls in the bigger (or smaller) picture. Consider the following examples of how quality data use can impact institutional priorities at a macro or micro level:

    Data use at 30,000 feet: Miami Dade College developed and monitors key performance indicators (KPIs) to provide a high-level snapshot of its institutional performance to leadership, faculty, and staff. Data on performance metrics like retention rates, completion rates, transfer rates, and post-college outcomes have allowed the institution to predict success rates for students with shared characteristics, identify high-risk courses, and offer support interventions for students in those high-risk courses.

    Data use at 30 feet: Quality data use can impact students at a personal level. At California State University, Fullerton (CSUF), advisors use a constantly updating dashboard to identify individual students who are off track for graduation and recommend interventions. For example, one student dropped a course during her last semester at Fullerton. The dashboard flagged her to an advisor, who found her a five-week course to fulfill the requirements for graduation, and she graduated on schedule.

    As these examples illustrate, institutions can analyze aggregate trends (disaggregated by student characteristics) and individual data to affect broad swaths of campus policy or to serve individual students through real-time interventions. Knowing your goals ensures you can design data collection, storage, and analysis in a way that meets your needs.

  2. Who needs to see, understand, and act on these data? – The target audience of an analysis determines how you will want to present your findings, as different audiences have different skillsets, preferences, and needs. Raw data, complex graphics, or data tables may resonate with the trained statistician but are likely to fail to achieve the desired effect among those with less experience—or less time to interpret the results. Most likely, audiences will be far less familiar with data sources, terminology, and other nuances than the person conducting the analysis. Furthermore, leadership may be more interested in big picture comparisons and will want to see data in a way that guides them toward action and solutions. Our downloadable templates are designed to provide a strong starting point for clear and accessible data visualizations and essential comparisons, but it may be necessary to develop additional graphics, depending on the audience or topic. Jonathan Gagliardi identifies just a few of the potential audiences for whom he prepares data at Lehman College:4

    1. State legislators want to see how public funding is being spent and how public institutions are serving students.
    2. Institutional leadership wants to see a macro perspective for advancing student success and closing equity gaps.
    3. Faculty members want to understand the balance between class size, space utilization, and course quality, and how grades may vary in and across their courses based on student traits.
    4. Department chairs want to build efficiency in course schedules and section design to ensure higher on-time success rates.
    5. Registrars want to monitor classroom space and efficiency.
    6. The Office of Advancement wants to monitor financial grants and partnerships.
    7. Enrollment Managers want more information about post-graduate outcomes to help students maximize the value of their degree and to set the stage for lifelong learning.

    These represent only a fraction of the potential audiences and purposes for your analyses, but each has unique needs and priorities. Furthermore, beyond the target audience for an analysis, you may need to consider who else will work with the data before presentation. If you are collaborating with other offices within an institution to prepare data (and you should!), we recommend establishing clear lines of communication to ensure that data are shared in the same format, that you are each using the same metric(s) and calculation(s), and that everyone understands what each data point means. From his experience working in partnership with other departments, Zun Tang explains, “You really need to have someone who knows what those data points are […] [intended] to help you understand or help you interpret before you can start thinking about evaluation. Like, what is the metric to calculate? What are the fields I should be pulling in? How should I define the population for this purpose?”5 These considerations will ensure the most efficient data collection, sharing, analysis, and presentation.

  3. What data or tools do you already have? – In many cases, institutions are able to answer most questions using data that they already collect, even if it is not collected or housed by the institutional research office. For instance, cost data may be housed in the financial aid office, and admissions and enrollment data may be held by the admissions office or the registrar. More complex questions may require fostering partnerships with other entities to maintain, track, or provide data (e.g., The University of Texas System partnered with numerous state and federal agencies to develop the SeekUT system).6 Understanding what information an institution already collects and has access to enables better planning and resource management. It also reduces the likelihood of redundant work, provided departments and offices have mechanisms in place to securely share information.

Once the data is in hand, it is worth noting that many analyses are possible with tools already at most offices’ disposal. Institutions are sometimes intimidated by “data use,” worrying that they must invest in new software, but this is not the case. For example, Microsoft Excel is a low-cost and commonly available tool capable of data cleaning, analysis, and visualization in the hands of an experienced user, and as Jonathan notes, “you can generally get to about 70 to 80 percent of the ideal model of a [data-informed] institution […] using existing data sources and Excel. Who doesn't love Excel?”7 Indeed, for this reason, our downloadable templates are provided in a convenient Excel format.
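
To make this concrete, here is a minimal sketch of the kind of tabulation the templates (or an Excel pivot table) produce, written as a short script for offices that prefer scripting. Everything in it is hypothetical: the file name, the column names, and the 0/1 retention flag stand in for whatever fields your own student information system exports.

```python
# Minimal sketch: one-year retention rates disaggregated by race/ethnicity.
# "cohort_extract.csv" and its column names are hypothetical placeholders;
# substitute your institution's own export and field names.
import pandas as pd

students = pd.read_csv("cohort_extract.csv")

retention = (
    students
    .groupby("race_ethnicity")["retained_year_1"]  # 0/1 flag per student
    .mean()                                        # mean of a 0/1 flag = rate
    .mul(100)                                      # express as a percentage
    .round(1)
    .rename("retention_rate_pct")
)
print(retention)
```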

  4. Which students should you count, and how should you count them? – If you want to understand an institution’s complete student population, then it is essential to count all students. To do this, we recommend that institutions define all cohorts—groups of students who enter the institution sharing a common trait—based on their 12-month enrollment population, instead of focusing only on the students who entered in the fall. This allows an analysis to capture the additional students who start at other times in the year. Institutions also should count full-time, part-time, first-time, and transfer students. In the table below, we offer recommendations for defining cohorts, and these same definitions are used throughout the Guidebook. We recognize that an institution may choose to place a particular focus on specific cohorts depending upon the students it serves; however, many of these disaggregates are already required by Integrated Postsecondary Education Data System (IPEDS) reporting, making the lift for additional analyses smaller than it may seem on the surface.

    Cohort Definitions

    Enrollment Status and Attendance Intensity: In conjunction, institutions should consider enrollment status (first-time or transfer-in) and attendance intensity (part-time or full-time) in order to develop four distinct cohorts (i.e., first-time, full-time; first-time, part-time; transfer-in, full-time; transfer-in, part-time). Aside from ensuring that all students are counted, distinguishing these categories is essential because each enters college with unique advantages and challenges.

    Credential Level Sought: Different degree types have different expected times to completion. By distinguishing between non-credential-seeking, certificate-seeking, associate’s-seeking, and bachelor’s-seeking students, institutions can accurately measure the varying persistence and completion challenges across these cohorts.

    For a more detailed explanation of these cohorts, refer to pages 2.1-2.4 in the Framework. A brief sketch of deriving these cohorts in practice follows below.
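
To illustrate the recommendations above, the sketch below derives the four enrollment-status/attendance-intensity cohorts and the credential-level disaggregate from a hypothetical 12-month enrollment extract. The file name, column names, and category codes are assumptions for illustration only; map them to the fields your own system actually records.

```python
# Minimal sketch: building the recommended cohorts from a 12-month
# enrollment file. All names and codes below are illustrative.
import pandas as pd

enrollees = pd.read_csv("enrollment_12_month.csv")

# Four entry cohorts: enrollment status x attendance intensity
enrollees["cohort"] = (
    enrollees["entry_status"]            # "first-time" or "transfer-in"
    + ", "
    + enrollees["attendance_intensity"]  # "full-time" or "part-time"
)

# Credential level sought, kept as an ordered disaggregate
levels = ["non-credential", "certificate", "associate", "bachelor"]
enrollees["credential_level"] = pd.Categorical(
    enrollees["credential_level_sought"], categories=levels, ordered=True
)

# Headcount by cohort and credential level sought
print(pd.crosstab(enrollees["cohort"], enrollees["credential_level"]))
```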

  5. When should you conduct this research? – To ensure the most accurate and current data, it is important to consider when to collect data or take a measurement. Certain times within an academic year are better than others for data collection. For instance, we recommend measuring a 12-month enrollment population for access and enrollment analyses, which means the best timing for up-to-date and complete enrollment measurement is likely the start of the summer term (after all students have enrolled, but before the typical fall enrollment reset). Colin Chellman explains the need for extensive planning when deciding when to collect data: “For financial aid, what time of year is the best time to extract a file that most accurately represents the awards and disbursements students receive? Is that the first day of classes? Should you wait until the middle of the semester? Should you wait until the beginning of the second semester? You almost need a year in advance to plan the timing of your data collection activities.”9 For all data collected, consider when it is ‘ripe’ to be measured in order to capture your student body most effectively. While there is no perfect time to collect data, some opportunities or moments in time are better than others.
  4. Gagliardi, J. (2019, March). Phone interview with A.J. Roberson and K. Mugglestone.
  5. Tang, Z. (2019, February). Phone interview with A.J. Roberson and K. Mugglestone.
  6. The University of Texas System. (2019). SeekUT: About the data. Retrieved from the SeekUT website: https://seekut.utsystem.edu/about-the-data.
  7. Gagliardi, J. (2019, March). Phone interview with A.J. Roberson and K. Mugglestone.
  8. For more information, see Sidebox 5.1.
  9. Chellman, C. (2019, February). Phone interview with A.J. Roberson and K. Mugglestone.

Post-Implementation Considerations (Reviewing and Reflecting)

While implementing the Metrics Framework requires intentional planning, it is essential to reflect on the collection, analysis, and overall impact of the project after each round of data work. This is especially important for projects that you expect to repeat annually, to ensure you are building valuable longitudinal data for the analysis of trends. Periodically reviewing and reflecting upon each completed analysis allows you to refine your processes and better target data collection, time, and resources. Because you may wish to plan your review process before beginning the project, we outline below several essential questions that researchers should consider at the conclusion of a research project, as well as the rationale for considering each.

  1. What challenges did you face in data collection and analyses? – First, it is essential to reflect on process. Some challenges may be out of your control, but other challenges may simply be resolved with more communication or different tools. In some arenas, such as financial aid or admissions, data might need to be linked across several offices. In other arenas, such as post-college outcomes, missing data may hinder your analysis. Understanding where difficulties occurred in previous work allows you to make methodological adjustments to smooth the data collection, sharing, or analysis in future efforts.
  2. What did you learn from conducting these analyses? What surprised you? – This second question is essential for understanding the purpose of the research. If nothing is learned from the analysis, then little is gained from executing it, suggesting that the analysis may be unnecessary in the future. However, when you discover patterns, you can identify opportunities for future action. To effectively serve all students, institutions must, at the very least, identify which students are not enrolling, which students are not progressing, which students cannot afford to attend, and which students are not completing their education.

    Then, once patterns (or non-patterns) are established, one can ask: Why do these patterns exist (or not exist)? This is essential to consider so that your institution can design interventions. For example, at Colorado State University (CSU), an internal assessment found a correlation between failure rates in foundational coursework and a significantly lower likelihood of graduation. By identifying foundational coursework as linked to depressed graduation rates, the institution was able to make informed changes in how the coursework was taught and alter the supports provided to students taking these classes. Furthermore, CSU developed an early warning system to target interventions toward struggling students, which led to much higher success rates.10 (A minimal sketch of this kind of failure-rate analysis appears after this list.)

  3. Did the institution do something differently as a result of the analysis? And, did outcomes for students improve? – This final question is the most important: it measures impact. Data is most valuable when it leads to real, tangible improvements for students. When equity gaps, high failure rates, or other problems are found but little action is taken, or poorly designed interventions do not improve outcomes, then institutional researchers may need to consider how to expand or rework the analysis to identify the problem more explicitly or persuasively. The ultimate goal of most institutional data systems should not be simply to understand, but to change and improve.
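
As referenced above, the following sketch illustrates the general shape of a failure-rate analysis like CSU’s internal assessment: comparing graduation outcomes for students who did and did not fail foundational coursework. It is a hypothetical illustration, not CSU’s actual method; the file and the failed_gateway and graduated flags are placeholders.

```python
# Minimal sketch: graduation rates for students who did vs. did not fail
# a foundational (gateway) course. All names are hypothetical.
import pandas as pd

records = pd.read_csv("cohort_outcomes.csv")

# Rows: failed_gateway (0/1); columns: graduated (0/1); values: row percentages
table = (
    pd.crosstab(records["failed_gateway"], records["graduated"], normalize="index")
    .mul(100)
    .round(1)
)
print(table)  # a large gap in graduation rates would flag the coursework for review
```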

Depending on institutional goals or the scale of the project, you may want to consider many other questions as well. Further, you should collaborate with other offices to avoid siloing these questions and conversations. It is not enough for institutional research, institutional effectiveness, or another single office to reflect on these questions—all offices that contribute to and use the data, along with offices that directly serve students, should be part of the conversation on how to use data as a catalyst for student success and streamline how data are shared, stored, and analyzed on campus. Frank discussions about how to most effectively use and share data on campus create opportunities for data champions to emerge and deepen a culture of data-informed decision-making.

The following chapters will walk through each of the major sections of the Metrics Framework—access, progression, completion, and cost. Together, they represent a complete postsecondary pipeline—the story of any student seeking to enter college and leave with a degree in hand. For all analyses, we recommend that researchers consider where an analysis falls within this pipeline in order to contextualize how the findings may affect the same students as they progress through college. For instance, if research finds significant disparities across race groups in progression metrics, then it is important to consider those disparities and how they impact completion rates or time-to-degree for students in the lower-performing groups—which may then have implications for cost as well. Considering the sequential impact of each finding will help to define the questions to ask next as you work to improve how your institution serves current and future students.

  10. Association of Public & Land Grant Universities. (2017). Turning student data into actionable information. Retrieved from the Association of Public & Land Grant Universities website: http://www.aplu.org/projects-and-initiatives/accountability-and-transparency/using-data-to-increase-student-success/APLU_WhitePaper_COLORADO_C.pdf