

Annual Data Review: Starting Fall 2017, academic units will review various data sets typically including FTE trends, student demographics, course completion rates, certificate or program completion, course capacity, and cost per FTE. The data can be used to guide the program review process.

AQIP (Academic Quality Improvement Program): One of the three pathways to accreditation through the Higher Learning Commission. This is the pathway RRCC is currently on, and it is sunsetting, meaning it will no longer be a pathway option.

Assessment: The process of observing learning; describing, collecting, recording, scoring, and interpreting information about courses/programs/services undertaken for the purpose of improving the institution, services, programs, and student learning and development.

Assessment cycle: A series of recurring steps in which information is gathered and actions are taken in order to improve outcomes; at the most basic level, this cycle can be described as plan-do-check-act.


Baseline data: Data collected to provide an information base against which to monitor and assess an activity's progress and effectiveness during implementation and after the activity is completed.

Benchmark: A description of a specific level of expected performance. Benchmarks for student learning are often represented by student work. 


Cohort: A group of people who move through a process or series of activities together; in ILEARN this refers to the group of people beginning the process at the same time (Cohort 1 -- Fall 2017, Cohort 2 -- Fall 2018).

Common Learning Competencies (CLC): The broad learning goals RRCC shares as an institution. They were developed by a committee of faculty with staff representation and have been adopted as the skills that all RRCC graduates should share regardless of degree program. The Common Learning Competencies are taught through both classroom instruction and co-curricular programming and include: critical thinking, technological literacy, effective communication, global awareness and respect for diversity, ethical and professional behavior, and quantitative reasoning. While the Common Learning Competencies are broad statements about learning, they are measured through specific AAC&U LEAP learning outcomes and the associated VALUE rubrics.

Common Learning Outcomes (CLO): Common Learning Outcomes are the AAC&U LEAP learning outcomes which align with the RRCC Common Learning Competencies.

Comparative data: Data collected after an activity or intervention which is compared to baseline data in order to assess the success of the improvement strategy.

Continuous Improvement Plan (CIP): A report that contains five short sections: (1) focus, (2) data, (3) measures of success, (4) comparative data, and (5) conclusions and future action. These are utilized in the program review process of continuous quality improvement. The CIP is a part of the ILEARN assessment process at RRCC.

Continuous Quality Improvement (CQI): A process that RRCC engages in to demonstrate ongoing work to move the college forward based on data and evidence.

Course Learning Outcomes (CLO): Measurable statements that describe specific student behaviors that provide evidence of acquisition of desired knowledge, skills, or attitudes; learning outcomes are most often attached to specific activities, assignments, or courses. See Student Learning Outcomes (SLO).


Direct Measures: Those that measure student learning by evaluating examples of student work, such as oral presentations, writing assignments, theses or dissertations, and exams. See Indirect Measures.


Evaluation: The use of qualitative and quantitative descriptions to judge individual, course, program and institutional effectiveness. Depending on the level, evaluation information is used for making decisions about individual performance review, student grades and course, program and institutional changes for improvement.

Evidence: Evidence an institution provides to demonstrate that it complies with HLC's criteria should do the following: substantiate the facts and arguments presented in its institutional narrative; respond to the prior peer review team's concerns and recommendations; explain any nuances specific to the institution; strengthen the institution's overall record of compliance with HLC's requirements; and affirm the institution's overall academic quality, financial sustainability, and integrity.


Formative Assessment: An assessment used during the course of instruction to provide feedback to the teacher and learner about the learner's progress toward desired educational outcomes; the results of formative assessments are often used in planning subsequent instruction.


General Education Learning Outcomes: See Common Learning Competencies.

Goal: An end result written in broad terms; goals need not be measurable on their own, but the associated objectives would be.


Higher Learning Commission (HLC): An independent corporation that was founded in 1895 as one of the six regional institutional accreditors in the United States. HLC accredits degree-granting post-secondary educational institutions in the North Central region, which includes 19 states.


Indirect Measures: Those that measure student learning by assessing opinions or ideas about knowledge, skills, attitudes, and perceptions. See Direct Measures.


Objective: A specific step intended to assist in the achievement of a goal.

Outcome: The result of an action or intended action; outcomes are specific, measurable, achievable, realistic, and timely.


Program: A program at RRCC is classified by the Assessment Council as a 'prefix.'  It can be a degree with designation, a series of classes that transfer, a certificate, or a series of courses (or course) in a department. All require a curriculum map and a Student Learning Assessment Plan.

Program Learning Outcomes (PLO): Outcomes that encompass a vision of "the ideal graduate" of your program, and outcomes that accommodate pre-existing goals set by other entities, such as professional guidelines. PLOs represent broad statements that incorporate many areas of interrelated knowledge and skills developed over the duration of the program through a wide range of courses and experiences.  They represent the big picture, describe broad aspects of behavior, and encompass multiple learning experiences.

Program Review: A process that provides the opportunity for operational units to reflect on their work, re-align with the college's mission and strategic plan, engage in dialogue to determine strengths, areas of improvement, aspirations, current research, and budgetary needs, and establish a continuous improvement plan. Program reviews should be completed every four years and focus on both operational and student learning assessment. ILEARN is RRCC's version of program review.


Standards: Refers to an established level of accomplishment that all students are expected to meet or exceed.  Standards do not imply standardization of a program or of testing. Performance or learning standards may be met through multiple pathways and demonstrated in various ways. (Definition borrowed from Carnegie Mellon University Assessment Terms.)

Student Learning Assessment Plan (SLAP): A document that supports continuous quality improvement at the academic program level by demonstrating that common learning competencies and program learning outcomes are being met and evaluated based on data and evidence. The SLAP is a part of the ILEARN assessment process at RRCC.

Student Learning Outcomes (SLO): Measurable statements that describe specific student behaviors that provide evidence of acquisition of desired knowledge, skills, or attitudes; learning outcomes are most often attached to specific activities, assignments, or courses. See Common Learning Outcomes.

Summative Assessment: Outcome-based use of assessments, often for decisions such as grading, program evaluation, tracking, or accountability.


Target: Like benchmarks, targets are measurable; the ILEARN manual describes them as developed based on internal data. If you have your own data or are starting from scratch, set an internal target (ILEARN manual, 13).

Test development: The process of creating a test. Steps of test development (Hughes, 2003): (1) State the goals of the test, (2) Write test specifications, (3) Write and revise items, (4) Try items with native speakers and accept/reject items, (5) Pilot with non-native speakers, (6) Make necessary revisions, (7) Calibrate scales, (8) Validate, (9) Write the test administrator handbook and test materials, (10) Train staff as appropriate.

Additional terminology resources and glossary definitions adapted and/or borrowed from: