MSC DY FAQs

Frequently Asked Questions 


This FAQ has been created to help interested faculty members understand the MSC and the components of its model. It provides information of particular interest to potential faculty participants and addresses anticipated faculty concerns. Its goal is to emphasize how critical faculty involvement and ownership are to the success of the Collaborative.

 

What is the Multi-State Collaborative to Advance Learning Outcomes Assessment (MSC) and who is involved?

With the active support of the State Higher Education Executive Officers Association (SHEEO) and the Association of American Colleges and Universities (AAC&U), 12 states—Connecticut, Hawaii, Indiana, Kentucky, Maine, Massachusetts, Minnesota, Missouri, Oregon, Rhode Island, Texas, and Utah—have agreed to collaborate in the Demonstration Year of the MSC. The model is rooted in campus/system collaboration and in faculty curriculum development, teaching, and assessment of authentic student work. It is based on the Essential Learning Outcomes and the associated VALUE rubrics developed by faculty members under the auspices of AAC&U's LEAP initiative.

How were the VALUE rubrics developed?

Teams of faculty and other academic and student affairs professionals from all sectors of higher education across the United States gathered, analyzed, and synthesized institution-level rubrics (and related materials) to create rubrics for sixteen specific areas of learning. Each rubric underwent multiple revisions based upon feedback provided through campus testing of rubrics against samples of student work. Since 2009, over 32,000 people have accessed the rubrics, hundreds of campuses have used them to assist with institutional or program-level assessment, and the VSA (Voluntary System of Accountability) has approved the VALUE rubrics for voluntary student learning reports. Campus examples of use of the rubrics can be found at: http://www.aacu.org/value/casestudies/.


How did the MSC evolve and what is the push for multistate assessment?

The MSC grew out of the "Vision Project" in Massachusetts, where a broadly collaborative multi-campus leadership group worked to conceptualize a model for state-system learning outcomes assessment based on the LEAP Essential Learning Outcomes and the VALUE rubrics. Once Massachusetts achieved status as a LEAP state, state-level and campus leaders created a Massachusetts model and pilot test and began to develop the Multi-State Collaborative as their primary LEAP States Initiative. Massachusetts wanted to see whether other states would join the effort to assess learning outcomes and voluntarily share results, and it reached out to SHEEO to help bring other states into the MSC. SHEEO now functions as the coordinating body for the MSC work.

The goal from the beginning has been to create a program of learning outcomes assessment that builds on faculty-based and campus-based formative assessment while adding features that provide for public reporting of results for sectors of institutions (e.g., community colleges and state universities) and for comparisons across states. Financial support comes through a subcontract to SHEEO from the Bill & Melinda Gates Foundation-supported AAC&U GEMS/VALUE Project and from in-kind and financial contributions from participating states, institutions, and other sources.


What was the Pilot Year and what happened?

The Pilot Year (2014-2015) was a major undertaking to test whether participating campuses and states could develop the capacity to use the protocols and guidelines that had been developed for identifying, sampling, collecting, uploading, scoring, and reporting results for assessing student artifacts with the VALUE rubrics. All nine states and approximately 60 institutions actively participated in the Pilot Year. A press release on the initial findings, limitations, and successes is available at http://sheeo.org/msc-pilot-study-results.

What is the timeline for the MSC Demonstration Year?

The Demonstration Year runs from September 2015 to September 2016. Participating institutions have the option to collect student work in Fall 2015 or Spring 2016; uploading of student work will begin in November 2015 and continue through May 2016. Scoring will take place in Summer 2016, with data analysis and final reporting in September 2016.

What is the purpose of the Demonstration Year?

The MSC Demonstration Year is designed to advance our understanding of the feasibility and sustainability of a common statewide model of assessment using actual student work. The ability to scale up and to sustain campus effort will be examined. We will continue to test the methodology used in sampling and collecting student work, including examination of the ability to create a representative sample of student work at the campus, state, and multistate levels, with an appropriate degree of randomization. We will also continue to evaluate the ability to produce useful assessment data for institutional use, to organize aggregated data for interstate comparison by sector, and to measure student learning using VALUE rubrics. And finally, we will continue to test the reliability of using the VALUE rubrics in the assessment of student work.



How were the 12 states selected?

Originally, nine states self-selected into the Collaborative. In 2011, SHEEO, on behalf of Massachusetts, sent out a proposal for a multi-state collaborative group and asked for expressions of interest. A meeting with 16 interested states and follow-up interactions resulted in a nine-state collaborative. After several months of discussion within their own states and a series of conversations about the collaborative during 2011 and 2012, the state commissioners of higher education in these states volunteered to participate in a pilot. Maine, Texas, and Hawaii have since joined the MSC to participate in the Demonstration Year work.

What role will faculty play during the Demonstration Year?

The Multi-State Collaborative is organized to allow each state to determine how best to organize this work, based on interactions with institutional staff and faculty and on the existing organizational structures and practices of interaction within each state. That said, faculty may choose to participate in a number of ways.

• Faculty may choose to form a community of practice within their state to begin an inquiry into statewide student learning issues.
• Faculty may choose to engage in professional development activities focused on creating, revising, and selecting assignments designed to generate student products demonstrating particular learning outcomes. Experts from NILOA (National Institute on Learning Outcomes Assessment) will be available to lead assignment design workshops in the MSC states.
• Faculty may choose to provide examples of student work (artifacts) to be uploaded into the Taskstream VALUE database to be scored by other faculty (Faculty Scorers). 
• Faculty may choose to participate in face-to-face scoring and norming sessions with faculty from other MSC states, or in virtual scoring sessions.
• Faculty may choose to be involved in analyzing their institutional data and the Demonstration Year aggregate data to provide their perspective on what the data suggest.
• Faculty may choose to share their perspective on the work with their local states and institutions, and be instrumental in curricular redesign and closing the loop.



What are the Student Learning Outcomes (SLOs) being assessed?

During the Demonstration Year, the MSC will collect student work (artifacts) for the LEAP Essential Learning Outcomes of Written Communication, Critical Thinking, and Quantitative Literacy. Institutions that collect the benchmark number of artifacts for each of these three outcomes have the option of also collecting student work for Civic Engagement.

Written Communication (WC)—the development and expression of ideas in writing.  Written communication involves learning to work in many genres and styles. It can involve working with many different writing technologies, and mixing texts, data, and images. Written communication abilities develop through iterative experiences across the curriculum.

Quantitative Literacy (QL)—also known as Numeracy or Quantitative Reasoning— is a "habit of mind" which includes competency and comfort in working with numerical data. Individuals with strong QL skills possess the ability to reason and solve quantitative problems from a wide array of authentic contexts and everyday life situations. They understand and can create sophisticated arguments supported by quantitative evidence and they can clearly communicate those arguments in a variety of formats (using words, tables, graphs, mathematical equations, etc., as appropriate).

Critical Thinking (CT)—a “habit of mind” characterized by the comprehensive exploration of issues, ideas, artifacts, and events before accepting or formulating an opinion or conclusion.


How will the Student Learning Outcomes be assessed?

During the Demonstration Year, student work will be assessed against the Written Communication, Quantitative Literacy, and Critical Thinking LEAP Essential Learning Outcomes, using the VALUE rubrics developed from 2007 to 2010 by teams of faculty members and other experts from public and private institutions of higher education across the United States.

MSC assessment will involve four-year and two-year institutions within each MSC state.

Involved faculty will develop/adapt/select an assignment as needed to allow students to demonstrate competency in relation to campus learning outcomes and VALUE rubrics and provide student work to the MSC to be assessed. For the Demonstration Year, the student work must align to the WC, QL, and CT rubrics described above.

The institution/department may choose to assess the same student work as it relates to its own degree programs; however, the MSC requires a separate evaluation conducted by trained faculty evaluators using the VALUE rubrics indicated above.

The student work will be evaluated both holistically (one overall score) and analytically (one score for each dimension) against each learning outcome and corresponding rubric.

How will the results from the Demonstration Year be used?

Institutions may use their results however they choose. Assessment results from the Demonstration Year will be aggregated and reported by sector for all dimensions of the rubric associated with each learning outcome. The Demonstration Year analysis will examine trends in the data as well as the feasibility, scalability, validity, and reliability of the MSC model.

But what if my institution has adapted the VALUE rubrics to be more appropriate to our campus?

Many institutions have modified the VALUE rubrics to reflect their own needs. However, in order to place the findings from this project within a broader national context, it is essential that the student work samples selected for multistate scoring be evaluated using unmodified VALUE rubrics. Institutions may, of course, evaluate their student work using modified rubrics for their own purposes.

How is the MSC process different from standardized assessments that are already being conducted at the state level?

The statewide assessment model is intended to forestall mandated standardized tests by providing a more rigorous demonstration of what students are learning in college. The aggregated rubric scores can provide normed evidence of the quality of student learning in state systems for external stakeholders while also giving faculty helpful information for improving teaching and student learning. Many state-level leaders acknowledge that standardized quantitative tests are not very useful for state policy purposes because they are difficult for lay audiences to interpret and do not lead to clear policy choices and decisions. Institutional leaders often argue that state-mandated standardized tests are of little help in their efforts to use learning outcomes assessment formatively to improve the quality of student learning.



What will the Demonstration Year process consist of and who is involved?

The Demonstration Year will involve the collection of a sample of student work during Fall 2015 and Spring 2016 terms, and the subsequent assessment of the student work in June 2016.

Assignments:  Faculty will be asked to submit assignment instructions along with the corresponding completed student work. Assignments must give students the opportunity to demonstrate written communication, quantitative literacy, or critical thinking skills; institutions have the option of submitting additional assignments that address civic engagement. Finally, faculty will be asked to indicate which dimensions of the appropriate VALUE rubric (possibly all of them) the assignment addresses.

Number of Student Artifacts:  Institutions will collect a target of 75-100 samples of written student work per outcome per institution.

Sampling:  Institutions will be provided with sampling guidelines that provide each institution with the flexibility to account for local constraints or preferences. (In many cases, the sampling guidelines created during the Pilot Year will be used.) Examples of parameters that will be part of the sampling guidelines are: requiring submitted student work to be drawn from more than one course, from across a range of disciplines, and from students nearing graduation.

De-identification:  Assignment instructions and student work will go through three levels of de-identification to ensure confidentiality at the institution level and anonymity at the state and multistate level.

Scoring:  Faculty scorers will be selected from public institutions across the participating states. Scorers will be blind to the institution and state from which the student work originates, and efforts will be taken to prevent scorers from assessing artifacts from their own institution or courses.

Results:  Individual institution results will be returned to each participating institution so that campus leaders can determine who should have access to them. No SHEEO agencies will have access to individual institution results unless the institution agrees to release the results to that entity.

Professional Development: The MSC will offer faculty professional development activities and networking to support the work.



What is the time commitment and what are the expectations for faculty if I am interested in participating?

Exact time commitment is dependent on the number of institutions participating, number of assignments (student work) submitted to be evaluated through the MSC, and the number of faculty evaluators.

The time commitment will vary among faculty. Volunteer faculty will need to select one assignment (with its student work) that directly aligns to one of the VALUE rubrics indicated above. If faculty are familiar with the VALUE rubrics, the time needed is minimal. If faculty are not familiar with the VALUE rubrics or the dimensions identified in each, more time may be needed to ensure that the student work they select directly aligns. Choosing to become a faculty scorer will also increase the time commitment.

If I choose to participate, are there any resources available?

Some resources will be made available to institutions to help in this process and institutions should be able to provide opportunities to gain experience with LEAP-based assessment and assignment design. Compensation will be provided for faculty members who volunteer to score student artifacts in the national scoring session.



What are the benefits for faculty and students participating?

  • Opportunity to work with other faculty to develop an assessment process based on authentic student work
  • Opportunity to learn more about using rubrics to assess learning, specifically the VALUE rubrics
  • Opportunity to learn how assessment is conducted
  • New ideas for improving their own teaching, and a clearer understanding of what they want to accomplish in their own courses and assignments
  • Ability to view and evaluate student work from other states
  • Opportunity to share knowledge with colleagues and peers
  • Opportunity to share the faculty perspective on what the data suggest
  • Opportunity to be instrumental in curricular redesign and in closing the loop in their states and on their local campuses


What is an assessable artifact and what types of assignments can be used as the assessable artifact?

An assessable artifact is simply a graded assignment in the course that addresses the WC, QL, CT, or CE VALUE rubrics. By using assignments—and the student work produced to fulfill them—from the class for assessment, the instructor does not have to design any other kind of assessable material. However, the instructor will need to ensure that the selected student work directly aligns with the appropriate rubric. Designing an aligned assignment takes practice and requires using the selected rubric to guide the design.

• Because the assignment is graded, students take it seriously, and instructors can genuinely see whether students can perform the SLO.
• By using materials designed by faculty for assessing individual performance in the class, the artifact can also be used in program assessment.
• The assessable artifact is any assignment that a faculty member believes will best demonstrate a student’s ability to meet the LEAP Learning Outcome that the course addresses.
• The assessable assignment in this project will be one from a student who is nearing completion of the degree in a 2-year or 4-year institution.
• Over time, the MSC will allow for a variety of formats to be submitted. For the Demonstration Year (and previous Pilot Year) and immediate future, only written assignments will be used.



How will an appropriate sample be ensured?

For reasons of scalability, the limited sample size from individual institutions may not provide meaningful results at the institution level; institutions desiring institution-level data will need to increase their sample size. Faculty who are engaged in the Demonstration Year work will be instrumental in shaping the future of this project.

Where can I find the rubrics that will be used to assess the assignments?

Please go to the AAC&U website [www.aacu.org/value] or search AAC&U’s main web page for “VALUE rubrics.”

I prefer confidentiality. How can you ensure confidentiality?

Each institution will be responsible for ensuring confidentiality. Once the faculty member submits the student work, the institution will be asked to scrub all identifying information, including student name, faculty name, and course section/ID, as needed. The institution will assign a "code" to each piece of student work. When a faculty scorer evaluates the student work against the rubric, all the scorer will see is the code assigned by the institution. The institution may choose to keep identifying information for internal assessment purposes and/or to run demographic and institutional data in aggregate form only.

How can comparing states and institutions be a good thing? That sounds dangerous to me.

The project leadership has taken this concern seriously. Keeping the results aggregated by sector for the entire MSC will protect individual institutions. Public presentations of results will be managed by the participating states and will use aggregated rather than individual institutional data. State comparisons overall and by sector will be at the discretion of the individual state and should prompt policy-level questions that are helpful—questions having to do with state-level investment in higher education, for example. Discussion among states about investment in higher education should be a good thing for institutions.



I am concerned that the results of the assessment may be used against faculty. Who will see the results?

Student work will be stripped of all identifying information about students and faculty members at the institution level before being forwarded to the MSC. Institutions will have the ability to maintain demographic and institutional data, which can provide trend data over time, as with any student data. All assessment data will be in aggregate form only, at the institution, state, and MSC levels.

If the institution chooses to participate, is there any paper work that will need to be completed (IRB, Student Informed Consent, etc.)?

Based on IRB guidelines, the Demonstration Year work in particular would qualify for exemption or, at a minimum, expedited review, given that there is minimal, if any, risk to students. Student identification will be protected (the de-identification process ensures results cannot be traced back to an individual student, institution, faculty member, or course). Should the model be implemented by participating states, the IRB may waive the requirement for informed consent or may grant waivers of written consent.



Who will be scoring the collected student work (artifacts)?

The evaluators will be faculty members from the institutions within the 12 states listed above. They may or may not be the instructors who volunteered to submit student work to be evaluated; however, preference will be given to faculty who have submitted student work for the Demonstration Year. The evaluators will not be asked to assess an assignment from students in their classes or from their home institution.

If I am interested in being a faculty scorer, whom should I contact?

As soon as institutions are identified for participation in MSC assessments, faculty will be notified and asked for expressions of interest in training/development activities, artifact submission, or becoming an evaluator (Faculty Scorer).
Please contact your local State Higher Education Executive Officer (commissioner) or MSC state lead for further information. 

Will there be any formalized training for faculty who are either participating in the MSC or evaluating the assignments?

Each participating state will develop its own processes for training. Training will be offered either through the faculty member's home institution or through a state-level session for all institutions within the state.
A faculty handbook entitled Getting Started: A Guide for Faculty Participants in the Multi-State Collaborative was drafted in 2014-15 and will be available by November 6, 2015 on the MSC Demonstration Year web pages at http://www.sheeo.org/projects/msc_dy.

If I need assistance regarding the MSC, whom do I contact?

Please contact your local State Higher Education Executive Officer (commissioner) or MSC State Point Person for further information. You may also contact Gloria Auer at SHEEO (gauer@sheeo.org, 303-541-1625), who can direct your questions to the appropriate person.

