History of Assessment in the Student Affairs Setting
Although it has come to the forefront of practice in recent years, formalized assessment in Student Affairs is a relatively new concept. It wasn’t until the 1980s that discussion of the importance of assessment left the periphery of practice and literature. In the 21st century, assessment in Student Affairs has become “a way of life,” allowing practitioners to demonstrate anything from utilization to effectiveness of a given program or initiative.
Please find below more information on the history of assessment, as detailed by John H. Schuh in his article “How Did We Get Here?” cited in the reference section.
“Assessment has become a commonly accepted practice according to Sandeen and Barr (2006), two leading student affairs educators of my generation. They observed, “Assessment is now the most powerful movement in American higher education” (p. 154). Cooper & Terrell (2013, p. 4) add, “There is no doubt that assessment work is becoming woven throughout most aspects of college campuses across the country.” But such has not always been the case. Ewell (2009, p. 5), for example, asserts that the assessment movement can be traced as follows, “in 1987 the so-called ‘assessment movement’ in U.S. higher education was less than five years old.” Exactly when and why assessment in student affairs began is difficult to determine, but there are certain historical documents that mark developments in the assessment movement that can be used to inform contemporary student affairs educators as to why the contemporary landscape of student affairs practice includes an emphasis on assessment. This piece has been designed to identify some of the developments that, in the opinion of the author, contribute to the development of assessment as an essential element of student affairs practice. It will highlight a few historical documents that have helped to provide a foundation for assessment in student affairs.”
The Early Years: Led by the Student Personnel Point of View
The Student Personnel Point of View (SPPV) is seen as a foundational document of student affairs practice (National Association of Student Personnel Administrators [NASPA], 1989) and was published originally by the American Council on Education. “Assessment” does not appear in the document though it does refer to “evaluation” once, in the context of institutional effectiveness. The SPPV was updated in 1949 and, again, the term “assessment” did not appear in the document. The document did identify “A continuing program of evaluation of student personnel services and of the educational program to ensure the achievement by students of the objectives for which this program is designed” as an element of the student personnel program (NASPA, 1989, p. 37). This assertion, one of 17 elements of a student personnel program (the language of the day), frames what we would term in contemporary lexicon as assessment for accountability purposes. That is, student affairs practice, in the opinion of the authors, included an element dedicated to determining if the goals of the student affairs program were met.
Works published in the 1950s and 1960s also referred to evaluating the extent to which program goals had been achieved. For example, Woolf and Woolf (1953, p. 8) asserted, “Continuous evaluation is needed to show us where our efforts have succeeded and where improvement can be made.” A decade later, Mueller wrote of action research as it applied to the student personnel program with this observation: “The evaluation of personnel work, both of its total programs and its various specialties, is best planned as action research” (1961, p. 551). Whether or not these books capture widely-held perspectives on evaluation in the middle of the 20th century is unknown, but it is safe to assert that the role of evaluation (as a proxy for assessment) was less central than it would become.
Assessment in the 1970s, 1980s and 1990s: The Assessment Movement Begins
Often overlooked in the literature is the contribution to the assessment movement made by Luann Aulepp and Ursula Delworth through their publication Training manual for an ecosystem model, which was released in 1976 and focused on a process by which they asserted the campus environment could be assessed. They concluded, “The ecosystem model’s design process is utilized to identify environmental-shaping properties in order to eliminate dysfunctional features and to incorporate features that facilitate student academic and personal growth” (p. ix). The heart of the model was to measure student perceptions of their environments and then learn about student/environment fit and design better environments. Particularly intriguing about this approach was the incorporation of environmental referents (ERs) in the process (Aulepp & Delworth, 1976, p. 40). The ERs were a central element in redesigning campus environments to fit with student needs. Environmental assessment was discussed in other volumes (for example, Huebner, 1979; Morrill, Hurst, Oetting, and others, 1980), but this approach never appeared to be adopted widely. I used it for nine consecutive years at Indiana University and several times at Wichita State (for example, Schuh & Veltman, 1991), but otherwise, environmental assessment projects were not reported widely in the literature.
Other important reports that evaluated the student experience were released in the 1970s, such as Four Critical Years (Astin, 1977), but widespread adoption of assessment practices was absent from the higher education landscape. A notable exception to this assertion was the continued work of the Cooperative Institutional Research Program at UCLA, which administers the Freshman Survey to students before they start classes. This program has been in operation since 1966 and is widely cited.
A significant impetus to the assessment movement was the work of the Study Group on the Conditions of Excellence in American Higher Education (1984), which released the report Involvement in Learning. In this report the authors identified assessment and feedback as conditions of excellence in higher education. The authors asserted, “The use of assessment information to redirect effort is an essential ingredient in effective learning and serves as a powerful lever for involvement” (p. 21). They also provided five recommendations for assessment and feedback in their report. One of their recommendations has direct implications for the work of student affairs educators. They observed, “Faculty and academic deans should design and implement a systematic program to assess the knowledge, capacities, and skills developed in students by academic and co-curricular programs” (p. 55). Assuming that student affairs educators have primary responsibility for the co-curricular program, this recommendation suggests an accountability dimension related to student learning that cannot be ignored. Faculty and deans are unlikely to directly assess learning from the co-curricular program; they are much more likely to ask student affairs educators for evidence of student learning from co-curricular programs. Another important recommendation of the study group was that accrediting agencies hold institutions accountable for student learning (p. 69). This recommendation largely has been adopted by the regional accrediting agencies.
Alexander Astin, a member of the study group, built on this report with his volume Achieving Educational Excellence (1985). In his view of talent development, Professor Astin observed, “Assessing its (the institution’s) success in developing the talents of its students is a more difficult task, one that requires information on change or improvements in students’ performance over time” (p. 61). He added, “I believe that any good college or university assessment program must satisfy two fundamental criteria: it must be consistent with a clearly articulated philosophy of institutional mission and it should be consistent with some (italics in original) theory of pedagogy or learning” (p. 167). This book was followed by Assessment for Excellence (Astin, 1991) that was “…designed for use by anyone who is involved in or interested in the practical uses of assessment: faculty, administrators, researchers, policy analysts, and governmental officials” (p. x). Focusing mostly on student learning and development outside the classroom, Kuh, Schuh, Whitt & Associates (1991) advocated using an “auditing” process to be used in “…assessing the quality of the out-of-class experience” of students (p. 264). This publication provided a systematic approach to determine the extent to which the out-of-class experiences and learning of undergraduates were compatible with the educational purposes of the institution they attended. Their work paralleled that of Astin in that the authors emphasized that student learning needs to be consistent with an institution’s mission and that it could vary from college to college depending on the goals and purposes of the institution.
The American College Personnel Association released The Student Learning Imperative (1996) in a special issue of the Journal of College Student Development and this document extended the work of assessment in measuring student learning. The Student Learning Imperative recommended “…student affairs staff should participate in institution-wide efforts to assess student learning and personal development and periodically audit institutional environment to reinforce those factors that enhance, and eliminate those that inhibit, student involvement in educationally-purposeful activities” (p. 121). Immediately following this document, Alexander Astin provided an update of Involvement in Learning. He asserted, “Assessment is a potentially powerful tool for assisting us in building a more efficient and effective educational program” (1996, p. 133).
Lee Upcraft and I published our book Assessment in Student Affairs also in 1996. This book focused on the reasons for assessment and how to do assessment, focusing very specifically on student affairs practice. Previous work had been more widespread in scope, in our view, and we were interested in student affairs practice. Between the two of us we had over 50 years of practical experience in student affairs at several universities and we thought we could provide a service to our profession to share some of our insights about assessment practice. Both of us had been engaged in numerous assessment projects and believed that the higher education environment had evolved to a point where demonstrating effectiveness in student affairs was mandatory.
The Current Century
Whether we were prescient or not is open to debate, but in this century additional reports have been released that emphasize, at least in part, how important assessment is in higher education in general, and in student affairs in particular. The National Association of Student Personnel Administrators and the American College Personnel Association published Learning Reconsidered (2004), a document whose purpose was “to re-examine some widely accepted ideas about conventional teaching and learning, and to question whether current organizational patterns in higher education support student learning and development in today’s environment” (p. 1). In this document the authors asserted, “Assessment must be a way of life—part of the institutional culture” (p. 26). The authors also urged campuses to “…focus primarily on student learning (italics in original) rather than on student satisfaction” (p. 27). The document was updated in 2006 (Keeling, 2006) and includes a heavy emphasis on assessment, including a self-assessment template for a student affairs practitioner (Bonfiglio, Hanson, Fried, Roberts & Skinner, 2006, p. 50) that asks these important questions that are particularly relevant in our current environment:
• How do I contribute to student learning at my institution?
• How do I contribute to integrated learning at my institution?
• Is integrated learning one of my top daily priorities?
Additionally, Project DEEP (Documenting Effective Educational Practices), a study of 20 institutions with higher-than-predicted graduation rates and scores on the National Survey of Student Engagement, found that assessment played a central role in the life of these institutions (Kuh, Kinzie, Schuh, Whitt, & Associates, 2005/2010). The institutions were committed, according to the study, to continuous improvement. The authors concluded, “Most DEEP schools systematically collect information about various aspects of student performance and use it to inform policy and decision making” (p. 156).
The report A Test of Leadership (The Commission appointed by Secretary of Education Margaret Spellings, 2006, p. 21) also provides a useful marker with respect to the assessment movement in this century. One of the Commission’s recommendations was as follows:
“To meet the challenges of the 21st century, higher education must change from a system primarily based on reputation to one based on performance. We urge the creation of a robust culture of accountability and transparency throughout higher education. Every one of our goals, from improving access and affordability to enhancing quality and innovation, will be more easily achieved if higher education institutions embrace and implement serious accountability measures.”
While this federal report did not focus specifically on student affairs practice, it illustrates the environment in which higher education finds itself, and the seriousness with which accountability measures are being suggested for colleges and universities.
The conclusions of these reports support the assertion that assessment has become an increasingly central element of organizational life in higher education in general and in student affairs practice in particular. These persuasive documents assert that student affairs practitioners cannot afford to ignore, obfuscate, or refuse to be engaged in assessment activities, and that these assessment activities, ultimately, lead to enhanced student learning. Ewell (2010, p. ix) summarized the contemporary assessment environment this way: “These accountability demands are real, they are justified, and they are likely to be permanent.”
Conclusion