Sunday, March 18, 2007

Department of Ed/our friends' snake

The other night, I had the opportunity to watch our friends Judith, Kathleen, and Emma's snake (a Mexican something or other) eat its once-weekly dinner. I won't go into too much detail here about the nature of the dinner (it was kind of gross, I thought), but the process was, shall we say, interesting. Basically, the snake (which is some kind of boa) "strangled" said dinner (even though it was already dead) and then swallowed it whole.

Not to be too dramatic here, but when I think about the way the Department of Ed is padding around the accreditation process on tiny feet, the snake thing comes to mind pretty quickly. I've written about this padding around in the first (introductory) chapter of the OMB - here's part of what's there:
...if one listens closely to the steady drumbeat around the issue of accreditation that has sounded since the appearance of the report from the Spellings Commission on the Future of Higher Education (which is closely analyzed in chapter four), it is possible to detect the tiny pitter patter of impending federal control. The accrediting process, an Inside Higher Education story notes, can be “a wedge” for “measur[ing] and report[ing] how much students learn … because changes in accrediting standards … have the potential to directly influence hundreds or thousands of colleges” (1/29/07). Since the appointment of Undersecretary for Higher Education Sarah Martinez Tucker (also a member of the Spellings Commission) in January, the Department of Education has begun to speak publicly about changes to the DOE’s relationship with accrediting agencies. Traditionally, these agencies have urged institutions to establish outcomes and methodologies for assessing those outcomes that make sense for the institution. As another Inside Higher Education story noted, “accreditors have primarily focused their judgment of institutions’ quality on whether an individual college is showing progress” (2/23/07), and have emphasized that long-term gains in the areas of process and professional development are as important as (if not more important than) showing the agencies the results of any assessment. But the Spellings Report noted that this focus on process, not product, was not producing reliable evidence attesting to institutional accountability.

In early January 2007, the DOE official who oversaw accreditation agencies left his position. In mid-January 2007, the DOE initiated a process to make changes to the rules governing the higher education accreditation process that would enable the DOE to legally regulate that process through accreditation agencies. Particularly alarming is the DOE’s desire to have institutions institute norm-referenced assessments across similar colleges and universities (using criteria that are not yet determined) – in other words, “to judge how well individual college are educating their students by comparing them to similar institutions...” (IHE 2/22/07). Second (and related), the DOE wants accrediting agencies to work with the institutions under their auspices to “agree to a core set of student achievement measures, both quantitative and qualitative, focused on those things the institutions have in common, and also on an acceptable level of performance for certain of those measures” (DOE white paper qtd in IHE, 2/22/07). The DOE has already taken steps of its own to initiate this kind of data collection, as well – it is on its way to developing a system called “Huge IPEDS” (or Integrated Postsecondary Education Data System), an online system that would cull information about how colleges and universities gather data about “accountability” on their campuses (e.g., whether they use the National Survey of Student Engagement, the Collegiate Learning Assessment, or other national surveys administered locally on college/university campuses), and then would potentially make that data nationally available. The sound of footsteps is certainly there – and while accreditation agency officials such as Stephen Crow from the North Central Association/Higher Learning Commission and George Kuh from Indiana University are laying out clear and cogent issues with this kind of assessment process, their objections are largely being ignored.
A footnote to the bit above (linked to the part about the Collegiate Learning Assessment) is this:
Of particular interest to writing instructors, incidentally, is the small print at the bottom of the page describing the CLA’s “sample performance task” writing prompts: “Scoring of writing prompts is powered by E-Rater, an automated scoring technology developed and patented by the Educational Testing Service and licensed to CAE” (http://www.cae.org/content/pro_collegiate_sample_measures.htm).

Of course, the OMB is intended, in part, to help WPAs develop strategies to change (or get ahold of) stories about writing on their campuses. At the national level, there are a whole bunch of super smart people/agencies working on this, from the accreditation agencies (like ours, the North Central Association/Higher Learning Commission) to NCTE. But still, it's kind of like the snake thing. Fortunately, we educators are not like the object of the snake's attention. But still.

Takeaway message: if you're one of the tens of readers here who are connected with composition and you hear anything about accreditation on your campus (since that seems to be the first "action point" here), see what's going on. And let's all be attuned to those seemingly not-so-important things that happen, say, around boring DOE processes (like the rule thing). It might be more than we think...
