Qualitative vs. Quantitative Assessments


Note: This section was added to address an issue raised by the evaluators shortly before the visit. The question concerned quantitative, as opposed to qualitative, measures of the outcomes and how the results of these quantitative measures are used for program improvements.

Several of our assessment methods include both qualitative and quantitative components. We first consider the various assessment instruments, describing for each its quantitative as well as its qualitative components, and then briefly list some recent program improvements based on the results of the assessments.

  1. The Exit Survey of graduating seniors is mostly quantitative. It begins with a set of questions that asks the respondent to rate the importance of each of our objectives and the associated outcomes on a scale of very important to very unimportant, and then, for each objective and outcome, to indicate, on a scale of strongly agree to strongly disagree, his or her agreement with the statement, "the program prepared me for this objective/achieved this outcome". The responses from the seniors for each year are then averaged as explained in the results page (a simple illustration of such averaging appears in the sketch following the list below). A second set of questions asks the respondent to rate the importance of faculty advising and staff advising on the same scale of importance, and the quality of faculty and staff advising that the individual student received on a scale of very satisfactory to very unsatisfactory. These responses are averaged in the same manner.

    The last part of the survey seeks qualitative responses. It asks the respondent to provide free-form answers to the questions, "What single aspect of the CSE program did you find most helpful? Explain briefly", and "What single change in the CSE program would you most like to see? Explain briefly".

    Results of the survey over the last several years are available. Note that, on that page, the results for 2004-'05 appear first, followed by the results for the earlier years. Note also that responses to the free-form questions are on a separate page linked from that page.

  2. The Alumni Survey, sent to alumni who graduated from the program two/six years ago, similarly includes a portion that asks the respondent to rate both the importance of a set of general educational outcomes, including the EAC Criterion 3 outcomes, and the degree to which the program developed the respondent's abilities related to those outcomes; a portion that asks the respondent to rate various aspects of his or her educational experience at Ohio State on a scale of excellent to unsatisfactory; and a portion that asks the respondent to rate the importance of each of our program objectives and the degree to which the program prepared the respondent with respect to that objective, using the same scales as in the exit survey. The responses from alumni for any given year (our return rate for these surveys is about 20%, which seems to be the norm for such surveys) are then averaged to obtain the results reported in the results page. The qualitative portion of the survey asks for free-form feedback from the respondent on any aspect of the program or changes he or she would like to see in the program.

    In addition, each year the survey includes a targeted component whose topic varies from year to year, ranging over such items as the importance of lifelong learning or of business skills for our graduates; this portion seeks both qualitative and quantitative feedback from the respondent on the particular topic targeted in that year's survey.

    Results of the survey over the last several years are available. (The results of the targeted components from various years are not available on-line.)

  3. The Supervisor Survey, sent to alumni who graduated from the program fifteen years ago, is similar to the alumni survey but does not include all of its components.

    Results of the survey over the last several years are available.

  4. The Advising Office Survey asks current students to rate various aspects of the advising that they receive, on an appropriate scale. It also includes a qualitative portion that asks for free-form feedback on changes that the respondent would like to see in the advising he or she receives. (The results of this survey are not available on-line; but Peg Steele should be able to address questions concerning this survey.)

  5. The Student Evaluation of Instruction (SEI), administered near the end of each quarter, asks each student in each course to rate his or her agreement, on a scale of agree strongly through disagree strongly, with statements concerning various aspects of the course. The qualitative portion of the survey asks the respondent for free-form responses to such questions as, "What single aspect of the course did you find most helpful? Why?" and "What single change in the course would you most like to see? Why?". Since the SEIs are confidential and are accessible only to the instructor in question and the department chair, results from this survey are not available to the Undergraduate Studies Committee. However, individual faculty are expected to address any problems identified in their respective SEIs. Bruce Weide (as well as Stu Zweben) should be able to address questions concerning the SEI.

  6. Student performance in assignments, projects, and examinations in individual courses: One issue with the assessment mechanisms described above (with the exception of the supervisor survey) is that they are self-evaluations. Student performance in individual courses is one of the most reliable direct ways to evaluate the extent to which the various intended learning outcomes of a particular course are actually achieved. Individual faculty teaching the various courses use the information from student performance in their respective courses when preparing the Course Group Report (CGR) for the group that the particular course belongs to. The individual course syllabi list the degree (on a scale of mastery/familiarity/exposure) to which each of the learning outcomes of the course is expected to be achieved. Note that in evaluating the actual average achievement level of students with respect to any particular outcome, the faculty consider the performance of the students in activities (such as particular homework assignments, particular exam questions, particular project activities, etc.) that are concerned with the specific topics related to that particular outcome, not their overall performance in the course. This is an important distinction because the overall performance of students in a course would not give us information about the student achievement level with respect to particular intended outcomes of the course.

    If the average actual level of achievement for a given outcome, as estimated by the various faculty involved in teaching recent offerings of the course, differs from the expected level by more than one level, the faculty, as a group, have to analyze the causes and possible solutions (a simple illustration of this comparison appears in the sketch following the list below). The causes can vary over such things as inadequate background of students taking the course (which, in turn, suggests a potential problem in the prerequisite course(s)); an outdated learning outcome in the course syllabus (perhaps the field has evolved and the particular outcome is no longer as relevant as it was when the syllabus was created); particular faculty favoring one topic at the expense of another, with the result that the learning outcome associated with the latter suffers; etc. The solutions would similarly vary from case to case. But in all these cases, the faculty group has to document, in the CGR, the actual level of achievement of each learning outcome of each course in the group, analyze any differences with the expected levels, and propose suitable changes to address any problems. The course syllabi also specify the level of contribution (on an appropriate scale) that the course makes to achieving various program outcomes as well as EAC Criterion 3 outcomes. The faculty preparing the CGR likewise analyze any differences between the stated levels of contribution and the actual levels, and propose changes to reduce or eliminate them.

    In addition, the CGR includes a detailed narrative about such things as the direction in which the field is moving, any relevant feedback about the course material that the faculty may have received from such sources as colleagues at other universities, feedback from industry experts they may happen to have interacted with, etc. Further, the discussion in the Curriculum Committee when the draft CGR is presented often results in further refinements of the faculty group's proposals.

    It is this combination of assessment based on actual student performance in specific components of the courses, narrative based on faculty knowledge, expertise, and active engagement in the particular community, and additional feedback from faculty in related areas that makes the mechanism a valuable tool for both assessment and feedback.

    There is a natural mix of qualitative and quantitative aspects in this approach. On the one hand, the performance of students in specific graded activities dealing with specific portions of the course is a key quantitative component; on the other hand, faculty judgment about where the field is headed, etc., has a natural qualitative aspect to it. The discussions within the faculty group, followed by the broader discussions in the Curriculum Committee, take account of all of this in arriving at suitable improvements.

  7. Assessment of Capstone Design Courses: Given the importance of the capstone design course in a student's curriculum, and given that we have six different courses designated as capstone courses, we have established a clear set of requirements against which all our capstone courses are evaluated. The course coordinators for these courses are responsible for presenting a detailed assessment of their respective courses against these criteria and for proposing appropriate improvements in the courses. This is somewhat similar to the CGRs, but it assesses all the capstone courses against a common set of requirements.

  8. The Undergraduate Forums, held once a year, provide valuable qualitative information about various aspects of the program. The forums are attended by a number of faculty as well as some of the advisors from the Advising Office. Student attendance at the forums varies from year to year and has been somewhat of an issue; but the students who do attend tend to be very interested in the welfare of the program and come with many ideas for changes. The forum thus provides qualitative data and suggests possible improvements.

  9. Feedback Processes: Before briefly describing a few recent improvements based on the results of the various assessment mechanisms described above, it may be useful to list the main feedback processes that we use to analyze the results of the assessments and arrive at improvements. The main feedback processes are the analysis of the survey results in the Undergraduate Studies Committee; the preparation of Course Group Reports and capstone course assessments by the respective faculty groups and course coordinators, and their discussion in the Curriculum Committee; and the discussions at the annual Undergraduate Forum.

  10. Some Recent Program Improvements Based on Assessment Results:
    The improvements listed below, the assessment results that they are based on, and the discussion/analysis in the Undergraduate Studies Committee meetings and/or Curriculum Committee meetings are documented in the minutes of the committee meetings. For each improvement, there is also an indication of whether it was based mostly on quantitative (Qt) data from the assessments, or qualitative (Ql) data, or both (Qtl).
    1. Addition of Communication 321 (course on public speaking) as a required (general education) course (Qt): This was based on results from both the alumni and supervisor surveys as well as the exit survey over several years concerning the importance of oral communication skills.
    2. Addition of Econ 200/201 as a required (general education) course (Qt): This was based on results from the targeted component of the alumni survey from three years ago concerning the importance of basic business knowledge for our graduates.
    3. Additional flexibility in the science requirements (Ql): Until some years ago, we required our students to take a specified set of physics and chemistry courses. Based on the results of alumni surveys, we made this requirement more flexible, allowing the last of these courses to be a physics course, a chemistry course, or a biology course.
    4. Change in how credit for CSE 201/202 is awarded based on AP credit (Ql): At last year's Undergraduate Forum, several students commented that awarding credit for CSE 201/202 based on a minimum score of 2 on the AP-CS exam was not reasonable, since such a score did not indicate an adequate skill level. Based on this, and on further analysis by members of the Undergrad Studies Committee, we revised the policy so that students must now earn a minimum score of 3 on the AP-CS exam to receive credit for CSE 201/202.
    5. Changes in CSE 321 (Qtl): Based on analysis in the recent Software Spine CGR, and on discussions in the Curriculum Committee when the report was presented, CSE 321 now requires students to read a current article or classic research paper, discuss it in class, and write an analysis of the article or paper. This should help develop students' communication skills and should also help them see the importance of the topics discussed in the class. The change was also motivated by the results from the exit, alumni, and supervisor surveys concerning the importance of both of these factors. Depending on how successful the change in CSE 321 is, we will consider extending it to some other, perhaps 600-level, classes.
    6. Changes in CSE 655 (Qt): Based on analysis in the recent Programming Languages CGR, the faculty realized that there was a substantial difference between the degree, as stated in the course syllabus, to which the outcome related to Algol-like scope rules was expected to be achieved, and the actual degree to which this outcome was achieved in recent offerings of the class. Following discussion of the CGR in the Curriculum Committee, the involved faculty decided to make changes in both the syllabus and the way the course is taught in order to achieve a middle ground.
    7. Interviewing workshops (Ql): Based on results of the exit survey (minutes of the Undergrad Studies meeting of April 15, '04), we have arranged for Engineering Career Services to offer its Interviewing workshops in some of our capstone courses. (The minutes of the April 22 and April 29 meetings, which are both on the same page, describe further detailed analysis of the exit survey results. Note that both qualitative and quantitative results are discussed.)
    8. Improvements in Advising Office services (Qt): The Advising Office Survey results have been generally satisfactory. One recent improvement based on the results of this survey has been a deliberate effort to increase the number of advising contacts per student this year, especially for those students not making satisfactory progress in the major courses.
    9. Changes in CSE 778 (Qtl): During the assessment of CSE 778 against our capstone course criteria, one question was to what extent it met the oral communications requirement. All students were indeed making presentations in the class, but the coordinator felt that the students didn't have their hearts, so to speak, in the presentations. The question was how to improve the quality of the presentations. Discussion in the committee (followed by e-mail discussions) led to the idea that, rather than requiring all students to talk about their design projects, which tended to result in very similar and, hence, somewhat listless presentations (since all student teams worked on the same design task), students be allowed to talk about a recent VLSI technology or tool. This has greatly improved the quality of the presentations.
    10. Software practitioners as faculty (Qtl): Data from several assessment instruments over the years have stressed the importance of exposing our students to current industrial practices. The recent hiring of software professionals with many years of industrial experience as faculty in the program was partly in response to these data.
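
As a concrete illustration of the quantitative component of the exit and alumni surveys (items 1 and 2 above), the following sketch shows one way the averaging of scaled responses might be carried out. The five-point numeric mapping, the function name, and the sample responses are assumptions made for illustration only; the actual scales and computations used in the results pages may differ.

    # Illustrative sketch only: averaging Likert-style survey responses for one
    # outcome in one survey year. The 5-point mapping below is an assumption,
    # not necessarily the scale used in the actual exit/alumni survey results.
    LIKERT = {
        "strongly agree": 5,
        "agree": 4,
        "neutral": 3,
        "disagree": 2,
        "strongly disagree": 1,
    }

    def average_responses(responses):
        """Average the numerically mapped responses, ignoring unrecognized entries."""
        scores = [LIKERT[r.lower()] for r in responses if r.lower() in LIKERT]
        return sum(scores) / len(scores) if scores else None

    # Hypothetical responses from one year's graduating seniors for one outcome.
    example = ["strongly agree", "agree", "agree", "neutral", "strongly agree"]
    print(round(average_responses(example), 2))  # prints 4.2
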
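Similarly, the Course Group Report comparison described in item 6 above (flagging any learning outcome whose estimated actual achievement level differs from the expected level by more than one level on the exposure/familiarity/mastery scale) can be pictured as follows. The numeric encoding of the three levels and all names in the sketch are assumptions for illustration, not part of the CGR process itself.

    # Illustrative sketch of the CGR check from item 6: flag any course learning
    # outcome whose estimated actual achievement level differs from the expected
    # level (per the syllabus) by more than one level.
    LEVEL = {"exposure": 1, "familiarity": 2, "mastery": 3}

    def outcomes_needing_review(expected, actual):
        """Return outcomes whose expected and actual levels differ by more than one level."""
        flagged = []
        for outcome, expected_level in expected.items():
            actual_level = actual.get(outcome)
            if actual_level is None:
                continue
            if abs(LEVEL[expected_level] - LEVEL[actual_level]) > 1:
                flagged.append(outcome)
        return flagged

    # Hypothetical syllabus expectations and faculty estimates for one course.
    expected = {"scope rules": "mastery", "parameter passing": "familiarity"}
    actual = {"scope rules": "exposure", "parameter passing": "familiarity"}
    print(outcomes_needing_review(expected, actual))  # prints ['scope rules']
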
Conclusion: Based on the information provided above and in more detail elsewhere, the program seems to meet the CAC and EAC criteria requirements with respect to assessments and program improvements based on the results of the assessments.