Introductory Materials

The Distributed Mentor Project (DMP) Evaluation Report #2 is the second in a series of three reports produced as part of a three-year evaluation project (1995-1998) conducted by UW-Madison's Learning through Evaluation, Adaptation, and Dissemination (LEAD) Center. The report is based on interviews with, and surveys of, the mentors and students who participated in the Computing Research Association (CRA)-DMP during 1994, 1995 and 1996.

This report represents the second phase of an ongoing evaluation of the DMP. It builds upon the findings presented in the first report and presents new findings on issues that were not fully explored during the first phase of the evaluation. In the third phase of the evaluation we will gather more data and conduct more extensive analysis in order to address these issues in more depth. In addition, we will seek to better understand which of these issues apply across multiple program years and specific implementations and which appear to be context-dependent or ungeneralizable. Unlike the first report, this report does not provide a separate section discussing the mentors' experiences in the DMP; instead, we use mentor data to provide further illumination on issues raised by students.

The evaluation was conducted by staff of the LEAD Center. The evaluation team consisted of Baine B. Alexander, Associate Director of the LEAD Center, as project director; Debra Penberthy (August 1996-present), a full-time LEAD Center researcher; and Sue Daffinrud (July 1995-August 1996) and Heather A. Lewis (July 1995-August 1995), both graduate students in the UW-Madison Department of Mathematics, as project researchers.

I. Purpose

One purpose of this evaluation is to provide the CRA-DMP with formative feedback while the DMP is being implemented. Feedback is "formative" when it is used by decision makers to reflect on and analyze the project's goals and processes and then make any needed mid-course corrections. In this case, the "feedback" is literal in that much of the report consists of excerpts from interviews with DMP participants. These excerpts were carefully selected through an analysis of all of the interview data; those selected are viewed as particularly good articulations of significant themes that emerged from that analysis. A second purpose is to inform prospective mentors and student participants about the program and to help prepare them for participation.

II. Research Questions

The following are the research questions, developed with Professor Anne Condon, Principal Investigator of the CRA-DMP, that informed the evaluation's design:

(1) Are there measurable effects, positive and/or negative, resulting from the Distributed Mentor Project? More specifically,

Do undergraduates who participate in the DMP enroll in graduate school at higher rates than a matched comparison sample?

(2) If the answer to question #1 is "yes," what kinds of qualitative effects are experienced by the DMP students, and can patterns in mentee/mentor interactions be ascertained and associated with the measurable effects of the program? More specifically,

(a) What, if any, relationship is there between student response to the program and: various characteristics of the mentor's research project and/or methods of involving the student in that project

(b) Did the mentee and/or mentor believe that the DMP program helped effect changes in the student's:

(c) Did other factors in the DMP students' experience have an effect on their decision to attend graduate school, including:

(3) What, if any, special problems and/or satisfactions do faculty mentors experience as mentors in this program?

III. Guide to the Reader

The main body of this report (Tab 3) presents qualitative data on the students' and mentors' perspectives on the DMP. This section contains four sub-sections. In the first sub-section, we provide extensive contextual information regarding the DMP participants' undergraduate experiences. This provides important background for understanding the students' expectations of, and experiences in, the DMP. In the second sub-section we provide a comprehensive discussion of the participants' perceptions of the benefits of the program. (Readers who are primarily interested in program outcomes may wish to read this section only.) In the third sub-section, we discuss the essential elements of the program that brought about these program benefits. (Readers who are primarily interested in the function of program components, such as the research project, the female faculty mentor, and the research university setting, may wish to read only this section.) In the fourth sub-section, we address particular program implementation issues that were raised by the mentors through surveys and qualitative interviews.

In Appendix A we present the results of the surveys that were administered to program participants, mentors and a matched comparison group of students. (The purpose of the surveys was to allow us to triangulate across a range of different data sources during the analysis process. A complete analysis of the survey results will be conducted during the third year of the evaluation.) Appendix B contains the interview protocols for students and mentors, and Appendix C consists of copies of the surveys.

Intended Audience

This report is intended for a varied audience that includes: former and prospective student participants of the DMP, faculty who participated as mentors in the program, faculty who are considering participating in the program, and other interested individuals.

IV. Methods

We have thus far pursued these research questions through structured, open-ended interviews and through surveys. The reader should note that qualitative and quantitative research methods differ not only with respect to data collection but also with respect to analysis. Individual interviews allow the researchers to "get inside of" the experiences of these diverse participants. Data collection methods are as open-ended and subject-responsive as feasible to ensure that the experiences of the study participants, not the researchers, are reported. Likewise, analysis processes are fundamentally inductive to ensure that the participants' experiences shape the findings. In practice, this means that the researchers make every effort to at least temporarily suspend the ideas that structured their interview protocols. The analysis of interview transcripts is focused on determining what is most important to the participants. The primary analytical categories that emerge as the researchers process the transcripts are apparent in the table of contents. In contrast to survey methods, these methods do not yield precise, quantitative assessments of the proportion of participants holding pre-specified opinions. However, they provide extraordinarily rich information expressing the complexity of the experiences of the study participants.

Open-ended Interviews

We interviewed both mentor and student participants from the 1994, 1995 and 1996 programs. The structured open-ended interviews were conducted individually and lasted approximately one hour. The interview protocols for the students and mentors appear in Appendix B. All interviews were recorded and transcribed; transcriptions averaged twenty single-spaced pages.

We interviewed 10 of the 28 total 1995 student participants in the summer of 1995. Each student participated in three interviews: one at the beginning of her program, another upon the completion of her program, and a third the summer following her participation in the program. Interviewing the students at the beginning and end of the program allowed us to observe if and how the students' experiences and attitudes toward graduate school and research in CS&E changed over the course of the program. The third, or "year-out," interview allowed us to examine the long-term effects of the program on their career choices. We conducted one interview each with 10 of the 25 total 1994 student participants. (One of these students participated in both 1994 and 1995.) The purpose of these interviews was to develop an understanding of their experience in the DMP and also to assess the impact of the program on their career decisions. We interviewed 10 of the 21 1996 students prior to their participation in the program and 9 of these same students after the program (we were unable to contact one). We will be interviewing these same students in the Spring of 1997.

In the fall of 1995, we conducted a single interview with each of 9 of the 25 1995 mentors and 10 of the 24 1994 mentors. In 1996 we conducted a single interview with each of 9 of the 19 participating mentors. The purpose of these interviews was to understand the faculty's experiences and attitudes toward mentoring in the DMP and to provide a valuable perspective on the students' experience in the program.

Email Surveys

We have gathered extensive survey data from student participants and mentors. In addition, we surveyed a "matched" comparison group of students. This group was matched by gender, GPA, and class standing. The names of the comparison group students were obtained from chairpersons of departments (or other appropriate individuals) that had hosted DMP students and from applicants who met the selection criteria but who were either not accepted or declined acceptance into the program.

For the purposes of this report, the surveys were used to determine whether the findings from the qualitative interviews were representative of the experience of the mentor and student participants as a whole. Thus, although we have not included the quantitative data in most of the main body of this report, it has informed our findings. In Appendix A, we present the survey data in raw form. We plan to conduct a full analysis of the quantitative data during the third year of the evaluation; at that time we will fully integrate the qualitative and quantitative findings. To some extent this wait will be advantageous because the survey sample will be larger at the end of three years. This is particularly important for the matched comparison group. Below we provide necessary information on each of the surveys; see Appendix C for copies of each. All surveys were administered by email, unless otherwise noted.

Participants:

Comprehensive Student Participant Survey on Program Experience and Issues Relating to Undergraduate Experience. A survey was distributed via email to the 1994 and 1995 student and mentor participants in the fall of 1995. The response rates were as follows: twenty-two of the 28 1995 student participants and 11 of the 25 1994 student participants responded to the survey. Two of the students who responded to the survey participated in both 1994 and 1995, and we included their responses only in the 1995 survey results. The low response rate among the 1994 students results from the difficulty of contacting students who had graduated and did not have email addresses. In the fall of 1996 we administered a survey to the 1996 participants; twenty of the 21 1996 student participants responded.

Tracking Survey. In the summer of 1996, we attempted to track all of the 1994 and 1995 participants of the DMP to gather information on their current activities and future plans. Through email, postal mail, and phone surveys we were able to track 16 of the 25 1994 participants and 26 of the 28 1995 student participants. See Appendix A, page 6 for this information.

"Matched" Comparison Group:

Survey on Undergraduate Experience and Future Plans. In the Fall of 1995 we emailed this survey to 28 students who were matched with the 1994-95 participants; fifteen responded. In the Fall of 1996, we emailed a modified version of the survey to 27 students who were matched with the 1996 participants; thirteen responded. Every effort was made to increase the survey response rate.

Mentors:

Survey on DMP Experience. In the fall of 1995 the 1994 and 1995 mentors were surveyed. Twenty-one of the 38 total mentors responded to the survey. Of the 13 mentors who participated in 1994 only, 4 responded. Of the 11 mentors who participated in 1994 and 1995, 7 responded. Of the 14 mentors who participated in 1995 only, 10 responded. In the Fall of 1996 we surveyed the 1996 mentors. Fourteen out of 19 responded.

V. Notes on the Use of Verbal Quantifiers

Specific verbal quantifiers are used to denote the relative size of a group of participants who presented particular perspectives or described particular experiences in interviews. It is important to note that due to the nature of qualitative interviews, the size of a group who discussed a particular type of experience does not indicate the size of the group who had this type of experience. Although the same interview protocol was used in each interview, respondents' answers often prompted discussion in a particular area that may not have emerged in other interviews.

The verbal quantifiers used in this report are:

used when up to 30% of those interviewed presented the perspective under consideration

used when 30 to 70% of those interviewed presented the perspective under consideration

used when 70 to 90% of those interviewed presented the perspective under consideration

used when 90 to 100% of those interviewed presented the perspective under consideration  
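For readers who find it helpful to see the banding rule stated exactly, the short Python sketch below (our illustration, not part of the evaluation itself) maps the share of interviewees who voiced a given perspective onto the percentage bands listed above. The function name, the handling of boundary values (a share of exactly 30% falls in the lowest band, 70% in the second, and 90% in the third), and the label strings are illustrative choices rather than the report's own quantifier terms.

    # Illustrative sketch: classify the share of interviewees who presented a
    # perspective into the percentage bands defined in Section V above.
    # Band labels and boundary handling are placeholders chosen for illustration.
    def quantifier_band(num_presenting: int, num_interviewed: int) -> str:
        """Return the percentage band for a perspective raised in interviews."""
        if num_interviewed <= 0:
            raise ValueError("number interviewed must be positive")
        share = 100.0 * num_presenting / num_interviewed
        if share <= 30:
            return "up to 30% of those interviewed"
        elif share <= 70:
            return "30 to 70% of those interviewed"
        elif share <= 90:
            return "70 to 90% of those interviewed"
        return "90 to 100% of those interviewed"

    # Example: if 6 of the 10 students interviewed raised a given theme,
    # that theme falls in the 30 to 70% band.
    print(quantifier_band(6, 10))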

 
