
Using collaborative approaches with a multi-method, multi-site, multi-target intervention: evaluating the National Research Mentoring Network

Abstract

Background and purpose

The NIH-funded National Research Mentoring Network (NRMN) aims to increase the representation and success of underrepresented groups (URGs) in biomedical research by enhancing the training and career development of individuals from diverse backgrounds, communities, and cultures. The national scope of NRMN and its wide array of innovative programs in mentor and mentee matching and training across the career spectrum require a collaborative evaluation strategy that addresses both internal and external evaluation needs. Because of the variety of programs implemented for each target group, the NRMN program is responsible for its own process and short-term outcome evaluations, while the national Coordination and Evaluation Center (CEC) is responsible for assessing the medium- and long-term effectiveness of the implemented strategies and program sustainability. Using a collaborative, utilization-focused evaluation framework, both internal NRMN evaluators and the CEC are working to translate findings into information that can be used to make both short-term and long-term decisions about the efficacy and reach of the NRMN model. This information can then inform efforts to institutionalize the current programs and potentially replicate them elsewhere.

Program and key highlights

The overall evaluation of NRMN is guided by both outcome and process questions that are tailored for each target group. The different target groups include faculty and others who serve as mentors, mentees across academic training and career stages, and researchers without a history of independently funded research. NRMN is also building the capacity for training those pursuing biomedical careers by developing “master trainers” for both mentoring and grantsmanship programs in organizations and institutions that can support expanded training efforts aimed at diversifying the biomedical workforce.

Implications

Results of this evaluation will be used to inform the design and implementation of sustainable, effective, and comprehensive mentoring and career development initiatives that promote diversity in the biomedical research workforce. Our collaborative evaluation design, theoretically derived measurement instruments, efficient data systems, and timely reporting serve as an example of how to put the evaluation principles described here into practice for large, multi-site, multi-dimensional research training programs like NRMN.

Background

The overall goal of the National Research Mentoring Network (NRMN) is to increase the diversity of the biomedical research workforce by enhancing mentorship and career development of individuals from diverse backgrounds, communities, and cultures. The NRMN implements a complex set of interventions and delivery mechanisms to reach trainees, educators, and researchers in biomedical disciplines across the United States to improve their skills as mentors and mentees, and their ability to be effective grant proposal writers. The long-term goal is to increase the number of individuals from underrepresented groups (URGs) who become NIH-funded researchers [1]. We have adopted the URG terminology used by NRMN since the NIH priorities for diversifying the research workforce include individuals from underrepresented racial/ethnic minorities, persons with disabilities, and individuals from other disadvantaged backgrounds, such as low-income families [2]. Evaluating the efficacy and reach of NRMN programming in the short and long term is critical because it informs which interventions NIH and other research organizations can use in their efforts to diversify the research workforce.

The innovative format, complexity of activities, and national scope of NRMN activities require a coordinated evaluation to determine the effectiveness of strategies and sustainability of the efforts. This evaluation is especially important given the limited empirical evidence on the value of mentoring programs specifically in higher education and the challenge of measuring theory-informed short-term outcomes [3]. This article describes the early-stage development and implementation of the collaborative evaluation plan for the NRMN, the results of which will be critical to understanding how to improve mentoring and professional development for URGs, and, in turn, support their sustained biomedical research career success.

The National Institutes of Health Diversity Program Consortium (NIH DPC) is the national collaborative under which the NRMN, as well as the Coordination and Evaluation Center (CEC) at UCLA, were established. The overall Consortium-Wide Evaluation Plan (CWEP), its Hallmarks of Success (both short- and longer-term outcomes of the project) [4], and the NRMN and CEC logic models were used to guide the evaluation plan for NRMN. NRMN program evaluators and CEC evaluators developed this plan collaboratively over time. NRMN principal investigators and their internal evaluation team worked with the CEC to ensure that the CEC plan and its implementation reflected the interests and focus of both the NRMN and the CEC, covered both shorter-term and longer-term outcomes, and incorporated relevant evaluation activities into the broad Consortium-wide evaluation plan. The nationwide scope of NRMN, its organization into multiple cores with diverse activities, and the high level of research expertise on specific activities in each NRMN core led the Consortium to design the external evaluation as a sequential process rather than a more traditional overlapping process. Conventionally, an external evaluator would examine a project at the same time as any internal program evaluation, with each having a different focus. The CEC's evaluation of NRMN, by contrast, is tightly coordinated over time: the CEC has worked with NRMN to ensure that NRMN's short-term evaluation metrics include the participant data needed for longer-term follow-up, and at the completion of each program NRMN hands off participants' baseline data to the CEC for the longer-term component of the evaluation.

The evaluation of the NRMN outcomes is guided by both process and short-term outcome questions. Representative short-term questions to be answered by NRMN evaluators include: What strategies are used to promote the value of mentoring and the NRMN network nationwide? What immediate and short-term changes in participants result from exposure to NRMN programs (e.g., access to new mentors, improved mentoring relationships, improved grantsmanship skills, increased self-efficacy)? Does the reach of NRMN meet, exceed, or fall short of delivering outcomes necessary to make significant contributions to the broader, long-term NRMN goal of diversifying the biomedical workforce?

Longer-term questions to be answered by CEC evaluators include: What is the effect of NRMN participation on medium-term predictors of long-term success as NIH-funded researchers (e.g., grant submissions, published research articles, career advancement), especially for those from URGs? Does the effect of NRMN programs on participants (e.g., mentoring and grant writing skills) continue over time? How well do the various programmatic components of NRMN interact with each other, and how does this affect program capacity, success, and sustainability? These evaluation questions were drawn from an integrated logic model developed by NRMN (Table 1) and the relevant DPC Hallmarks of Success, as well as the required outcomes in the NIH Request for Applications.

Table 1 NRMN Integrated Logic Model

Evaluation approach

The NRMN evaluation is characterized by its utilization-focused approach [5, 6], fulfilling one of the CEC’s fundamental roles in turning data into information that can be used to inform decisions across the Diversity Program Consortium, as well as being an important source of information for NRMN program leadership and NIH (i.e., decision makers) [7]. This approach offers a set of principles for conducting evaluations that support data-based decisions and promote individual and organizational learning. To foster meaningful use of data within NRMN, a concerted effort is made to engage users of evaluation data and findings.

To facilitate organizational learning, NRMN established an evaluation team composed of approximately 12 to 15 representative members from across the NRMN network plus members of the CEC evaluation team. This team is highly collaborative and brings together a community of researchers who listen to, contribute to, critique, and advance the evaluation work, including developing strategies to promote use of the evaluation and its findings.

To support active use of information by program leaders and implementers, the evaluation focuses on questions related to mentor and mentee skills, research self-efficacy, trainee experiences, and early-career research outcomes. A common set of shared measures was identified for use by all program components so that data can inform both the short-term and long-term evaluation questions. These shared measures are compiled in an NRMN Measurement Library developed by members of the NRMN Evaluation Team. To promote use, performance metrics are not only reported and/or made available to all stakeholders (i.e., NRMN program leaders, CEC evaluation team members, and the DPC Executive Steering Committee), but also serve as the basis for discussions of evaluation data gathering and analysis. Presentations and discussions of both short-term and long-term evaluation activities and emerging findings are embedded within annual consortium-wide meetings, biannual NRMN Key Personnel Meetings, and bimonthly NRMN Evaluation Team meetings, as well as a variety of program-level meetings that occur across NRMN's national network. NIH program representatives participate in many of these meetings and events, providing continuous input concerning performance metrics that are important to NIH stakeholders. The NRMN and CEC receive funding through a cooperative agreement with NIH, which provides much greater sponsor involvement in program implementation and evaluation than in traditional research or training grants. This coordinated effort ensures that the evidence and outcomes measured and tracked meet the needs of program administrators as well as external stakeholders, including NIH.

Alongside a collaborative, utilization-focused approach, the national NRMN evaluation intentionally attends to the context in which the evaluation is conducted. In this way, the NRMN evaluation recognizes the central role of contextually defined values and beliefs held by program stakeholders and their program evaluators [8]. By paying attention to that context (for example, by asking what the stories of the communities and people participating in NRMN are, and who is telling them), the NRMN evaluation is designed to yield an accurate, valid, and grounded understanding of what and who are being evaluated [8]. This contextually responsive position, coupled with the utilization-focused evaluation approach, complements the overall Consortium goals, ensuring that evaluation stakeholders and participants see themselves appropriately reflected within the evaluation and central as the ultimate users of its findings.

Description of the collaborative evaluation design and methods

The NRMN evaluation employs a multi-method and mixed-methods process and outcome design that includes standard assessment surveys, observations, quasi-experimental comparisons, and longitudinal methods. Basic participant demographics and background information are collected through the NRMNet portal and registration system. Administrative and process outputs and participant short-term outcome measures are tracked by NRMN program staff and/or evaluators across NRMN in a variety of data systems created during the program's start-up stage. These measures cover outputs for programs and participants such as: programs created (descriptions) and events offered (types, descriptions, locations, start/end dates, expected/actual enrollment counts, and, for many, participant rosters); pre-/post-intervention metrics related to satisfaction, skills, self-efficacy, and planned or actual self-reported changes in participant behavior (where applicable); and, for early-career researchers in the grant writing programs, measures of change over up to 18 months post-training that identify grants written, submitted, revised, abandoned, and awarded, and the resulting research publications within that timeframe.
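For concreteness, the sketch below shows one way such program and participant output records might be structured. The field names and types are illustrative assumptions only and do not represent the actual NRMN data dictionary.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class ProgramEvent:
    # Illustrative fields; the real NRMN data systems define their own schema.
    event_id: str
    program: str                  # e.g., "mentor training", "grantsmanship"
    event_type: str
    location: str
    start_date: date
    end_date: date
    expected_enrollment: int
    actual_enrollment: int


@dataclass
class ParticipantAssessment:
    # One row per participant per time point ("pre", "post", or a follow-up
    # wave up to 18 months post-training for grant writing programs).
    participant_id: str
    event_id: str
    timepoint: str
    satisfaction: Optional[float] = None
    skills_score: Optional[float] = None
    self_efficacy: Optional[float] = None
    grants_submitted: Optional[int] = None
    publications: Optional[int] = None
```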

Longer-term outcomes that are included among the Hallmarks of Success will be tracked over time by the CEC. Hallmarks that apply to NRMN include increased research productivity (as measured by research grant submissions and peer-reviewed publications of early-career faculty and post-doctoral trainees), the improved and sustained perceptions of self-efficacy of trained mentors and early career researchers, and the increased availability of trained mentors for mentees across the country. Additionally, several longer-term outcomes address NRMN’s impact on institutionalization including indicators of program sustainability and the formation of durable networks and partnerships. These data will initially be collected in conjunction with the NRMN’s process evaluation and then the CEC will continue to collect relevant data for long-term outcomes.

Sample

NRMN's primary targets are early-career professionals (post-doctoral trainees and junior faculty) and postsecondary students (undergraduate through doctoral) who are training for, or engaged in, biomedical research or careers, along with those who mentor these students and early-career researchers. Secondary targets are those who can serve as trainers and coaches to the primary targets. Hence, NRMN's primary participants engage in activities as trainees/mentees, mentors, or both, and NRMN's secondary participants are those who train or coach the primary participants.

Mentorship has been defined as a reciprocal, dynamic relationship between mentors and mentees that promotes the satisfaction and/or development of both [9]. A mentor has also been defined as a person who is assigned to work with you on research or who is responsible for providing direction to you, supervising you, helping you, and answering your questions. For the CEC, a mentor is someone who provides guidance, assistance, and encouragement on professional and academic issues. For NRMN, mentors also include those advising or coaching mentees in grant writing development and/or career planning. Mentors and coaches can be engaged in a variety of relationships with mentees, including peer, informal, informally linked, and formally defined relationships. The CEC defines a mentee as someone who receives guidance or assistance from a mentor. NRMN defines mentees as those seeking and receiving the "(1) personal and professional competencies necessary to define their career goals, (2) experience needed for realizing their career goals, and (3) ability and opportunity to progress toward their chosen career goal" [10]. The CEC and NRMN use slightly different definitions of mentors and mentees for accountability and reporting purposes, as the NRMN short-term evaluation needs to account for participation of individuals in designated roles while the CEC evaluation focuses on the long-term effects of the programs.

NRMN participants are identified from those who register on the NRMNet web portal, which is the common registration site for all NRMN programs. Registrant lists include contact information and basic demographics for the wide range of persons interested in NRMN and its events and activities. Where gathered, baseline and immediate post-program assessments for event participants are provided by NRMN for the CEC's follow-up analysis. As part of the collaborative process, some of the individual programs modified their initial data collection protocols to enable the CEC to conduct individual-level longitudinal follow-up.

The CEC will maintain a sample of program participants who are primary targets, with the periodic addition of new registrants. The evaluation challenge is to identify similar persons, matched across different types of institutions nationally, who did not participate in the NRMN programs. In educational programs in particular, it is important to distinguish changes in skills and career standing that follow an intervention from changes that would have occurred without it. To construct a quasi-experimental comparison group for the self-selected, heterogeneous participants in NRMN programs, we sample others who registered on the NRMN web portal whose career stage and discipline are similar but who, for whatever reason, did not participate in the target NRMN programs and events (see Sorkness et al., this volume, for a full description of NRMN activities). This group is likely to have a level of interest in biomedical career development similar to that of NRMN participants, but its members were unable to participate or were discouraged from participating in the trainings. Along with those who participated in only one NRMN program or event (e.g., early-career researchers who completed mentor training but not grantsmanship training), this approach yields several samples of unexposed individuals from a range of institutions that can be compared to individuals who engaged in multiple specific NRMN activities.
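As a rough illustration of this comparison-group strategy, the sketch below frequency-matches non-participating registrants to participants on career stage and discipline. The file name and column names are hypothetical assumptions, not the actual NRMNet schema, and the real sampling procedure may differ.

```python
import pandas as pd

# Hypothetical registrant extract with columns: registrant_id, career_stage,
# discipline, participated (bool). All names are illustrative.
registrants = pd.read_csv("nrmnet_registrants.csv")

participants = registrants[registrants["participated"]]
candidates = registrants[~registrants["participated"]]

# Frequency-match: count participants in each career-stage x discipline
# stratum, then sample the same number of non-participating registrants.
strata_counts = participants.groupby(["career_stage", "discipline"]).size()

matched_frames = []
for (stage, discipline), n in strata_counts.items():
    pool = candidates[(candidates["career_stage"] == stage) &
                      (candidates["discipline"] == discipline)]
    matched_frames.append(pool.sample(n=min(n, len(pool)), random_state=42))

comparison_group = pd.concat(matched_frames, ignore_index=True)
print(len(participants), "participants;", len(comparison_group), "matched comparators")
```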

In addition, the CEC has innovated by asking NRMN respondents to nominate two of their mentees for a brief follow-up on mentoring. These nominees are kept in a separate file and retained only if they agree to participate in a survey, at which point they become part of a mentor-mentee dyad for tracking over time. This pairing allows the CEC to compare the career trajectories of mentees with NRMN-trained mentors against those of mentees with similar mentors who lack NRMN training, in order to identify the value added by NRMN relative to other training or to no mentor training.

Measures

The CEC and NRMN identified short- and long-term outcomes, indicators, and measures critical to the evaluation of network activities, along with observable changes in participant experiences with mentoring, such as improved access to mentoring relationships and increased confidence in pursuing biomedical careers and in preparing and submitting grant applications. Measures were based on theories, evidence-based models, and validated instruments that explain and describe biomedical research career persistence across diverse groups, with an emphasis on measures most likely to be affected by the interventions, such as satisfaction with mentoring, increased science identity, higher self-efficacy, and enhanced research productivity (e.g., published articles, grants). A measurement library, developed and maintained by the NRMN evaluation team for use across NRMN activities, facilitates the use of common, evidence-based metrics across the program [11]. To collaboratively guide the evaluation process, NRMN mapped its activities and outcomes to Consortium-Wide Evaluation Plan (CWEP) outcomes as a way to refine its own articulated goals, identify key performance and outcome metrics, and prioritize data collection and analysis activities. Table 2 illustrates the alignment of a sample of outcomes from the list of NRMN outputs and outcomes with CWEP outcomes. While they differ somewhat in scope and emphasis, the elements of the long-term evaluation of NRMN (and other Consortium-wide activities) build on the short-term evaluation of NRMN activities.

Table 2 Sample of Integrated NRMN Outputs, Outcomes and CWEP Outcomes

Data systems to track participation

To facilitate NRMN-wide data collection efforts, NRMN developed a data management system to assemble in one place information about program delivery, participant tracking, and the assessments needed for evaluation. The CEC has a compatible but separate system that receives selected data from NRMN to facilitate long-term follow-up and analysis of participant outcomes. The selected data provided to the CEC, including detailed participant contact information and flags indicating participation in activities, facilitate the creation of longitudinal sample groups and provide a means of monitoring participation in CWEP data collection efforts. Simultaneous development of the two systems facilitates this exchange of information.
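A minimal sketch of how participation flags in the hand-off data might be used to assign registrants to longitudinal analysis groups is shown below. The file name and flag names (mentor_training, grantsmanship_training) are assumptions for illustration only, not the actual NRMN/CEC data exchange format.

```python
import pandas as pd

# Hypothetical hand-off file from NRMN to the CEC; all column names are
# illustrative assumptions.
handoff = pd.read_csv("nrmn_handoff.csv")


def assign_cohort(row: pd.Series) -> str:
    """Assign a registrant to a longitudinal analysis group based on activity flags."""
    if row["mentor_training"] and row["grantsmanship_training"]:
        return "both programs"
    if row["mentor_training"]:
        return "mentor training only"
    if row["grantsmanship_training"]:
        return "grantsmanship only"
    return "registrant only (comparison)"


handoff["cohort"] = handoff.apply(assign_cohort, axis=1)
print(handoff["cohort"].value_counts())
```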

Data collection methods

NRMN participants

As stated previously, process, output, and participant short-term outcome measures are gathered and tracked by NRMN program staff and/or evaluators across NRMN programs. To gain a longer, broader view of NRMN participant experiences, as well as to gauge the impact of NRMN program participation on the outcomes specified by the CWEP, the CEC conducts an annual follow-up survey of NRMN participants after they leave the internal NRMN evaluation, which occurs anywhere from immediately after a training to 18 months post-training, depending on the activity. The CEC survey collects information on career progress, participation in other professional development activities, research skills and activities, involvement in mentoring relationships and programs, mentor/mentee competency across a range of skills, and perceptions of self-efficacy. This sequential evaluation strategy (internal NRMN, then CEC) reduces respondent burden because the two evaluations do not simultaneously collect information from the same participants.

NRMN as an organization

Complementing follow-up survey data from participants, the CEC also collects qualitative data through semi-structured interviews with NRMN leadership, including principal investigators, core leaders, and program management staff (approximately 20 individuals per year), and through participant observation of NRMN planning meetings, to assess organizational development and sustainability. This approach informs the quantitative data collection efforts and helps the evaluation understand how NRMN programs implement their vision for advancing URG biomedical research training, how the various parts of NRMN interact and collaborate with one another and with other Consortium sites and the perceived value of those interactions, and the extent to which NRMN strategies enhance student, faculty, and institutional engagement in biomedical research training for URGs.

More specifically, interviews focus on details of NRMN implementation as a consortium, program staff's perceptions of the strengths and needs of NRMN participants, network integration, and sustainability efforts. Semi-structured observations of NRMN Key Personnel Meetings (KPM) will home in on how these events contribute to overarching NRMN goals, cultivate potential synergies across NRMN and Consortium-wide program activities, and eliminate or reduce barriers to effective collaboration and to meeting programmatic needs. Similar observations will be conducted of NRMN activities as a way of documenting network structure, context, and achievements. Combined, interviews and observations will inform a deeper, contextually driven understanding of process and activity integrity.

Data analysis

NRMN participants

Short-term outcome evaluation data are necessarily descriptive in the early stages of NRMN's development as a network. For instance, change-over-time metrics of participant improvement in awareness, knowledge, skills, self-efficacy, confidence, and professional advancement are limited to measures collected during participation in the programs. Several of these findings will be published to inform the community about which activities show promise for broader dissemination. These data are shared with the CEC for the longitudinal studies described above.

CEC survey findings will first be used to present descriptive statistics of longitudinal participant outcomes. Comparative analyses will then be performed among different groups of NRMN registrants, including assessments of differences in specific domain areas such as mentor-mentee relationships, research productivity, and college/career persistence. To control for maturation and other potential threats to validity, we will compare those who participated in specific trainings with those in the database at similar career levels who did not. For example, we will look at the number of people mentored and the mentors' self-assessed skills for NRMN-trained mentors compared to NRMNet web portal registrants who have not had NRMN training.
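A hedged sketch of such a comparison appears below, using invented column names for an NRMN-trained indicator, number of mentees, and a self-assessed skills score; the actual CEC analysis variables and statistical tests may differ.

```python
import pandas as pd
from scipy import stats

# Hypothetical mentor follow-up extract; columns are illustrative assumptions:
# nrmn_trained (bool), n_mentees (count), skills_score (self-assessed scale).
mentors = pd.read_csv("mentor_followup.csv")

# Descriptive comparison of NRMN-trained mentors vs. untrained registrants.
print(mentors.groupby("nrmn_trained")[["n_mentees", "skills_score"]]
             .agg(["mean", "std", "count"]))

# A simple unadjusted test of the difference in self-assessed skills.
trained = mentors.loc[mentors["nrmn_trained"], "skills_score"].dropna()
untrained = mentors.loc[~mentors["nrmn_trained"], "skills_score"].dropna()
t_stat, p_value = stats.ttest_ind(trained, untrained, equal_var=False)  # Welch's t-test
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```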

Differences will also be investigated between trainees working with NRMN coaches through the grant writing training programs and trainees not participating in the coaching program. We will conduct comparative analyses between mentees engaged in other NRMN programming, such as the virtual guided mentorship program, and those not participating in those or other NRMN mentorship programs. These analyses will involve multivariable methods that control for other predictors of career success, such as discipline, institutional status of the graduate degree, prior research experience and grant/publication record, and current institutional context.
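The sketch below illustrates one form such a multivariable analysis could take, using an ordinary least squares model with invented variable names; the CEC may instead use other model families (for example, a count model such as Poisson regression for grant submissions).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file of follow-up survey responses; all variable names
# are illustrative assumptions, not the NRMN/CEC codebook.
df = pd.read_csv("followup_survey.csv")

# Model a productivity outcome as a function of NRMN coaching participation,
# adjusting for the other career-success predictors named in the text.
model = smf.ols(
    "grant_submissions ~ nrmn_coaching + C(discipline) + C(phd_institution_type)"
    " + prior_publications + C(current_institution_type)",
    data=df,
).fit()
print(model.summary())
```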

Using the annual follow-up surveys, cumulative data will be examined in a longitudinal analysis of trends and patterns (i.e., difference-in-differences) over the course of the project. For example, if mentor training stimulates senior faculty to increase their number of mentees, does this effect decay over several years and return to the mean? For those who have been coached in grant writing skills, does early mentored success in funding become a sustained record of funded projects after they leave the program? These analyses will also be examined to determine whether there is any interaction effect by URG status. These are key issues in promoting the long-term goal of increasing the diversity of the NIH-funded workforce.
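As an illustration of the difference-in-differences logic, the sketch below estimates the interaction between NRMN training and the post-training period on number of mentees, with standard errors clustered by respondent. The panel structure, file name, and variable names are assumptions for illustration, not the CEC's actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel built from the annual follow-up surveys:
# one row per respondent per wave, with illustrative columns respondent_id,
# post (0 before / 1 after the training period), nrmn_trained (0/1), n_mentees.
panel = pd.read_csv("annual_followups_long.csv")

# Difference-in-differences: the interaction coefficient estimates the change
# in number of mentees associated with NRMN mentor training, net of the
# secular trend shared by untrained registrants.
did = smf.ols("n_mentees ~ nrmn_trained * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["respondent_id"]}
)
print(did.params["nrmn_trained:post"], did.bse["nrmn_trained:post"])
```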

NRMN as an organization

Observation and qualitative interview data from NRMN leadership and program staff will be analyzed in four cycles [12]. Through pattern coding we will synthesize findings into more meaningful units of analysis [13]. By grouping similarly coded passages together and assessing the groupings for thematic similarity, difference, frequency, sequence, and correspondence, the final coding scheme will be established. Finally, through elaborative coding, we will examine the data with an eye toward the Consortium-Wide Evaluation Plan. The coding team will write analytic memos throughout the coding process that both document the emerging analysis and serve as guideposts for the collaborative process.
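As a toy illustration of the pattern-coding step, the sketch below groups coded passages and tallies code frequency so candidate themes can be assessed. The passage identifiers and codes are invented, and in practice this work would be carried out in qualitative analysis software rather than ad hoc scripts.

```python
from collections import Counter, defaultdict

# Invented example of coded passages exported as (passage_id, code) pairs.
coded_passages = [
    ("interview01_p3", "network integration"),
    ("interview01_p7", "sustainability"),
    ("kpm02_obs_p1", "network integration"),
    ("interview04_p2", "participant needs"),
]

# Group similarly coded passages and tally code frequency so groupings can be
# assessed for frequency and correspondence across data sources.
frequency = Counter(code for _, code in coded_passages)
passages_by_code = defaultdict(list)
for passage_id, code in coded_passages:
    passages_by_code[code].append(passage_id)

for code, count in frequency.most_common():
    print(f"{code}: {count} passage(s) -> {passages_by_code[code]}")
```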

Conclusion

Distinctive features and implications for the evaluation of mentoring

The evaluation of NRMN is informed by a utilization-focused evaluation approach that, by design, engages several critical stakeholder groups. Because the work is carried out under a cooperative agreement with NIH, we solicit and receive regular feedback concerning the measures and outcomes used to document program achievements and challenges in ways that are useful for NIH decision making. The NRMN evaluation team and the CEC work in concert with NRMN leadership to provide continuous feedback during the formative development of their initiatives as they are implemented; these data will later be used to summatively assess NRMN programming in ways that can guide future development and sustainable replication. To facilitate these collaborative actions, deliberate efforts were made to align NRMN program goals with the desired Consortium-wide outcomes and to review interim results regularly to reinforce utility, accuracy, feasibility, propriety, and appropriateness.

The close, collaborative working relationship of the CEC and NRMN evaluation teams will continue to produce strong, clear plans for coordinated evaluation processes that efficiently and effectively leverage the resources (human, financial, technical), knowledge, and expertise that exist across both teams. The CEC's collection of both quantitative and qualitative data imbues the NRMN evaluation with an essential understanding of the context in which NRMN was initiated and implemented, and ensures that the many communities of stakeholders contributing to, and benefiting from, its programs are both honored and integral to its findings.

The results of this NRMN evaluation will be used to inform national policy on how best to design and implement comprehensive mentoring and career development initiatives for diverse populations pursuing careers in biomedical research. Understanding the complexity of implementing a nationwide mentoring network, as well as the attributes that most support its targeted participants, will contribute substantially to how we consider the role of mentoring in promoting researcher diversity in the biomedical sciences.

References

  1. Tabak LA, Collins FS. Weaving a richer tapestry in biomedical science: NIH leadership discusses the need for renewed efforts to increase diversity in the US biomedical research workforce. Science (New York, NY). 2011;333(6045):940.


  2. Notice of NIH’s Interest in Diversity [https://grants.nih.gov/grants/guide/notice-files/NOT-OD-15-053.html] Accessed Sep 14, 2017.

  3. Seymour E, Hunter A-B, Laursen SL, DeAntoni T. Establishing the benefits of research experiences for undergraduates in the sciences: first findings from a three-year study. Sci Educ. 2004;88(4):493–534.


  4. McCreath H, Norris K, Calderon N, Purnell D, Maccalla N, Seeman T. Evaluating efforts to diversify the biomedical workforce: the role and function of the coordination and evaluation Center of the Diversity Program Consortium. BMC Proc. 2017;11(Suppl 12) [S2 this supplement].

  5. Patton MQ. Utilization-focused evaluation: the new century text. Thousand Oaks: Sage Publications; 1997.

  6. Patton MQ. A utilization-focused approach to contribution analysis. Evaluation. 2012;18(3):364–77.


  7. Patton MQ. Utilization-focused evaluation. 4th ed. Thousand Oaks: Sage Publications; 2008.

  8. Hood S, Hopson RK, Kirkhart KE. Culturally responsive evaluation. Handb Pract Program Eval. 2015;281.

  9. McGee R. Biomedical workforce diversity: the context for mentoring to develop talents and foster success within the ‘pipeline’. AIDS Behav. 2016;20(2):231–7.


  10. Pfund C, Byars-Winston A, Branchaw J, Hurtado S, Eagan K. Defining attributes and metrics of effective research mentoring relationships. AIDS Behav. 2016:1–11.

  11. Johnson J, Rogers J. Personal communication; 2016.

  12. Saldaña J. The coding manual for qualitative researchers. Los Angeles: SAGE Publications; 2013.

  13. Miles MB, Huberman AM. Qualitative data analysis: an expanded sourcebook. Sage; 1994.


Acknowledgements

We want to thank Mercedes Rubio of the National Institutes of Health, for her contributions to this manuscript and for her efforts to make the NRMN and its evaluation a success.

Funding

Work reported in this publication was supported by the National Institutes of Health Common Fund and Office of Scientific Workforce Diversity (USA). Publication of this article was funded by the CEC awards U54GM119024 and U54GM119024–03:S1 administered by the National Institute of General Medical Sciences (NIGMS). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Availability of data and materials

Not a data-based article.

About this supplement

This article has been published as part of BMC Proceedings Volume 11 Supplement 12, 2017: The Diversity Program Consortium: Innovating Educational Practice and Evaluation Along the Biomedical Research Pathways. The full contents of the supplement are available online at https://bmcproc.biomedcentral.com/articles/supplements/volume-11-supplement-12.

Author information


Contributions

This manuscript reflects extensive work by all of the authors. All authors have read and consented to authorship of this version of the manuscript.

Corresponding author

Correspondence to Lourdes R. Guerrero.

Ethics declarations

Authors’ information

Lourdes R. Guerrero (CEC, Evaluation Core) is in the Division of General Internal Medicine/Health Services Research at the David Geffen School of Medicine at UCLA. Teresa Seeman (CEC Co-PI, Data Core) and Heather McCreath (CEC, Data Core) are in the Division of Geriatrics at the David Geffen School of Medicine at UCLA. Steven P. Wallace (CEC, Evaluation Core) is chair of the Department of Community Health Science at School of Public Health at UCLA. Christina Christie (CEC, Evaluation Core) is chair of the Department of Education and Jenn Ho (CEC, Evaluation) is in the Division of Social Research Methodology, both at the Graduate School of Education and Information Science at UCLA. Eileen Harwood (NRMN, Evaluation Core) is in the Division of Epidemiology and Community Health at the School of Public Health at the University of Minnesota and Christine Pfund (NRMN PI, Mentor Training Core) is in the Department of Medicine at the University of Wisconsin-Madison.

Ethics approval and consent to participate

All evaluation activities of the CEC have been approved by the UCLA IRB.

Consent for publication

Not applicable.

Competing interests

All authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Guerrero, L.R., Ho, J., Christie, C. et al. Using collaborative approaches with a multi-method, multi-site, multi-target intervention: evaluating the National Research Mentoring Network. BMC Proc 11 (Suppl 12), 14 (2017). https://doi.org/10.1186/s12919-017-0085-6

