
We have found that monitoring faculty and trainee engagement is key to a successful SIMPL implementation. Monthly SIMPL Engagement Reports are sent to your program and are designed to help you identify ways to improve your program's use of SIMPL.

For this report, an "Evaluation" is the act of either an attending or a trainee answering the three SIMPL questions (attending autonomy given (Zwisch), trainee performance, and case complexity), regardless of which party initiated the evaluation.

Overview

An overview of the evaluations created by users

  1. Table 1:
    1. N = number of times each subgroup completed the 3 SIMPL questions (evaluations) in the current time period
    2. Mean = average number of evaluations per attending or trainee in the current time period
    3. Median = median number of evaluations per attending or trainee in the current time period
    4. IQR = interquartile range of evaluations per attending or trainee in the current time period
    5. Max = maximum number of evaluations per attending or trainee in the current time period
  2. Table 2:
    1. N = number of times each subgroup completed the 3 SIMPL questions (evaluations) in the current time period
    2. Mean = average number of evaluations per attending or trainee in the current time period
    3. Median = median number of evaluations per attending or trainee in the current time period
    4. IQR = interquartile range of evaluations per attending or trainee in the current time period
    5. Max = maximum number of evaluations per attending or trainee in the current time period

Trends

Cumulative trends for your program

  1. Figure 1:
    1. Running total of evaluations since launch for your program (top).
    2. Weekly totals of evaluations by all users, with a trend line, for your program (bottom).

Ratings

An all-time look at the way that faculty rate trainees at your program

  1. Figure 2: Zwisch: all Zwisch ratings created by attendings about trainees since launch.

  2. Figure 3: Performance: all performance ratings created by attendings about trainees since launch. Note: each PGY's N will likely be lower than the N for the same PGY in Figure 2, because evaluations scored as "Show and tell" do not receive trainee performance ratings.

Engagement

This section breaks down who is (and isn't) engaging with SIMPL.

  1. Table 3: N should be approximately equal to the number of faculty and trainees in your program, respectively.
  2. Table 4:
    1. Logged in = number of attendings or trainees who logged in during the current time period
    2. At least 1 evaluation = number of attendings or trainees who completed at least one evaluation during the current time period
    3. At least 5 evaluations = number of attendings or trainees who completed at least five evaluations during the current time period
    4. Response rate % = percentage of evaluations sent to that group (i.e., initiated by their counterpart) that were responded to during the current time period. For example, a value of 50% on the attending line means that half of the evals initiated by trainees were responded to by faculty.
    5. Dictated feedback rate = percentage of evaluations that attendings completed where they also dictated feedback for the trainee during the current time period
  3. Table 5:
    1. Logged in = number of attendings or trainees who logged in since launch
    2. At least 1 evaluation = number of attendings or trainees who completed at least one evaluation since launch
    3. At least 5 evaluations = number of attendings or trainees who completed at least five evaluations since launch
    4. Response rate % = percentage of evaluations sent to that group (i.e., initiated by their counterpart) that were responded to since launch. For example, a value of 50% on the attending line means that half of the evals initiated by trainees were responded to by faculty.
    5. Dictated feedback rate = percentage of evaluations that attendings completed where they also dictated feedback for the trainee since launch

  4. Table 6:
    1. N evaluations = the number of times that this trainee completed an evaluation (the 3 SIMPL questions: Zwisch, performance, and case complexity) during the current time period, regardless of who initiated the evaluation.
    2. N times evaluated = the number of times that an attending completed an evaluation of this trainee during the current time period, regardless of who initiated the evaluation
    3. N requests = the number of times an attending initiated an eval request to this trainee during the current time period
    4. N responded to = the number of times this trainee responded to evaluations that were initiated by attendings during the current time period
      1. The number in 'N responded to' is included in 'N evaluations'
    5. Response rate = N responded to / N requests, expressed as a percentage

  5. Table 7:
    1. N evaluations = the number of times that this attending completed an evaluation (the 3 SIMPL questions: Zwisch, performance, and case complexity) during the current time period, regardless of who initiated the evaluation.
    2. N times evaluated = the number of times that a trainee completed an evaluation of this attending during the current time period, regardless of who initiated the evaluation
    3. N requests = the number of times a trainee initiated an eval request to this attending during the current time period
    4. N responded to = the number of times this attending responded to evaluations that were initiated by trainees during the current time period
      1. The number in 'N responded to' is included in 'N evaluations'
    5. Response rate = N responded to / N requests, expressed as a percentage
    6. N dictations = the number of times this attending dictated feedback during the current time period
    7. Dictation rate = N dictations / N evaluations, expressed as a percentage
  6. Table 8: same as Table 6, but since launch
  7. Table 9: same as Table 7, but since launch
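
The Response rate and Dictation rate columns above are simple ratios. As a minimal sketch of how they are computed (the function names and all counts below are illustrative, not part of SIMPL):

```python
# Illustrative sketch of the rate calculations described above.
# All counts here are made-up example numbers, not real report data.

def response_rate(n_responded_to: int, n_requests: int) -> float:
    """Percent of initiated eval requests that the counterpart responded to."""
    return 100.0 * n_responded_to / n_requests if n_requests else 0.0

def dictation_rate(n_dictations: int, n_evaluations: int) -> float:
    """Percent of completed evaluations that also included dictated feedback."""
    return 100.0 * n_dictations / n_evaluations if n_evaluations else 0.0

# Hypothetical attending: trainees initiated 20 eval requests, the attending
# responded to 10 of them, completed 25 evaluations overall, and dictated
# feedback on 5 of those.
print(response_rate(10, 20))  # 50.0
print(dictation_rate(5, 25))  # 20.0
```

Note that because 'N responded to' is a subset of 'N evaluations', an attending's response rate and dictation rate use different denominators.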


Don't forget: SIMPL evaluations can be initiated by either faculty or trainees and are completed by both parties.
