Introduction and Background
The Accreditation Council for Graduate Medical Education (ACGME) defines competency in terms of six major areas: patient care, practice-based learning, professionalism, medical knowledge, interpersonal and communication skills, and systems-based practice (Accreditation Council for Graduate Medical Education, 2002). This definition captures competence in its entirety and can therefore serve as a basis both for measuring performance and for improving it. The six core competencies reflect virtue in the learner, and achieving them prepares the resident for the challenges of internal medicine practice.
Developing these virtues requires repetition, practice and encouragement (Larkin et al., 2002). This explains the importance of regular evaluation for improvement purposes, particularly by professionals other than the attending faculty. A resident's performance can be evaluated by examining his or her interactions with the various people in the clinical setting: the attending physicians, the nursing staff, patients and the residents themselves.
Commonly used performance tools, however, may not measure the competencies effectively, because evaluators often use tools that are either too subjective or that generate limited information, producing evaluation results that provide no motivation for change. A combination of objective and subjective tools is more likely to generate useful information. This, combined with reduced reliance on the attending physicians for all evaluations, may improve evaluation and consequently performance.
Often the attending physician bears sole responsibility for evaluating the resident. This should not be so, as the six competencies themselves show: interpersonal and communication skills and systems-based practice both chiefly require an ability to work well with other professionals. More generally, the competencies are interrelated, and improved performance in every one of them requires an ability to interact well with people. The people the resident works with should therefore be involved in the resident's evaluation whenever possible.
- Design: A randomized controlled study
- Participants: Residents, nursing staff, patients and attending physicians
- Setting: Inpatient clinical settings at a university hospital and a community-based hospital
- Intervention: Specific target questions covering the six core competencies, prepared for use by the attending physicians, nurses, residents and patients in evaluating the residents weekly
The Liaison Committee on Medical Education (LCME) requires that assessment of competence include the skills, attitudes and behaviors required for medical practice, in addition to assessment of factual knowledge (Swing, 2002, p. 1280). Improvement of performance should likewise address all of these to ensure that holistic patient care is delivered. Performance assessment tools vary across residency programs, generating different amounts of objective and subjective information.
The most common tool is the global rating, which uses easy-to-complete forms but provides very little information and has been criticized as inaccurate. The evaluator draws conclusions about a resident's attitude, behavior, skills and knowledge at the end of every rotation. Studies have shown global ratings to have poor reliability and content validity, with raters showing varying degrees of severity and leniency and a tendency toward the halo effect (Hays, 1990, p. 112).
Another commonly used tool is the checklist, whose structured and specific measures direct improvement, especially in oral examinations and directly observed patient examinations. Checklists, however, do not distinguish expert performance, and their content validity depends on whether the score relates to the desired outcome (Swick et al., 2006, p. 333). 360-degree assessments have also been reported to be useful in assessing, and thus improving, performance.
These involve evaluation by the various people within the resident's sphere of work: paraprofessionals such as administrative staff, other professionals such as nurses, supervisors, peers, patients and the residents themselves. Although the subjective information generated carries a risk of bias, the advantage of 360-degree assessment is that varied feedback from different professionals makes it a more credible source of information for determining strengths and weaknesses, which in turn increases motivation for self-improvement (Swick et al., 2006, p. 334; Collins et al., 2002, p. 816).
Other performance assessment tools include standardized clinical examinations, standardized written and/or oral examinations, portfolio assessments, strategic management simulations, direct and video observation, and computer-based evaluations. The standardized written, oral and clinical examinations are especially useful for measuring professionalism, interpersonal communication and medical knowledge (Hobgood et al., 2002, p. 1258; Epstein, 2002, p. 229). Portfolios include research work and/or published material as well as treatment plans, the contents of a round and patient evaluations; these reflect the resident's growth and development during training (Arnold, 2002, p. 504).
Attending physicians are often the ones responsible for evaluating residents, and feedback is necessary to improve resident performance. Research has shown that faculty who undergo a focused educational intervention aimed at improving the quality of written evaluations give better feedback, promoting better performance (Holmboe et al., 2001, p. 429). The same study points out the weaknesses of numeric rating scales and proposes that written comments are important in providing specific information about a resident's performance (Holmboe et al., 2001). The study, however, uses only attending faculty to evaluate the residents, making it less comprehensive.
Most internal medicine programs (about 70 per cent) rely heavily on rating forms for the evaluation of residents. According to research, most residency programs use a variety of tools in compliance with the ACGME recommendations, especially where the ratio of support staff to residents is high (Chaudhry et al., 2008). This suggests that involving people in the resident's work sphere other than attending faculty is beneficial to the evaluation and performance improvement of residents. The new model for accreditation of residency programs is based on competencies and outcomes (Goroll et al., 2004, p. 906; Larkin et al., 2002). This means that improving patient care in internal medicine will require increased accountability and an improved resident evaluation process. Accountability can be increased by involving as many people within the resident's work environment as is practically possible. For most residents, those in the immediate environment are the attending physicians, the nursing staff and, most importantly, the patient. Their involvement in the evaluation of residents may therefore benefit performance improvement.
Can the performance of resident physicians in the six core competencies be improved through intensive, focused evaluation and feedback from attending physicians, nursing staff, patients and the residents themselves?
Evaluation forms with specific target questions will be administered to nurses and attending physicians. The questionnaires will have questions grouped according to each of the six core competencies, with a rating scale ranging from one (poor) to nine (excellent).
The evaluation forms for patients will have ten questions with a rating scale of one to nine, where one and two are poor, three and four are marginal, five to seven are good, and eight and nine are excellent. The residents will be evaluated every week in the clinic by the attending physicians, nursing staff and patients.
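As an illustration, the four descriptive bands of the patient rating scale can be expressed as a simple mapping. This is a sketch only; the helper name `rating_band` is ours and not part of the study instruments.

```python
def rating_band(score: int) -> str:
    """Map a patient evaluation score (1-9) to its descriptive band,
    following the bands defined for the patient forms."""
    if not 1 <= score <= 9:
        raise ValueError("score must be between 1 and 9")
    if score <= 2:
        return "poor"
    if score <= 4:
        return "marginal"
    if score <= 7:
        return "good"
    return "excellent"

# Example: banding the ten answers on one (invented) patient form
scores = [8, 6, 7, 9, 5, 6, 8, 7, 6, 8]
bands = [rating_band(s) for s in scores]
```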
Patients, nursing staff and attending physicians will be randomly assigned to either the intervention group or the control group, and the residents will remain uninformed about the assignment. Residents who spend less than three weeks in the study will be excluded. Two inpatient settings will be included: a university hospital and a community-based hospital.
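The random assignment step can be sketched as follows. This is a minimal illustration of simple randomization, not the study's actual procedure; a real trial would typically use a concealed, pre-generated allocation sequence.

```python
import random

def allocate(participants, seed=None):
    """Randomly split a participant list into intervention and control
    arms of (near-)equal size via simple randomization."""
    rng = random.Random(seed)   # seeded for reproducibility in this sketch
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (intervention, control)

# Invented example: allocating ten nurses to the two arms
intervention, control = allocate([f"nurse_{i}" for i in range(10)], seed=42)
```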
To ensure resident confidentiality, all identifying information will be removed; each evaluation will be identified by a code number, and the control and intervention groups will be coded differently for analysis purposes. A resident survey will be conducted regarding performance improvement. The survey will ask whether the residents received feedback from the nursing staff, patients and attending physicians, how often they received it, and whether the feedback led to the adoption of new reading habits and changes in the management of clinical problems. The residents will also be asked how feedback from attending physicians alone compared with feedback from patients and nursing staff.
Demographic information and scores from the rating scales will be entered into a database for analysis. Differences between the rating-scale scores for the various competency areas will be measured using the Wilcoxon rank-sum test. The results of the analysis will be presented in tables for the evaluation form ratings and for the answers to the resident survey.
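A minimal sketch of the planned comparison, using the normal approximation to the Wilcoxon rank-sum test in standard-library Python. The example scores are invented; in practice a statistics package would be used, which would also apply a tie correction to the variance.

```python
import math

def rank_sum_test(a, b):
    """Wilcoxon rank-sum (Mann-Whitney) test via the normal
    approximation. Returns (W, z, two-sided p). Ties receive average
    ranks; no tie correction is applied to the variance."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    n = len(combined)
    ranks = [0.0] * n
    i = 0
    while i < n:                       # assign average ranks to tied values
        j = i
        while j + 1 < n and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j + 2) / 2          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    w = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2        # mean of W under the null
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return w, z, p

# Invented example: patient-care ratings, intervention vs. control group
w, z, p = rank_sum_test([7, 8, 6, 9, 8], [5, 6, 4, 6, 7])
```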
Accreditation Council for Graduate Medical Education, 2002, ACGME Outcome Project. Web.
Arnold L, 2002, Assessment of professional behavior: yesterday, today and tomorrow, Academic Medicine, volume 77, pp. 502-515.
Chaudhry SI, Holmboe E and Beasley BW, 2008, The state of evaluation in internal medicine residency, Journal of General Internal Medicine, volume 23, pp. 1010-1015.
Collins J, Gray L and Hyde C et al, 2002, Radiology resident evaluation: a form that addresses the six competencies of the ACGME, Academic Radiology, volume 9, pp. 815-816.
Epstein R and Hundert M, 2002, Defining and assessing professional competence, Journal of the American Medical Association, volume 287, pp. 226-235.
Goroll AH, Duffy DF, LeBlond RF, Sirio C et al, 2004, A new model for accreditation of residency programs in internal medicine, Annals of Internal Medicine, volume 140, number 11, pp. 902-910.
Hays RB, 1990, Assessment of general practice consultations: content validity of a rating scale, Medical Education, volume 24, pp. 110-116.
Hobgood C, Jouriles N, Riviello R et al, 2002, Assessment of communication and interpersonal skills competencies, Academic Emergency Medicine, volume 9, pp. 1257-1269.
Holmboe ES, Fiebach NH, Galaty LA and Huot S, 2001, Effectiveness of a focused educational intervention on resident evaluations from faculty, Journal of General Internal Medicine, volume 16, pp. 427-434.
Larkin G, Houry D and Binder L et al, 2002, Defining and evaluating professionalism: a core competency for graduate emergency medicine education, Academic Emergency Medicine, volume 9, pp. 1249-1256.
Swick S, Hall S and Beresin E, 2006, Assessing the ACGME competencies in psychiatry training programs, Academic Psychiatry, volume 30, pp. 330-351.
Swing S, 2002, Assessment of the ACGME general competencies: general considerations and assessment methods, Academic Emergency Medicine, volume 9, pp. 1278-1288.