Senior Medical Performance Review: Making it Happen - The Queensland Experience
The Quarterly 2011


This article was written by Dr Susan Keam, derived from material presented by Dr Andrew Johnson on Monday 6 September 2010 at RACMA/HKCCM 2010.

An online discussion on "Models of managing senior medical staff performance" gave participants an opportunity to ask questions and exchange experiences with the authors, Drs Grant Phelps and Andrew Johnson; an audio recording of the discussion is available.

Performance appraisal has come of age and is a part of our working lives. For most people this is not an issue. How performance is measured, however, is more problematic. If we accept that medical performance can be measured, it is a statement of the obvious that half of the people evaluated will fall below the median performance of the group. So, what we have to decide is: how good is good enough? Where on a normal distribution curve does the cut-off between acceptable and unacceptable performance sit?

To establish this, we first need to identify the entry level of competence. If we set a high entry level of competence through appropriate selection for medical training, excellent training programmes and defined standards to be achieved, then the acceptable level of performance may well be at the bottom fifth, tenth or twentieth percentile. If we have less confidence in the entry level, and bring in consideration of the need to maintain competence, this becomes an even more complicated evaluation.

The next questions to ask, and these should be asked by each of us, are: ‘Where do I sit on the competence curve? How do I know this? And if I do know, what do I then do about it?’

Before we consider these questions, we need to establish why we should do performance appraisal and development assessments, and why so often we don’t do it.

Why should we do performance and development appraisal?

  • To maintain accreditation standards
  • It is a Health Quality and Complaints Commission standard in Queensland (this requirement generally applies in other jurisdictions as well)
  • It is standard Human Resources practice
  • It helps develop the performance of valued staff
  • It helps to identify things that could go bad before they happen, avoiding lawyers asking nasty, embarrassing, intrusive, ugly, horrible questions about what has been done about clinician performance
  • Done well, most people really enjoy the process, gain from the opportunity for reflection, and gain from developing plans with their supervisor.

Why is performance appraisal so hard to implement?

  • Different jobs
  • Small groups make statistical relevance difficult
  • Poorly defined standards
  • Fear of public humiliation (discovery that we are not as good as we think we are!)
  • Community expectations (what does the community expect us to do with underperformers?)
  • Competition amongst practitioners (how many people are willing to admit that they fall below the acceptable competence level?)
  • People being appraised don’t see value in the process
  • It may stand between the person and their goal (power, prestige, money, perceived status…)

What is the key issue?
Nevertheless there is a key issue that cannot be ignored – there has been no appropriate performance appraisal process for senior medical staff, and yet, for those of us who work in Queensland, there is a recommendation by the Queensland Health Systems Review for the development of “standardised processes for evaluating the appropriateness of staff participation in benchmarking activities” and “to compare individual clinician performance through quality assurance mechanisms”.

This recommendation arose from the finding of the Bundaberg Hospital Commission of Inquiry (which was expanded to become the Queensland Public Hospital Commission of Inquiry) and the systems review that went alongside it, that within our hospital system we don’t have any systematic approach to performance appraisal for senior medical staff.

In response, in 2007, the Queensland Directors of Medical Services Advisory Committee, with the support of hospital senior management, obtained permission and funding to run a small project to develop a tool for performance review, the Senior Medical Performance Review (SMPR), which is discussed in more detail below.

The Senior Medical Performance Review Tool
The project objective was to develop a performance assessment tool that would be accepted by clinicians and regulatory bodies as a reliable demonstration of baseline competence. The underpinning concepts of the SMPR were that the tool was:

  • Multifaceted
  • Reliable and valid
  • Impartial
  • Confidential
  • Minimal workload for clinicians, making it easy to implement

The Process:
The first step in the process was consultation with all stakeholders. This involved initial dialogue and buy-in from:

  • Queensland Health
  • Australian Medical Association
  • Health Quality and Complaints Commission
  • Medical Colleges
  • Directors of Medical Services Advisory Committee

And local consultation with:

  • Hospital directors
  • Senior medical staff of each clinical department.

The Concept:
A review in four modules broken down into components that occur once every three years and components that occur annually.

Triennial SMPR components

Module 1: Clinician profile and outcomes
A clinician profile is created from the following:

  • Qualifications
  • Medical Board of Australia registration
  • Credentials and scope of practice
  • Benchmarking activities (e.g. VLAD, HRT, Clinical Audit)
  • Customised clinical review (e.g. DRGs, LOS, mortality rate, Complication rate, infection rate)
  • Formal complaints
  • Medico-legal issues
  • Critical incidents
  • Morbidity and Mortality attendance

Most of this is existing information that is already collected and can be found in existing systems, structures and reports.

Confidentiality of the profile is important. In Queensland, the information is collated in a single-copy electronic document that is stored in a secure database.

Module 2: Peer review tool

  • Adapted from a validated tool developed by Ramsey et al. (Use of peer ratings to evaluate physician performance. JAMA 1993;269(13):1655-60)
  • 20 questions that use a Likert scale
  • Tailored to the individual specialty after consultation
  • Confidential and anonymous
  • Once the questions/parameters are agreed, a panel of reviewers is identified and agreed; the same panel is used for the entire cohort of senior medical officers within the department
  • Wide range of inputs, including reviews by nominated colleagues (peers, registrars, senior nursing staff, multidisciplinary team)
  • Minimum of 11 assessments required for validity
  • Individual and departmental area scores are aggregated; inadequate performance may be indicated by a score significantly different from the control (i.e. below the lower reference limit of 3 standard deviations from the mean). This is not a definitive, validated assessment, but an indicator that there may be a problem, which is then “triangulated” with other information.

The peer review tool looks at technical clinical practice, but importantly, also looks at “soft” skills e.g. communication skills, the way that people engage, the way that they teach and their contribution to research.
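
The 3-standard-deviation indicator described above can be sketched in a few lines of code. This is an illustrative sketch only: the scores are invented, and the use of a leave-one-out peer group as the "control" distribution is an assumption, since the SMPR description does not specify how the control is formed.

```python
from statistics import mean, stdev

# Hypothetical data: mean peer-review score (1-5 Likert scale) for each
# practitioner in one domain. P11 is a deliberate low outlier.
scores = {
    "communication": {
        "P1": 4.1, "P2": 4.0, "P3": 3.9, "P4": 4.2, "P5": 4.0, "P6": 3.8,
        "P7": 4.1, "P8": 4.0, "P9": 3.9, "P10": 4.1, "P11": 1.5,
    },
}

def flagged(scores, n_sd=3):
    """Flag practitioners whose score sits more than n_sd standard
    deviations below the mean of their peers (leave-one-out control,
    an assumption: the SMPR description does not specify the control)."""
    flags = {}
    for domain, by_prac in scores.items():
        for prac, value in by_prac.items():
            peers = [v for p, v in by_prac.items() if p != prac]
            lower = mean(peers) - n_sd * stdev(peers)
            if value < lower:
                # An indicator only, to be triangulated with other data.
                flags.setdefault(prac, []).append(domain)
    return flags

print(flagged(scores))  # only the outlier practitioner is flagged
```

Note that flagging an outlier against a cohort that includes the outlier itself is statistically self-masking with small groups, which is one reason the minimum of 11 assessments, and triangulation with other information, matter.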

Annual SMPR components

Module 3: CME and Professional Development
The aim is to work through the following items with the clinician:

  • Contribution to training and education
  • Professional development
  • Research activities
  • Supervisor comments
  • Clinician comments

This component covers what was generally done for performance review before the SMPR was implemented.

Module 4: Mandatory training
This component differs from unit to unit and includes:

  • Fire
  • Child safety
  • Code of Conduct
  • Manual Handling

Many of these elements can be completed via DVD or online, and compliance can be recorded, but it is not clear whether this truly adds value.

The Review Cycle
The recommended approach for new starters has been to do modules 3 and 4 in years 1 and 2 then all four modules in year 3 (figure 1). However, for existing staff in the first round, we have found that the system works better the other way round – in Year 1 all four modules are completed so that baseline data (e.g. clinician profiles) are available, and so that we have validity in the scores. Over time it is intended to link this with the accreditation triennium to minimise doubling up of activities.
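
The two variants of the cycle described above can be summarised as a small helper function. This is a hypothetical illustration of the schedule; the function name and arguments are invented.

```python
def modules_due(year_in_cycle, new_starter=True):
    """Return the SMPR modules due in a given year of the triennium (1-3).

    New starters: annual modules (3 and 4) in years 1 and 2, then all
    four modules in year 3. Existing staff in the first round: all four
    modules in year 1 (to establish baseline data), then annual modules.
    """
    annual_only = [3, 4]        # CME/professional development, mandatory training
    all_modules = [1, 2, 3, 4]  # adds clinician profile and peer review
    if new_starter:
        return all_modules if year_in_cycle == 3 else annual_only
    return all_modules if year_in_cycle == 1 else annual_only
```

For example, an existing SMO in the first round would complete all four modules in year 1, whereas a new starter would complete only modules 3 and 4 that year.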

Results of the performance review process: the Queensland experience
The Queensland programme has now been through about 11,000 peer reviews. We have a huge amount of data, of which the peer review data are the most interesting. There has been some variation in outcomes, which is to be expected, and this has helped us to identify the departmental areas and the individual clinicians we needed to focus on.

For instance, we had been getting anecdotal feedback that one of the 3 hospitals we were assessing (hospital 2) had some problems in one department. To see if the peer review process could help us identify whether there really was a problem, we analysed the same department data across 3 hospitals. Data shown in figure 2 are the aggregated scores for each hospital across 5 of the domains within the clinical assessment tool, and indeed they showed a definite pattern confirming that hospital 2 appeared to have a problem.

We then looked at individual practitioners in hospital 2 (there were 11 practitioners, all SMOs). Once the data had been disaggregated, there was statistically significant variation from the control group scores in a range of different parameters for several practitioners (figure 3). For instance, practitioner 11 had scores significantly different from the control (i.e. below the lower reference limit of 3 standard deviations from the mean) in 11 of the 20 domains (55%); practitioner 10 had significantly different scores for 47% of the parameters, practitioner 1 for 37%, and practitioner 8 for 26%.

Most value from these results comes from being able to triangulate down to performance of individual practitioners and identify those who require additional support.

How does the information gathered in the survey contribute to the performance appraisal process?

Example 1
The poorly performing Senior Medical Officer (SMO)
This SMO was rated at more than 3 standard deviations below peers in 10 of the 20 domains assessed in the questionnaire. Having identified these issues, the appraiser was able to show the practitioner that they were at significant variance from the norm. The practitioner had never had this performance gap drawn to their attention before. Interestingly, the gaps identified by the survey fitted with the gaps that the practitioner themselves had perceived, especially around clinical management and assessment. The performance appraisal process enabled a performance plan to be put in place for audit, retraining and supervision. Thus, a difficult conversation in a sticky department became a positive learning experience for an underperforming SMO, who was able to discuss concerns about clinical acumen in a neutral environment.

Example 2
Study leave issue
Documentation of Continuing Medical Education (CME) and professional development undertaken indicated that the conferences attended were only distantly related to the speciality in which the SMO was employed, and were selected for their exotic locations rather than relevance to day-to-day work. The performance appraisal looked at professional development and provided an opportunity for a conversation about the appropriateness of these conferences; the data could be challenged, and a more relevant CME plan could be put in place.

Example 3
The poorly performing Unit Director
The Director of a large unit received consistently poor outcomes in the domain of emotional intelligence and team leadership; however, in the past, this had never been brought to his attention. The performance appraisal process enabled a conversation to start about these issues. While initial discussion challenged the validity of these outcomes, eventually the Director conceded the strength of the evidence. As part of the development plan, the Director agreed to attend a commercial training programme to develop emotional intelligence skills. Despite initial scepticism, the Director completed the programme, and gave very positive feedback about what had been learnt. The Director was also provided with some immediate leadership guidance by the Deputy Director of Medical Services. An interim review of performance is planned for 6 months after starting the development plan, to determine if outcomes have improved. Thus, the performance tool not only helps in evaluating technical skills, but also helps in the assessment and management of gaps in “soft” domains.

The value of peer comments
The comments provided in the survey are helpful in providing real feedback about strengths and areas for development. The survey also showed those being appraised how valued they were: everyone surveyed received a lot of positive feedback, including those with some issues for development. This positive feedback is useful, as it provides a positive opening for the performance conversation. What we need to remember is that we are usually coaching high performers to stretch and achieve very high performance. Very few people each year are underperforming to the extent that they are unsafe.

Comments also help to:

  • Identify the elephant in the room
  • Coach a high performer into a very high performer
  • Spotlight issues, such as work/life balance
  • Open an avenue for addressing negative issues
  • Provide an opportunity to highlight exceptional performance that has previously gone unrecognised

The issue of polarised results is worth mentioning. In a dysfunctional department you commonly see a polarised environment, where some people score a practitioner highly and others score very low. This needs to be recognised, because in a polarised environment reactions are more extreme and people may feel much more exposed. The focus on self-improvement and management of perceptions is greater, and people need to be aware that reactions to performance may be exaggerated.

Critical elements
Senior medical ownership and involvement is critical to the success of the SMPR, and this is an important outcome for medical administrators to achieve when consulting with stakeholders. If there is no buy-in from senior clinicians, the programme will struggle. Other critical elements include:

  • Flexibility of the programme and those implementing it
  • Awareness of the internal conflicts that may influence the process
  • Clinical expertise
  • Adequate resourcing. Ideally each facility should have adequate resourcing to implement the SMPR in the same way that it was implemented at Princess Alexandra hospital, with access to biostatisticians and a team in a clinical evaluation setting. This is of course, not the reality in most facilities.

We have shown that the performance management system is transferable from relatively resource-rich hospitals, such as the Queensland Princess Alexandra Hospital or Royal Brisbane and Women’s Hospital, to relatively resource-stretched hospitals (i.e. the ‘real world’ setting), such as the Townsville Hospital, Queen Elizabeth II Hospital and Logan Hospital, without losing validity.

Implementing the Survey Component
We have found that the easiest way to implement the peer survey component of performance appraisal is through Survey Monkey (see figure 4 for an example). This tool allows easy, rapid collation of data and good security, and those analysing the data can download data into Excel spreadsheets to facilitate comparison between departments.

Our experience in Queensland has shown that the SMPR process works, is transferable between hospital units, regardless of size and resources, and has good acceptance by appraisers and appraisees. Importantly, the SMPR generated few surprises and was not only of benefit for those who are performing well, but also for those whose performance was below expectations, especially as it facilitated difficult performance conversations.

Performance management is here to stay. The good news is that performance review is no longer too hard!
