Evaluation of Health Care Technology


Introduction

Clinical Decision Support Systems (CDSS) are a modern health care technology employed to help users make clinical decisions (Musen, Middleton, & Greenes, 2014). CDSS design requires the consideration of human factors, and the adoption and usability of these systems depend heavily on the development of an appropriate interface (Miller et al., 2015). Consequently, the functionality of CDSS is also tied to interface elements. The purpose of the present paper is to evaluate CDSS through the elements of its interface, determine its functionality with the help of human factors methods, and make suggestions for future improvements in the area. The analysis indicates that the most crucial interface elements of modern CDSS exhibit several problems, but the human factors approach can rectify them.

The List of Elements and Evaluation Plan

The technology of CDSS is highly diverse, but its key function is to assist in decision-making (Musen et al., 2014). CDSS can be viewed as advanced electronic successors to paper guidelines (Kilsdonk, Peute, Riezebos, Kremer, & Jaspers, 2016). To succeed in this role, CDSS typically require the entry of patient data, which is matched against the systems' databases to produce recommendations that support clinical decision-making (Miller et al., 2017; Musen et al., 2014). These aspects can be used to build the following list of interface elements that are critically important to CDSS functionality; a brief illustrative sketch follows the list.

  1. Informational elements perform the key function of CDSS (Ancker et al., 2017; Musen et al., 2014): they present recommendations for the user to consider.
  2. Input forms are the elements required to enter data (for example, patient data). Their importance is explained by the specifics of user interaction with CDSS (Miller et al., 2017).
  3. User controls are the elements that enable user-technology interaction (Miller et al., 2015). They are a significant part of the interface of any program, and CDSS are no exception.
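To make the relationship between the three element types concrete, the following is a minimal, hypothetical sketch of them as a data model; the class and field names are illustrative assumptions, not drawn from any specific CDSS product.

```python
# Illustrative data model for the three CDSS interface element types.
# All names and fields are hypothetical, for exposition only.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class InformationalElement:
    """Presents a recommendation (e.g., an alert) for the user to consider."""
    message: str
    severity: str = "info"  # e.g., "info", "warning", "critical"


@dataclass
class InputForm:
    """Collects the patient data that drives the recommendations."""
    fields: dict[str, str] = field(default_factory=dict)

    def enter(self, name: str, value: str) -> None:
        self.fields[name] = value


@dataclass
class UserControl:
    """Lets the user act on the system, e.g., dismiss or delay an alert."""
    label: str
    action: Callable[[], None]
```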

Miller et al. (2017) point out that there is a lack of guidelines on CDSS design, especially where human factors are concerned. At the same time, human factors methods are useful for analyzing the functionality of a product (Patel & Kaufman, 2014; Sætren, Hogenboom, & Laumann, 2016). As a result, this paper applies some of these methods to the evaluation of the listed elements. The methods of interest include user analysis and training analysis (Sætren et al., 2016, pp. 598-599). As for the main measure applied to the elements, it appears logical, based on the relevant literature, to focus on feedback gathered from CDSS users, with special attention to the issues they encounter with each element; a toy illustration of this measure follows. This approach reflects the needs of users, which should eventually assist in improving CDSS.
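The measure can be pictured as a simple tally of reported issues per interface element. The feedback entries below are hypothetical placeholders, not findings from the cited studies; only the counting logic is the point.

```python
# Toy tally of user feedback by interface element (made-up data).
from collections import Counter

feedback = [
    ("informational", "alert was distracting"),
    ("informational", "alert was repetitive"),
    ("controls", "could not delay the alert"),
    ("input_form", "data entry was mouse-heavy"),
    ("informational", "alert was unintelligible"),
]

issues_per_element = Counter(element for element, _ in feedback)
for element, count in issues_per_element.most_common():
    print(f"{element}: {count} reported issue(s)")
```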

Assessment and Evaluation of the Technology

The topic of the informational elements of CDSS (especially alerts) being problematic emerges from the modern literature (Ancker et al., 2017; Musen et al., 2014). First, there appears to be a poor fit between the needs of end users and alerts, which results in the latter being distracting or bothersome and, at times, uninformative or unintelligible (Miller et al., 2015). Informational elements can be redundant or repetitive as well (Ancker et al., 2017; Miller et al., 2015). Furthermore, Miller et al. (2015) note reports of unpleasant or inappropriate signals associated with the alerts, as well as the problem of the alerts being ineffective.

The issue of distracting alerts results in mistakes, especially those related to alert fatigue, that is, the automatic dismissal of alerts when they are too numerous or repetitive (Ancker et al., 2017). All these issues indicate that human factors need to be taken into account, and a user analysis suggests that the presently available CDSS informational elements are not very functional because they do not correspond to the needs of providers. Given the significance of alerts to CDSS (Musen et al., 2014), the number of issues associated with them suggests the need for improvement.
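One way such fatigue could be surfaced in a user analysis is by computing the share of alerts dismissed without being read. The log format and the threshold below are assumptions for illustration, not a measure prescribed by the cited sources.

```python
# Toy proxy for alert fatigue: fraction of alerts dismissed unread.
# The event log and the 0.5 threshold are illustrative assumptions.
alert_log = [
    {"alert_id": "drug-interaction", "dismissed_unread": True},
    {"alert_id": "drug-interaction", "dismissed_unread": True},
    {"alert_id": "allergy", "dismissed_unread": False},
    {"alert_id": "drug-interaction", "dismissed_unread": True},
]

override_rate = sum(e["dismissed_unread"] for e in alert_log) / len(alert_log)
if override_rate > 0.5:  # illustrative threshold
    print(f"Possible alert fatigue: {override_rate:.0%} of alerts dismissed unread")
```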

Miller et al. (2015) also connect the alert problems to other issues. In particular, user control of said alerts was found to be problematic, partly because users found it difficult to manage alerts appropriately, especially to delay or turn them off. This problem may indicate the need for training analysis: apparently, the educational needs of end users are not fully addressed by current CDSS (Sætren et al., 2016). Furthermore, a user analysis suggests that the providers who employ CDSS may need simpler controls. In summary, there are issues with control functionality that require improvement.
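The kind of simpler control the user analysis points toward can be sketched as a single-action "snooze" that delays an alert instead of forcing an immediate accept-or-reject choice. All names here are hypothetical; this is an illustration of the design direction, not any vendor's API.

```python
# Minimal sketch of a simpler alert control: a one-call snooze.
import time


class Alert:
    def __init__(self, message: str):
        self.message = message
        self.snoozed_until = 0.0  # epoch seconds; 0 means active now

    def snooze(self, minutes: float) -> None:
        """Delay the alert with a single action."""
        self.snoozed_until = time.time() + minutes * 60

    def is_active(self) -> bool:
        return time.time() >= self.snoozed_until


alert = Alert("Potential drug interaction")
alert.snooze(30)            # one action to delay, no dialog maze
print(alert.is_active())    # False for the next 30 minutes
```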

Furthermore, Miller et al. (2015) report that input forms were also found to be problematic: users complained about repetitive and mouse-heavy data entry. When applied to this element, user analysis likewise suggests problems with functionality; apparently, providers would rather use simpler and better-designed forms with less redundancy. In summary, the analysis of the three crucial elements of CDSS with the help of user feedback and human factors methods reveals issues that prevent CDSS from functioning properly and require solutions.
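One plausible remedy for repetitive entry is pre-filling a new form from data the system already holds, so the user only types what is missing. The record structure below is an assumption made for illustration.

```python
# Sketch of reducing repetitive data entry by pre-filling a form
# from an already-recorded (hypothetical) patient record.
existing_record = {"name": "A. Patient", "age": "54", "allergies": "penicillin"}


def prefill(form_fields: list[str], record: dict[str, str]) -> dict[str, str]:
    """Copy any field the record already holds, so the user types it once."""
    return {f: record.get(f, "") for f in form_fields}


form = prefill(["name", "age", "allergies", "current_symptoms"], existing_record)
print(form)  # only 'current_symptoms' still needs manual entry
```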

Improvement Suggestions

One of the most important current recommendations for CDSS is to simplify their interfaces (Kilsdonk et al., 2016; Miller et al., 2017). Based on the above analysis, user controls may require the most simplification, but input forms also need this improvement. For informational elements, this advice translates into a recommendation to reduce information density, which should make the alerts less redundant and distracting. Miller et al. (2017) propose the use of various graphical elements, such as graphs, tables, and iconic language, as one of the solutions to the problem. Ancker et al. (2017) also suggest working on alert redundancy to avoid alert fatigue. An improved structure may help with the issue of unintelligible alerts (Kilsdonk et al., 2016; Miller et al., 2017). As for controls, they should also be augmented with appropriate training considerations. In the end, the presented recommendations highlight the need for human factors analyses.
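The redundancy-reduction idea can be made concrete with a cooldown rule: an alert that has already fired recently is suppressed rather than repeated. This is a minimal sketch of the general idea suggested by Ancker et al. (2017); the window length and the interface are illustrative assumptions.

```python
# Sketch of alert redundancy reduction: suppress repeats within a cooldown.
import time


class AlertDeduplicator:
    def __init__(self, cooldown_seconds: float = 3600):
        self.cooldown = cooldown_seconds
        self._last_shown: dict[str, float] = {}

    def should_show(self, alert_id: str) -> bool:
        """Show an alert only if it has not fired within the cooldown."""
        now = time.time()
        last = self._last_shown.get(alert_id)
        if last is not None and now - last < self.cooldown:
            return False  # suppress the repeat to limit alert fatigue
        self._last_shown[alert_id] = now
        return True


dedup = AlertDeduplicator(cooldown_seconds=1800)
print(dedup.should_show("drug-interaction"))  # True: first occurrence
print(dedup.should_show("drug-interaction"))  # False: suppressed repeat
```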

Indeed, Miller et al. (2015) and Musen et al. (2014) point out that CDSS are still an emerging technology that requires an improved understanding of human factors and their employment in its design. User-centered approaches to CDSS development are already employed in many cases (Brunner et al., 2017), and they clearly need to be used to improve the user-technology interface elements considered in this paper. Thus, the final recommendation is to apply a user-centered, human-factors-informed methodology to CDSS development. The presented analysis illustrates the applicability of such approaches to the technology of CDSS.

Conclusion

Based on the evaluation of three crucial CDSS interface elements with the help of human factors methods (user and training analyses), the following conclusions can be made. CDSS informational elements (alerts), controls, and input forms demonstrate issues that prevent CDSS from being fully functional. A more user-centered, human-factors-based approach to CDSS design should resolve these problems. Based on modern research, it appears that CDSS require simpler controls and forms, as well as less informationally dense and redundant alerts. In addition, appropriate training considerations are needed to help end users make CDSS fully functional.

References

Ancker, J., Edwards, A., Nosal, S., Hauser, D., Mauer, E., & Kaushal, R. (2017). Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Medical Informatics and Decision Making, 17(1), 1-9. Web.

Brunner, J., Chuang, E., Goldzweig, C., Cain, C., Sugar, C., & Yano, E. (2017). User-centered design to improve clinical decision support in primary care. International Journal of Medical Informatics, 104, 56-64. Web.

Kilsdonk, E., Peute, L. W., Riezebos, R. J., Kremer, L. C., & Jaspers, M. W. M. (2016). Uncovering healthcare practitioners’ information processing using the think-aloud method: From paper-based guideline to clinical decision support system. International Journal of Medical Informatics, 86, 10-19. Web.

Miller, A., Moon, B., Anders, S., Walden, R., Brown, S., & Montella, D. (2015). Integrating computerized clinical decision support systems into clinical work: A meta-synthesis of qualitative research. International Journal of Medical Informatics, 84(12), 1009-1018. Web.

Miller, K., Mosby, D., Capan, M., Kowalski, R., Ratwani, R., Noaiseh, Y., … Arnold, R. (2017). Interface, information, interaction: A narrative review of design and functional requirements for clinical decision support. Journal of the American Medical Informatics Association, 25(5), 585-592. Web.

Musen, M. A., Middleton, B., & Greenes, R. A. (2014). Clinical decision-support systems. In E. H. Shortliffe & J. J. Cimino (Eds.), Biomedical informatics (pp. 643-674). London, England: Springer.

Patel, V. L., & Kaufman, D. R. (2014). Cognitive science and biomedical informatics. In E. H. Shortliffe & J. J. Cimino (Eds.), Biomedical informatics (pp. 109-148). London, England: Springer.

Sætren, G., Hogenboom, S., & Laumann, K. (2016). A study of a technological development process: Human factors—the forgotten factors? Cognition, Technology & Work, 18(3), 595-611. Web.