Evaluating Large Language Models in extracting cognitive exam dates and scores

Publication information

Publisher

United States: Public Library of Science

Scope and Contents

Ensuring reliability of Large Language Models (LLMs) in clinical tasks is crucial. Our study assesses two state-of-the-art LLMs (ChatGPT and LLaMA-2) for extracting clinical information, focusing on cognitive tests such as the MMSE and CDR. Our data consisted of 135,307 clinical notes (Jan 12th, 2010 to May 24th, 2023) mentioning MMSE, CDR, or MoCA. After...
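
The abstract describes extracting cognitive exam scores and dates from free-text clinical notes with ChatGPT and LLaMA-2. As an illustrative sketch only (not the paper's code), the Python example below shows how such a note might be framed as an LLM extraction prompt, alongside a trivial regex baseline; the note text, prompt wording, and function names are invented for this illustration.

```python
# Illustrative sketch only -- not the study's pipeline. It shows how a note
# mentioning a cognitive exam might be turned into an LLM extraction prompt,
# with a simple regex baseline for comparison. The note, prompt wording, and
# names here are assumptions made for this example.
import json
import re

NOTE = (
    "Seen in memory clinic on 03/14/2022. MMSE administered today: 24/30. "
    "CDR 0.5. Will repeat MoCA at next visit."
)

def build_extraction_prompt(note: str) -> str:
    """Assemble a prompt asking an LLM to return test, score, and date as JSON."""
    return (
        "Extract every cognitive exam mentioned in the clinical note below. "
        "Return JSON objects with keys: test (MMSE, CDR, or MoCA), score, date. "
        "Use null when a value is not stated.\n\n"
        f"Note:\n{note}"
    )

def regex_baseline(note: str) -> list[dict]:
    """Non-LLM baseline: pull MMSE/CDR/MoCA scores and the first date via regex."""
    date = re.search(r"\b\d{1,2}/\d{1,2}/\d{4}\b", note)
    results = []
    for test, score in re.findall(r"\b(MMSE|CDR|MoCA)\b[^0-9]*(\d+(?:\.\d+)?)", note):
        results.append({"test": test, "score": score,
                        "date": date.group(0) if date else None})
    return results

if __name__ == "__main__":
    prompt = build_extraction_prompt(NOTE)
    # The prompt would be sent to a chat model (e.g. ChatGPT or LLaMA-2) via the
    # provider's API; the model's JSON reply would then be parsed with json.loads().
    print(prompt)
    print(json.dumps(regex_baseline(NOTE), indent=2))
```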

Alternative Titles

Full title

Evaluating Large Language Models in extracting cognitive exam dates and scores

Identifiers

Primary Identifiers

Record Identifier

TN_cdi_doaj_primary_oai_doaj_org_article_a75d72ec94b64755a5f737a01e709bd9

Permalink

https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_doaj_primary_oai_doaj_org_article_a75d72ec94b64755a5f737a01e709bd9

Other Identifiers

E-ISSN

2767-3170

DOI

10.1371/journal.pdig.0000685
