Question 191 - Professional Machine Learning Engineer discussion


You work at a leading healthcare firm developing state-of-the-art algorithms for various use cases. You have unstructured textual data with custom labels. You need to extract and classify various medical phrases with these labels. What should you do?

A. Use the Healthcare Natural Language API to extract medical entities.
B. Use a BERT-based model to fine-tune a medical entity extraction model.
C. Use AutoML Entity Extraction to train a medical entity extraction model.
D. Use TensorFlow to build a custom medical entity extraction model.
Suggested answer: B

Explanation:

Medical entity extraction is a task that involves identifying and classifying medical terms or concepts from unstructured textual data, such as electronic health records, clinical notes, or research papers. Medical entity extraction can help with various use cases, such as information retrieval, knowledge discovery, decision support, and data analysis [1].

One possible approach is to fine-tune a BERT-based model for medical entity extraction. BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model that captures contextual information from both the left and right sides of a given token [2]. BERT can be fine-tuned on a specific downstream task, such as medical entity extraction, by adding a task-specific layer on top of the pre-trained model and updating the model parameters with a relatively small amount of labeled data [3].
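
As a rough illustration of this setup, the sketch below fine-tunes a BERT checkpoint for token classification using the Hugging Face Transformers and Datasets libraries. The label names, toy example sentence, checkpoint choice, and hyperparameters are illustrative assumptions, not part of the question or a prescribed solution.

```python
# Minimal sketch: fine-tune a BERT checkpoint for token classification (NER).
# Labels, toy data, and hyperparameters are hypothetical placeholders.
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)

# Hypothetical custom medical labels in IOB format.
labels = ["O", "B-MEDICATION", "I-MEDICATION", "B-CONDITION", "I-CONDITION"]
id2label = {i: l for i, l in enumerate(labels)}
label2id = {l: i for i, l in enumerate(labels)}

# Tiny toy corpus standing in for your labeled clinical text.
raw = Dataset.from_dict({
    "tokens": [["Patient", "prescribed", "metformin", "for", "type", "2", "diabetes"]],
    "ner_tags": [[0, 0, 1, 0, 3, 4, 4]],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_and_align(example):
    # Tokenize pre-split words and propagate word-level labels to sub-tokens;
    # special tokens and continuation sub-tokens get the ignored label -100.
    enc = tokenizer(example["tokens"], is_split_into_words=True, truncation=True)
    word_ids = enc.word_ids()
    aligned, prev = [], None
    for wid in word_ids:
        if wid is None or wid == prev:
            aligned.append(-100)
        else:
            aligned.append(example["ner_tags"][wid])
        prev = wid
    enc["labels"] = aligned
    return enc

train_dataset = raw.map(tokenize_and_align, remove_columns=raw.column_names)

model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased",  # a clinical/biomedical checkpoint could be substituted here
    num_labels=len(labels),
    id2label=id2label,
    label2id=label2id,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medical-ner-bert", num_train_epochs=3,
                           per_device_train_batch_size=8, learning_rate=2e-5),
    train_dataset=train_dataset,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```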

A BERT-based model can achieve high performance on medical entity extraction by leveraging large-scale pre-training on general-domain corpora and fine-tuning on domain-specific data. For example, Nesterov and Umerenkov [4] proposed a method that frames medical entity extraction from electronic health records as a single-step multi-label classification task, fine-tuning a transformer model pre-trained on a large EHR dataset. They showed that their model can achieve human-level quality for the most frequent entities.

[1] Medical Named Entity Recognition from Un-labelled Medical Records based on Pre-trained Language Models and Domain Dictionary | Data Intelligence | MIT Press

[2] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

[3] Fine-tuning BERT for Medical Entity Extraction

[4] Distantly supervised end-to-end medical entity extraction from electronic health records with human-level quality
