Question 16 - D-GAI-F-01 discussion


What is Transfer Learning in the context of Large Language Model (LLM) customization?

A. It is where you can adjust prompts to shape the model's output without modifying its underlying weights.

B. It is a process where the model is additionally trained on something like human feedback.

C. It is a type of model training that occurs when you take a base LLM that has been trained and then train it on a different task while using all its existing base weights.

D. It is where purposefully malicious inputs are provided to the model to make the model more resistant to adversarial attacks.
Suggested answer: C

Explanation:

Transfer learning is a technique in AI where a model pre-trained on one task is adapted for a different but related task. Here is a more detailed breakdown:

Transfer Learning: This involves taking a base model that has been pre-trained on a large dataset and fine-tuning it on a smaller, task-specific dataset.

Base Weights: The existing weights from the pre-trained model are reused and only slightly adjusted to fit the new task, which makes the process far more efficient than training a model from scratch.

Benefits: This approach leverages the knowledge the model has already acquired, reducing the amount of data and computational resources needed for training on the new task.
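The idea above can be sketched in a toy example: a "base" weight is first learned on one task, then reused unchanged as a frozen feature extractor while only a small new "head" is trained on a related task. This is a minimal illustration of the principle, not a real LLM workflow; the tasks, learning rate, and one-parameter model shapes here are arbitrary assumptions chosen to keep the sketch self-contained.

```python
import random

random.seed(0)

# --- "Pre-training": learn a base weight on the original task ---
# Base task: y = 2 * x (stands in for the large pre-training dataset).
w_base = 0.0
for _ in range(200):
    x = random.uniform(-1, 1)
    y = 2 * x
    pred = w_base * x
    w_base += 0.1 * (y - pred) * x  # simple SGD step

# --- Transfer: freeze the base weight, train only a new head ---
# New task: y = 6 * x. Since the frozen base feature is ~2x,
# the head only needs to learn a factor of ~3.
w_head = 0.0
for _ in range(200):
    x = random.uniform(-1, 1)
    feature = w_base * x            # base weights reused as-is (frozen)
    y = 6 * x
    pred = w_head * feature
    w_head += 0.1 * (y - pred) * feature

print(w_base)   # converges near 2.0
print(w_head)   # converges near 3.0
```

Because the head starts from useful frozen features instead of random ones, it has only one small parameter left to learn, which mirrors why transfer learning needs less data and compute than training from scratch.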

References:
Tan, C., Sun, F., Kong, T., Zhang, W., Yang, C., & Liu, C. (2018). A Survey on Deep Transfer Learning. In International Conference on Artificial Neural Networks.

Howard, J., & Ruder, S. (2018). Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

asked 16/09/2024
Chan Park