The Era of Large Language Models


Yeşilkaya Koç S., Kotan H., Çelik S.

Creating A Human-Focused Future, Serra Çelik, Sevinç Gülseçen, Meltem Eryılmaz (Eds.), Istanbul University Press, İstanbul, pp. 1-11, 2025

  • Publication Type: Chapter in Book / Research Book
  • Publication Date: 2025
  • Publisher: Istanbul University Press
  • City of Publication: İstanbul
  • Pages: pp. 1-11
  • Editors: Serra Çelik, Sevinç Gülseçen, Meltem Eryılmaz
  • Affiliated with Istanbul University: Yes

Abstract

In recent years, the fields of artificial intelligence (AI) and natural language processing (NLP) have undergone a significant transformation with the rise of large language models (LLMs). AI was first defined as an academic discipline in 1956 and has since evolved from rule-based systems to machine learning and deep learning. Today, LLMs are capable of understanding complex linguistic relationships by leveraging deep learning techniques on massive datasets, successfully performing tasks such as text generation, translation, and summarization. The core functioning of LLMs involves data collection, preprocessing, modeling, evaluation, and fine-tuning. These models are typically built on the Transformer architecture and, owing to their vast number of parameters, are able to better grasp the context of language. LLMs are used not only in textual applications but also across various domains such as healthcare, law, and software development. However, they also face ethical challenges, including hallucination, lack of reasoning, and biases. In the future, more advanced techniques and approaches will need to be developed to overcome these challenges.
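
To make the tasks mentioned in the abstract concrete, the minimal Python sketch below (not part of the chapter) shows how a pretrained Transformer-based LLM can be applied to text generation and summarization. It assumes the Hugging Face transformers library and the publicly available gpt2 and sshleifer/distilbart-cnn-12-6 checkpoints as illustrative model choices.

    # Minimal sketch: using pretrained Transformer models for two LLM tasks.
    # Assumes the Hugging Face "transformers" package is installed and the
    # named checkpoints are illustrative choices, not the authors' models.
    from transformers import pipeline

    # Text generation with a small pretrained Transformer language model.
    generator = pipeline("text-generation", model="gpt2")
    result = generator("Large language models are", max_new_tokens=30)
    print(result[0]["generated_text"])

    # Summarization with a pretrained encoder-decoder Transformer.
    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
    article = ("Large language models learn statistical patterns of language "
               "from massive text corpora and can then generate, translate, "
               "or summarize text across many domains.")
    print(summarizer(article, max_length=30, min_length=10)[0]["summary_text"])

In practice, such off-the-shelf pipelines correspond to the evaluation and fine-tuning stages described above only after a model has already been pretrained on massive datasets; fine-tuning would adapt the same architecture to a narrower task or domain.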