
Lund University Publications


Machine Learning

Åström, Kalle (2022) In Series in Medical Physics and Biomedical Engineering 1.
Abstract
Machine learning, a sub-field of artificial intelligence, is the study of computer algorithms that improve automatically through experience. Although the term was coined in 1959, machine learning builds on questions/methods that were developed earlier in linear algebra, mathematical analysis, optimization, and mathematical statistics. In this chapter we will give a brief overview of machine learning. First a few basic regression and classification techniques are introduced with examples. Then we introduce a few key concepts, such as Bayes theorem and prior and posterior distributions. Some machine-learning algorithms are based on estimating the likelihood function and on using Bayes theorem to obtain the posterior distribution, but most algorithms can be viewed as trying to model the posterior distribution directly. This leads to linear logistic regressions, the perceptron, shallow neural networks, artificial neural networks and, finally, convolutional neural networks. Machine-learning methods involve estimation of parameters, often using optimization. Determining what method to use and determining the so-called hyper-parameters require additional consideration in terms of over- and under-fitting and the need to divide data into three parts denoted as training, validation, and test. The progress within machine learning has been swift during the last decade.
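As a point of reference (this is the standard formula, not a quotation from the chapter), the Bayes-theorem relation between the prior, the likelihood, and the posterior that the abstract alludes to can be written as

    p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}

that is, the posterior over the parameters \theta given data x is the likelihood times the prior, with the evidence p(x) acting as a normalizing constant.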

The chapter ends with a few examples of computational architectures for solving different types of machine-learning problems and also some dimensionality-reduction techniques.
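To make the training/validation/test split concrete, the following minimal sketch (illustrative only, not taken from the chapter; the synthetic data, the use of scikit-learn, and the candidate values of the regularization hyper-parameter C are all assumptions) fits a logistic-regression classifier on the training set, selects C on the validation set, and reports accuracy on the held-out test set.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class data: 1000 samples with 5 features (purely illustrative).
X = rng.normal(size=(1000, 5))
w_true = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
p = 1.0 / (1.0 + np.exp(-(X @ w_true)))       # "true" posterior P(y = 1 | x)
y = (rng.uniform(size=1000) < p).astype(int)

# Split the data into training (60%), validation (20%), and test (20%) parts.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Choose the hyper-parameter C (inverse regularization strength) on the validation set.
best_C, best_acc = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):
    acc = LogisticRegression(C=C).fit(X_train, y_train).score(X_val, y_val)
    if acc > best_acc:
        best_C, best_acc = C, acc

# The test set is used only once, for the final performance estimate.
final_model = LogisticRegression(C=best_C).fit(X_train, y_train)
print("chosen C:", best_C, "test accuracy:", final_model.score(X_test, y_test))

Keeping the test set untouched until the very end is what prevents the hyper-parameter choice from biasing the reported performance.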
Please use this url to cite or link to this publication:
author: Åström, Kalle
organization:
publishing date: 2022-04
type: Chapter in Book/Report/Conference proceeding
publication status: published
subject:
host publication: Handbook of Nuclear Medicine and Molecular Imaging for Physicists: Instrumentation and Imaging Procedures
series title: Series in Medical Physics and Biomedical Engineering
editor: Ljungberg, Michael
volume: 1
edition: 1
pages: 16 pages
publisher: CRC Press
ISBN: 9781138593268, 9780429489556
DOI: 10.1201/9780429489556-11
language: English
LU publication?: yes
id: ac5a4f84-3e18-4d77-9133-9a3f192fd618
alternative location: https://www.taylorfrancis.com/chapters/edit/10.1201/9780429489556-11/machine-learning-karl-%C3%A5str%C3%B6m?context=ubx&refId=d54af691-b598-4117-9d72-f362c3010c70
date added to LUP: 2023-05-26 10:41:17
date last changed: 2023-07-24 11:16:07
@inbook{ac5a4f84-3e18-4d77-9133-9a3f192fd618,
  abstract     = {{Machine learning, a sub-field of artificial intelligence, is the study of computer algorithms that improve automatically through experience. Although the term was coined in 1959, machine learning builds on questions/methods that were developed earlier in linear algebra, mathematical analysis, optimization, and mathematical statistics. In this chapter we will give a brief overview of machine learning. First a few basic regression and classification techniques are introduced with examples. Then we introduce a few key concepts, such as Bayes theorem and prior and posterior distributions. Some machine-learning algorithms are based on estimating the likelihood function and on using Bayes theorem to obtain the posterior distribution, but most algorithms can be viewed as trying to model the posterior distribution directly. This leads to linear logistic regressions, the perceptron, shallow neural networks, artificial neural networks and, finally, convolutional neural networks. Machine-learning methods involve estimation of parameters, often using optimization. Determining what method to use and determining the so-called hyper-parameters require additional consideration in terms of over- and under-fitting and the need to divide data into three parts denoted as training, validation, and test. The progress within machine learning has been swift during the last decade.<br/><br/>The chapter ends with a few examples of computational architectures for solving different types of machine-learning problems and also some dimensionality-reduction techniques.}},
  author       = {{Åström, Kalle}},
  booktitle    = {{Handbook of Nuclear Medicine and Molecular Imaging for Physicists : Instrumentation and Imaging Procedures}},
  editor       = {{Ljungberg, Michael}},
  isbn         = {{9781138593268}},
  language     = {{eng}},
  month        = {{04}},
  publisher    = {{CRC Press}},
  series       = {{Series in Medical Physics and Biomedical Engineering}},
  title        = {{Machine Learning}},
  url          = {{http://dx.doi.org/10.1201/9780429489556-11}},
  doi          = {{10.1201/9780429489556-11}},
  volume       = {{1}},
  year         = {{2022}},
}